Rockford Lhotka

 Tuesday, June 8, 2004

My recent .NET Rocks! interview was fun and covered a lot of ground. One technology that we discussed was the Whitehorse designers - the SOA designer and Class Designer - that are slated for Visual Studio 2005. In fact, Brian Randell and I co-authored an article on these designers that will come out in the July issue of MSDN Magazine.

Since the Class Designer is an extension of the UML static class diagram concept, the conversation looped briefly through a discussion of the value and history of UML. At least one blog discussion was spurred by this brief interlude.

I use UML on a regular basis, primarily in Visio. I use it as a design tool and for documentation. I don't use it for code-generation at all, because I have yet to find a UML tool that generates code I consider acceptable. I think UML is a decent high-level modeling tool for software, and I strongly recommend that anyone doing object-oriented design or programming use UML to design and document their system. Having a decent OO diagram is every bit as important as having an ER diagram for your database. You shouldn't be caught without either one!

At the same time, I think UML has become rather dated. There are concepts in both Java and .NET that just can't be expressed in UML diagrams in a reasonable manner. For instance, component-level scoping (Friend in VB, internal in C#) just isn't there. Nor is the concept of a property as opposed to a field. Nor can you really express events or delegates.
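To make that concrete, here's a small hypothetical C# class (the names are mine) using exactly the constructs mentioned above; a standard UML static class diagram has no clean notation for the internal scoping, for the field-versus-property distinction, or for the event and its delegate:

using System;

// Component-level scoping: visible only within this assembly, which UML's
// public/protected/private visibility markers don't cover.
internal class Customer
{
    private string _name;              // a field...

    public string Name                 // ...versus a property
    {
        get { return _name; }
        set
        {
            _name = value;
            if (NameChanged != null)
                NameChanged(this, EventArgs.Empty);
        }
    }

    // An event (backed by a delegate) - also not expressible in plain UML.
    public event EventHandler NameChanged;
}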

If you are going to even try to do code-gen from a diagram, the diagram must include all the concepts of your platform/language. Otherwise you'll never be able to generate the kind of code that a programmer would actually write. In other words, you can use UML to provide abstract concept diagrams for .NET programs, but you can't use UML to express the details because UML simply doesn't have what it takes.

Recognizing this, Microsoft has created this new Class Designer tool for Visual Studio 2005. It is very comparable to a static class diagram. It uses the same symbology and notation as UML. However, it extends UML to include the concepts of field vs property, events, component-level scoping and more. In short, the Class Diagram tool allows us to create UML static class diagrams that can express all the details of a VB or C# program in .NET.

If you are familiar with UML static class diagrams and with .NET, learning the Class Designer notation will take you all of 10 seconds or less. Seriously - it is just UML with a handful of concepts we've all been wishing for over the past several years.

But the real benefit is the real-time code synchronization. The Class Designer diagram lives in your project and is generated directly from your VB or C# (or J#) code, not from separate meta-data. This means that changes to the diagram are changes to your code, and changes to your code are changes to the diagram. The two are totally linked, because the source for the diagram is the code.

This is not reverse-engineering like we have today with Visio or other tools, this is directly using the VB/C# code as the 'meta-data' for the diagram.

(To be fair, there is an XML file for the diagram as well. This file contains layout information about how you've positioned the graphical elements of the diagram, but it doesn't include anything about your types, methods, properties, fields, etc. - that all comes from the code.)

My only reservation about the Class Designer is that I've mostly moved beyond doing such simple code-gen. Since coming out with my CSLA .NET books (VB.NET and C# Business Objects), I've really started to appreciate the power of code generators that generate complete business classes, including basic validation logic, data access logic and more. Readers of my books have created CodeSmith templates for my CSLA .NET framework, Kathleen Dollard wrote an XSL-based generator in her code-gen book and Magenic has a really powerful one we use when building software for our clients.

While the Class Designer is nice for simple situations, for enterprise development the reality is that you'll want to code-gen a lot more than the basic class skeleton...

Tuesday, June 8, 2004 9:16:04 AM (Central Standard Time, UTC-06:00)
 Sunday, June 6, 2004

I am very happy to report that I'll be giving a full-day workshop on distributed object-oriented application design in .NET this fall at VS Live Orlando. If you are interested in this topic in general, or CSLA .NET specifically, then here's an opportunity to get a full day brain-dump as part of a great overall conference!

Over the past year I've given some two-hour sessions on this topic as part of the main VS Live conference, and I've had several people come up after the session asking why it isn't a full-day workshop. After all, it is a very big topic to cover in just a couple of hours!

Fawcette agrees, and so has decided to do a pre-con for the Orlando conference in September on the topic.

While I'll certainly cover some basic concepts and ideas around distributed object-oriented systems, the fact is that I'll be focusing mostly on how these concepts shaped CSLA .NET and how CSLA .NET addresses many of the issues we face when designing and building such systems. I'll discuss serialization, remoting, n-level undo, why reflection is sometimes very useful, abstraction of business logic and encapsulation of business data and more. I'll also discuss the creation of Windows, Web and Web services interfaces to the business objects.

I'll also dive into some of the more advanced issues that have come up on the CSLA .NET email forum - including items such as concurrency, near real-time notification of simultaneous edits, centralization of business logic (the RuleManager functionality), object-relational mapping issues (including dealing with inheritance) and some thoughts on batch processing and reporting.

Finally, I also plan to spend a bit of time discussing what .NET 2.0 means to object-oriented programming. I'll discuss the (very cool) new data binding enhancements in Windows Forms, and the (not quite as cool) enhancements in Web Forms. I'll also talk about what Indigo is likely to mean for distributed object-oriented systems - including a discussion about why using remoting today is not at all a bad thing!

 

Sunday, June 6, 2004 8:08:00 PM (Central Standard Time, UTC-06:00)
 Wednesday, June 2, 2004

Most people (including me) don’t regularly Dispose() their Command objects when doing data access with ADO.NET. The Connection and DataReader objects have Close() methods, and people are very much in the habit (or should be) of ensuring that either Close() or Dispose() or both are called on Connection and DataReader objects.

 

But Command objects do have a Dispose() method even though they don’t have a Close() method. Should they be disposed?

 

I posed this question to some of the guys on the ADO.NET team at Microsoft. After a few emails bounced around I got an answer: “Sometimes it is important”

 

It turns out that the reason Command objects have a Dispose method is because they inherit from Component, which implements IDisposable. The reason Command objects inherit from Component is so that they can be easily used in the IDE on designer surfaces.

 

However, it also turns out that some Command objects really do have unmanaged resources that need to be disposed. Some don’t. How do you know which do and which don’t? You need to ask the dev who wrote the code.

 

It turns out that SqlCommand has no unmanaged resources, which is why most of us have gotten away with this so far. However, OleDbCommand and OdbcCommand do have unmanaged resources and must be disposed to be safe. I don’t know about OracleCommand – as that didn’t come up in the email discussions.

 

Of course that’s not a practical answer, so the short answer to this whole thing is that you should always Dispose() your Command objects just to be safe.

 

So, follow this basic pattern in VB.NET 2002/2003 (pseudo-code):

 

Dim cn As New Connection("…")
cn.Open()
Try
  Dim cm As Command = cn.CreateCommand()
  Try
    Dim dr As DataReader = cm.ExecuteReader()
    Try
      ' do data reading here
    Finally
      dr.Close() ' and/or Dispose() – though Close() and Dispose() both work
    End Try
  Finally
    cm.Dispose()
  End Try
Finally
  cn.Close() ' and/or Dispose() – though Close() and Dispose() both work
End Try

 


And in C# 1.0 and 2.0 (pseudo-code):

 

using(Connection cn = new Connection("…"))
{
  cn.Open();
  using(Command cm = cn.CreateCommand())
  {
    using(DataReader dr = cm.ExecuteReader())
    {
      // do data reading here
    }
  }
}

 

And in VB 8 (VB.NET 2005) (pseudo-code):

 

Using cn As New Connection("…")
  cn.Open()
  Using cm As Command = cn.CreateCommand()
    Using dr As DataReader = cm.ExecuteReader()
      ' do data reading here
    End Using
  End Using
End Using

 

 

Wednesday, June 2, 2004 10:33:20 AM (Central Standard Time, UTC-06:00)
 Monday, May 31, 2004

A few weeks ago I posted an entry about a solution to today's problem with serializing objects that declare events. It was pointed out that there's a better way to handle the list of delegates, and so here's a better version of the code.

  <NonSerialized()> _
  Private mNonSerializable As EventHandler
  Private mSerializable As EventHandler

  Public Custom Event NameChanged As EventHandler
    AddHandler(ByVal value As EventHandler)
      ' Delegates pointing at non-serializable targets (like a Form) go in
      ' the field marked NonSerialized; everything else can be serialized.
      If value.Target.GetType.IsSerializable Then
        mSerializable = CType([Delegate].Combine(mSerializable, value), EventHandler)
      Else
        mNonSerializable = CType([Delegate].Combine(mNonSerializable, value), EventHandler)
      End If
    End AddHandler

    RemoveHandler(ByVal value As EventHandler)
      If value.Target.GetType.IsSerializable Then
        mSerializable = CType([Delegate].Remove(mSerializable, value), EventHandler)
      Else
        mNonSerializable = CType([Delegate].Remove(mNonSerializable, value), EventHandler)
      End If
    End RemoveHandler

    RaiseEvent(ByVal sender As Object, ByVal e As EventArgs)
      If mNonSerializable IsNot Nothing Then mNonSerializable(sender, e)
      If mSerializable IsNot Nothing Then mSerializable(sender, e)
    End RaiseEvent
  End Event

Monday, May 31, 2004 9:37:08 PM (Central Standard Time, UTC-06:00)
 Saturday, May 29, 2004

This week at Tech Ed in San Diego I got an opportunity to talk to David Chappell and trade some thoughts about SOA, Biztalk and Web services. I always enjoy getting an opportunity to talk to David, as he is an incredibly intelligent and thoughtful person. Better still, he always brings a fresh perspective to topics that I appreciate greatly.

 

In this case we got to talking about his Biztalk presentation, which he delivered at Tech Ed. Unfortunately I missed the talk due to other commitments, but I did get some of the salient bits from our conversation.

 

David put forth that the SOA world is divided into three things: services, objects and data. Services interact with each other following SOA principles of loose coupling and message-based communication. Services are typically composed of objects. Objects interact with data – typically stored in databases.

 

All of this makes sense to me, and meshes well with my view of how services fit into the world at large. Unfortunately we had to cut the conversation short, as I had a Cabana session to deliver with Ted Neward, Chris Kinsman and Keith Pleas. Interestingly enough, Ted and I got into some great discussions with the attendees of the session about SOA, the role of objects and where (or if) object-relational mapping has any value.

 

But back to my SOA conversation with David. David’s terminology scheme is nice, because it avoids the use of the word ‘application’. Trying to apply this word to an SOA world is problematic because it becomes rapidly overloaded. Let’s explore that a bit based on my observations regarding the use of the word ‘application’.

 

Applications expose services, thus implying that applications are “service hosts”, or that services are a type of interface to an application. This makes sense to most people.

 

However, we are now talking about creating SOA applications. These are applications that exist only because they call services and aggregate data and functionality from those services. Such applications may or may not host services in and of themselves.

 

If that wasn’t enough, there’s the idea that tiers within an application should expose services for use by other tiers in the application. It is often postulated that these tier-level services should be public for use by other applications as well as the ‘hosting’ application.

 

So now things are very complex. An application is a service. An application hosts services. An application is composed of services. An application can contain services that may be exposed to other parts of itself and to others. Oy!

 

Thus, the idea of avoiding the word ‘application’ strikes me as being a Good Thing™.

The use of the trademark symbol for this phrase flows, as far as I know, from J. Michael Straczynski (jms). It may be a bit of Babylon 5 insider trivia, but every time I use the phrase, the ™ goes through my head...

Later in the conference, David and I met up again and talked further. In the meantime I’d been thinking that something was missing from David’s model. Specifically, it is the people. Not only users, but other people that might interact with the overall SOA system at large. David put forth that there’s a couple other things in the mix – specifically user interfaces and workflow processes, and that this is where the people fit in.

 

However, I think there’s a bit more to it than this. I think that for an SOA model to be complete, the people need to truly fit into the model itself. In other words, they must become active participants in the model in the same way that services, objects and data are active participants. Rather than saying that a user interface is another element of the model, I propose that the human become another element of the model.

 

As an aside, I realize that there’s something a bit odd or even scary about treating human beings as just another part of a system model, but I think there’s some value in this exercise. This is no different than what business process analysts do when they are looking at manufacturing processes. Humans are put at the same level as machines that are part of the process and are assigned run rates, costs, overhead and so forth. From that perspective, applying this same view to software seems quite valid to me.

 

So, the question then becomes whether a human is a new entity in the model, or is a sub-type of service, object or data. The way to solve this is to see if a human can fit the role of one of the three pre-existing model concepts.

 

My first instinct was to view a human as a service. After all, services interact with other services, and the human interacts with our system and vice versa. However, services interact using loosely coupled, message-based communication. Typically our systems interact with users through a Windows or Web interface. This is not message-based, but rather quite tightly coupled. We give the user specific, discrete bits of data and we get back specific, discrete bits of data.

 

Certainly there are examples of message-based interaction with humans. For instance, some systems send humans an email or a fax, and the human later responds in some fashion. Even in this case, however, the human’s response is typically tightly coupled rather than message-based.

 

Ultimately, then, I don’t think we can view a human as a type of service due to the violation of SOA communication requirements.

 

How about having a human be an object? Other than the obvious puns about objectification of humans (women, men or otherwise), this falls apart pretty quickly. Objects live inside of services and interact with other objects within that service. While it is certainly true that objects use tightly coupled communication, and so do humans, I think it is hard to argue that a human is contained within a given service.

 

Only one left. Are humans data? Data is used by objects within a service. More generally, data is an external resource that our system uses by calling a service, which uses its objects to interact with this external resource. Typically this is done by calling stored procedures, where we send the resource a set of discrete data or parameters, and the resource returns a set of discrete data.

 

This sounds terribly similar to how our system interacts with humans. If we turn the idea of a user interface on its head, we could view our system as the actor, and the human as a resource. In such a case, when we display data on a screen, we are basically providing discrete data parameters to a resource (the human) in much the same way we provide parameters to a database via a stored procedure call. The human then responds to our system by providing a set of results in the form of discrete data. This could easily be viewed as equivalent to the results of a stored procedure call.

 

Following this train of thought, a human really can be viewed as a resource very much like a data source. Any user interfaces we implement are really nothing more than a fancy data access layer within a service. A bit more complex than ADO.NET perhaps, but still the same basic concept. Our system needs to retrieve or alter data and we’re calling out to an external resource to get it.
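Just to illustrate the idea (this is purely a thought experiment, and the type names are mine, not anything real), the "human as a resource" view could look like an ordinary data access interface:

using System;

// Hypothetical sketch: the service asks a resource for discrete data and
// pushes discrete results back. Whether the resource is a database or a
// human at a screen is an implementation detail behind the interface.
public class CustomerData
{
    public int Id;
    public string Name;
}

public interface ICustomerResource
{
    CustomerData Fetch(int id);      // like a stored procedure call...
    void Update(CustomerData data);  // ...or like displaying a form and
                                     // waiting for the human to respond
}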

 

I put this idea forward to David, who thought it sounded interesting, but didn’t see where it helped anything. I can’t say that I’m convinced that it helps anything either, but intuitively I think there’s value here.

 

I think the value may lie in how we describe our overall system. And this is important, because SOA is all about the overall systems in our organizations.

 

Nothing makes this more evident than Biztalk 2004 and its orchestration engine. This tool is really all about orchestrating services together into some meaningful process to get work done. However, people are part of almost any real business process, which implies that somewhere in most Biztalk diagrams there should be people. But there aren’t. Maybe that’s because people aren’t services, so Biztalk can’t interact with them directly. Instead, Biztalk needs to interact with services that in turn interact with the people as a resource – asking for them to provide data, or to perform some external action and report the result (which is again, just data).

 

Then again, maybe my thought processes are just totally randomized after being on the road for four weeks straight and this last week being Tech Ed, which is just a wee bit busy (day and night)… I guess only time will tell. When we start seeing humans show up in our system diagrams we’ll get a better idea how and where we, as a species, fit into this new SOA world order.

Saturday, May 29, 2004 8:53:54 PM (Central Standard Time, UTC-06:00)
 Saturday, May 15, 2004

In .NET 1.x there's a problem serializing objects that raise events when those events are handled by a non-serializable object (like a Windows Form). In .NET 2.0 there's at least one workaround in the form of Event Accessors.

The issue in question is as follows.

I have a serializable object, say Customer. It raises an event, say NameChanged. A Windows Form handles that event, which means that behind the scenes there's a delegate reference from the Customer object to the Form. The delegate behind the event is stored in what's called a backing field: the field that backs the event and actually makes it work.

When you try to serialize the Customer object using the BinaryFormatter or SoapFormatter, the serialization automatically attempts to serialize any objects referenced by Customer - including the Windows Form. Of course Windows Form objects are not serializable, so serialization fails and throws a runtime exception.
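To make the failure concrete, here's a rough C# sketch of the scenario (the class and form names are just placeholders):

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using System.Windows.Forms;

[Serializable]
public class Customer
{
    // The hidden backing field for this event holds delegates to whoever
    // subscribes - including a non-serializable Windows Form.
    public event EventHandler NameChanged;

    public void ChangeName()
    {
        if (NameChanged != null)
            NameChanged(this, EventArgs.Empty);
    }
}

public class CustomerForm : Form
{
    private Customer _customer = new Customer();

    public CustomerForm()
    {
        // Subscribing stores a delegate to this Form in the event's backing field
        _customer.NameChanged += new EventHandler(OnNameChanged);
    }

    private void OnNameChanged(object sender, EventArgs e) { }

    public void Save()
    {
        BinaryFormatter formatter = new BinaryFormatter();
        // Throws SerializationException: the Form is reachable through the
        // event's delegate, and Forms aren't serializable.
        formatter.Serialize(new MemoryStream(), _customer);
    }
}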

Normal variables can be marked with the NonSerialized attribute to tell the serializer to ignore that variable during serialization. Unfortunately, an event is not a normal variable. We don't actually want to prevent the event from being serialized, we want to prevent the target of the event delegate (the Windows Form in this example) from being serialized. The NonSerialized attribute can't be applied to targets of delegates, and so we have a problem.

In C# it is possible to use the field: target on an attribute to tell the compiler to apply the attribute to the backing field rather than the actual variable. This means we can use [field: NonSerialized()] to declare an event, which will cause the backing delegate field to be marked with the NonSerialized attribute. This is a bit of a hack, but does provide a solution to the problem. Unfortunately VB.NET doesn't support the field: target for attributes, so VB.NET doesn't have a solution to the problem in .NET 1.x.
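For reference, the C# workaround looks roughly like this:

using System;

[Serializable]
public class Customer
{
    // field: applies the attribute to the compiler-generated backing
    // delegate field, so the serializer skips any subscribers (such as
    // a Windows Form) when Customer is serialized.
    [field: NonSerialized()]
    public event EventHandler NameChanged;
}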

Though there is a solution in C#, it is not an elegant one, so in both VB.NET and C# we really need a better answer. I have spent a lot of time talking with the folks in charge of the VB compiler, and hope they come up with an elegant solution for VB 2005. In the meantime, here’s an answer that will work in either language in .NET 2.0.

In VB 2005 we’ll have the ability to declare an event in a “long form”, using a concept called an Event Accessor. Rather than declaring an event using one of the normal options like:

Public Event NameChanged()

or

Public Event NameChanged As EventHandler

where the backing field is managed automatically, you’ll be able to declare an event in a way that you manage the backing field:

  Public Custom Event NameChanged As EventHandler

    AddHandler(ByVal value As EventHandler)
    End AddHandler

    RemoveHandler(ByVal value As EventHandler)
    End RemoveHandler

    RaiseEvent(ByVal sender As Object, ByVal e As EventArgs)
    End RaiseEvent

  End Event

In this model we have direct control over management of each event target. When some code wants to handle our event, the AddHandler block is invoked. When they detach from our event the RemoveHandler block is invoked. When we raise the event (using the normal RaiseEvent keyword), the RaiseEvent block is invoked.

This means we can declare our backing field to be NonSerialized if we so desire. Better yet, we can have two backing fields – one for targets that can be serialized, and another for targets that can’t be serialized:

  <NonSerialized()> _
  Private mNonSerializableHandlers As New Generic.List(Of EventHandler)
  Private mSerializableHandlers As New Generic.List(Of EventHandler)

Then we can look at the type of the target (the object handling our event) and see if it is serializable or not, and put it in the appropriate list:

  Public Custom Event NameChanged As EventHandler

    AddHandler(ByVal value As EventHandler)
      If value.Target.GetType.IsSerializable Then
        mSerializableHandlers.Add(value)
      Else
        ' the NonSerialized list is Nothing after deserialization,
        ' so recreate it on demand
        If mNonSerializableHandlers Is Nothing Then
          mNonSerializableHandlers = New Generic.List(Of EventHandler)()
        End If
        mNonSerializableHandlers.Add(value)
      End If
    End AddHandler

    RemoveHandler(ByVal value As EventHandler)
      If value.Target.GetType.IsSerializable Then
        mSerializableHandlers.Remove(value)
      Else
        mNonSerializableHandlers.Remove(value)
      End If
    End RemoveHandler

    RaiseEvent(ByVal sender As Object, ByVal e As EventArgs)
      ' guard against the list being Nothing after deserialization
      If mNonSerializableHandlers IsNot Nothing Then
        For Each item As EventHandler In mNonSerializableHandlers
          item.Invoke(sender, e)
        Next
      End If
      For Each item As EventHandler In mSerializableHandlers
        item.Invoke(sender, e)
      Next
    End RaiseEvent

  End Event

The end result is that we have declared an event that doesn’t cause problems with serialization, even if the target of the event isn’t serializable.

This is better than today’s C# solution with the field: target on the attribute, because we maintain events for serializable target objects, and only block serialization of target objects that can’t be serialized.

Saturday, May 15, 2004 10:52:16 PM (Central Standard Time, UTC-06:00)
 Thursday, April 8, 2004

Totally unrelated to my previous blog entry, I received an email asking me about the ‘future of remoting’ as it relates to web services and SOA.

It seems that the sender of the email got some advice from Microsoft indicating that the default choice for all network communication should be web services, and that instead of using conventional tiers in applications we should all be using SOA concepts. Failing that, we should use Enterprise Services (and thus presumably DCOM (shudder!)), and should only use remoting as a last resort.

Not surprisingly, I disagree. My answer follows…

Frankly, I flat out disagree with what you've been told. Any advice that starts with "default choices" is silly. There is no such thing - our environments are too complex for such simplistic views on architecture.

Unless you are willing to rearchitect your system to be a network of linked _applications_ rather than a set of tiers, you have no business applying SOA concepts. Applying SOA is a big choice, because there is a very high cost to its application (in terms of complexity and performance).

SOA is very, very cool, and is a very good thing when appropriately applied. Attempting to change classic client/server scenarios into SOA scenarios is typically the wrong approach.

While SOA is a much bigger thing than web services, it turns out that web services are mostly only useful for SOA models. This is because they employ an open standard, so exposing your 'middle tier' as a set of web services really means you have exposed the 'middle tier' to any app on your network. This forces the 'middle tier' to become an application (because you have created a trust boundary that didn't previously exist). As soon as you employ web services you have created a separate application, not a tier. This is a HUGE architectural step, and basically forces you to employ SOA concepts - even if you would prefer to use simpler and faster client/server concepts.

Remoting offers superior features and performance when compared to web services. It is the most appropriate client/server technology available in .NET today. If you aren't crossing trust boundaries (and don't want to artificially impose them with web services), then remoting is the correct technology to use.
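For what it's worth, staying inside a trust boundary with remoting can be as simple as the following sketch (the channel choice, port and type names are placeholders of mine, not a recommended production configuration):

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

// A server-activated object exposed over a TCP channel inside the trust boundary.
public class OrderService : MarshalByRefObject
{
    public decimal GetOrderTotal(int orderId)
    {
        return 0m; // real data access would go here
    }
}

public class AppServerHost
{
    public static void Main()
    {
        ChannelServices.RegisterChannel(new TcpChannel(8080));
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(OrderService), "OrderService.rem",
            WellKnownObjectMode.SingleCall);
        Console.ReadLine(); // keep the host process alive
    }
}

// Client side: full .NET type fidelity, no XML/WSDL mapping layer.
// OrderService svc = (OrderService)Activator.GetObject(
//     typeof(OrderService), "tcp://appserver:8080/OrderService.rem");
// decimal total = svc.GetOrderTotal(42);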

Indigo supports the _concepts_ of remoting (namely the ability to pass .NET types with full fidelity, and the ability to have connection-based objects like remoting's activated objects). Indigo has to support these concepts, because it is supposed to be the future of both the open web service technology and the higher performance client/server technology for .NET.

The point I'm making is that using remoting today is good if you are staying within trust boundaries. It doesn't preclude moving to Indigo in the future, because Indigo will also provide solutions for communication within trust boundaries, with full .NET type fidelity (and other cool features).

Web services force the use of SOA, even if it isn't appropriate. They should be approached with caution and forethought. Specifically, you should give serious thought to where you want or need to have trust boundaries in your enterprise and in your application design. Trust boundaries have a high cost, so inserting them between what should have been tiers of an application is typically a very, very bad idea.

On the other hand, using remoting between applications is an equally bad idea. In such cases you should always look at SOA as a philosophy, and potentially web services as a technical solution (though frankly, today the better SOA technology is often queuing (MQ Series or MSMQ), rather than web services).

 

Thursday, April 8, 2004 7:13:51 PM (Central Standard Time, UTC-06:00)

A number of people have asked me about my thoughts on SOA (service-oriented architecture). I have a great deal of interest in SOA, and a number of fairly serious thoughts and concerns about it. But my first response is always this.

 

The S in SOA is actually a dollar sign ($OA).

 

It is a dollar sign because SOA is going to cost a lot of people a lot of money, and it will make a lot of other people a lot of money.

 

It will make money for product vendors, as they slap the SOA moniker on all sorts of tools and products around web service creation, consumption and management. And around things like XML translators and transformers, and queue bridging software and transaction monitors and so forth. Nothing particularly new, but with the SOA label, these relatively old concepts become fresh and worthy of high price tags.

 

It will make money for service vendors (consultants) in a couple ways.

 

The first are the consultants that convince clients that SOA is the way to go – and that it is big and scary and that the only way to make it work is to pay lots of consulting dollars. SOA, in this world-view, involves business process reengineering, rip-and-replace of applications, massive amounts of new infrastructure (planning and implementation) and so forth.

 

The second are the consultants who come in to clean up after the first group makes a mess of everything. This second group will also get to clean up after the many clients who didn’t spend big consulting dollars, but bought into the product vendors’ hype around SOA and their quasi-related products.

 

Who loses money? The clients who buy the hyped products, or fall for the huge consulting price tags due to FUD or the overall hype of SOA.

 

Since I work in consulting, SOA means dollars flowing toward me, so you’d think I’d be thrilled. But I am not. This is a big train wreck just waiting to derail what should be a powerful and awesome architectural concept (namely SOA).

 

So, in a (most likely vain) effort to head this off, here are some key thoughts.

 

  1. SOA is not about products, or even about web services. It is an architectural philosophy and concept that should drive enterprise and application architecture.

  2. SOA is not useful INSIDE applications. It is only useful BETWEEN applications.

    If you think you can replace your TIERS with SOA concepts, you are already in deep trouble. Unless you turn your tiers into full-blown applications, you have no business applying SOA between them.

  3. Another way to look at #2 is to discuss trust boundaries. Tiers within an application trust each other. That’s why they are tiers. Applications don’t trust each other. They can’t, since there’s no way to know if different applications implement the same rules the same ways.

    SOA is an ideal philosophy and concept to help cross trust boundaries. If you don’t cross a trust boundary, you don’t want SOA in there causing complexity. But if you do cross a trust boundary, you want SOA in there helping you to safely manage those trust and connectivity issues.

  4. To turn #3 around, if you are using SOA concepts, then you are crossing a trust boundary. Even if you didn’t plan for, or intend to insert a trust boundary at that location in your design, you just did.

    The side-effect of this is that anywhere that you use SOA concepts between bits of code, those bits of code must become applications, and must stop trusting each other. Each bit of code must implement its own set of business rules, processing, etc.

    Thus, sticking SOA between the UI and middle tiers of an application means that you now have two applications – one that interacts with the user, and one that interacts with other applications (including, but obviously not limited to, your UI application). Because you’ve now inserted a trust boundary, the UI app can’t trust the service app, nor can the service app trust the UI app. Both apps must implement all your validation, calculation, data manipulation and authorization code, because you can’t trust that either one has it right.

    More importantly, you can’t trust that FUTURE applications (UI or service) will have it right, and SOA will allow those other applications to tap into your current applications essentially without warning. This, fundamentally, is why inserting SOA concepts means you have inserted a trust boundary. If not today, then in the future you WILL have untrusted applications interacting with your current applications, and they’ll try to do things wrong. If you aren’t defensive from the very start, you’ll be in for a world of hurt.

 

In the end, I think the trust boundary mindset is the most valuable guide for SOA. Using SOA is expensive (in terms of complexity and performance). It is not a price you should be willing to pay to have your UI talk to a middle tier. There are far simpler, cheaper and faster philosophies, concepts and technologies available for such interaction.

 

On the other hand, crossing trust boundaries is expensive and complex all by itself. This is because we have different applications interacting with each other. It will always be expensive. In this case, SOA offers a very nice way to look at this type of inter-application communication across trust boundaries, and can actually decrease the overall cost and complexity of the effort.

 

Thursday, April 8, 2004 9:27:35 AM (Central Standard Time, UTC-06:00)