Rockford Lhotka

 Monday, October 9, 2006

For better or worse, SOA (service-oriented architecture) continues to be the current industry fad. As SOA continues along the “hype curve” (a term I’m borrowing from Gartner), more and more people are starting to realize that SOA isn’t a silver bullet, and that it doesn’t actually replace n-tier client/server or object-orientation.


What will most likely happen over the next couple of years is that SOA will fall into the “pit of disillusionment” (part of the hype curve, which I think of as the “pit of despair”), and many people will decide, as a result, that it is totally useless. This will happen in no small part because some organizations are investing way too much money into SOA now, when it is overly hyped – and they’ll feel betrayed when “reality” sets in.


After a period of disrepute, SOA may then rise to a “plateau of productivity”, where it will finally be used to solve the problems it is actually good at solving.


Some technologies don’t live through the “despair” part of the process. Sometimes the harsh light of reality is too bright, and the technology can’t hold up. Other times, a competing technology or concept hits the top of its hype curve, derailing a previous technology. Over the next very few years, we’ll see if SOA holds up to the despair or not.


This is a pattern Gartner has observed for virtually all technologies over many, many years. If you think about any technology introduced over the past 20 years or more, almost all of them have followed this pattern: over-hyping, over-reaction to reality and, finally, use as a real solution.


My colleague and mentor, David Chappell, recently blogged about some of the realities people are discovering as they actually move beyond the hype and try to apply SOA. It turns out, not surprisingly, that achieving real benefits in terms of reuse is much harder than the SOA evangelists would have anyone believe.


I think this is because SOA focuses on only one part of the problem: syntactic coupling. SOA, or at least service-oriented design and programming, is very much centered around rules for addressing and binding to services, and around clear definition of syntactic contracts for the API and message data sent to and from services.


And that’s all good! Minimizing coupling at the syntactic level is absolutely critical, and SOA has moved us forward in this space, picking up where EAI (enterprise application integration) left off in the 90’s.


Unfortunately, syntactic coupling is the easy part. Semantic coupling is the harder part of the problem, and SOA does little or nothing to address this challenging issue.


Semantic coupling refers to the behavioral dependencies between components or services. There’s actual meaning to the interaction between a consumer and a service.


Every service implements some tangible behavior. A consumer calls the service, thus becoming coupled to that service, at both a syntactic and semantic level. At the syntactic level, the consumer must use the address, binding and contract defined by the service – all of which are forms of coupling. But the consumer also expects some specific behavior from the service – which is a form of semantic coupling.
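To make the distinction concrete, here is a minimal sketch (in Python, with entirely made-up service names) of two services that are syntactically identical but semantically different:

```python
# Two hypothetical services with *identical* syntactic contracts:
# same name shape, same parameters, same return type. A consumer bound
# only to the syntax cannot tell them apart -- yet swapping one for the
# other changes observable behavior. That gap is semantic coupling.

def verify_card_v1(card_number: str, amount: float) -> bool:
    """Approves any syntactically valid 16-digit card number."""
    return card_number.isdigit() and len(card_number) == 16

def verify_card_v2(card_number: str, amount: float) -> bool:
    """Same signature, but also rejects amounts over a limit."""
    return (card_number.isdigit() and len(card_number) == 16
            and amount <= 500.00)

card = "4111111111111111"

# Syntactically, either service satisfies the consumer's contract...
assert verify_card_v1(card, 900.00) is True
# ...but semantically they disagree, so a consumer that expected v1's
# behavior breaks when v2 is substituted -- with no compile-time warning.
assert verify_card_v2(card, 900.00) is False
```

The consumer's code compiles and runs against either version; only its behavioral expectations are violated.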


And this is where things get very complex. The broader the expected behavior, the tighter the coupling.


As an example, a service that does something trivial, like adding two numbers, is relatively easy to replace with an equivalent. Such a service can even be enhanced to support other numeric data types with virtually no chance of breaking existing consumers. So the semantic coupling between a consumer and such a service is relatively light.
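As a sketch, assuming a hypothetical add service:

```python
# Two independent implementations of a hypothetical "add" service.
# Because the expected behavior is so narrow, either implementation can
# replace the other (or be extended to new numeric types) without
# breaking existing callers -- the semantic coupling is light.

def add_v1(a, b):
    return a + b

def add_v2(a, b):
    # A different implementation, broadened to handle floats as well;
    # indistinguishable to existing integer-only consumers.
    return sum((a, b))

for x, y in [(2, 3), (-1, 1), (10, 32)]:
    assert add_v1(x, y) == add_v2(x, y)   # drop-in replaceable
assert add_v2(2.5, 0.5) == 3.0            # enhancement, no breakage
```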


Another example is credit card verification. Obviously the internal implementation of this behavior is much more complex, but the external expectations of behavior remain very limited. Like adding two numbers, verifying a credit card is a behavior that accepts very little data, and returns a very simple result (yes/no).


Contrast this with many other possible business services, such as shipping an order, or generating manufacturing documentation. In these (quite common) scenarios, the service performs, or is expected to perform, a relatively broad set of behaviors. The result is a whole group of effects and side-effects – all of which should be considered as black-box effects by any caller. But the more a service does, the less “black-box” it can be to its callers, and the tighter the coupling.


And this leaves us in a serious quandary. There’s a high cost to calling a service. There’s a lot of overhead to creating a message, serializing it into text (XML), routing it through some communications stack onto the wire, getting the electrons across the wire through some protocol (probably TCP) and all the attendant hardware involved, picking it up off the wire on the server, routing it through another communications stack, deserializing the text (XML) back into a meaningful message and finally interpreting the message. Only then can the service actually act on the message to do real work.


Worse, that’s only half the story, because most people are creating synchronous request/response services, and so that whole overhead cost must be paid again to get the result back to the caller!


Before going further, let me expand on this “overhead cost” concept to be more precise.


I worked for many years in manufacturing. In that industry there’s the concept of cost accounting – people make their living at tracking costs. They divide costs into overhead, setup and run (there are other models, but this one’s pretty standard).


To make this somewhat more clear, I’ll use the metaphor of baking cookies.


Overhead costs are all the salaried people, the buildings, equipment and so forth: costs that are paid whether widgets are manufactured or not. When baking cookies, this is the cost of having a kitchen, a stove, electricity, natural gas, and of course the person doing the baking. In most homes these costs exist regardless of whether cookies are baked or not.


Setup costs are applied overhead. They are costs that are required to build a set of widgets, but they are only incurred when widgets are being manufactured. These costs include setting up machines, programming devices, getting organized, printing documents, etc. When baking cookies, this is the cost (in terms of time) of getting out the various ingredients, bowls, spoons and other implements. It is also the cost of cleaning up after the baking is done – all the washing, drying and putting-away-of-implements that follows. These costs are directly applied to the process, but are pretty much the same whether you bake one dozen or ten dozen cookies.


Run costs are those costs that are incurred on a per-widget basis to make a widget. This includes the hourly rate of the workers manning the assembly line, the materials that go into the widget and so forth. When baking cookies, this is the time spent by the baker, the cost of the flour, eggs and other ingredients consumed in the process. Ideally it would include the amount of electricity or natural gas used to run the stove as well. Obviously detailed run costs can be hard to determine in some cases!


When calculating the cost of your cookies, each of these three costs is added together. The run rate is easy, as it is per-cookie by definition. The setup rate is variable – the more cookies you make in a batch the lower the relative setup cost, and the fewer cookies the higher the relative setup cost. Overhead is typically aggregated – the annual overhead cost is known, and is divided by the number of cookies (and other things) made over a year’s time. Obviously there’s lots of wiggle room in this last number.
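As a sketch of that arithmetic, with entirely made-up dollar figures:

```python
# The overhead/setup/run cost model applied to the cookie metaphor.
# All numbers are invented for illustration only.

def cost_per_cookie(batch_size, annual_units,
                    overhead_per_year=3650.0,  # kitchen, stove, baker
                    setup_per_batch=6.0,       # getting out / washing up
                    run_per_cookie=0.25):      # ingredients + baking time
    overhead = overhead_per_year / annual_units  # aggregated over a year
    setup = setup_per_batch / batch_size         # amortized per batch
    return overhead + setup + run_per_cookie

# The run rate is fixed per cookie; the setup share shrinks as the
# batch grows, so larger batches are cheaper per unit.
small_batch = cost_per_cookie(batch_size=12, annual_units=1000)
large_batch = cost_per_cookie(batch_size=120, annual_units=1000)
assert small_batch > large_batch
```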


For my purposes, in discussing services, the overhead rate isn’t all that meaningful. In our industry this is the cost of the IT staff, the servers, the server room, electricity and cooling and so forth.


But the setup rate and run rate become very meaningful when talking about services.


Calling a service, as I noted earlier, incurs a lot of overhead. This overhead is relatively constant: you pay about the same whether you send 1 byte or 1024 bytes to or from the service.


The run rate is the actual work done by the service. Once the message is parsed and available to the service, then the service does real, valuable work. This is the run rate for the service.


In manufacturing it is always important to manage the overhead and setup costs – they are a “pure cost”. The run rate cost must also be managed, but it is directly applicable to a product, and so that cost can be factored into the price you charge. Perhaps more importantly, your competitors typically have a comparable run rate (materials and labor cost about the same), but the overhead can vary radically.


To switch industries just a bit, this is why Walmart does so well (and is so feared). They have managed their overhead and setup costs to such a degree that they actually do focus on reducing their run rate (in their case, the per-unit acquisition cost of items).


Coming back to services, we face the same issue. Typically we deal with this using intuition rather than thinking it through, but the core problem is very tangible.


Would you call a service to add two numbers? Of course not! The setup/overhead cost would outweigh the run cost to such a degree that this makes no sense at all.


Would you call a service to ship an order, with all the surrounding activities that implies? This makes much more sense. The setup/overhead cost becomes trivial when compared to the run cost for such a service.
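A back-of-the-envelope sketch of this trade-off (all numbers are assumptions, purely for illustration):

```python
# Compare the fixed per-call overhead of a remote service call
# (serialization, the wire, deserialization -- paid twice for a
# synchronous request/response) against the run cost of the real work.

CALL_OVERHEAD_MS = 20.0          # assumed fixed cost, each direction

def worth_calling(run_cost_ms):
    total_overhead = 2 * CALL_OVERHEAD_MS     # request + response
    return run_cost_ms > total_overhead

# Adding two numbers: microseconds of work, so overhead dominates.
assert not worth_calling(run_cost_ms=0.001)

# Shipping an order: seconds of work, so overhead is trivial by comparison.
assert worth_calling(run_cost_ms=5000.0)
```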


And yet coupling has the opposite effect. Which of those services can be more loosely coupled? The addition service of course, because it performs a very narrow, discrete, composable behavior.


Do you even know what the ship-an-order service might do? Of course not, it is too big and vague. Will it trigger invoicing? Will it contact the customer? Will it print pick lists for inventory? Will it update the customer’s sales history?


I would hope it does all these things, but very few of us would be willing to blindly assume it does them. And so we are forced to treat ship-an-order as something other than a black box. At best it is gray, but probably downright clear. We’ll require that the service’s actual behaviors be documented. And then we’ll fill in the gaps for what it does not provide, or doesn’t provide in a way we like.


(Or, failing to get adequate documentation, we’ll experiment with the service, probing to find its effects and side-effects and limitations. And then we’ll fill in the gaps for the bits we don’t like. Sadly, this is the more common scenario…)


At this point we (the caller of the service) have become so coupled to the service, that any change to the service will almost certainly require a change to our code. And at this point we’ve lost the primary goal/benefit of SOA.


Why? How can this be, when we’re using all the blessed standards for SOA communication? Maybe we’re even using an Enterprise Service Bus, or BizTalk Server or whatever the latest cool technology might be. And yet this coupling occurs!


This is because I am describing semantic coupling. Yes, all the cool, whiz-bang SOA technologies help solve the syntactic coupling issues. But without a solution to the semantic, or behavioral, coupling it really doesn’t get us very far…


What’s even scarier is that the vision of the future portrayed by the SOA evangelists is one where we build services (systems) that aggregate other services together to provide higher-level functionality. Like assembling simple blocks into more complex creations, which in turn can be assembled into more complex creations or used as-is.


Except that each level of aggregation creates a service that provides broader behaviors – and by extension tighter coupling to any callers (though the setup vs run costs become more and more favorable at the top level).


To bring this (rather long) post to a close, I want to return to the beginning. SOA is heading down the steep slope into the pit of disillusionment. You can head this off for yourself and your organization by realizing ahead of time, right now, that SOA only addresses syntactic issues. You must address the much harder semantic issues yourself.


And the tools exist. They have for a long time. Good procedural design, use of flow charts, data flow diagrams, control diagrams, state diagrams: these are all very valid tools that can help you manage the semantic coupling. Unfortunately the majority of people with expertise in these tools are nearing retirement (or have retired) – but the tools and techniques are there if you can find some old, dusty books on procedural design. Just remember to include the setup/overhead cost vs run cost in your decisions on whether to make each procedure into a "service".


SOA solves some serious and important issues, but it is overhyped. Fortunately the hype is fading, and so we can look forward (perhaps 18 or 36 months) to a time when we can, with any luck, start focusing on the “next big thing”. Maybe, just maybe, that “big thing” will be some new and interesting way of addressing semantic coupling.

Monday, October 9, 2006 5:07:08 PM (Central Standard Time, UTC-06:00)
 Saturday, September 30, 2006

CSLA .NET version 2.1 is now available for download.

Please make sure to read the Change log for this release, as there are substantial changes to validation, authorization and other parts of the framework. There are a limited number of breaking changes that may affect your code, so please review the document before attempting to upgrade from 2.0 to 2.1.

The vast majority of the changes in version 2.1 are due to the community feedback from the great members of the online forum. I would like to thank everyone who provided feedback and input into the process!

Saturday, September 30, 2006 11:20:17 AM (Central Standard Time, UTC-06:00)
 Friday, September 29, 2006

This post is in response to some comments in my previous post, which linked to Steve Yegge’s blog entry about Good Agile and Bad Agile.

One commenter, JH, makes a perfectly valid observation: criticizing Waterfall and Agile without offering an alternative isn’t very productive. So…

Actually, truth be known, I used Waterfall and several variations of it for many years - with success. We collected requirements, analyzed them, did our design, our development, testing - all the stuff you are supposed to do. And you know, the projects worked out pretty darn well.

Why? I don't honestly think it was Waterfall that did it. It was the fact that I was working with a motivated team of reasonably talented developers who really cared about doing a good job. In order to do a good job building software it is so amazingly clear that you need to understand the needs (requirements/analysis), translate them to software (design/development) and make sure the result actually works (testing).

MSF, by and large, is just Waterfall in a spiral, with some different documents and techniques. Agile is just Waterfall in random order, and with a few different techniques and tools. In the end it all comes down to gathering requirements, analyzing them, translating that into a design, doing some coding and some testing and away you go. You can rearrange the pieces on the board, but they are the same pieces.

Every consulting company I’ve worked for has had its own Methodology (or Framework, or whatever they chose to call it). Each of them was some merger of concepts, techniques and tools from Waterfall, MSF, RUP, Agile and various other Methodologies with which the creators were familiar. And each of them worked just fine.

The primary issue I have with Methodology (capital “M”) is that following the Methodology ceases being a means to an end, and becomes blind Doctrine. The steps in the process cease being tools used for a reason, and become mere Ritual.

“Why are you doing X?”

“Because the Methodology said to!”

“But there’s a better approach. If you just used this technique …”

“You aren’t questioning the infallibility of our Methodology are you? This Methodology was written by the Great PooBah, and has been handed down unchanged through time!”

“What? No, I was just suggesting…”

“HERETIC! Burn him at the stake!”

If you can somehow keep your Methodology from becoming Doctrine and Ritual, then you can be successful – with almost any Methodology, I think. Unfortunately it seems that human nature has an intrinsic desire to conform to Doctrine and Ritual – you can see it in the dogmatic positions of so many Methodologists out there.

As Clemens points out, within a given team, assuming reasonably sized teams, I think you can pick almost any Methodology and do quite well. With the right people, even the Cowboy approach works great in most cases, and it is quite clear that Agile can work fine too. So does Waterfall by the way.

But across teams? You must have some level of formal structure. Some level of formalized communication and process and coordination and authority. I know, that all sounds very ugly, and not at all fun, but it is unavoidable. Conflicting schedules, overlapping dependencies and competing political interests require a higher level of structure in these cases.

Traditional Waterfall presupposes lengthy blocks of time for each step, and isn’t practical in today’s world – even (especially?) across teams. However, a modified Waterfall, along the lines of the MSF spiral, works really well in my experience. Assembling a cross-project team, composed of representatives from the more tactical teams (regardless of their Methodology of choice) for the purposes of coordination and mediation is absolutely critical.

So my “Methodology” of choice? At a team level, pick the techniques and tools that work within your specific environment, with your specific team members – with all their varied talents and capabilities. Take the best from Waterfall, MSF, RUP, Agile and anything else you can find. Some techniques work in one setting and fail horribly in another.

At the level of a large project, or cross-project effort, some variation of Waterfall or one of its modern descendants really is, in my view, the right answer. Few other Methodologies provide sufficient structure and formalization of process to make such efforts manageable.

Friday, September 29, 2006 8:28:48 AM (Central Standard Time, UTC-06:00)
 Thursday, September 28, 2006

A colleague of mine (customer of Magenic) forwarded me a link to Steve Yegge's wonderful blog post about Agile methodologies (good and bad).

I think my favorite bit in the whole thing (and there are a lot of good bits!) is this:

Bad Agile seems for some reason to be embraced by early risers. I think there's some mystical relationship between the personality traits of "wakes up before dawn", "likes static typing but not type inference", "is organized to the point of being anal", "likes team meetings", and "likes Bad Agile". I'm not quite sure what it is, but I see it a lot.

Like Steve, I am not a morning person, and this probably explains my general antipathy towards Agile Methodologies.

This antipathy is something I've struggled with at length.

I am a strong believer in responsibility-driven, behavioral, CRC OO design - and that is very compatible with the concepts of Agile. So how can I believe so firmly in organic OO design, and yet find Agile/XP/SCRUM to be so...wrong...?

I think it is because the idea of taking a set of practices developed by very high-functioning people, and cramming them into a Methodology (notice the capital M!), virtually guarantees mediocrity. That, and some of the Agile practices really are just plain silly in many contexts.

Rigorous unit testing? Slam dunk!
Writing the tests before writing your code? Totally optional.

Pair programming for intense, multi-threaded framework code? Absolutely!
Pair programming to retrieve a DataSet using ADO.NET? Yeah right…

See that’s the thing. Formalize a set of practices into a Methodology and the practices lose their meaning. Each practice in Agile really is good – in a specific place and for a specific purpose. But when wrapped in a Methodology they become “good at all times and in all places” – at least to most practitioners.

This isn’t Agile’s fault by the way. I’m not entirely convinced that Waterfall Methodology isn’t rooted in a set of perfectly good practices, which were subverted and corrupted by becoming a Methodology.

Nah, on second thought Waterfall really is just plain bad.

Thursday, September 28, 2006 2:41:01 PM (Central Standard Time, UTC-06:00)
 Tuesday, September 26, 2006
Dunn Training continues to ready its official CSLA .NET training course, with plans for a beta of the course to be taught in November.

If you are in need of some classroom training on CSLA .NET, November is your first chance, probably followed by a class in January.
Tuesday, September 26, 2006 8:33:13 PM (Central Standard Time, UTC-06:00)
 Friday, September 22, 2006
Thanks to a lot of work by Chris Russi, there is now an official CSLA .NET logo graphic:

and a corresponding "Powered by" logo for web sites or applications that are built on CSLA .NET:

Note that the CSLA .NET license does not require any public display of this logo! But if you would like to use it to let people know that your application or site is built using CSLA .NET, that's wonderful! :)

Chris very kindly created a range of different sizes, and you can download the entire set in one zip file.
Friday, September 22, 2006 3:05:34 PM (Central Standard Time, UTC-06:00)
Jason Bock, a colleague of mine at Magenic, is organizing a code camp in the Twin Cities for November 11, 2006. Click here for details.

I'll be speaking at the event, though I haven't decided on a topic yet.
Friday, September 22, 2006 8:51:51 AM (Central Standard Time, UTC-06:00)
 Tuesday, September 12, 2006

I just updated the download page to make the BETA version of CSLA .NET 2.1 available.

Make sure you read the Change Log document before going too far. I've tried to list all breaking changes as a summary at the top, with my estimate on the likely impact of each one. You can find more details further into the document.

There will be no more functional changes to 2.1, so this download is complete. There obviously could be bugs, but enough people have been helping me find issues with the last couple pre-releases that I think it is pretty stable at this point.

Barring anyone finding a major issue, I am still planning to release 2.1 by the end of September, so if you have any near-term plans to use 2.1 I strongly recommend trying this beta version on a non-production copy of your project over the next 10 days or so. That will allow you to provide me with any bug reports in enough time that I can try and address them.

Tuesday, September 12, 2006 8:18:19 PM (Central Standard Time, UTC-06:00)
 Thursday, September 7, 2006

I get many questions about CSLA .NET, and whether it is a good fit for various projects and needs. Many times I put the answers on my web site - building a FAQ of sorts. Here are some links to good articles there, and a copy of a Q&A I recently did for a prospective Magenic client.

What is CSLA .NET?

CSLA .NET technical requirements

Why move from CSLA .NET 1.x to 2.x?

CSLA .NET Roadmap

CSLA .NET 2.0 download page (which includes a summary of breaking changes)

CSLA .NET 2.0 change log (detailed list of changes from version 1.5)

Now for the recent Q&A:

What are the drawbacks to using the CSLA?

This question is too vague to be answered directly. No framework is perfect for all types of applications, and CSLA .NET is no exception.

It is not particularly useful for creating non-interactive batch processing code - the .NET 3.0 WF technology is a better bet there. (though the activities in a WF can be created using CSLA, and that's a great model that the WF team really likes)

It is also not particularly useful for reporting. A good reporting tool like SQL Reporting Services is the right answer.

CSLA does require the use of good OO design. Where the DataSet works well (best?) when using VB3 style code-in-the-form architecture, CSLA really requires the application of OO design to build a true business layer. For some developers this can be a hurdle, because they are not used to doing OO design.

CSLA also has some technical requirements. Depending on the transaction model you use, the DTC may be required. In all cases CSLA requires FullTrust code access security, so it can't be used in a partial trust environment (like on most commercially hosted web servers).

Where do the CSLA and the Enterprise Library overlap?

They don't. They are complementary; see this article for details.

What is the roadmap for the CSLA?

CSLA .NET is a framework for creating a reusable, object-oriented business layer for .NET applications. As such, it continues to evolve based on the evolution of the .NET platform. This means its roadmap generally follows Microsoft's lead, with some exceptions around enhancements to CSLA itself. At the moment the roadmap looks like this:

Version 2.0      - available now for .NET 2.0 and VS 2005

Version 2.1      - available Q3-06 with performance and feature enhancements

Version 3.0 (?) - available Q1-07 with full support for .NET 3.0 (WCF, WPF, WF)

Version 3.x      - available Q?-07 with support for .NET 3.5 (LINQ technologies)

Obviously the 3.x schedules are subject to change based on Microsoft's schedule.

(updated road map can be found here)

What is the model for integrating the CSLA with .NET 3.0/Windows Communication Foundation?

I am working closely with the Connected Systems group in Redmond on this, and related technologies. Additionally, when I built CSLA .NET 2.0 I specifically designed it to allow for near-seamless integration of WCF.

Right now you can get an early version of a data portal channel for WCF from here.

This allows existing applications using Remoting, Web Services or Enterprise Services to transition to WCF with NO CODE CHANGES. This is a very compelling feature of CSLA in my view.

Unfortunately that's not the whole story, and that's why a new version of CSLA will be required in the .NET 3.0 timeframe. CSLA uses the BinaryFormatter for some scenarios (n-level undo, cloning). To support the DataContract model, the NetDataContractSerializer is required instead, so the formatter needs to be pluggable. Again, I've been working with the PM who owns serialization, and this is not a major change to CSLA, and other than there being a new configuration switch this change should have NO IMPACT ON ANY APPLICATION CODE.
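The pluggable-formatter idea can be sketched in a language-neutral way. This Python strategy-pattern sketch uses illustrative names and a JSON/pickle pair as stand-ins; it is not the actual CSLA configuration switch or the .NET serializer API:

```python
# A sketch of a pluggable-formatter switch like the one described:
# the framework picks a serializer from configuration instead of
# hard-coding one. Names and formatters are illustrative only.
import json
import pickle

FORMATTERS = {
    "binary": (pickle.dumps, pickle.loads),
    "text": (lambda o: json.dumps(o).encode(), lambda b: json.loads(b)),
}

def clone(obj, formatter="binary"):
    """Deep-clone via serialize/deserialize, as n-level undo does."""
    dumps, loads = FORMATTERS[formatter]   # the configuration switch
    return loads(dumps(obj))

data = {"id": 42, "name": "test"}
assert clone(data, "binary") == data   # same result either way...
assert clone(data, "text") == data     # ...so callers are unaffected
```

Because both formatters satisfy the same round-trip contract, swapping them behind the switch has no impact on calling code, which is the point of the change described above.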

How is the CSLA supported in production environments?

CSLA is not a "product". I don't sell it, nor do I sell support for it. CSLA is part of my Expert VB/C# 2005 Business Objects books, and is covered under a very liberal, essentially open-source, license.

As such, support in a production environment is the responsibility of the organization using the framework.

What is the typical model for extending/enhancing the CSLA so that one can take advantage of new versions and migrate the custom enhancements easily to the new CSLA runtime?

CSLA is designed to include numerous extensibility points, allowing organizations to customize the behavior of key functionality without altering the core framework. This is most often handled through inheritance, where an organization inherits from a CSLA base class to create their own custom base class that extends or alters the core CSLA behavior. In other cases a provider model is used to allow organizations to replace pre-supplied providers with their own custom providers.
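A minimal sketch of that inheritance-based extension model (class and member names here are hypothetical, not actual CSLA types):

```python
# The framework supplies a base class; an organization inserts its own
# base class between the framework and its application classes, adding
# behavior without modifying the framework itself.

class FrameworkBusinessBase:
    """Stands in for a framework-supplied base class."""
    def save(self):
        self.validate()
        return "saved"

    def validate(self):
        pass  # default framework behavior

class CompanyBusinessBase(FrameworkBusinessBase):
    """An organization's custom base class: extends framework
    behavior for every class that inherits from it."""
    def validate(self):
        super().validate()
        self.audit_log = "validated by company rules"  # added behavior

class Customer(CompanyBusinessBase):
    pass  # application classes inherit the customized base

c = Customer()
assert c.save() == "saved"
assert c.audit_log == "validated by company rules"
```

When a new framework version ships, only the custom base class needs review; the application classes above it are insulated from the change.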

What Application Service Provider and/or 24x7x365 customer references exist for CSLA implementations?

Magenic can provide various case studies around their implementation of projects using CSLA .NET.

For my part, since I don't sell CSLA, I can't fund the process necessary to go through the legal hoops at most organizations to release their names as references. I do maintain this list of organizations who use CSLA. I can say that the framework is used in a wide range of applications, from small to very large and mission critical.

Also, here are some unsolicited quotes from CSLA .NET users:

"A day doesn't go by without me marveling at how elegant this CSLA is."

"Rocky, I learned so much from your books. They had enlightened me on true OO practice, including David West's Object Thinking that you recommend. Thank you."

"First, a huge thank you for an awesome business object framework. I've learned so much just by stepping through your code."

"Having single handedly completed a rather major WinForms app, which has now been in operation for over a year at several sites, I can tell you that you absolutely cannot go wrong choosing CSLA."

"I'd like to say that you did a great job on CSLA. I've found it very helpful in the creation of standardly developed and maintainable code among our teams."

Are there any Visual Studio 2005 project templates available for using the CSLA within VS2005?

CSLA includes a set of snippets and class templates for each object stereotype. Even more importantly, in my view, there are several code-generation templates available for CodeSmith and other code generators, and Kathleen Dollard's book on .NET code generation also includes a CSLA code generator. Additionally, Magenic has a very powerful and extensible code generator that we often use on CSLA projects.

While project and file templates offer some short-term productivity, properly implemented code generation provides increased productivity over the lifetime of the entire project and should be a requirement for any large initiative (CSLA based or not).

Thursday, September 7, 2006 9:31:32 AM (Central Standard Time, UTC-06:00)