Rockford Lhotka

 Monday, November 20, 2006
I am the track chair for Vista Live!, one of the sub-conferences of VS Live! San Francisco. This conference will be held March 25-29, 2007.

As track chair, it is my job to recruit speakers and help select sessions about software development with Windows Vista and .NET 3.0.

Windows Vista has some major impacts on software development. Perhaps most notably, having users (and developers) run in a non-Admin account affects how both development tools and end-user applications install and run. And then there are the new shell features, and integrated RSS support. Add to this .NET 3.0, with WCF, WPF, WF and WCS, and there's a lot of ground to cover.

If you would like to speak at VS Live in March, on a Windows Vista related topic, please use this online proposal submission form to submit your idea.

Monday, November 20, 2006 11:04:58 AM (Central Standard Time, UTC-06:00)
 Thursday, November 16, 2006

 CSLA .NET 2.1.1 is now available for download from the download page. This is a bug fix release, which addresses some issues in version 2.1. See the change log for details.

Thursday, November 16, 2006 4:04:09 PM (Central Standard Time, UTC-06:00)
 Tuesday, October 17, 2006
A while back I blogged about where I thought the EU was overreaching and harming consumers. I still think that is the case, but I also believe in giving credit where credit is due, so here goes.

If the analysis in this article about a standards-based XPS format is correct, then the EU really may be helping consumers - whether intentionally, or through an unintended consequence. If the article is right, and Microsoft follows through with the changes to XPS licensing, the EU might have created a serious competitor to PDF in the form of XPS.

In my view, it is almost certainly the case that Microsoft would have otherwise been too restrictive with the licensing, and XPS would have become a mere footnote in the long list of would-be PDF competitors. It isn't like Microsoft hasn't tried before to compete with PDF, and simply failed by being too closed. It looked to me like they were headed down the exact same road with XPS - until the EU intervened (ironically at the behest of Adobe).

The end result is that the EU might have (presumably unintentionally) increased competition for PDF, creating a more open market in which consumers benefit. The fact that Microsoft will likely gain indirect benefit will undoubtedly rankle many people. But frankly, those people have lost sight of the real goal, which is to ensure fairness to consumers, not fairness to Microsoft's competitors (or Microsoft itself for that matter).

Of course the proof is in the pudding. It does remain to be seen whether Microsoft follows through on their XPS promises. But if they do follow through, and we start seeing open source XPS readers for Windows, Mac and Linux, there really may be a viable alternative to the PDF monopoly of today.

Tuesday, October 17, 2006 10:12:15 AM (Central Standard Time, UTC-06:00)
Dunn Training is now offering the first ever formal training for CSLA .NET. The beta class runs November 20-22 in Chattanooga, TN. The advantage of the beta class is that it is heavily discounted, because it is the first public run of the materials - so this is the one chance to get CSLA .NET classroom training at a substantial discount.

The plan is for regular classes to start in early 2007.

I've been working with Miguel Castro as he builds the content for the course, and this training should be a great way to get rolling fast with CSLA .NET.

Tuesday, October 17, 2006 9:20:09 AM (Central Standard Time, UTC-06:00)
 Monday, October 16, 2006
I recently participated in a panel discussion on open source software, which was recorded as an MSDN Webcast.

Scott Hanselman even has a picture of the event.

Though people might argue whether CSLA .NET is really "open source" (because it has a non-viral license), it is certainly the case that its source code is open for all to see and use with almost no restrictions. If you listen to the panel, you'll find that there's a strong belief by many of us in this space that open source is an important and valuable part of the software industry; and you'll find that many of us also believe that custom software development should lead (directly or indirectly) to being able to make a living. This leaves a delicate middle ground, which can be challenging in many ways.

Monday, October 16, 2006 6:50:36 PM (Central Standard Time, UTC-06:00)
 Monday, October 9, 2006

For better or worse, SOA (service-oriented architecture) continues to be the current industry fad. As SOA continues along the “hype curve” (a term I’m borrowing from Gartner), more and more people are starting to realize that SOA isn’t a silver bullet, and that it doesn’t actually replace n-tier client/server or object-orientation.


What will most likely happen over the next couple of years is that SOA will fall into the “pit of disillusionment” (the part of the hype curve I think of as the “pit of despair”), and many people will decide, as a result, that it is totally useless. This will happen in no small part because some organizations are investing way too much money in SOA now, while it is overly hyped – and they’ll feel betrayed when “reality” sets in.


After a period of disrepute, SOA may then rise to a “plateau of productivity”, where it will finally be used to solve the problems it is actually good at solving.


Some technologies don’t live through the “despair” part of the process. Sometimes the harsh light of reality is too bright, and the technology can’t hold up. Other times, a competing technology or concept hits the top of its hype curve, derailing a previous technology. Over the next very few years, we’ll see if SOA holds up to the despair or not.


This is a pattern Gartner has observed for virtually all technologies over many, many years. If you think about the technologies introduced over the past 20 years or more, almost all of them have followed this pattern: over-hyped, then dismissed in an over-reaction to reality, and finally used as a real solution.


My colleague and mentor, David Chappell, recently blogged about some of the realities people are discovering as they actually move beyond the hype and try to apply SOA. It turns out, not surprisingly, that achieving real benefits in terms of reuse is much harder than the SOA evangelists would have anyone believe.


I think this is because SOA focuses on only one part of the problem: syntactic coupling. SOA, or at least service-oriented design and programming, is very much centered around rules for addressing and binding to services, and around clear definition of syntactic contracts for the API and message data sent to and from services.


And that’s all good! Minimizing coupling at the syntactic level is absolutely critical, and SOA has moved us forward in this space, picking up where EAI (enterprise application integration) left off in the 90’s.


Unfortunately, syntactic coupling is the easy part. Semantic coupling is the harder part of the problem, and SOA does little or nothing to address this challenging issue.


Semantic coupling refers to the behavioral dependencies between components or services. There’s actual meaning to the interaction between a consumer and a service.


Every service implements some tangible behavior. A consumer calls the service, thus becoming coupled to that service, at both a syntactic and semantic level. At the syntactic level, the consumer must use the address, binding and contract defined by the service – all of which are forms of coupling. But the consumer also expects some specific behavior from the service – which is a form of semantic coupling.
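To make the distinction concrete, here is a minimal Python sketch (the function names and the verification rules are invented purely for illustration): two versions of a service share an identical syntactic contract, yet a consumer that depends on the first version’s behavior silently breaks against the second.

```python
# Hypothetical example: two services with the SAME syntactic contract
# (name, parameters, return type) but DIFFERENT behavior.

def verify_card_v1(card_number: str, amount: float) -> bool:
    """Approves any 16-digit card with a positive amount."""
    return len(card_number) == 16 and amount > 0

def verify_card_v2(card_number: str, amount: float) -> bool:
    """Same signature, but now rejects amounts over a limit."""
    return len(card_number) == 16 and 0 < amount <= 5000

def consumer(verify) -> bool:
    # The consumer is syntactically compatible with both versions,
    # but it semantically assumes "any positive amount is approved".
    return verify("1234567890123456", 10_000.0)

print(consumer(verify_card_v1))  # True  - the semantic assumption holds
print(consumer(verify_card_v2))  # False - same syntax, broken semantics
```

No compiler, WSDL validator or binding check would catch the break in the second call: the coupling that failed is behavioral, not syntactic.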


And this is where things get very complex. The broader the expected behavior, the tighter the coupling.


As an example, a service that does something trivial, like adding two numbers, is relatively easy to replace with an equivalent. Such a service can even be enhanced to support other numeric data types with virtually no chance of breaking existing consumers. So the semantic coupling between a consumer and such a service is relatively light.


Another example is credit card verification. Obviously the internal implementation of this behavior is much more complex, but the external expectations of behavior remain very limited. Like adding two numbers, verifying a credit card is a behavior that accepts very little data, and returns a very simple result (yes/no).


Contrast this with many other possible business services, such as shipping an order, or generating manufacturing documentation. In these (quite common) scenarios, the service performs, or is expected to perform, a relatively broad set of behaviors. The result is a whole group of effects and side-effects – all of which should be considered as black-box effects by any caller. But the more a service does, the less “black-box” it can be to its callers, and the tighter the coupling.


And this leaves us in a serious quandary. There’s a high cost to calling a service. There’s a lot of overhead to creating a message, serializing it into text (XML), routing it through some communications stack onto the wire, getting the electrons across the wire through some protocol (probably TCP) and all the attendant hardware involved, picking it up off the wire on the server, routing it through another communications stack, deserializing the text (XML) back into a meaningful message and finally interpreting the message. Only then can the service actually act on the message to do real work.


Worse, that’s only half the story, because most people are creating synchronous request/response services, and so that whole overhead cost must be paid again to get the result back to the caller!
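This overhead is easy to demonstrate without any network at all. The following Python sketch (a rough simulation, not a real service stack) pays just the serialize/parse portion of the messaging cost on both the request and the response, and compares it to calling the same trivial operation directly:

```python
import time
import xml.etree.ElementTree as ET

def add(a, b):
    return a + b

def add_via_message(a, b):
    # Simulate the per-call messaging cost: build and serialize an XML
    # request, parse it on the "server", do the work, then serialize
    # and parse the response. No wire, no protocol - just the text cost.
    req = ET.tostring(ET.fromstring(f"<add><a>{a}</a><b>{b}</b></add>"))
    parsed = ET.fromstring(req)
    result = add(int(parsed.find("a").text), int(parsed.find("b").text))
    resp = ET.tostring(ET.fromstring(f"<result>{result}</result>"))
    return int(ET.fromstring(resp).text)

N = 10_000
t0 = time.perf_counter()
for _ in range(N):
    add(2, 3)
t1 = time.perf_counter()
for _ in range(N):
    add_via_message(2, 3)
t2 = time.perf_counter()

print(f"direct calls:    {t1 - t0:.4f}s")
print(f"message calls:   {t2 - t1:.4f}s")  # dwarfs the run cost of the add
```

Even this stripped-down version of the overhead swamps the actual work; a real service adds networking, security and routing on top of it.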


Before going further, let me expand on this “overhead cost” concept to be more precise.


I worked for many years in manufacturing. In that industry there’s the concept of cost accounting – people make their living at tracking costs. They divide costs into overhead, setup and run (there are other models, but this one’s pretty standard).


To make this somewhat more clear, I’ll use the metaphor of baking cookies.


Overhead costs are all the salaried people, the buildings, equipment and so forth: costs that are paid whether widgets are manufactured or not. When baking cookies, this is the cost of having a kitchen, a stove, electricity, natural gas, and of course the person doing the baking. In most homes these costs exist regardless of whether cookies are baked or not.


Setup costs are applied overhead. They are costs that are required to build a set of widgets, but they are only incurred when widgets are being manufactured. These costs include setting up machines, programming devices, getting organized, printing documents, etc. When baking cookies, this is the cost (in terms of time) of getting out the various ingredients, bowls, spoons and other implements. It is also the cost of cleaning up after the baking is done – all the washing, drying and putting-away-of-implements that follows. These costs are directly applied to the process, but are pretty much the same whether you bake one dozen or ten dozen cookies.


Run costs are those costs that are incurred on a per-widget basis to make a widget. This includes the hourly rate of the workers manning the assembly line, the materials that go into the widget and so forth. When baking cookies, this is the time spent by the baker, the cost of the flour, eggs and other ingredients consumed in the process. Ideally it would include the amount of electricity or natural gas used to run the stove as well. Obviously detailed run costs can be hard to determine in some cases!


When calculating the cost of your cookies, each of these three costs is added together. The run rate is easy, as it is per-cookie by definition. The setup rate is variable – the more cookies you make in a batch the lower the relative setup cost, and the fewer cookies the higher the relative setup cost. Overhead is typically aggregated – the annual overhead cost is known, and is divided by the number of cookies (and other things) made over a year’s time. Obviously there’s lots of wiggle room in this last number.
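The arithmetic is simple enough to sketch in a few lines of Python. The dollar figures here are invented purely for illustration, but they show how batch size drives the setup component while run and (aggregated) overhead stay flat:

```python
def cost_per_cookie(batch_size,
                    run_cost_each=0.25,        # ingredients + baker time
                    setup_cost_per_batch=6.00, # get out / wash / put away
                    annual_overhead=1200.00,   # kitchen, stove, utilities
                    cookies_per_year=4800):
    # All dollar amounts are hypothetical, chosen only for illustration.
    overhead_each = annual_overhead / cookies_per_year  # aggregated
    setup_each = setup_cost_per_batch / batch_size      # shrinks per batch
    return run_cost_each + setup_each + overhead_each

print(f"{cost_per_cookie(12):.2f}")   # 1.00 - small batch: setup dominates
print(f"{cost_per_cookie(120):.2f}")  # 0.55 - big batch: setup nearly vanishes
```

The run cost per cookie never changed; only the amortization of the setup cost did.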


For my purposes, in discussing services, the overhead rate isn’t all that meaningful. In our industry this is the cost of the IT staff, the servers, the server room, electricity and cooling and so forth.


But the setup rate and run rate become very meaningful when talking about services.


Calling a service, as I noted earlier, incurs a lot of overhead. This overhead is relatively constant: you pay about the same whether you send 1 byte or 1024 bytes to or from the service.


The run rate is the actual work done by the service. Once the message is parsed and available to the service, then the service does real, valuable work. This is the run rate for the service.


In manufacturing it is always important to manage the overhead and setup costs – they are a “pure cost”. The run rate cost must also be managed, but it is directly applicable to a product, and so that cost can be factored into the price you charge. Perhaps more importantly, your competitors typically have a comparable run rate (materials and labor cost about the same), but the overhead can vary radically.


To switch industries just a bit, this is why Walmart does so well (and is so feared). They have managed their overhead and setup costs to such a degree that they actually do focus on reducing their run rate (in their case, the per-unit acquisition cost of items).


Coming back to services, we face the same issue. Typically we deal with this using intuition rather than thinking it through, but the core problem is very tangible.


Would you call a service to add two numbers? Of course not! The setup/overhead cost would outweigh the run cost to such a degree that this makes no sense at all.


Would you call a service to ship an order, with all the surrounding activities that implies? This makes much more sense. The setup/overhead cost becomes trivial when compared to the run cost for such a service.


And yet coupling has the opposite effect. Which of those services can be more loosely coupled? The addition service of course, because it performs a very narrow, discrete, composable behavior.
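The trade-off in the preceding paragraphs can be reduced to a rule of thumb: only make the remote call when the real work dwarfs the messaging overhead. A tiny Python sketch, with an entirely invented threshold:

```python
def worth_a_service_call(setup_cost_ms, run_cost_ms, threshold=10.0):
    # Rule of thumb (the threshold is an assumption, not a standard):
    # make a remote call only when the run cost exceeds the per-call
    # setup cost by a comfortable margin.
    return run_cost_ms / setup_cost_ms >= threshold

# Add two numbers: microseconds of work against milliseconds of overhead.
print(worth_a_service_call(setup_cost_ms=50, run_cost_ms=0.001))  # False

# Ship an order: seconds of real work make the overhead trivial.
print(worth_a_service_call(setup_cost_ms=50, run_cost_ms=5000))   # True
```

Note the irony the surrounding paragraphs point out: the call that passes this economic test is exactly the one with the broadest behavior, and therefore the tightest semantic coupling.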


Do you even know what the ship-an-order service might do? Of course not, it is too big and vague. Will it trigger invoicing? Will it contact the customer? Will it print pick lists for inventory? Will it update the customer’s sales history?


I would hope it does all these things, but very few of us would be willing to blindly assume it does them. And so we are forced to treat ship-an-order as something other than a black box. At best the box is gray; more likely it is downright transparent. We’ll require that the service’s actual behaviors be documented. And then we’ll fill in the gaps for what it does not provide, or doesn’t provide in a way we like.


(Or, failing to get adequate documentation, we’ll experiment with the service, probing to find its effects and side-effects and limitations. And then we’ll fill in the gaps for the bits we don’t like. Sadly, this is the more common scenario…)


At this point we (the caller of the service) have become so coupled to the service, that any change to the service will almost certainly require a change to our code. And at this point we’ve lost the primary goal/benefit of SOA.


Why? How can this be, when we’re using all the blessed standards for SOA communication? Maybe we’re even using an Enterprise Service Bus, or BizTalk Server or whatever the latest cool technology might be. And yet this coupling occurs!


This is because I am describing semantic coupling. Yes, all the cool, whiz-bang SOA technologies help solve the syntactic coupling issues. But without a solution to the semantic, or behavioral, coupling it really doesn’t get us very far…


What’s even scarier is that the vision of the future portrayed by the SOA evangelists is one where we build services (systems) that aggregate other services together to provide higher-level functionality. Like assembling simple blocks into more complex creations, that in turn can be assembled into more complex creations or used as-is.


Except that each level of aggregation creates a service that provides broader behaviors – and by extension tighter coupling to any callers (though the setup vs run costs become more and more favorable at the top level).


To bring this (rather long) post to a close, I want to return to the beginning. SOA is heading down the steep slope into the pit of disillusionment. You can head this off for yourself and your organization by realizing ahead of time, right now, that SOA only addresses syntactic issues. You must address the much harder semantic issues yourself.


And the tools exist. They have for a long time. Good procedural design, use of flow charts, data flow diagrams, control diagrams, state diagrams: these are all very valid tools that can help you manage the semantic coupling. Unfortunately the majority of people with expertise in these tools are nearing retirement (or have retired) – but the tools and techniques are there if you can find some old, dusty books on procedural design. Just remember to include the setup/overhead cost vs run cost in your decisions on whether to make each procedure into a "service".


SOA solves some serious and important issues, but it is overhyped. Fortunately the hype is fading, and so we can look forward (perhaps 18 or 36 months) to a time when we can, with any luck, start focusing on the “next big thing”. Maybe, just maybe, that “big thing” will be some new and interesting way of addressing semantic coupling.

Monday, October 9, 2006 5:07:08 PM (Central Standard Time, UTC-06:00)
 Saturday, September 30, 2006

CSLA .NET version 2.1 is now available for download from the download page.

Please make sure to read the Change log for this release, as there are substantial changes to validation, authorization and other parts of the framework. There are a limited number of breaking changes that may affect your code, so please review the document before attempting to upgrade from 2.0 to 2.1.

The vast majority of the changes in version 2.1 are due to the community feedback from the great members of the online forum. I would like to thank everyone who provided feedback and input into the process!

Saturday, September 30, 2006 11:20:17 AM (Central Standard Time, UTC-06:00)
 Friday, September 29, 2006

This post is in response to some comments in my previous post, which linked to Steve Yegge’s blog entry about Good Agile and Bad Agile.

One commenter, JH, makes a perfectly valid observation: criticizing Waterfall and Agile without offering an alternative isn’t very productive. So…

Actually, truth be known, I used Waterfall and several variations of it for many years - with success. We collected requirements, analyzed them, did our design, our development, testing - all the stuff you are supposed to do. And you know, the projects worked out pretty darn well.

Why? I don't honestly think it was Waterfall that did it. It was the fact that I was working with a motivated team of reasonably talented developers who really cared about doing a good job. In order to do a good job building software it is so amazingly clear that you need to understand the need (requirements/analysis), translate them to software (design/development) and make sure the result actually works (testing).

MSF, by and large, is just Waterfall in a spiral, with some different documents and techniques. Agile is just Waterfall in random order, and with a few different techniques and tools. In the end it all comes down to gathering requirements, analyzing them, translating that into a design, doing some coding and some testing and away you go. You can rearrange the pieces on the board, but they are the same pieces.

Every consulting company I’ve worked for has had its own Methodology (or Framework, or whatever they chose to call it). Each of them was some merger of concepts, techniques and tools from Waterfall, MSF, RUP, Agile and various other Methodologies with which the creators were familiar. And each of them worked just fine.

The primary issue I have with Methodology (capital “M”) is that following the Methodology ceases being a means to an end, and becomes blind Doctrine. The steps in the process cease being tools used for a reason, and become mere Ritual.

“Why are you doing X?”

“Because the Methodology said to!”

“But there’s a better approach. If you just used this technique …”

“You aren’t questioning the infallibility of our Methodology are you? This Methodology was written by the Great PooBah, and has been handed down unchanged through time!”

“What? No, I was just suggesting…”

“HERETIC! Burn him at the stake!”

If you can somehow keep your Methodology from becoming Doctrine and Ritual, then you can be successful – with almost any Methodology, I think. Unfortunately it seems that human nature has an intrinsic desire to conform to Doctrine and Ritual – you can see it in the dogmatic positions of so many Methodologists out there.

As Clemens points out, within a given team, assuming reasonably sized teams, I think you can pick almost any Methodology and do quite well. With the right people, even the Cowboy approach works great in most cases, and it is quite clear that Agile can work fine too. So does Waterfall by the way.

But across teams? You must have some level of formal structure. Some level of formalized communication and process and coordination and authority. I know, that all sounds very ugly, and not at all fun, but it is unavoidable. Conflicting schedules, overlapping dependencies and competing political interests require a higher level of structure in these cases.

Traditional Waterfall presupposes lengthy blocks of time for each step, and isn’t practical in today’s world – even (especially?) across teams. However, a modified Waterfall, along the lines of the MSF spiral, works really well in my experience. Assembling a cross-project team, composed of representatives from the more tactical teams (regardless of their Methodology of choice) for the purposes of coordination and mediation is absolutely critical.

So my “Methodology” of choice? At a team level, pick the techniques and tools that work within your specific environment, with your specific team members – with all their varied talents and capabilities. Take the best from Waterfall, MSF, RUP, Agile and anything else you can find. Some techniques work in one setting and fail horribly in another.

At the level of a large project, or cross-project effort, some variation of Waterfall or one of its modern descendants really is, in my view, the right answer. Few other Methodologies provide sufficient structure and formalization of process to make such efforts manageable.

Friday, September 29, 2006 8:28:48 AM (Central Standard Time, UTC-06:00)
 Thursday, September 28, 2006

A colleague of mine (customer of Magenic) forwarded me a link to Steve Yegge's wonderful blog post about Agile methodologies (good and bad).

I think my favorite bit in the whole thing (and there are a lot of good bits!) is this:

Bad Agile seems for some reason to be embraced by early risers. I think there's some mystical relationship between the personality traits of "wakes up before dawn", "likes static typing but not type inference", "is organized to the point of being anal", "likes team meetings", and "likes Bad Agile". I'm not quite sure what it is, but I see it a lot.

Like Steve, I am not a morning person, and this probably explains my general antipathy towards Agile Methodologies.

This antipathy is something I've struggled with at length.

I am a strong believer in responsibility-driven, behavioral, CRC OO design - and that is very compatible with the concepts of Agile. So how can I believe so firmly in organic OO design, and yet find Agile/XP/SCRUM to be so...wrong...?

I think it is because the idea of taking a set of practices developed by very high-functioning people, and cramming them into a Methodology (notice the capital M!), virtually guarantees mediocrity. That, and some of the Agile practices really are just plain silly in many contexts.

Rigorous unit testing? Slam dunk!
Writing the tests before writing your code? Totally optional.

Pair programming for intense, multi-threaded framework code? Absolutely!
Pair programming to retrieve a DataSet using ADO.NET? Yeah right…

See that’s the thing. Formalize a set of practices into a Methodology and the practices lose their meaning. Each practice in Agile really is good – in a specific place and for a specific purpose. But when wrapped in a Methodology they become “good at all times and in all places” – at least to most practitioners.

This isn’t Agile’s fault by the way. I’m not entirely convinced that Waterfall Methodology isn’t rooted in a set of perfectly good practices, which were subverted and corrupted by becoming a Methodology.

Nah, on second thought Waterfall really is just plain bad.

Thursday, September 28, 2006 2:41:01 PM (Central Standard Time, UTC-06:00)
 Tuesday, September 26, 2006
Dunn Training continues to ready its official CSLA .NET training course, with plans for a beta of the course to be taught in November.

If you are in need of some classroom training on CSLA .NET, November is your first chance, probably followed by a class in January.
Tuesday, September 26, 2006 8:33:13 PM (Central Standard Time, UTC-06:00)