Rockford Lhotka

 Friday, September 22, 2006
Thanks to a lot of work by Chris Russi, there is now an official CSLA .NET logo graphic:

and a corresponding "Powered by" logo for web sites or applications that are built on CSLA .NET:

Note that the CSLA .NET license does not require any public display of this logo! But if you would like to use it to let people know that your application or site is built using CSLA .NET, that's wonderful! :)

Chris very kindly created a range of different sizes, and you can download the entire set in one zip file.
Friday, September 22, 2006 3:05:34 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
Jason Bock, a colleague of mine at Magenic, is organizing a code camp in the Twin Cities for November 11, 2006. Click here for details.

I'll be speaking at the event, though I haven't decided on a topic yet.
Friday, September 22, 2006 8:51:51 AM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Tuesday, September 12, 2006

I just updated the download page to make the BETA version of CSLA .NET 2.1 available.

Make sure you read the Change Log document before going too far. I've tried to list all breaking changes as a summary at the top, with my estimate on the likely impact of each one. You can find more details further into the document.

There will be no more functional changes to 2.1, so this download is complete. There obviously could be bugs, but enough people have been helping me find issues with the last couple pre-releases that I think it is pretty stable at this point.

Barring anyone finding a major issue, I am still planning to release 2.1 by the end of September, so if you have any near-term plans to use 2.1 I strongly recommend trying this beta version on a non-production copy of your project over the next 10 days or so. That will allow you to provide me with any bug reports in enough time that I can try and address them.

Tuesday, September 12, 2006 8:18:19 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Thursday, September 7, 2006

I get many questions about CSLA .NET, and whether it is a good fit for various projects and needs. Many times I put the answers on my web site - building a FAQ of sorts. Here are some links to good articles there, and a copy of a Q&A I recently did for a prospective Magenic client.

What is CSLA .NET?

CSLA .NET technical requirements

Why move from CSLA .NET 1.x to 2.x?

CSLA .NET Roadmap

CSLA .NET 2.0 download page (which includes a summary of breaking changes)

CSLA .NET 2.0 change log (detailed list of changes from version 1.5)

Now for the recent Q&A:

What are the drawbacks to using the CSLA?

This question is too vague to be answered directly. No framework is perfect for all types of application, and CSLA .NET is no exception.

It is not particularly useful for creating non-interactive batch processing code - the .NET 3.0 WF technology is a better bet there (though the activities in a WF can be created using CSLA, and that's a great model that the WF team really likes).

It is also not particularly useful for reporting. A good reporting tool like SQL Reporting Services is the right answer.

CSLA does require the use of good OO design. While the DataSet works well (best?) with VB3-style code-in-the-form architecture, CSLA really requires the application of OO design to build a true business layer. For some developers this can be a hurdle, because they are not used to doing OO design.

CSLA also has some technical requirements. Depending on the transaction model you use, the DTC may be required. In all cases CSLA requires FullTrust code access security, so it can't be used in a partial trust environment (like on most commercially hosted web servers).

Where do the CSLA and the Enterprise Library overlap?

They don't. They are complementary; see this article for details.

What is the roadmap for the CSLA?

CSLA .NET is a framework for creating a reusable, object-oriented business layer for .NET applications. As such, it continues to evolve based on the evolution of the .NET platform. This means its roadmap generally follows Microsoft's lead, with some exceptions around enhancements to CSLA itself. At the moment the roadmap looks like this:

Version 2.0      - available now for .NET 2.0 and VS 2005

Version 2.1      - available Q3-06 with performance and feature enhancements

Version 3.0 (?) - available Q1-07 with full support for .NET 3.0 (WCF, WPF, WF)

Version 3.x      - available Q?-07 with support for .NET 3.5 (LINQ technologies)

Obviously the 3.x schedules are subject to change based on Microsoft's schedule.

(updated road map can be found here)

What is the model for integrating the CSLA with .NET 3.0\Windows Communication Foundation?

I am working closely with the Connected Systems group in Redmond on this, and related technologies. Additionally, when I build CSLA .NET 2.0 I specifically designed it to allow for near-seamless integration of WCF.

Right now you can get an early version of a data portal channel for WCF from here.

This allows existing applications using Remoting, Web Services or Enterprise Services to transition to WCF with NO CODE CHANGES. This is a very compelling feature of CSLA in my view.

Unfortunately that's not the whole story, and that's why a new version of CSLA will be required in the .NET 3.0 timeframe. CSLA uses the BinaryFormatter for some scenarios (n-level undo, cloning). To support the DataContract model, the NetDataContractSerializer is required instead, so the formatter needs to be pluggable. Again, I've been working with the PM who owns serialization, and this is not a major change to CSLA; other than a new configuration switch, this change should have NO IMPACT ON ANY APPLICATION CODE.
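The pluggable-formatter idea can be sketched in a few lines. This is only an illustration of the approach, not the actual CSLA implementation, and the "CslaSerializationFormatter" setting name is an assumption for the example. Conveniently, both BinaryFormatter and NetDataContractSerializer implement IFormatter, so the rest of the framework can work against that one interface:

```csharp
using System.Configuration;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;

// Illustrative sketch only - the config setting name is hypothetical.
// Both formatters implement IFormatter, so callers (n-level undo,
// cloning) never need to know which one was chosen.
public static class SerializationFormatterFactory
{
  public static IFormatter GetFormatter()
  {
    string setting =
      ConfigurationManager.AppSettings["CslaSerializationFormatter"];
    if (setting == "NetDataContractSerializer")
      return new NetDataContractSerializer();
    else
      return new BinaryFormatter();
  }
}
```

Because the factory returns IFormatter, switching serializers really is just a configuration change - which is the whole point.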

How is the CSLA supported in production environments?

CSLA is not a "product". I don't sell it, nor do I sell support for it. CSLA is part of my Expert VB/C# 2005 Business Objects books, and is covered under a very liberal, essentially open-source, license.

As such, support in a production environment is the responsibility of the organization using the framework.

What is the typical model for extending\enhancing the CSLA so that one can take advantages of new versions and migrate the custom enhancements easily to the new CSLA runtime?

CSLA is designed to include numerous extensibility points, allowing organizations to customize the behavior of key functionality without altering the core framework. This is most often handled through inheritance, where an organization inherits from a CSLA base class to create their own custom base class that extends or alters the core CSLA behavior. In other cases a provider model is used to allow organizations to replace pre-supplied providers with their own custom providers.
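As a concrete (and hypothetical) illustration of the inheritance approach: an organization might insert its own base class between CSLA's BusinessBase<T> and its business objects. The audit hook below is invented for the example, not part of CSLA:

```csharp
using System;
using Csla;

// Hypothetical custom base class - the audit behavior is invented
// for illustration; a real one would add whatever common behavior
// the organization needs, without touching the framework itself.
[Serializable]
public abstract class CompanyBusinessBase<T> : BusinessBase<T>
  where T : CompanyBusinessBase<T>
{
  public override T Save()
  {
    // Assumed organization-specific behavior before every save.
    Console.WriteLine("Saving " + this.GetType().Name);
    return base.Save();
  }
}

// Business classes then inherit the custom base instead of CSLA's:
[Serializable]
public class Customer : CompanyBusinessBase<Customer>
{
  private int _id;

  protected override object GetIdValue()
  {
    return _id;
  }
}
```

When a new CSLA version ships, only the custom base class needs review; the business classes above it are insulated from the change.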

What Application Service Provider and\or 24x7x365 customer references exist for CSLA implementations?

Magenic can provide various case studies around their implementation of projects using CSLA .NET.

For my part, since I don't sell CSLA, I can't fund the process necessary to go through the legal hoops at most organizations to release their names as references. I do maintain this list of organizations who use CSLA. I can say that the framework is used in a wide range of applications, from small to very large and mission critical.

Also, here are some unsolicited quotes from CSLA .NET users:

"A day doesn't go by without me marveling at how elegant this CSLA is."

"Rocky, I learned so much from your books. They had enlightened me on true OO practice, including David West's Object Thinking that you recommend. Thank you."

"First, a huge thank you for an awesome business object framework. I've learned so much just by stepping through your code."

"Having single handedly completed a rather major WinForms app, which has now been in operation for over a year at several sites, I can tell you that you absolutely cannot go wrong choosing CSLA."

"I'd like to say that you did a great job on CSLA. I've found it very helpful in the creation of standardly developed and maintainable code among our teams."

Are there any Visual Studio 2005 project templates available for using the CSLA within VS2005?

CSLA includes a set of snippets and class templates for each object stereotype. Even more importantly, in my view, there are several code-generation templates available for CodeSmith and other code generators, and Kathleen Dollard's book on .NET code generation also includes a CSLA code generator. Additionally, Magenic has a very powerful and extensible code generator that we often use on CSLA projects.

While project and file templates offer some short-term productivity, properly implemented code generation provides increased productivity over the lifetime of the entire project and should be a requirement for any large initiative (CSLA based or not).

Thursday, September 7, 2006 9:31:32 AM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Tuesday, September 5, 2006

Several people have asked me about my thoughts on the Microsoft .NET 3.0 Workflow Foundation (WF) technology.


My views certainly don’t correspond with Microsoft's official line. But the “official line” comes from the WF marketing team, and they'll tell you that WF is the be-all-and-end-all, and that's obviously silly. Microsoft product teams are always excited about their work, which is good and makes sense. We all just need to apply an "excitement filter" to anything they say, bring it back to reality and decide what really works for us. ;)


Depending on who you talk to, WF should be used to do almost anything and everything. It can drive your UI, replace your business layer, orchestrate your processes and workflow, manage your data access and solve world hunger…


My view on WF is a bit more modest:


Most applications have a lot of highly interactive processes - where users edit, view and otherwise interact with the system. These applications almost always also have some non-interactive processes - where the user initiates an action, but then a sequence of steps are followed without the user's input, and typically without even telling the user about each step.


Think about an inventory system. There's lots of interaction as the user adds products, updates quantities, moves inventory around, changes cost/price data, etc. Then there's almost always a point at which a "pick list" gets generated so someone can go into the warehouse and actually get the stuff so it can be shipped or used or whatever. Generating a pick list is a non-trivial task, because it requires looking at demand (orders, etc.), evaluating what products to get, where they are and ordering the list to make the best use of the stock floor personnel's time. This is a non-interactive process.


Today we all write these non-interactive processes in code. Maybe with a set of objects working in concert, but more often as a linear or procedural set of code. If a change is needed to the process, we have to alter the code itself, possibly introducing unintended side-effects, because there's little isolation between steps.


Personally I think this is where WF fits in. It is really good at helping you create and manage non-interactive processes.


Yes, you have to think about those non-interactive processes in a different way to use WF. But it is probably worth it, because in the end you'll have divided each process into a set of discrete, autonomous steps. WF itself will invoke each step in order, and you have the pleasure (seriously!) of creating each step as an independent unit of code.


From an OO design perspective it is almost perfect, because each step is a use case, that can be designed and implemented in isolation - which is a rare and exciting thing!


Note that getting to this point really does require rethinking of the non-interactive process. You have to break the process down into a set of discrete steps, ensuring that each step has very clearly defined inputs and outputs, and the implementation of each step must arbitrarily ensure any prerequisites are met, because it can't know in what order things will eventually occur.
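To make that concrete, here is a minimal, framework-neutral sketch (not actual WF code) of what one such step might look like: explicit inputs and outputs, plus a prerequisite check, because the step cannot assume anything about execution order:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of one discrete, autonomous step in a
// non-interactive process (generating a pick list). Inputs and
// outputs are explicit, and the step validates its own
// prerequisites since it can't know when the host will invoke it.
public class GeneratePickListStep
{
  private List<int> _orderIds;     // input: orders with demand
  private List<string> _pickList;  // output: ordered bin locations

  public List<int> OrderIds
  {
    get { return _orderIds; }
    set { _orderIds = value; }
  }

  public List<string> PickList
  {
    get { return _pickList; }
  }

  public void Execute()
  {
    // Prerequisite check: fail fast if the input was never supplied.
    if (_orderIds == null || _orderIds.Count == 0)
      throw new InvalidOperationException(
        "OrderIds must be set before Execute");

    _pickList = new List<string>();
    foreach (int id in _orderIds)
      _pickList.Add("BIN-" + id);  // placeholder for real routing logic
  }
}
```

Because each step is a self-contained unit like this, it can be designed, implemented and tested in isolation - and reordered by the workflow without code changes.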


The great thing (?) about this design process is that the decomposition necessary to pull it off is exactly the same stuff universities were teaching 25 years ago to COBOL and FORTRAN students. This is procedural programming "done right". To me though, the cool thing is that each "procedure" now becomes a use case, and so we're finally in a position to exploit the power of procedural AND object-oriented design and programming! (and yes, I am totally serious)


So in the end, I think that most applications have a place for WF, because most applications have one or more of these non-interactive processes. The design effort is worth it, because the end result is a more flexible and maintainable process within your application.

Tuesday, September 5, 2006 11:24:24 AM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Thursday, August 10, 2006
Yesterday I recorded two more DNR TV shows on CSLA .NET with Carl Franklin, so watch for those to go online in the next few weeks. Carl does a nice job of editing the recordings and cleaning up the audio, so it takes some time between recording and "airing", but it is well worth it!

Thursday, August 10, 2006 1:57:00 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Saturday, August 5, 2006

People often ask me how business is going, or how Magenic is doing. I answer “good”. But that’s not really true.


It turns out that it is better than good. I’ve been in consulting, one way or another, for nearly 12 years now and there are a couple times of year when things always slow down: late summer and the end-of-year holidays.


Well this summer, for the first time in my recollection, things have gotten busier, not slower!


So I’m posting this blog entry in an attempt to address Magenic’s primary issue right now: finding good .NET development resources. Yup, we just can’t find enough people with good .NET experience. This includes regular Web and Windows .NET development, middle-tier work and SQL Server. But it also includes Biztalk Server and SharePoint Server as well. Any peripheral technologies like content management are always welcome too.


I’ve been with Magenic for 6 years as of this month, and I’ve never regretted my choice to work for the company. Greg and Paul (the owners) have steered the company through the dot-com bubble and its subsequent crash with admirable skill. And while no one could say 2002-4 were fun, I can honestly say that I felt more secure at Magenic than my friends at other companies were feeling at the time.


Sure, consulting is consulting. If you want to come work for Magenic you need to realize right up front the realities of being a consultant. But given that, I find it hard to imagine a better consulting company to work for, especially if your focus is around Microsoft and .NET.


One of Magenic’s core tenets is to try and find “cool work”. Obviously that’s not always possible, because each individual defines “cool” differently, and there are business realities around consulting that simply can’t be ignored. But just the fact that this is one of the company’s core tenets says a lot!


So here’s the deal. If you live (or would like to live) in the Boston, Atlanta, Chicago, Minneapolis or San Francisco areas, and if you’ve got .NET skills, and if you want to escape the politics of IT in exchange for the life of a consultant then this is an excellent time to see if Magenic is the right place for you.


If you are interested, just send me an email and I’ll forward it on to one of our recruiters (kind of a fast-track offer :) ). Or you can contact recruiting directly through the web site.

Saturday, August 5, 2006 2:17:13 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Wednesday, August 2, 2006

In a previous post I made a reference to the EU anti-trust case. Stefan asked me to clarify my statement:


Could you please explain more detailed what you mean by "will likely do serious harm to European consumers"? As I know the European antitrust suit against Microsoft is to strengthen consumer rights. The charge against Microsoft is that they use their monopoly in OS and undocumented functions in their OS to push their own products. Why should it do harm to European consumers if Microsoft has to offer an OS without Internet Explorer and without Mediaplayer, or to open undocumented functions?


Stefan, I realize that the purported purpose of the anti-trust action is to strengthen consumer rights. Yet for the most part, the real result is to protect the "rights" of Microsoft's competitors, not consumers. And perhaps you can't do one without the other - that could be the case.


But consider this. In the US anti-trust findings, Judge Jackson was very clear that computer operating systems are a "natural monopoly". That market forces exist that have, and will, create a monopoly in this space. You can read the findings yourself, just search for the term “monopoly” and you’ll get close to the relevant section.


The reason for the natural monopoly is sound: it is very expensive for organizations to support varied platforms. Thus they standardize on as few installation sets as possible - preferably one client and one server configuration. So an organization will standardize on Windows XP, or on Windows XP without IE or Media Player, but not both.


But then the effect spreads, and this is where consumers get hurt. ISVs build software for the broadest market segment, because that is the most cost-effective approach. So they look at the market and see that the majority of systems are real Windows XP and so they build for that. Only if there's enough outcry from the XPN users will they port their software to run on XPN. The XPN user base needs to be large enough to fund the cost of porting, plus the normal profit the ISV would have made.


This causes a feedback loop. Users of XPN always feel like second-class citizens, because they always get software later than real XP users. So when they purchase their next OS, they are more likely to buy real XP rather than the version that gets less ISV support.


People don't buy computers for the OS after all - they buy computers for the apps they want to run, and those apps come from ISVs.


It is this same effect that causes comparatively slow adoption of the Mac or of Linux as a desktop. Compared to Windows, there’s a dearth of real applications for these other platforms, and the reason is economic.


So “getting rid of” the monopoly isn’t a viable goal. The only realistic approach is to manage the monopoly, because destroying it would merely result in some other company rising to take its place – purely due to these economic factors.


With that background, I’ll explain why I think the EU’s actions are likely to hurt consumers – just like some of the proposed US actions from 6 years ago would almost certainly have hurt consumers.


Restricting the inclusion of the IE and Media Player user interfaces is, I think, immaterial. But the EU is restricting Microsoft from including the underlying components on which they are built. These underlying components are also available to ISVs, and most of us consider them simply part of the Windows operating system.


So Windows XP N, and future crippled versions of Windows mandated by the EU, merely cause an artificial split in the operating system - one which any sane organization would ignore, because it is very clearly out of the mainstream (see the natural monopoly discussion above).


Of course the reality is that only a very few applications rely on the API exposed by Media Player’s libraries, so that isn’t such a big deal today. More applications rely on IE libraries and that can be more problematic.


The bigger issue is the precedent, because further bifurcation of the OS over time will merely make Windows Vista N (or whatever) more marginal.


But still, where’s the harm to consumers?


  1. There’s harm to the uneducated people who are naïve enough to buy the government-mandated, crippled versions of the OS. They’ll find themselves unable to run certain apps and won’t know why.
  2. There’s harm to all of us, because it costs Microsoft money to build these OS variants – and to test them, and to test all their other products against them (like Money, Visual Studio, SQL Server Express, and on and on – they all must be tested against this crippled version of the OS along with the real OS). Someone must pay for this development and testing.

    Microsoft has been criticized for selling XPN at the same price as XP. Really they should charge more for it, because it is merely a cost to Microsoft. The EU prevents that (as they need to), so it is sold at the same price. If Microsoft were to sell XPN at a lower price, they’d need to raise the price of regular XP to subsidize XPN in order to maintain the same profit margin they make today.

    Software is fundamentally different from physical goods like a car – having fewer options is not cheaper, it is more expensive…
  3. There’s harm to consumers of 3rd party apps that use any features missing from the crippled versions of the OS. They must spend extra time and money compensating for the lack of those features – either through installer UI support to get the user to reinstall the missing code, or by rewriting their app to use a non-OS equivalent API that they can install themselves. This extra cost to the ISV ultimately gets passed along to consumers as a higher price.

    Of course this presupposes that the XPN market is big enough for the ISV to actually care. If not, we’re back to the economic factors causing the natural monopoly in the first place. And even if the XPN market is big enough, XPN users will likely get their software later, and with less convenience during install and so they’ll be less likely to buy XPN in the future – again reinforcing the natural monopoly.


As a totally separate issue, the EU is also following the US courts in mandating that Microsoft release better documentation of its APIs. I don’t necessarily believe that this harms consumers (with one exception, so read on).


I also am not entirely convinced that forcing the release of documentation helps consumers either – at least not nearly as much as it helps Microsoft’s competitors. Only time will tell on this point.


Since browsers and media players are free, there can be no help for consumers in these spaces; at least from a cost-of-acquisition perspective. It may be that some competitor will create software that is more capable, easier to use, cheaper to maintain/support or something like that – in which case I’ll happily acknowledge a benefit to consumers.


But to date, it is very hard to find third party software that integrates as well as Microsoft’s set of software. Just try to copy-paste from Word into a rich text editor in Firefox as compared to IE to see what I mean… Both work, but IE tends to work substantially better.


Of course you can’t blame third parties for limiting their coupling to Microsoft’s APIs. And this is where things (to me) get puzzling. I work very hard to limit my exposure to Microsoft technologies, and I try to only use their APIs at the surface level. Why? Because Microsoft owns those APIs and changes them over time – and I don’t want to absorb the cost of compensating every time they change something.


This is why CSLA .NET, for instance, abstracts things like Enterprise Services and System.Transactions and Remoting and Web Services. Because these are all fluid APIs, and tightly coupling your business application to those APIs is costly. CSLA .NET offers a buffer between them, and the business code we all create and maintain.
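To illustrate the buffering idea - and this is a simplified sketch, not CSLA's actual data portal code - business code can declare its intent with an attribute, while a small dispatcher maps that intent onto System.Transactions (or could map it onto Enterprise Services) without the business method ever touching either API directly:

```csharp
using System;
using System.Reflection;
using System.Transactions;

// Simplified illustration of the buffering concept - not CSLA's
// actual implementation. Business code only sees the attribute;
// the choice of transaction technology lives in the dispatcher.
[AttributeUsage(AttributeTargets.Method)]
public class TransactionalAttribute : Attribute { }

public static class TransactionalDispatcher
{
  public static void Invoke(object target, string methodName)
  {
    MethodInfo method = target.GetType().GetMethod(methodName);
    if (method.IsDefined(typeof(TransactionalAttribute), false))
    {
      // The API choice is isolated here; swapping in Enterprise
      // Services later would not touch any business code.
      using (TransactionScope scope = new TransactionScope())
      {
        method.Invoke(target, null);
        scope.Complete();
      }
    }
    else
    {
      method.Invoke(target, null);
    }
  }
}
```

When Microsoft changes the underlying API, only the dispatcher absorbs the cost - the business layer on top of it stays untouched.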


This, I think, is also why Firefox has limited copy-paste capabilities when compared to IE. It is quite likely that they’d need to take a deeper dependency on a Windows API to have comparable functionality, and that’d make them vulnerable to Microsoft changing things over time.


And so here we arrive at the one possible cost to consumers of Microsoft providing more open documentation of its deeper APIs. Documentation doesn’t equate to stability. Just because they are documented doesn’t mean they stop changing. Any software that does accept a dependency on these APIs is even more subject to change than normal software – and we all know the pain of API changes to normal software.


So yes, third party apps may become more integrated than before – but they’ll almost certainly cost more, because that integration (coupling) has a high cost to the ISV creating the software, and they’ll need to pass that along to consumers.


My guess is that very few ISVs will actually use these deeper APIs. These companies are out to make a profit too, and so they’ll weigh the cost of coupling/integration against the benefit of increased sales and determine, case by case, whether they can increase sales (or prices) enough to justify the costs of tighter coupling and its attendant vulnerability to change.


Yes, this is a weaker argument – and personally I think the more open Microsoft is in documenting its APIs the better. My real point here is that it isn’t just all goodness and light. There is a serious tradeoff to actually using the APIs that can’t be ignored – and which, I think, will ultimately negate any “benefit to consumers”.


So if you’ve stuck with me this far, I’ll summarize: I think the EU went too far by forcing the creation of a crippled OS. I think both the US and EU courts are doing a fine thing by forcing Microsoft to document more of its APIs, but I doubt that will actually help consumers as much as a few ISVs and some competitors.


I haven’t even touched the OEM market, and what the courts have changed there – generally for the betterment of consumers. With luck Microsoft is really learning to live within these boundaries (they have a 12 step program after all :) ), and consumers will continue to see benefits in this area going forward.


So to close, my primary criticism of the EU is around mandating the creation and support of a crippled OS. There is no upside for consumers there, just extra cost and pain.

Wednesday, August 2, 2006 10:52:43 AM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Tuesday, August 1, 2006

Maintaining CSLA .NET, along with the ProjectTracker sample app, in both VB and C# is challenging. I've blogged before about how tedious it is converting C# to VB or vice versa, and how it slows down progress on the framework.

For CSLA .NET 2.0 I was fortunate enough to have the help of Brant Estes from Magenic, who put in amazing time and effort to keep the C# code in sync with the VB code through the development and writing process. There was no way I'd have completed the books and framework on schedule without his help!

But recently I've been playing with a set of tools from Tangible Software Solutions: Instant VB and Instant C#. By way of disclaimer, I got free licenses to these tools thanks to Tangible, and that's why I've been using them. But having used them, I would suggest that they are certainly worth the cost for anyone who needs to convert code one way or the other.

I have been very skeptical of language conversion tools. By and large the results don't look "right". Convert VB to C# and you get working code, but not the kind of code you'd write by hand. It is even worse when converting C# to VB - the results look like C# without the semicolons, and hardly resemble real VB code at all...

Now I'm not ready to say that Instant VB/C# are perfect, but they are pretty darn good. Better still, I have been providing feedback to Tangible about what I wish was different, and the speed at which they have responded by improving the tools is excellent! Seriously, I think they've given me 2-3 new builds just in the past couple weeks - each one making my life easier.

I'm using the tools in per-file mode, translating individual CSLA .NET source files from C# to VB and from VB to C#. The VB to C# conversions are quite good, and there are only a couple things I'm having to do to the resulting code (clean up the using/Imports statements, and rename instance variables because I use an 'm' prefix in VB and a '_' prefix in C#).

The C# to VB isn't quite as smooth yet, but it is getting rapidly better. Tangible is working on an enhancement to translate several common string and type conversion patterns into VB equivalents (Len(), CInt(), etc.). I believe that'll be an option, so if you prefer the C# approach even in your VB code you don't need to worry. Once that is working, I expect to be pretty happy. Even so, I'll need to change instance variables because of my choice of different prefixes - but that's really my issue :)

The thing is, the time savings for me is tremendous. But what makes me write this blog entry is the fact that the resulting code really does look like the kind of code you'd write by hand - and for me that's the true test.

Tuesday, August 1, 2006 1:43:49 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Monday, July 31, 2006

I submit that this is a good move by Microsoft: making the MSDN Library available for free download.

Some may argue that this devalues the MSDN subscription - but frankly that's silly. The vast majority of the Library is available online anyway, all Microsoft has done here is provided a more convenient way to access the data. It isn't like they decided to give away the software for free! Personally I haven't installed the Library on my machine for well over a year, because I find the web access more convenient.

Dollar per bit, an MSDN subscription is an unbeatable deal for a developer. The ability to get almost every OS, server and development tool for the purposes of development at just over the cost of Visual Studio alone is really quite amazing when you think about it.

Other people will likely argue that this is in response to government actions (the EU in particular). If so, then so be it. I think the EU is out of control and will likely do serious harm to European consumers, and maybe to Microsoft. But the upside for me is that I work for a consulting company, and the more variations on the OS the more time it takes us to build even simple software. Since we charge by the hour, it merely means that software for use in the EU will make us more money than software for use in the US or elsewhere. So perhaps I should be rooting for the EU, because in some perverse way they're likely to make me more money?

Regardless, even if Microsoft is releasing the Library free to help mitigate some "openness" issues in the EU, that is only good news for developers who (for some reason) find it hard to get the content over the Internet.

My view is this: I've worked with IBM software, and the lack of an MSDN-equivalent is devastating to productivity. And I've worked with (and continue to work with) open source software, where the lack of decent documentation and organized support materials is infamous. The investment Microsoft has always made around supporting developer productivity through documentation and MSDN is one of its key success factors - at least in the development world. To me, this is just another small step in Microsoft's continuing support for developers on their platform.

Monday, July 31, 2006 7:31:25 PM (Central Standard Time, UTC-06:00)  #    Disclaimer