Rockford Lhotka

 Tuesday, February 7, 2006

A few people have asked how the book is coming along, so here’s an update.


I touch each chapter a minimum of four times: to write it (AU), to revise it based on tech review comments (TR), to revise it based on copyedit questions (CE) and to do a final review after layout/typesetting (TS).


I am writing both the VB and C# books “concurrently”. What this really means is that I’m writing the book with one language, then skimming back through to change all language keywords, code blocks and diagrams to match the other language.


To practice what I preach (which is that you should be competent, if not fluent, in at least two languages) I am doing the book in C# and then converting to VB. It takes around 8 hours per chapter to do that conversion, 12 if there are a lot of diagrams to convert (code is easy – the damn graphics are the hard part…).


So, here’s the status as of this evening:






Front matter: AU done.
[The rest of the per-chapter status table lost its chapter labels in conversion; the surviving statuses are: three chapters TS done, six chapters CE done, three chapters TR done, and one more chapter AU done.]



People have also asked how much I expect the CSLA .NET 2.0 public beta code to change between now and the book’s release at the end of March. Chapters 2-5 cover the framework, and as you can see those chapters are into the final editing stages. As such, I certainly don’t anticipate much change.


While I’ve made every effort to keep the VB and C# code in sync, there may be minor tweaks to the code as I roll through the VB chapters 2-5. But I’ve used both in projects and at conferences like VS Live last week, and both pass my unit tests, so those changes should be cosmetic, not functional.


In other words, the beta is pretty darn close to the final code that’ll be provided for download with the book.

Tuesday, February 7, 2006 9:05:15 PM (Central Standard Time, UTC-06:00)
 Thursday, February 2, 2006

It is fairly common for people to focus so much on the CSLA .NET “feature list” that they forget that the purpose of the framework is merely to enable the implementation of good OO designs. Many times I’ll get questions or comments asking why CSLA .NET “only” supports certain types of objects, and why it doesn’t support others.


The thing is, CSLA .NET doesn't dictate your object model. All it does is enable a set of 10 or so stereotypes (in CSLA .NET 2.0), providing base classes to simplify the implementation of those stereotypes:


- Editable root
- Editable child
- Editable root collection
- Editable child collection
- Read-only root
- Read-only child
- Read-only root collection
- Read-only child collection
- Name-value list
- Command
- Process


CSLA .NET provides base classes to minimize the code a business developer must write to implement this set of common stereotypes. But the fact is that most systems will have objects that fit into other stereotypes, and that's great! CSLA .NET doesn’t stop you from implementing those objects, and it may help you.
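To make the stereotype idea concrete, here is a minimal sketch of an editable root object, assuming the CSLA .NET 2.0 BusinessBase<T> pattern; the Project class and its fields are hypothetical:

```csharp
using System;
using Csla;

// Hypothetical editable root object. Inheriting from the CSLA .NET 2.0
// BusinessBase<T> base class supplies n-level undo, dirty tracking,
// validation, authorization and data binding support.
[Serializable]
public class Project : BusinessBase<Project>
{
    private Guid _id = Guid.NewGuid();
    private string _name = "";

    public string Name
    {
        get
        {
            CanReadProperty(true);   // authorization check
            return _name;
        }
        set
        {
            CanWriteProperty(true);  // authorization check
            if (_name != value)
            {
                _name = value;
                PropertyHasChanged(); // drives IsDirty and validation
            }
        }
    }

    // CSLA 2.0 requires this override to identify object instances.
    protected override object GetIdValue()
    {
        return _id;
    }
}
```

The other stereotypes follow the same shape against their own base classes (BusinessListBase, ReadOnlyBase, NameValueListBase, CommandBase and so on).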


If you have objects that don’t fit the stereotypes listed above, you may still find that specific CSLA .NET features are useful. For instance, if your object must move between the client and application server, the data portal is extremely valuable. This is true even for objects that don’t actually use “data” (as in a database), but may have other reasons for moving between client and server. In other cases, an object may benefit from the Security functionality in CSLA .NET, such as authorization.


The point is that you can pick and choose the features that are of use for your particular object stereotype. If you’ll be creating multiple objects that fit this new stereotype, I recommend creating your own base class to support their creation (not necessarily by altering the Csla project itself, as that complicates updates). As much as possible, I’ve tried to keep the framework open so you can extend it by creating your own base classes – either from scratch, or inheriting from something like Csla.Core.BusinessBase, or Csla.Core.ReadOnlyBindingList.
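As a sketch of that approach (the AuditedBase name and its audit stamp are hypothetical, not part of CSLA .NET), a custom stereotype base class might look like:

```csharp
using System;
using Csla.Core;

// Hypothetical base class for a project-specific stereotype, defined in
// your own assembly rather than inside the Csla project itself, so
// framework updates remain a simple drop-in. It inherits the core
// undo/validation plumbing and adds whatever behavior the new
// stereotype needs - here, a simple last-changed audit stamp.
[Serializable]
public abstract class AuditedBase : BusinessBase
{
    private DateTime _lastChanged = DateTime.Now;

    public DateTime LastChanged
    {
        get { return _lastChanged; }
    }

    // Subclasses call this from property setters, alongside the normal
    // PropertyHasChanged() call, to keep the audit stamp current.
    protected void MarkChanged()
    {
        _lastChanged = DateTime.Now;
    }
}
```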


The key point is that you should develop your object model based on behavioral object design principles. Then determine how (and whether) those objects map into the existing CSLA .NET stereotypes, identifying where you can leverage the pre-existing base classes and tap into the functionality they provide. For objects that simply don’t fit one of these stereotypes, you can decide what (if any) CSLA .NET functionality they should use and go from there.


In short, the starting point should be your object model, not the “feature list” of CSLA .NET itself.

Thursday, February 2, 2006 2:06:08 PM (Central Standard Time, UTC-06:00)
 Monday, January 30, 2006

A public beta of CSLA .NET 2.0 is now available at

The Expert C# 2005 Business Objects and Expert VB 2005 Business Objects books are still due out around the end of March. That is also the point at which the framework will "RTM".

The public beta should be reasonably stable. I am nearly done with technical reviews of the chapters, and so am basically done with the framework updates. The ProjectTracker sample application is still under review at this point.

Monday, January 30, 2006 2:44:08 AM (Central Standard Time, UTC-06:00)
 Tuesday, January 24, 2006

Next week at VS Live in San Francisco I’ll release a public beta of CSLA .NET 2.0.


This will include both VB 2005 and C# 2005 editions of the framework code, along with a sample application in both languages, showing how the framework can be used to create Windows, Web and Web service interfaces on top of a common set of business objects.


Those business objects are substantially more powerful than their CSLA .NET 1.x counterparts, while preserving the same architectural concepts and benefits. These objects leverage new .NET 2.0 features such as generics. They also bring forward features from CSLA .NET 1.5, but in a more integrated and elegant manner. There are additional features as well, including support for authorization rules, a more flexible data portal implementation and more.


Watch for the release of the beta next week.


Click here to see Jim Fawcette's blog entry about the upcoming beta release.

Tuesday, January 24, 2006 1:43:25 PM (Central Standard Time, UTC-06:00)
 Monday, January 23, 2006

[Warning, the following is a bit of a rant...]


Custom software development is too darn hard. As an industry, we spend more time discussing “plumbing” issues like Java vs .NET or .NET Remoting vs Web Services than we do discussing the design of the actual business functionality itself.


And even when we get to the business functionality, we spend most of our time fighting with the OO design tools, the form designer tools that “help” create the UI, the holes in data binding, the data access code (transactions, concurrency, field/column mapping) and the database design tools.


Does the user care, or get any benefit out of any of this? No. They really don’t. Oh sure, we convince ourselves that they do, or that they’d “care if they knew”. But they really don’t.


The users want a system that does something useful. I think Chris Stone gets it right in this editorial.


I think the underlying problem is that just building business software to solve users’ problems is ultimately boring. Most of us get into computers to express our creative side, not to assemble components into working applications.


If we wanted to do widget assembly, there’s a whole lot of other jobs that’d provide more visceral satisfaction. Carpentry, plumbing (like with pipes and water), cabinet making and many other jobs provide the ability to assemble components to build really nice things. And those jobs have a level of creativity too – just like assembling programs does in the computer world.


But they don’t offer the level of creativity and expression you get by building your own client/server, n-tier, SOA framework. Using data binding isn’t nearly as creative or fun as building a replacement for data binding…


I think this is the root of the issue. The computer industry is largely populated by people who want to build low-level stuff, not solve high-level business issues. And even if you go with the conventional wisdom that Mort (the business-focused developer) outnumbers the Elvis/Einstein framework-builders 5 to 1, there are still enough Elvis/Einstein types in every organization to muck things up for the Morts.


Have you tried building an app with VB3 (yes, Visual Basic 3.0) lately? A couple years ago I dusted that relic off and tried it. VB3 programs are fast. I mean blindingly fast! It makes sense, they were designed for the hardware of 1993… More importantly, a data entry screen built in VB3 is every bit as functional to the user as a data entry screen built with today’s trendy technologies.


Can you really claim that your Windows Forms 2.0, ASP.NET 2.0 or Flash UI makes the user’s data entry faster or more efficient? Do you save them keystrokes? Seconds of time?


While I think the primary fault lies with us, as software designers and developers, there’s no doubt that the vendors feed into this as well.


We’ve had remote-procedure-call technology for a hell of a long time now. But the vendors keep refining it every few years: got to keep the hype going. Web services are merely the latest in a very long sequence of technologies that allow you to call a procedure/component/method/service on another computer. Does using Web services make the user more efficient than DCOM did? Does it save them time or keystrokes? No.


But we justify all these things by saying they’ll save us time and make us more efficient. So ask your users. Do your users think you are doing a better job, that you are more efficient and responsive, that you are giving them better software than you were before Web Services? Before .NET? Before Java?


Probably not. My guess: users don’t have a clue that the technology landscape changed out from under them over the past 5 years. They see the same software, with the same mismatch against the business needs and the same inefficient data entry mechanisms they’ve seen for at least the past 15 years…


No wonder they offshore the work. We (at least in the US and Europe) have had a very long time to prove that we can do better work, that all this investment in tools and platforms will make our users’ lives better. Since most of what we’ve done hasn’t lived up to that hype, can it be at all surprising that our users feel they can get the same crappy software at a tiny fraction of the price by offshoring?


Recently my friend Kathleen Dollard made the comment that all of Visual Basic (and .NET in general) should be geared toward Mort. I think she’s right. .NET should be geared toward making sure that the business developer is exceedingly productive and can provide tremendous business value to their organizations, with minimal time and effort.


If our tools did what they were supposed to do, the Elvis/Einstein types could back off and just let the majority of developers use the tools – without wasting all that time building new frameworks. If our tools did what they were supposed to do, the Mort types would be so productive there’d be no value in offshoring. Expressing business requirements in software would be more efficiently done by people who work in the business, who understand the business, language and culture; and who have tools that allow them to build that software rapidly, cheaply and in a way that actually meets the business need.

Monday, January 23, 2006 11:41:13 AM (Central Standard Time, UTC-06:00)
 Thursday, January 19, 2006

Microsoft invites you to MIX, our 72-hour conversation live in Vegas, to discuss with industry leaders such as yourself high-fidelity commerce, media, services and security for the World Wide Web.  Join Bill Gates of Microsoft, Amazon, and web thought leaders such as Tim O’Reilly on March 20-22 at the Venetian hotel in Las Vegas to learn about the web’s next generation of content and commerce, plus the customer experience that is beyond the browser.  Registration is open!

The MIX conference is a LIVE conversation between web developers, designers and business leaders who create consumer-oriented web sites. Why is it called MIX?  The event is not only a place where you can Meet, Interact, and eXplore with Microsoft and others about the web, but we are MIXing things up by having a conference for tech geeks as well as business professionals who help make decisions about technologies and strategies for your company’s customer facing web sites.  When you attend MIX you’ll hear about Microsoft’s roadmap for the web, and learn the latest about Internet Explorer 7, Windows Media, Windows Live!, as well as “Atlas”, Microsoft’s new AJAX framework.  Register today and take advantage of the low price of $995, as well as the discounted conference hotel rate.

Thursday, January 19, 2006 4:25:32 PM (Central Standard Time, UTC-06:00)

In a previous post I discussed some issues I’ve been having with the ASP.NET Development Server (aka VS Host or Cassini).


I have more information direct from Microsoft on my issue. It turns out that it is “by design”, and a sad thing this is… VS Host is designed such that the thread on which your code runs can (and does) go between AppDomains.


Objects placed on the Thread object, such as the CurrentPrincipal, must be serializable, and the assembly must be available to all AppDomains – even the primary AppDomain that isn’t running as part of your web site!


And this is the root of my problem. I create a custom IPrincipal object in an assembly (dll). I put it in the Bin directory and then use it – which of course means it ends up on the Thread object. Cassini then runs my code in an AppDomain for my web site and all is well until it switches out into another AppDomain that isn’t running my web site (but rather is just running Cassini itself). Boom!


Why boom? Well, that custom IPrincipal object on the thread is still on the thread. When the thread switches to the other AppDomain, objects directly attached to the thread (like CurrentPrincipal) are automatically serialized, the byte stream transferred to the new AppDomain, and deserialized into the new AppDomain. This means that the new AppDomain must have access to the assembly containing the custom IPrincipal class – but of course it doesn’t, because it isn’t running as part of the web site and thus doesn’t have access to the Bin directory.
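A minimal example of the kind of principal involved (the CustomPrincipal class, user name and role names here are hypothetical, not CSLA code):

```csharp
using System;
using System.Security.Principal;
using System.Threading;

// A minimal custom principal of the kind described above. It is
// serializable, but deserializing it in another AppDomain still
// requires that AppDomain to be able to load this assembly.
[Serializable]
public class CustomPrincipal : IPrincipal
{
    private IIdentity _identity;
    private string[] _roles;

    public CustomPrincipal(string name, string[] roles)
    {
        _identity = new GenericIdentity(name);
        _roles = roles;
    }

    public IIdentity Identity
    {
        get { return _identity; }
    }

    public bool IsInRole(string role)
    {
        return Array.IndexOf(_roles, role) >= 0;
    }
}

public static class Demo
{
    public static void Main()
    {
        // Attaching the principal to the thread is what exposes it to
        // cross-AppDomain serialization under Cassini: if the thread
        // later enters an AppDomain that cannot resolve this assembly
        // (no Bin directory, not in the GAC), deserialization throws.
        Thread.CurrentPrincipal =
            new CustomPrincipal("rocky", new string[] { "Admin" });

        Console.WriteLine(Thread.CurrentPrincipal.Identity.Name);
        // prints "rocky"
    }
}
```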


What’s the answer? Either don’t use Cassini (which has been my answer), or install the assembly with the custom IPrincipal into the GAC. Technically the latter answer is the “right” one, but that has the ugly side-effect of preventing rapid custom application development. All of a sudden you can’t just change a bit of code and press F5 to test; instead you must build your code, update the GAC and then you can test. Nasty…


As an aside, this is exactly the same issue you’ll run into when using NUnit to test code that uses a custom IPrincipal on the thread. Unlike NUnit, however, you can’t predict when Cassini will switch you to another AppDomain, so you can’t work around the issue by clearing the CurrentPrincipal as you can with NUnit (or at least I haven’t found the magic point at which to do it…).
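For the NUnit case, the workaround looks roughly like this (a sketch using NUnit 2.x attributes; the fixture, user and role names are hypothetical, and a framework GenericPrincipal stands in for the custom principal):

```csharp
using System.Security.Principal;
using System.Threading;
using NUnit.Framework;

[TestFixture]
public class PrincipalFixture
{
    [SetUp]
    public void AttachPrincipal()
    {
        // Stand-in for the custom IPrincipal under test.
        Thread.CurrentPrincipal = new GenericPrincipal(
            new GenericIdentity("tester"), new string[] { "Admin" });
    }

    [TearDown]
    public void ClearPrincipal()
    {
        // Reset to a plain framework principal so the custom type is
        // never serialized across into NUnit's own AppDomain.
        Thread.CurrentPrincipal = new GenericPrincipal(
            new GenericIdentity(""), new string[0]);
    }

    [Test]
    public void UserIsInRole()
    {
        Assert.IsTrue(Thread.CurrentPrincipal.IsInRole("Admin"));
    }
}
```

Cassini offers no equivalent hook, which is why avoiding it (or using the GAC) remains the practical answer there.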


What’s really scary is that it was implied that this could happen under IIS as well – but that flies in the face of years of experiential evidence to the contrary. I guess the safe thing to do is to treat IIS like Cassini, and put shared assemblies in the GAC. But I’m not sure I’m ready to advocate that yet, because that means complicating installs a whole lot, and I’ve never encountered this threading issue under IIS so I don’t think it is a real issue.

Thursday, January 19, 2006 9:34:17 AM (Central Standard Time, UTC-06:00)
 Wednesday, January 18, 2006

Microsoft announced Go Live licenses this morning for WCF (Windows Communication Foundation / “Indigo”) and WF (Windows Workflow Foundation), which allow customers to use the January Go Live releases of WCF and WF in their deployment environments.  (Note that these are unsupported Go Lives.) 

More information about the Go Live program is at

There are also WCF and WF community sites, at:

The community sites are intended to give users everything needed to start using WCF and WF today.

Wednesday, January 18, 2006 1:13:41 PM (Central Standard Time, UTC-06:00)
 Tuesday, January 17, 2006

In a recent entry there was a comment pointing to this article, which discusses the way ASP.NET uses threads – a fact that has serious ramifications on software design.


It made me do some serious thinking and research, because CSLA .NET uses various elements of the Thread object, including the CurrentPrincipal, thread-local storage and (in 2.0) the culture properties. My fear was that if the thread can switch “randomly” during page processing, how can you count on any of those Thread elements?


Scott Guthrie from Microsoft clarified this for me on a couple key points.


First, ASP.NET doesn’t change your thread at “random” or “without warning” (phrases I’d used in describing my worries). It changes it only in a clearly defined scenario. Specifically, when your thread performs an async IO operation. Scott points out that this is perhaps most common in a page that takes advantage of the new async page capability, or within a HttpModule that uses any async IO.


In other words, normal web pages really aren’t subject to this issue at all. It only occurs in relatively advanced scenarios. And you, as the developer, should know darn well when you invoke an async operation; thus you know when you open yourself up for thread switching.


Still, I wanted to make sure the Business Objects book didn’t go down a bad path with the Thread object. But Scott clarified further.


ASP.NET ensures that, even if they switch your thread, the CurrentPrincipal and culture properties from the original thread carry forward to the new thread. This is automatic, and you don’t need to worry about losing those values. Whew!


However, thread-local storage doesn’t carry forward. If you use that feature of the Thread object in ASP.NET code you could be in trouble. This did cause me to rework some code in the book and in CSLA .NET. Specifically CSLA .NET 2.0 now detects whether it is running in ASP.NET or not and uses either thread-local storage or HttpContext.Current.Items to maintain its context data.


I also went the rest of the way and created a Csla.ApplicationContext.User property that gets and sets the user’s principal on either the Thread or HttpContext, based on whether the code is running in ASP.NET. This allows you to write code in your UI, objects and other libraries without worrying about whether it might run inside or outside ASP.NET at some point.
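The pattern reduces to something like this sketch (a simplified stand-in for the real Csla.ApplicationContext.User implementation; the MyApplicationContext name is hypothetical):

```csharp
using System.Security.Principal;
using System.Threading;
using System.Web;

// The principal lives on HttpContext inside ASP.NET and on the Thread
// everywhere else, so calling code never needs to care which it is.
public static class MyApplicationContext
{
    public static IPrincipal User
    {
        get
        {
            if (HttpContext.Current != null)
                return HttpContext.Current.User;
            return Thread.CurrentPrincipal;
        }
        set
        {
            if (HttpContext.Current != null)
                HttpContext.Current.User = value;
            // Set the thread too, so code that still reads
            // Thread.CurrentPrincipal directly keeps working.
            Thread.CurrentPrincipal = value;
        }
    }
}
```

Outside ASP.NET, HttpContext.Current is null and the principal simply rides on the thread; inside ASP.NET the setter updates both.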


Certainly for mobile business objects this is very important! They can (and often do) run in both a smart client and ASP.NET environment within the same application.

Tuesday, January 17, 2006 2:48:29 PM (Central Standard Time, UTC-06:00)

In a recent email discussion a fellow Microsoft Regional Director, Patrick Hynds, drew an analogy comparing programming languages to tanks and A-10 attack planes:


I know when to use a Tank (plodding and durable lethality) and I know when to use an A-10 (fast, maneuverable and vulnerable lethality), but if you make tanks fly and add a few feet of armor to an A-10 then you get the same muddy water we have between C# and VB.Net.  Those that know me will forgive the military analogy ;)


I continued the analogy:


The problem we have today, in my opinion, is that C# is a flying tank and VB is a heavily armored attack plane.


Microsoft did wonderful things when creating .NET and these two languages – simply wonderful. But the end result is that no sane person would purchase either a tank or an A-10 now, because both sets of capabilities can be had in a single product. Well, actually two products that are virtually identical except for their heritage.


Of course both hold baggage from history. For instance, C# clings to the obsolete concept of case-sensitivity, and VB clings to the equally obsolete idea of line continuation characters.


Unfortunately the idea of creating a whole new language where the focus is on the compiler doing more work and the programmer doing less just isn't in the cards. It doesn't seem like there's any meaningful language innovation going on, nor has there been for many, many years...


(Even LINQ (cool though it is) doesn't count. We had most of LINQ on the VAX in FORTRAN and VAX Basic 15 years ago...)


The only possible area of interest here is DSLs (domain-specific languages), and I personally think they are doomed to be a repeat of CASE.


Tuesday, January 17, 2006 2:13:00 PM (Central Standard Time, UTC-06:00)