Rockford Lhotka

 Wednesday, January 24, 2007

There's this "five things about me" tagging meme going around, and I've now been tagged by Bill, Craig and Andrea.

1. I grew up in the middle of Minnesota, surrounded by hundreds of acres of forest and lakes. Our nearest neighbor was a mile away. The nearest town was 14 miles away, and the nearest real town over 30 miles away. The nearest city: a 2.5 hour drive. As a youth, I lived one of those lives of adventure you might read about from long in the past. I spent my time fishing, hunting, trapping, snowmobiling, boating, swimming and generally wandering through the woods and lakes of central Minnesota.

2. I love alternative metal and alternative rock. Rush, Queensryche, Godsmack, Linkin Park and so forth. Good rock, to me, requires thought-provoking lyrics, evocative sound and serious riffs!

3. I love speculative fiction (the fancy name for science fiction and fantasy). Tolkien, Asimov, Niven, Reynolds, Scalzi, McKillip, Norton and many others are on my favorites list. Between all these books, and my wife's collection of all the essays, letters and other writings of America's founding fathers (plus many random other books on many topics) we have an entire room set aside as a library.

4. My first overseas trip was to London, to attend a Babylon 5 convention to commemorate the conclusion of the show. My wife was (and technically still is) a moderator on the Babylon 5 moderated newsgroup (remember usenet? :) ).

5. I'm a would-be game developer. Years ago I wrote a MUD for the DEC VAX called Mordecai. It consumed all the resources the university VAX could muster back then, and I even ended up working with the Operations guys to build in capabilities to throttle the game so it wouldn't be playable during prime homework usage periods and that sort of thing. I've written bits and pieces of a much more ambitious .NET equivalent, but that project stalled a while back. The reality today is that game development is such a small part of building a game in total (graphics, physics engines and so forth are a much bigger part) that it is hard to get as excited about this stuff as I used to be...

Kindly enough, and because I don't have time to track down who's been tagged or not, I'm not tagging anyone, so this thread of execution ends here ;)

Wednesday, January 24, 2007 7:44:48 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Wednesday, January 17, 2007

I recently had an email discussion where I was describing why I needed to solve the problem I described in this article for WPF and this article for Windows Forms.


In both cases the issue is that data binding doesn’t refresh the value from the data source after it updates the data source from the UI. This means that any changes to the value that occur in the property set code aren’t reflected in the UI.


The question he posed to me was whether it was a good idea to have a property set block actually change the value. In most programming models, goes the thought, assigning a value to a property can’t result in that value changing. So any changes to the value that occur in the set block of a property are counter-intuitive, and you simply shouldn’t change the value in the setter code.


Here’s my response:


The idea of a setter (which is really just a mutator method by another name) changing a value doesn't (or shouldn't) seem counter-intuitive at all.


If we were talking about assigning a value to a public field I’d agree entirely. But we are not. Instead we’re talking about assigning a value to a property, and that’s very different.


If all we wanted were public fields, we wouldn't need the concept of "property" at all. The concept of "property" is merely a formalization of the following:


  1. public fields are bad
  2. private fields are exposed through an accessor method
  3. private fields are changed through a mutator method
  4. creating and using accessor/mutator methods is awkward without a standard mechanism


So the concept of "property" exists to standardize and formalize the idea that we need controlled access to private fields, and a standard way to change their value through a mutator method.


Consider the business rule that says a document id must follow a certain form - like SOP433. The first three characters must be alpha and upper case, the last three must be numeric. This is an incredibly common scenario for document, product, customer and other user-entered id values.


Only a poor UI would force the user to actually enter upper case values. The user should be able to type what they want, and the software will fix it.


But putting the upper case rule in the UI is bad, because code in the UI isn't reusable, and tends to become obsolete very rapidly as technology and/or the UI design changes. There's nothing more expensive over the life of an application than a line of code in the UI. So while it is possible to implement this rule in a validation control, in JavaScript, in a button click event handler - none of those are good solutions to the real problem.


Yet if that rule is placed purely in the backend system, then the user can't get any sort of interactive response. The form must be "posted" or "transmitted" to the backend before the processing can occur. Users want to immediately see the value be upper case or they get nervous.


So then we're stuck. Many people implement the rule twice. Once in the UI to make the user happy, and once in the backend, which is the real rule implementation. And then they try to keep those rules in sync forever - the result being an expensive, unreliable and hard to maintain system.


I've watched this cycle occur for 20 years now, and it is the same time after time. And it sucks.


This, right here, is why VB got such a bad name through the 1990’s. The VB forms designer made it way too easy to write all the logic in the UI, and without any other clear alternative that's what happened. The resulting applications are very fragile and are impossible to upgrade to the next technology (like .NET). Today, as we talk, many thousands of lines of code are being written in Windows Forms and Web Forms in exactly the same way. Those poor people will have a hell of a time upgrading to WPF, because none of their code is reusable.


What's needed is one location for this rule. Business objects offer a workable solution here. If the object implements the rule, and the object runs on the client workstation, then (without code in the UI) the user gets immediate response and the rule is satisfied. And the rule is reusable, because the object is reusable - in a way that UI code never can be (or at least never has been).


That same object, with that same interactive rule, can be used behind Windows Forms, Web Forms, WPF and even a web services interface. The rule is always applied, because it is right there in the object. And for interactive UIs it is immediate, because it is in the field's mutator method (the property setter).
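As a concrete sketch, a property setter enforcing the upper-case half of that rule might look like this (a hypothetical standalone class for illustration, not actual CSLA .NET code):

```csharp
using System;

public class Document
{
    private string _id = "";

    public string Id
    {
        get { return _id; }
        set
        {
            if (value == null)
                throw new ArgumentNullException("value");
            // The rule lives in the mutator, so every UI bound to this
            // object (Windows Forms, Web Forms, WPF) gets it for free.
            _id = value.ToUpper();
        }
    }
}
```

Assign `doc.Id = "sop433"` and the object immediately holds "SOP433"; the only remaining trick is getting data binding to re-read the changed value, which is exactly the problem described in the articles mentioned above.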


So in my mind the idea of changing a value in a setter isn't counter-intuitive at all - it is the obvious design purpose behind the property setter (mutator). Any other alternative is really just a ridiculously complex way of implementing public fields. And worse, it leaves us where we've been for 20+ years, with duplicate code and expensive, unreliable software.

Wednesday, January 17, 2007 3:45:30 PM (Central Standard Time, UTC-06:00)
 Tuesday, January 9, 2007

Here's an issue from Windows Forms that appears to have crept into WPF as well – along with a solution (thanks to Sam Bent and Kevin Moore from the WPF team):


Consider a class with a property that enforces a business rule - such as that the value must be all upper case:


public class Test : INotifyPropertyChanged
{
  public event PropertyChangedEventHandler PropertyChanged;

  // ...

  private string _data;

  public string Data
  {
    get
    {
      return _data;
    }
    set
    {
      _data = value.ToUpper();
      if (PropertyChanged != null)
        PropertyChanged(this, new PropertyChangedEventArgs("Data"));
    }
  }
}

Bind this to a TextBox and type in a lower-case value. The user continues to see the lower-case value on the screen, even though the object obviously has an upper-case value. The PropertyChanged event is blissfully ignored by WPF data binding.


I believe this is the same "optimization" as in Windows Forms, where the assumption is that since the value was put into the object by data binding, it can’t be different from what's on the screen - so no refresh is needed. Obviously that is an unfortunate viewpoint, as it totally ignores the idea that an object might be used to centralize business logic or behavior...


In Windows Forms the solution to this issue is relatively simple: handle an event from the BindingSource and force the BindingSource to refresh the value. Bill McCarthy wrapped this solution into an extender control, which I included in CSLA .NET, making the workaround relatively painless.


In WPF the solution is slightly different, but also relatively painless.


It turns out that this optimization doesn’t occur if an IValueConverter is associated with the binding, and if the binding’s UpdateSourceTrigger is not PropertyChanged.


For the TextBox control the UpdateSourceTrigger is LostFocus, so it is good by default, but you’ll want to be aware of this property for other control types.
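For example, a binding can state the trigger explicitly (LostFocus shown here, which is already the TextBox default):

```xml
<TextBox Text="{Binding Data, UpdateSourceTrigger=LostFocus}" />
```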


An IValueConverter object’s purpose is to format and parse the value as it flows to and from the target control and source data object. In my case however, I don’t want to convert the value at all, I just want to defeat this unfortunate “optimization”. What’s needed is an identity converter: a converter that does no conversion.


namespace Csla.Wpf
{
  public class IdentityConverter : IValueConverter
  {
    #region IValueConverter Members

    public object Convert(
      object value, Type targetType,
      object parameter, System.Globalization.CultureInfo culture)
    {
      return value;
    }

    public object ConvertBack(
      object value, Type targetType,
      object parameter, System.Globalization.CultureInfo culture)
    {
      return value;
    }

    #endregion
  }
}

Just configure this in your XAML:


<Page x:Class="PTWpf.ProjectEdit"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:csla="clr-namespace:Csla.Wpf;assembly=Csla"
    Title="Project Edit">

  <Page.Resources>
    <csla:IdentityConverter x:Key="IdentityConverter" />
  </Page.Resources>

  <TextBox Text="{Binding Data, Converter={StaticResource IdentityConverter}}"></TextBox>

</Page>



Just like that, it all works as expected and the value from the object is reflected in the UI.

Tuesday, January 9, 2007 3:08:23 PM (Central Standard Time, UTC-06:00)
 Wednesday, January 3, 2007

I recently received an email that included this bit:


“You are killing me. I wrote a rather scathing review of your Professional Business Objects with C# on Amazon and on my own blog. However, recently I read a transcription of an ARCast with Ron Jacobs where you talked about business objects. I believe I agreed with everything you said. What are you trying to do to me?


I am just a regular shmoe developer, who preaches when listened to about the joys and benefits of OO design for the common business application. I feel too many develop every application like it is just babysitting a database. Every object’s purpose is for the CRUD of data in a table. I have developed great disdain for companies, development teams, and senior developers who perpetuate this problem. I felt Expert C# 2005 Business Objects perpetuates this same kind of design, thus the 3 star rating on Amazon for your book.


In the ARCast you mentioned a new book coming out. I am hoping it is the book I have been looking for. If I wrote it myself it would be titled something like, “Business Objects in the Real World.” It would address the problems of data-centric design and how some objects truly are just for managing data and others conduct a business need explained by an expert. These two objects would be pretty different and possibly even use a naming convention to explicitly differentiate the two. For example I don’t want my “true” business objects with getters, setters, isDirty flags or anything else that might make them invalid and popping runtime errors when trying to conduct their business.


Anyway, I could ramble on (It’s my nature). However, I just want to drop you a line and say there is a real disconnect between us, but at the same time, I wanted to show everyone at my office what you were saying in your interview. You were backing me up! However, just months ago I was using your writings to explain what is wrong with software development. We seem to be on the same page, or close anyway, but maybe coming to different conclusions. I guess that’s what is bothering me.”


The following is my response, which I thought I’d share here on my blog because I ended up rambling on more than I’d planned, and I thought it might be interesting/useful to someone:


I would guess that the disconnect may flow from our experiences during our careers - what we've seen work and not work over time.


For my part, I've become very pragmatic. The greatest ideas in the world tend to fall flat in real life because people don't get them, or they have too high a complexity or cost barrier to gain their benefit. For a very long time OO itself fit this category. I remember exploring OO concepts in the late 80's and it was all a joke. The costs were ridiculously high, and the benefits entirely theoretical.


Get into the mid-90's and components show up, making some elements of OO actually useful in real life. But even then RAD was so powerful that the productivity barrier/differential between OO and RAD was ridiculously high. I spent untold amounts of time and effort trying to reconcile these two to allow the use of OO and RAD both - but with limited success. The tools and technologies simply didn't support both concepts - at least not without writing your own frameworks for everything and ignoring all the pre-existing (and established) RAD tools in existence.


Fortunately by this time I'd established both good contacts and a good reputation within key Microsoft product teams. Much of the data binding support in .NET (Windows Forms at least) is influenced by my constant pressure to treat objects as peers to recordset/resultset/dataset constructs. I continue to apply this pressure, because things are still not perfect - and with WPF they have temporarily slid backwards somewhat. But I know people on that team too, and I think they'll improve things as time goes on.


In the meantime Java popularizes the idea of ORM - but solutions exist for only the most basic scenarios - mapping table data into entity objects. While they claim to address the impedance mismatch problem, they really don't, because they aren't mapping into real OO designs, but rather into data-centric object models. Your disdain for today’s ORM tools must be boundless :)


For better or worse, Microsoft is following that lead in the next version of ADO.NET - and I don't totally blame them; a real solution is complex enough that it is hard to envision, much less implement. However, here too I hold out hope, because the current crop of "ORM" tools are starting to create nice enough entity objects that it may become possible to envision a tool that maps from entity objects to OO objects using a metadata scheme between the two. This is an area I've been spending some time on of late, and I think there's some good potential here.


Through all this, I've been working primarily with mainstream developers (“Mort”). Developers who do this as a job, not as an all-consuming passion. Developers who want to go home to their families, their softball games, their real lives. Who don't want to master "patterns" and "philosophies" like Agile or TDD; but rather they just want to do their job with a set of tools that help them do the right thing.


I embrace that. This makes me diametrically opposed to the worldviews of a number of my peers who would prefer that the tools do less, so as to raise the bar and drive the mainstream developers out of the industry entirely. But that, imo, is silly. I want mainstream developers to have access to frameworks and tools that help guide them toward doing the right thing - even if they don't take the time to understand the philosophy and patterns at work behind the scenes.


I don't remember when I did the ARCast interview, but I was either referring to the 2005 editions of my business objects books which came out in April/May, or to the ebook I'm finishing now, which covers version 2.1 of my CSLA .NET framework. Odds are it is the former, and it isn't the book you are looking for - though you might enjoy Chapter 6.


In general I think you can take a couple approaches to thinking about objects.


One approach, and I think the right one, is to realize that all objects are behavior-driven and have a single responsibility. Sometimes that responsibility is to merely act as a data container (DTO or entity object). Other times it is to act as a rich binding source that implements validation, authorization and other behaviors necessary to enable the use of RAD tools. Yet other times it is to implement pure, non-interactive business behavior (though this last area is being partially overrun by workflow technologies like WF).


Another way to think about objects is to say there are different kinds of object, with different design techniques for each. So entity objects are designed quasi-relationally, process objects are designed somewhat like a workflow, etc. I personally think this is a false abstraction that misses the underlying truth, which is that all objects must be designed around responsibility and behavior as they fit into a use case and architecture.


But sticking with the pure responsibility/behavior concept, CSLA .NET helps address a gaping hole. ORM tools (and similar tools) help create entity objects. Workflow is eroding the need for pure process objects. But there remains this need for rich objects that support a RAD development experience for interactive apps. And CSLA .NET helps fill this gap by making it easier for a developer to create objects that implement rich business behaviors and also directly support Windows Forms, Web Forms and WPF interfaces – leveraging the existing RAD capabilities provided by .NET and Visual Studio.


Whether a developer (mis)uses CSLA .NET to create data-centric objects, or follows my advice and creates responsibility-driven objects is really their choice. But either way, I think good support for the RAD capabilities of the environment is key to attaining high levels of productivity when building interactive applications.

Wednesday, January 3, 2007 10:40:32 AM (Central Standard Time, UTC-06:00)

In November Dunn Training held the first ever official CSLA .NET three day class. It was a smashing success, and resulted in a lot of great feedback and comments. Here are a couple quotes from attendees of the November class:

"At first, I was not sold on the need for CSLA.  After attending this class and seeing the examples and proof, I'm on the bandwagon.  This class proved the usefulness of CSLA and sold me on giving up writing all the plumbing myself."

"Miguel and Mark have provided one of the best ways to get up to speed using CSLA."

"The best career enhancing training investment I have made in the last 10 years.  Rocky is lucky to have the DUNN team doing this training.  Great stuff - solid, professional and accurate."

The next class is coming up soon: January 29-31, in Atlanta, GA. If you are looking for three days of intense and practical CSLA .NET training this is your chance!

Wednesday, January 3, 2007 9:41:47 AM (Central Standard Time, UTC-06:00)
 Tuesday, January 2, 2007

I hope you had a good holiday season and enjoyed the end of 2006!


For my part, I want to thank everyone who contributed to the CSLA .NET forums, and to the CSLAcontrib project. The time and energy you have all put in over the past few months has been a great help to the CSLA .NET community, and I know there are many people out there who are grateful for your efforts!


Most importantly though, I want to thank all the users of CSLA .NET and everyone who has purchased copies of my books. At the end of the year I received numerous emails thanking me for creating the framework (and I appreciate that), but I seriously want to thank all of you for making this a vibrant community. CSLA .NET is one of the most widely used development frameworks for .NET, and that is because each of you have taken the time to learn and use the framework. Thank you!


For me 2006 was a year of change. Starting with CSLA .NET 2.0 I've been viewing CSLA .NET as not just an offshoot of my books, but as a framework in its own right. Of course many people have been treating it that way for years now, but I hope it has been helpful to have me treat point releases a bit more formally over the past number of months.


This extends to version 2.1, which represents an even larger change for me. With version 2.1 I'm releasing my first self-published ebook to cover the changes. This ebook is not a standalone book, rather it is best thought of as a "sequel" to the 2005 book. However, it is well over 150 pages and covers both the changes to the framework itself, as well as how to use the changes in your application development. The ebook is undergoing technical review. That and the editing process should take 2-3 weeks, so the ebook will be available later this month.


Looking at the rest of 2007 it is clear that I'll be spending a lot of time around .NET 3.0 and 3.5.


I'll be merging the WcfChannel into CSLA .NET itself, as well as implementing support for the DataContract/DataMember concepts. This, possibly coupled with one WPF interface implementation for collections, will comprise CSLA .NET 3.0.


It is not yet clear to me what changes will occur due to .NET 3.5, but I expect them to be more extensive. Some of the new C#/VB language features, such as extension methods and lambda expressions, have the potential to radically change the way we think about interacting with objects and fields. When you can add arbitrary methods to any type (even sealed types like String) many interesting options become available.
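For instance, here is a minimal sketch of an extension method (the name and the rule it checks are purely illustrative, not from CSLA .NET) that appears as a new method on the sealed String type:

```csharp
using System.Linq;

public static class StringExtensions
{
    // Callers can write "SOP433".IsDocumentId() as though String
    // itself had gained the method.
    public static bool IsDocumentId(this string value)
    {
        return value != null
            && value.Length == 6
            && value.Substring(0, 3).All(char.IsUpper)
            && value.Substring(3, 3).All(char.IsDigit);
    }
}
```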


Then there's the impact of LINQ itself, and integration with the ADO.NET Entity Framework in one manner or another.


ADO EF appears, at least on the surface, to be YAORM (yet another ORM). If that continues to be true, then it is a great way to get table data into data entities, but it doesn't really address mapping the data into objects designed around use cases and responsibility. If you search this forum for discussions on nHibernate you'll quickly see how ADO EF might fit into the CSLA .NET worldview just like nHibernate does today: as a powerful replacement for basic ADO.NET and/or the DAAB.


LINQ is potentially more interesting, yet more challenging. It allows you to run select queries across collections. At first glance you might think this eliminates the need for things like SortedBindingList or FilteredBindingList. I’m not sure that’s true though, because the result of any LINQ query is an IEnumerable<T>. This is the most basic type of list in .NET; so basic that the result must often be converted to a more capable list type.


Certainly when you start thinking about n-level undo this becomes problematic. BusinessBase (BB) and BusinessListBase (BLB) work together to implement the undo capabilities provided by CSLA .NET. Running a LINQ query across a BLB results in an IEnumerable<T>, where T is your BB-derived child type. At this point you’ve lost all n-level undo support, and data binding (Windows Forms, and any WPF grid) won’t work right either.
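The type loss can be sketched with a plain List<T> subclass standing in (hypothetically) for a BusinessListBase-derived collection:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical stand-in; imagine n-level undo support living here.
public class ProjectList : List<string> { }

public static class LinqTypeDemo
{
    public static bool QueryResultIsProjectList()
    {
        var list = new ProjectList { "Alpha", "Beta", "Apex" };

        // The query compiles and runs fine...
        IEnumerable<string> result = list.Where(n => n.StartsWith("A"));

        // ...but the result is a plain IEnumerable<string>, not a
        // ProjectList, so the derived type's behavior is gone.
        return result is ProjectList;
    }
}
```

QueryResultIsProjectList() returns false: the query yields an iterator, not the original collection type.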


So at the moment, I’m looking at LINQ being most useful in the Data Access Layer, along with ADO EF, but time will tell.


The point of all this rambling is this: I didn’t rush CSLA .NET 1.0 or 2.0. They came out when I felt I had a good understanding of the issues I wanted to address in .NET 1.0 and 2.0 respectively. And when I felt I had meaningful solutions or answers to those issues. I’m treating .NET 3.5 (and presumably CSLA .NET 3.5) the same way. I won’t rush CSLA .NET to meet an arbitrary deadline, and certainly not to match Microsoft’s release of .NET 3.5 itself. There’s no point coming out with a version of CSLA .NET that misses the mark, or that provides poor solutions to key issues.


So in 2007 I’ll most certainly be releasing the version 2.1 ebook and CSLA .NET 3.0 (probably with another small ebook). Given that Microsoft’s vague plans are to have .NET 3.5 out near the end of 2007, I don’t expect CSLA .NET 3.5 to be done until sometime in 2008; but you can expect to see beta versions and/or my experiments around .NET 3.5 as the year goes on.


Of course I’ll be doing other things beyond CSLA .NET in 2007. I’m lined up to speak at the SD West and VS Live San Francisco conferences in March. I’m speaking in Denver and Boulder later in January, and I’ll be doing other speaking around the country and/or world as the year goes on. Click here for the page where I maintain a list of my current speaking engagements.


To close, thank you all for your support of the CSLA .NET community, and for your kind words over the past many months. I wish you all the best in 2007.


Code well, have fun!



Tuesday, January 2, 2007 9:52:48 AM (Central Standard Time, UTC-06:00)
 Friday, December 15, 2006

Each holiday season my employer, Magenic, gives out a cool tech gift. Over the years we've received things like MP3 players, an XBox, Tivo units and an XBox 360. This year the gift is Microsoft's new Zune device.

For all that I'm a techno-geek, I am also quite conservative when it comes to spending money on devices. They become obsolete so fast, and they are so expensive when new, that I just have a hard time spending the money. So I've still been using the Creative Nomad Jukebox 3 I got from Magenic some years ago. It is a nice enough device, with a 20 gig hard drive, good sound recording capabilities and perfectly acceptable playback. Creative's PC interface sucks, but I got a 3rd party product called Notmad that totally rocks, and all has been well for years.

But Windows Media Player 11, and the very nice Urge music service, don't support anything as old as my Nomad... So I was getting ready to find a new music device in any case when along comes Magenic with the Zune gift. Thank you Greg and Paul! :)

The Zune device is pretty decent. Plays music, pictures and video. Even with the small screen, video is pretty darn good due to the screen's high quality. And personally I like the radio feature, as I listen to MPR a lot, and listening live beats yesterday's news downloaded via podcast...

The drawback to any of these devices, Zune, Ipod or whatever, is that to get the most out of them you need a subscription to the service. I was hooked on Urge before this, so it is fortunate that Zune has the same basic backend and music selection. Better still, Magenic is covering the cost of the first year's subscription, so now I've got all the music I care to have (and more - they do have country music too, and I just can't stomach that stuff... ;) ).

So I have a playlist of several thousand songs - all the five star music in the MetalCore sub-genre - put it on shuffle and I'm as happy as can be.

Friday, December 15, 2006 2:57:49 PM (Central Standard Time, UTC-06:00)

The long-awaited release of VS 2005 SP1 is finally here.

Perhaps most importantly, SP1 rolls up a number of hotfixes that many people have been using for a long time to improve the stability and performance of Visual Studio 2005. I know this is one service pack I'm installing immediately!!

Friday, December 15, 2006 11:36:51 AM (Central Standard Time, UTC-06:00)

I was recently interviewed by Craig Shoemaker, and that interview is now online for listening.

Friday, December 15, 2006 9:40:29 AM (Central Standard Time, UTC-06:00)
 Wednesday, December 13, 2006

I thought it would be a good idea to give a quick update on the progress/status of the CSLA .NET version 2.1 ebook. Version 2.1 includes some substantial new features and changes as compared to version 2.0, and I am working on an ebook (about 150 pages) that I'll be selling through my web site in the near future. This ebook covers those changes to the framework, both from the framework development perspective and from the perspective of someone who just wants to use the new or changed features.

My original intent was for the ebook to be done in November. Obviously that hasn't happened, though I am very near completion of the book at this point - in the middle of December. There are many reasons for the delay, most notably some serious family health issues (which, unfortunately, are ongoing) and unexpected activities at work (I have a real job in addition to writing ebooks :) ). Those pushed things far enough into November that a number of other, planned, things impacted the schedule as well.

I really didn't expect this project to be this big - I was thinking 75 pages, but it is more like 150. And self-publishing turns out to be more work than I'd thought. Fortunately, a colleague at Magenic is helping to do the technical review and my very talented wife is doing all the non-technical editing. Another Magenic colleague is kindly setting up the online store. And I found out from my tax guy that I need to get a sales tax ID from Minnesota because I have to actually collect sales tax on the ebook - much to my surprise.

Regardless, what this means is that I now expect the ebook to be available for purchase within the first two weeks of 2007. I'm wrapping up the VB version this week, doing technical revisions and final editing and creating the C# version (swapping in different code bits) over the next couple weeks. This should mean the project is done by the end of the year so I can put it online very early in 2007.

Wednesday, December 13, 2006 9:35:06 AM (Central Standard Time, UTC-06:00)
 Thursday, November 30, 2006

I just learned that two active members of the computer industry and regional community died in a plane crash recently. Details at:

I knew both of these men, having spoken at the Heartland Developers Conference over the past couple years.

I sometimes think about this sort of thing. We live in an increasingly virtual world. While I, like most of us, still have friends that live near me, a great many of my closest friends and colleagues are scattered around the globe. I often interact more with people in Los Angeles, Boston, Europe and Argentina than I do with people where I actually live.

From what I know of Eric and Josh, I think both of them were well-grounded in their real, local world, so the people closest to them really are closest to them, and can mourn together and support each other. And that is a nice thing to consider.

But then I wonder, what about people whose closest friends aren't closest to them physically? How do they support each other in times like this?

I guess time will tell. We're in a period of transition, where the physical world seems to be less and less important relative to the virtual world, and only experience will dictate how we deal with issues like this. One thing is certain, it isn't the technology that matters, it is the people.

My deepest sympathies go to Eric and Josh's families and friends. It is hard to lose anyone, but it is especially hard to lose people in the prime of their lives. People with wives, fiancées and children.

Thursday, November 30, 2006 1:44:12 AM (Central Standard Time, UTC-06:00)