Rockford Lhotka

 Thursday, June 12, 2008

This is a cool site that I hope takes off. In the words of its creator:

I wanted to drop you all a note about a project I’ve been working on called Community Megaphone. Community Megaphone is a web site I started building about a year ago to fill what I saw as a need in the local community: simple, low-friction promotion and discovery of developer events, particularly community-run events. Initially, the site was limited to a portion of the east coast, but this week I’ve opened the site up to events from the entire United States.

There is a real need for a central location where you can find community events. While it is good that each event has its own web site for details and discussions, the value of a central index would be incredible! Please support Andrew's initiative!

Thursday, June 12, 2008 2:44:17 PM (Central Standard Time, UTC-06:00)

The MobileFormatter serializer being created for CSLA Light serializes objects that implement IMobileObject. CSLA Light and CSLA .NET classes will implement this interface so the business developer doesn't normally need to worry about it.

(the exception being when a business object uses private backing fields, in which case the business object developer will need to override OnGetState() and OnSetState() to get/set the private fields they've declared)
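For a class with private backing fields, those overrides might look something like this. This is only a sketch: the exact OnGetState/OnSetState signatures and the SerializationInfo type are assumptions about the in-progress CSLA Light API.

```csharp
[Serializable]
public class Customer : BusinessBase<Customer>
{
  // a private backing field the MobileFormatter can't see on its own
  private string _legacyCode = string.Empty;

  // copy the private field into the serialization state
  protected override void OnGetState(SerializationInfo info)
  {
    base.OnGetState(info);
    info.AddValue("_legacyCode", _legacyCode);
  }

  // restore the private field from the serialization state
  protected override void OnSetState(SerializationInfo info)
  {
    base.OnSetState(info);
    _legacyCode = info.GetValue<string>("_legacyCode");
  }
}
```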

To organize the implementation, we are planning to introduce two new base classes to CSLA: MobileObject and MobileList<T>. The following diagram shows how the new inheritance hierarchy will work:


Note that Silverlight doesn't have a BindingList<T> type, and so CSLA Light will supply that type. CSLA .NET already uses BindingList<T> extensively, and so this will provide parity on both sides of the data portal.

In other words, this will be the inheritance hierarchy for both CSLA .NET and CSLA Light.

It is also the case that FieldDataManager, the object that manages all the managed backing field values in CSLA 3.5, must serialize itself as well. And so will the BrokenRulesCollection.


The overall hierarchy remains as it is in CSLA .NET, we're just inserting MobileObject and MobileList at the top of the hierarchy (or as high as possible in the case of .NET).

This will allow the MobileFormatter to interact with any CSLA-derived business class, automatically serializing all managed backing fields and the child objects contained in the FieldManager of a business object, or the inner list of any MobileList (BindingList<T>).

Thursday, June 12, 2008 2:28:48 PM (Central Standard Time, UTC-06:00)

WCF is very cool, but configuring WCF can virtually derail a project. Even relatively simple-seeming configurations can take hours or days to get working. It is frustrating! And the most complex part is getting security working.

The Microsoft Patterns and Practices group recently released beta guidance for WCF security, and it is probably the single best resource for information about configuring WCF security you'll find anywhere.

Thursday, June 12, 2008 9:30:48 AM (Central Standard Time, UTC-06:00)

I've been engaged in a discussion around CSLA .NET on an MSDN architecture forum - you can see the thread here. I put quite a bit of time/effort into one of my replies, and I wanted to repost it here for broader distribution:

I don’t think it is really fair to relate CSLA .NET to CSLA Classic. I totally rewrote the framework for .NET (at least 4 times actually – trying different approaches/concepts), and the only way in which CSLA .NET relates to CSLA Classic is through some of the high level architectural goals around the use of mobile objects and the minimization of UI code.

The use of LSet in VB5/6, btw, was the closest approximation to the concept of a union struct from C or Pascal or a common block from FORTRAN possible in VB. LSet actually did a memcopy operation, and so wasn’t as good as a union struct, but was radically faster than any other serialization option available in VB at the time. So while it was far from ideal, it was the best option available back then.

Obviously .NET provides superior options for serialization through the BinaryFormatter and NetDataContractSerializer, and CSLA .NET makes use of them. To be fair though, a union struct would still be radically faster :)

Before I go any further, it is very important to understand the distinction between ‘layers’ and ‘tiers’. Clarity of wording is important when having this kind of discussion. I discuss the difference in Chapter 1 of my Expert 2005 Business Objects book, and in several blog posts – perhaps this one is best:

The key thing is that a layer is a logical separation of concerns, and a tier directly implies a process or network boundary. Layers are a design constraint, tiers are a deployment artifact.

How you layer your code is up to you. Many people, including myself, often use assemblies to separate layers. But that is really just a crutch – a reminder to have discipline. Any clear separation is sufficient. But you are absolutely correct, in that a great many developers have trouble maintaining that discipline without the clear separation provided by having different code in different projects (assemblies).


CSLA doesn’t group all layers into a single assembly. Your business objects belong in one layer – often one assembly – and so all your business logic (validation, calculation, data manipulation, authorization, etc) is in that assembly.

Also, because CSLA encourages the use of object-oriented design and programming, encapsulation is important. And other OO concepts like data hiding are encouraged. This means that the object must manage its own fields. Any DAL will be working with data from the object’s fields. So the trick is to get the data into and out of the private fields of the business object without breaking encapsulation. I discussed the various options around this issue in my previous post.

Ultimately the solution in most cases is for the DAL to provide and consume the data through some clearly defined interface (ADO.NET objects or DTOs) so the business object can manage its own fields, and can invoke the DAL to handle the persistence of the data.
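One way to picture that clearly defined interface (all names here are hypothetical – CSLA doesn't prescribe a specific DAL contract):

```csharp
// a DTO the DAL produces and consumes
public class CustomerData
{
  public int Id { get; set; }
  public string Name { get; set; }
}

// the clearly defined interface between the layers
public interface ICustomerDal
{
  CustomerData Fetch(int id);
  void Update(CustomerData data);
}

public partial class Customer
{
  private int _id;
  private string _name;

  // inside the business object: it alone touches its fields
  private void DataPortal_Fetch(int id)
  {
    ICustomerDal dal = new CustomerDal(); // or resolved via a factory
    CustomerData data = dal.Fetch(id);
    _id = data.Id;     // encapsulation preserved: only the object
    _name = data.Name; // sets its own private fields
  }
}
```

The DAL never reaches into the object; it only trades DTOs across the interface.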

To be very clear then, CSLA enables separation of the business logic into one assembly and the data access code into a separate assembly.

However, it doesn’t force you to do this, and many people find it simpler to put the DAL code directly into the DataPortal_XYZ methods of their business classes. That’s fine – there’s still logical separation of concerns and logical layering – it just isn’t as explicit as putting that code in a separate assembly. Some people have the discipline to make that work, and if they do have that discipline then there’s nothing wrong with the approach imo.


I have no problem writing business rules in code. I realize that some applications have rules that vary so rapidly or widely that the only real solution is to use a metadata-driven rules engine, and in that case CSLA isn’t a great match.

But let’s face it, most applications don’t change that fast. Most applications consist of business logic written in C#/VB/Java/etc. CSLA simply helps formalize what most people already do, by providing a standardized approach for implementing business and validation rules such that they are invoked efficiently and automatically as needed.

Also consider that CSLA’s approach separates the concept of a business rule from the object itself. You then link properties on an object to the rules that apply to that object. This linkage can be dynamic – metadata-driven. Though the rules themselves are written as code, you can use a table-driven scheme to link rules to properties, allowing for SaaS scenarios, etc.
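In CSLA 3.5 terms the linkage looks roughly like this – the rule itself is plain code, while AddBusinessRules wires it to a property (and that wiring could just as easily be read from a table):

```csharp
protected override void AddBusinessRules()
{
  // link a reusable rule to the Name property
  ValidationRules.AddRule(
    Csla.Validation.CommonRules.StringRequired, "Name");

  // link a custom rule to the CreditLimit property
  ValidationRules.AddRule(PositiveCreditLimit, "CreditLimit");
}

// a rule is just a static method with a standard signature
private static bool PositiveCreditLimit(
  object target, Csla.Validation.RuleArgs e)
{
  var customer = (Customer)target;
  if (customer.CreditLimit < 0)
  {
    e.Description = "Credit limit can't be negative";
    return false;
  }
  return true;
}
```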


This is an inaccurate assumption. CSLA .NET requires a strong separation between the UI and business layers, and allows for a very clear separation between the business and data access layers, and you can obviously achieve separation between the data access and data storage layers.

This means that you can easily have UI specialists that know little or nothing about OO design or other business layer concepts. In fact, when using WPF it is possible for the UI to only have UI-specific code – the separation is cleaner than is possible with Windows Forms or Web Forms thanks to the improvements in data binding.

Also, when using ASP.NET MVC (in its present form at least), the separation is extremely clear. Because the CSLA-based business objects implement all business logic, the view and controller are both very trivial to create and maintain. A controller method is typically just the couple lines of code necessary to call the object’s factory and connect it to the view, or to call the MVC helper to copy data from the postback into the object and to have the object save itself. I’m really impressed with the MVC framework when used in conjunction with CSLA .NET.

And it means that you can have data access specialists that only focus on ADO.NET, LINQ to SQL, EF, nHibernate or whatever. In my experience this is quite rare – very few developers are willing to be pigeonholed into such a singularly uninteresting aspect of software – but perhaps your experiences have been different.

Obviously it is always possible to have database experts who design and implement physical and logical database designs.


I entirely agree that the DTO design pattern is incredibly valuable when building services. But no one pattern is a silver bullet and all patterns have both positive and negative consequences. It is the responsibility of professional software architects, designers and developers to use the appropriate patterns at the appropriate times to achieve the best end results.

CSLA .NET enables, but does not require, the concept of mobile objects. This concept is incredibly powerful, and is in use by a great many developers. Anyone passing disconnected ADO Recordsets, or DataSets or hashtables/dictionaries/lists across the network uses a form of mobile objects. CSLA simply wraps a pre-existing feature of .NET and makes it easier for you to pass your own rich objects across the network.

Obviously only the object’s field values travel across the network. This means that a business object consumes no more network bandwidth than a DTO. But mobile objects provide a higher level of transparency in that the developer can work with essentially the same object model, and the same behaviors, on either side of the network.

Is this appropriate for all scenarios? No. Decisions about whether the pattern is appropriate for any scenario or application should be based on serious consideration of the positive and negative consequences of the pattern. Like any pattern, mobile objects have both types of consequences.

If you look at my blog over the past few years, I’ve frequently discussed the pros and cons of using a pure service-oriented approach vs an n-tier approach. Typically my n-tier arguments pre-suppose the use of mobile objects, and there are some discussions explicitly covering mobile objects.

The DTO pattern is a part of any service-oriented approach, virtually by definition. Though it is quite possible to manipulate your XML messages directly, most people find that unproductive and prefer to use a DTO as an intermediary – which makes sense for productivity even if it isn’t necessarily ideal for performance or control.

The DTO pattern can be used for n-tier approaches as well, but it is entirely optional. And when compared to other n-tier techniques involving things like the DataSet or mobile objects the DTO pattern’s weaknesses become much more noticeable.

The mobile object pattern is not useful for any true service-oriented scenario (note that I’m not talking about web services here, but rather true message-based SOA). This is because your business objects are your internal implementation and should never be directly exposed as part of your external contract. That sort of coupling between your external interface contract and your internal implementation is always bad – and is obviously inappropriate when using DTOs as well. DTOs can comprise part of your external contract, but should never be part of your internal implementation.

The mobile object pattern is very useful for n-tier scenarios because it enables some very powerful application models. Most notably, the way it is done in CSLA, it allows the application to switch between a 1-, 2- and 3-tier physical deployment merely by changing a configuration file. The UI, business and data developers do not need to change any code or worry about the details – assuming they’ve followed the rules for building proper CSLA-based business objects.

Thursday, June 12, 2008 8:58:29 AM (Central Standard Time, UTC-06:00)

Dunn Training is working with some people in Ireland to put together a CSLA .NET training class. If you are in Ireland, the UK or perhaps Europe, you may be interested. This CSLA forum post has some more information.

Thursday, June 12, 2008 8:14:43 AM (Central Standard Time, UTC-06:00)
 Tuesday, June 10, 2008

I've been prototyping various aspects of CSLA Light (CSLA for Silverlight) for some time now. Enough to be confident that a decent subset of CSLA functionality will work just fine in Silverlight - which is very exciting!

The primary area of my focus is serialization of object graphs, and I've blogged about this before. This one issue is directly on the critical path, because a resolution is required for the data portal, object cloning and n-level undo.

And I've come to a final decision regarding object serialization: I'm not going to try and use reflection. Silverlight turns out to have some reasonable support for reflection - enough for Microsoft to create a subset of the WCF DataContractSerializer. Unfortunately it isn't enough to create something like the BinaryFormatter or NetDataContractSerializer, primarily due to the limitations around reflecting against non-public fields.

One option I considered is to say that only business objects with public read-write properties are allowed. But that's a major constraint on OO design, and still doesn't resolve issues around calling a property setter to load values into the object - because property setters typically invoke authorization and validation logic.

Another option I considered is to actually use reflection. I discussed this in a previous blog post - because you can make it work as long as you insert about a dozen lines of code into every class you write. But I've decided this is too onerous and bug-prone. So while reflection could be made to work, I think the cost is too high.

Another option is to require that the business developer create a DTO (data transfer object) for each business object type. And all field values would be stored in this DTO rather than in normal fields. While this is a workable solution, it imposes a coding burden not unlike that of using the struct concepts from my CSLA Classic days in the 1990's. I'm not eager to repeat that model...

Yet another option is to rely on the concept of managed backing fields that I introduced in CSLA .NET 3.5. In CSLA .NET 3.5 I introduced the idea that you could choose not to declare backing fields for your properties, and that you could allow CSLA to manage the values for you in something called the FieldManager. Conceptually this is similar to the concept of a DependencyProperty introduced by Microsoft for WF and WPF.
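A property using a managed backing field in CSLA .NET 3.5 looks like this – no private field is declared, and the value lives in the FieldManager:

```csharp
private static PropertyInfo<string> NameProperty =
  RegisterProperty<string>(new PropertyInfo<string>("Name"));

public string Name
{
  get { return GetProperty<string>(NameProperty); }
  set { SetProperty<string>(NameProperty, value); }
}
```

SetProperty runs the authorization and validation behaviors just like a hand-written setter would, so the business developer gives up nothing by skipping the private field.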

The reason I introduced managed backing fields is that I didn't expect Silverlight to have reflection against private fields at all. I was excited when it turned out to have a level of reflection, but now that I've done all this research and prototyping, I've decided it isn't useful in the end. So I'm returning to my original plan - using managed backing fields to avoid the use of reflection when serializing business objects.

The idea is relatively simple. The FieldManager stores the property values in a dictionary (it is actually a bit more complex than that for performance reasons, but conceptually it is a dictionary). Because of this, it is entirely possible to write code to loop through the values in the field manager and to copy them into a well-defined data contract (DTO). In fact, it is possible to define one DTO that can handle any BusinessBase-derived object, another for any BusinessListBase-derived object, and so forth. Basically one DTO per CSLA base class.
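Conceptually, that copy step is just a loop over the dictionary into a data contract. This sketch uses assumed names – the real FieldManager members and DTO shape will certainly differ:

```csharp
// one generic DTO can carry the state of any BusinessBase-derived object
[DataContract]
public class MobileObjectData
{
  [DataMember]
  public Dictionary<string, object> FieldValues =
    new Dictionary<string, object>();
}

public abstract partial class BusinessBase<T>
{
  // inside a CSLA base class: copy managed fields into the DTO
  protected virtual void OnGetState(MobileObjectData dto)
  {
    foreach (IPropertyInfo p in FieldManager.GetRegisteredProperties())
      dto.FieldValues[p.Name] = ReadProperty(p);
  }
}
```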

The MobileFormatter (the serializer I'm creating) can simply call Serialize() and Deserialize() methods on the CSLA base classes (defined by an IMobileObject interface that is implemented by BusinessBase, etc.) and the base class can get/set its data into/out of the DTO supplied by the MobileFormatter.

In the end, the MobileFormatter will have one DTO for each business object in the object graph, all in a single list of DTOs. The DataContractSerializer can then be used to convert that list of DTOs into an XML byte stream, as shown here:


The XML byte stream can later be deserialized into a list of DTOs, and then into a clone of the original object graph, as shown here:


Notice that the object graph shape is preserved (something the DataContractSerializer in Silverlight can't do at all), and that the object graph is truly cloned.
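The final serialization step is then ordinary WCF code over the list of DTOs, along these lines (MobileObjectData is an assumed DTO name; the graph shape would be preserved because each DTO refers to its children by position in the list rather than by nesting):

```csharp
// serialize the flattened object graph to an XML byte stream
var dtoList = new List<MobileObjectData>(); // one DTO per object in the graph
var serializer = new DataContractSerializer(typeof(List<MobileObjectData>));
byte[] buffer;
using (var ms = new MemoryStream())
{
  serializer.WriteObject(ms, dtoList);
  buffer = ms.ToArray();
}

// later: deserialize back to the DTO list, then rebuild the clone
using (var ms = new MemoryStream(buffer))
{
  var copy = (List<MobileObjectData>)serializer.ReadObject(ms);
}
```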

This decision does impose an important constraint on business objects created for CSLA Light, in that they must use managed backing fields. Private backing fields will not be supported. I prefer not to impose constraints, but this one seems reasonable because the alternatives are all worse than this particular constraint.

My goal is to allow you to write your properties, validation rules, business rules and authorization rules exactly one time, and to have that code run on both the Silverlight client and on your web/app server. To have that code compile into the Silverlight runtime and the .NET runtime. To have CSLA .NET and CSLA Light provide the same set of public and protected members so you get the same CSLA services in both environments.

By restricting CSLA Light to only support managed backing fields, I can accomplish that goal without imposing requirements for extra coding behind every business object, or the insertion of arcane reflection code into every business class.

Tuesday, June 10, 2008 8:01:50 PM (Central Standard Time, UTC-06:00)
 Friday, May 30, 2008

I have hesitated to publicly discuss my experiences with Vista, acting under the theory that if you can't say something nice you shouldn't say anything at all. But at this point I have some nice things to say (though not all nice), and I think there's some value in sharing my experiences and thoughts.

On the whole, my experience with Vista has been decidedly mixed.

Vista is very pretty. It is clearly the future in many ways (especially around IIS 7 and WAS and security in general). And it has some nice usability features – like a far better replacement for ntbackup, and pre-enabled shadowing (so you can retrieve old files if you lose/overwrite them). And quite a few OS features are easier to find/use than in XP (once you get used to the changes).

However, it is slower and more resource-intensive than XP. So you can’t upgrade from XP and expect the same level of performance or responsiveness on the same hardware. If your hardware is more than a few months old, I really can't recommend an upgrade.

I upgraded my (now 2 year old) laptop when Vista came out, and have been generally displeased with the results. It is running Vista Business. However, as Vista has aged Microsoft has issued patches/fixes/updates that have helped with stability and performance. I would say that it is now tolerable, or at least I’ve learned to live with it. While it is workable, it isn't really satisfying - to me at least. My laptop is a dual core machine with 3 gigs of RAM and a low-end GPU.

One thing I'll note is that upgrading the laptop from 2 to 3 gigs of RAM made a huge difference in performance. Vista really likes memory, and the more you can get in your machine the happier you'll be.

I didn’t upgrade my desktop to Vista until I replaced my desktop machine. I do almost all my work on this machine, and wasn't about to deal with the performance issues on a constant basis.

My new desktop machine, which is running Vista Ultimate, is a quad core with 4 gigs of RAM and a high end GPU. I find that Vista runs quite adequately on this (admittedly high-end) machine. My current bottlenecks are memory speed (but DDR3 is too expensive) and disk IO (but 10k RPM disks are too loud – I’m one of those “silent computer” nuts).

I have colleagues who are running Vista 64 bit. Apparently that is faster and more stable (partially due to fewer iffy drivers, and because 64 bit gets all your 4+ gigs of RAM). But I'm a gamer, so I'm kind of stuck with 32 bit until there are 64 bit versions of Battlefield 2142, Supreme Commander, Sim City 4 and Civ IV :)

One thing that has really helped my Vista experience is the discovery of TeraCopy. Vista is notorious for slow file copies (especially when copying multiple files). It is a bit sad that Vista's slow file copies have enabled a product niche for something like a file copy utility (how 1990 is that!), but whatever, it works.

I have UAC turned on. I realize many devs turn it off. But if our users are to live with it, I think developers should too. And personally I think it should be illegal for Microsoft employees to turn it off – they should know what they are doing to their customers.

The thing is, I have not found UAC to be overly troublesome. Yes, there are some extra dialogs when installing software – but that’s not a big deal imo, and is an acceptable trade-off for the security. The bigger frustration with UAC is simpler things like trying to create a favorite in IE, or to copy a shortcut to the Programs menu – both of which turn out to be really hard due to UAC.

I have done some work with VS 2005 under Vista. You have to run as admin to debug web apps, which means you can't double-click sln files. There may have been some other quirks too, I don't recall. Long ago I created a virtual machine with XP where I have VS 2005 installed, and any 2005 work is done there (for .NET 2.0/3.0 - primarily maintenance for CSLA .NET 3.0).

For months now I've been using VS 2008, largely under Vista. The experience is quite smooth. You do have to run 2008 as admin to debug a web app running in IIS, but not in the dev web server. It really isn’t a big deal to run as admin for VS if you need to debug a web app. This is the intended “escape hatch” for developers that need to do things a normal user should not be able to do. It is a little frustrating to not be able to double-click a sln file, but I can deal with that small issue.

And for WPF or Windows Forms work (and a lot of web work where the dev server can be used) then you don't need to run as admin at all.

In the final analysis, if you have a relatively new machine with high-end hardware and lots of RAM, then I think Vista is a fine OS, even for a developer. But if your machine is more than a few months old, has less than 3 gigs of RAM or has an older GPU, I'd hesitate to leave XP.

Friday, May 30, 2008 10:45:18 AM (Central Standard Time, UTC-06:00)
 Wednesday, May 28, 2008
Magenic is holding a full-day, two-track mini-conference on June 20.
That is just 3 weeks away, but there are still some open seats, so reserve yours now!

Our first keynote speaker is Jay Schmelzer, GPM of the RAD tools group at Microsoft. He'll be talking about the future of RAD tools (Visual Studio and more).

Our second keynote speaker is me, Rockford Lhotka. I'll be talking about the future of CSLA .NET and CSLA Light, a version of CSLA that will run in Silverlight.

The rest of the day is divided into two tracks for a total of 8 high-quality technical sessions. This will be a full day of hard-core technical content and fun!

This FREE event is being held in Downers Grove near Chicago, IL. It starts at 8:30 AM, we're providing lunch, and the event runs through to a reception at the end of the day at around 5 PM.

If you'd like an invitation to attend, please email

Click here for more information about the event.

Wednesday, May 28, 2008 10:36:55 AM (Central Standard Time, UTC-06:00)
 Tuesday, May 27, 2008

Sue, my wife's best friend, and mother of my goddaughter, was diagnosed with breast cancer late last year. Fortunately they caught it early and with surgery and radiation she beat the cancer. It was a rough period of time, but she got through it.

Sue has convinced my wife to do a three day, 60 mile, Walk for the Cure this fall. It is a fundraiser for cancer research, and a worthy cause. I would think there must be better ways to raise money than to walk 20 miles a day for three days, but apparently not...

Here's my wife Teresa's post about the walk, including links to a web site where people (you perhaps?) can donate to the cause.

In fact, just to make it as easy as possible, here's the direct link to the donation page.

I rarely post personal items on my blog, preferring to keep it focused on cool technical stuff. But cancer is such an important issue. Sue beat it through good fortune and perseverance. My mother is living in an ongoing battle against a different type of cancer. A battle that appears unwinnable, and which has altered our lives dramatically over the past few years.

Any help in raising money for this cancer research effort is most appreciated! Thank you!

Tuesday, May 27, 2008 9:36:26 PM (Central Standard Time, UTC-06:00)
 Wednesday, May 21, 2008

OK, this is really cool:

It appears that Microsoft, having reserved the convention center for the two weeks of Tech Ed (Dev week and IT Pro week), is allowing the user groups in the region to take advantage of the idle space in the intervening weekend. That is so cool!!

But now, to be fair, Microsoft should provide two days of conference center space and AV in <insert your city here>, don't you think? :)

Wednesday, May 21, 2008 8:12:30 PM (Central Standard Time, UTC-06:00)
 Tuesday, May 20, 2008

I was just in a discussion about ClickOnce with several people, including Brian Noyes, who wrote the book on ClickOnce.

I was under the mistaken impression, and I know quite a few other people who have this same misconception, that ClickOnce downloads complete new versions of your application each time you publish a new version. In fact, I know of at least a couple companies who specifically chose not to use ClickOnce because their app is quite large, and re-downloading the whole thing each time a new version is published is unrealistic.

It turns out though, that ClickOnce does optimize the download. When you publish a new version of your app, all the new files are written to the server, that is true. But the client only downloads changed files. All unchanged files are copied from the previous install folder on the client to the new install folder on the client.

In other words, all unchanged files are reused from the copy already on the client, and so are not downloaded again. Only changed files are downloaded from the server.

The trick to making this work is to only rebuild assemblies that have actually changed before you do a publish. Don't rebuild unchanged assemblies, because that could change the assembly - and even a one byte change in the assembly would cause it to be downloaded because the file hash would be different.

Saying that gives me flashbacks to binary compatibility issues with VB6, but it makes complete sense that they'd have to use something like a file hash to decide whether to re-download each file.
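The decision logic presumably boils down to comparing file hashes between the installed version and the new manifest – something like this sketch (not ClickOnce's actual code, just the concept):

```csharp
// decide whether a file needs downloading by comparing hashes
static bool NeedsDownload(string localFile, string serverFileHash)
{
  if (!File.Exists(localFile))
    return true; // brand new file: must download

  using (var sha = SHA256.Create())
  using (var stream = File.OpenRead(localFile))
  {
    string localHash = Convert.ToBase64String(sha.ComputeHash(stream));
    // even a one byte difference produces a different hash, which is
    // why a rebuilt-but-unchanged assembly still gets re-downloaded
    return localHash != serverFileHash;
  }
}
```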

Tuesday, May 20, 2008 9:00:29 AM (Central Standard Time, UTC-06:00)
 Tuesday, May 13, 2008

Magenic is holding a full-day, two-track mini-conference on June 20. We have put together a great lineup of speakers and topics, including 2 keynotes and 8 sessions. This will be a full day of hard-core technical content and fun!

The event is being held in Downers Grove near Chicago, IL. It starts at 8:30 AM and runs through to a reception at the end of the day at around 5 PM.

The event is by invitation only - specifically invitation by one of Magenic's sales people. If you are already a Magenic customer and you'd like an invitation, please contact your Magenic AE and let them know. If you are not a Magenic customer please email and let us know you'd like an invitation.

The event is free, and includes both lunch and a reception at the end of the day. You are responsible for any travel expenses involved in getting to the event. Magenic is arranging a block of rooms at a nearby hotel with special pricing and ground transportation between the conference and hotel.

The following link has more information about the event

Tuesday, May 13, 2008 7:45:53 AM (Central Standard Time, UTC-06:00)
 Thursday, April 24, 2008

Dunn Training has been offering a very good 3 day class on CSLA .NET for some time now, with lots of great feedback. And this class continues (with a sold-out class coming up in Toronto).

As a complement to that class, Dunn is now lining up a bigger and deeper 5 day master class. The plan is to have just two of these each year.

This master class is quite different from the 3 day class. It will have more lecture, deeper labs and a faster pace. They tell me the intent is to cover everything from OO design to CSLA object creation to WPF/Windows/Web/WCF/WF interface design to LINQ in one intense week.

Not only will this be the ultimate in CSLA .NET training, it'll be some incredibly awesome training on .NET itself!!

Thursday, April 24, 2008 2:45:39 PM (Central Standard Time, UTC-06:00)