Rockford Lhotka

 Thursday, June 12, 2008

I've been engaged in a discussion around CSLA .NET on an MSDN architecture forum - you can see the thread here. I put quite a bit of time/effort into one of my replies, and I wanted to repost it here for broader distribution:

I don’t think it is really fair to relate CSLA .NET to CSLA Classic. I totally rewrote the framework for .NET (at least 4 times actually – trying different approaches/concepts), and the only way in which CSLA .NET relates to CSLA Classic is through some of the high level architectural goals around the use of mobile objects and the minimization of UI code.

The use of LSet in VB5/6, btw, was the closest approximation to the concept of a union struct from C or Pascal or a common block from FORTRAN possible in VB. LSet actually did a memcopy operation, and so wasn’t as good as a union struct, but was radically faster than any other serialization option available in VB at the time. So while it was far from ideal, it was the best option available back then.
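For those who never used LSet, .NET actually has a rough analog of the union struct idea via explicit struct layout. Here's an illustrative sketch (my own example, not CSLA code): two sets of fields overlay the same memory, much like LSet overlaid one user-defined type on another.

```csharp
using System;
using System.Runtime.InteropServices;

// Rough .NET analog of a C union: the fields share the same memory
// via explicit layout, so "converting" between the two views is
// effectively free, much like LSet's memcopy in VB5/6.
[StructLayout(LayoutKind.Explicit)]
struct IntBytes
{
    [FieldOffset(0)] public int Value;
    [FieldOffset(0)] public byte B0;
    [FieldOffset(1)] public byte B1;
    [FieldOffset(2)] public byte B2;
    [FieldOffset(3)] public byte B3;
}

class UnionDemo
{
    static void Main()
    {
        var u = new IntBytes { Value = 0x12345678 };
        // On a little-endian machine B0 is 0x78.
        Console.WriteLine("{0:X2} {1:X2} {2:X2} {3:X2}", u.B0, u.B1, u.B2, u.B3);
    }
}
```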

Obviously .NET provides superior options for serialization through the BinaryFormatter and NetDataContractSerializer, and CSLA .NET makes use of them. To be fair though, a union struct would still be radically faster. :)
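For example, a minimal BinaryFormatter round trip looks like this (the Customer type is made up for illustration, but the serialization API is the standard .NET one):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
class Customer
{
    public int Id;
    public string Name;
}

class SerializationDemo
{
    static void Main()
    {
        var original = new Customer { Id = 42, Name = "Acme" };

        var formatter = new BinaryFormatter();
        using (var buffer = new MemoryStream())
        {
            // Serialize the object graph to a byte stream...
            formatter.Serialize(buffer, original);
            buffer.Position = 0;
            // ...and rehydrate an equivalent copy from that stream.
            var copy = (Customer)formatter.Deserialize(buffer);
            Console.WriteLine("{0}: {1}", copy.Id, copy.Name);
        }
    }
}
```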

Before I go any further, it is very important to understand the distinction between ‘layers’ and ‘tiers’. Clarity of wording is important when having this kind of discussion. I discuss the difference in Chapter 1 of my Expert 2005 Business Objects book, and in several blog posts – perhaps this one is best:

http://www.lhotka.net/weblog/MiddletierHostingEnterpriseServicesIISDCOMWebServicesAndRemoting.aspx

The key thing is that a layer is a logical separation of concerns, and a tier directly implies a process or network boundary. Layers are a design constraint, tiers are a deployment artifact.

How you layer your code is up to you. Many people, including myself, often use assemblies to separate layers. But that is really just a crutch – a reminder to have discipline. Any clear separation is sufficient. But you are absolutely correct that a great many developers have trouble maintaining that discipline without the clear separation provided by having different code in different projects (assemblies).

1)

CSLA doesn’t group all layers into a single assembly. Your business objects belong in one layer – often one assembly – and so all your business logic (validation, calculation, data manipulation, authorization, etc.) is in that assembly.

Also, because CSLA encourages the use of object-oriented design and programming, encapsulation is important. And other OO concepts like data hiding are encouraged. This means that the object must manage its own fields. Any DAL will be working with data from the object’s fields. So the trick is to get the data into and out of the private fields of the business object without breaking encapsulation. I discussed the various options around this issue in my previous post.

Ultimately the solution in most cases is for the DAL to provide and consume the data through some clearly defined interface (ADO.NET objects or DTOs) so the business object can manage its own fields, and can invoke the DAL to handle the persistence of the data.
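As a sketch, the shape of that solution looks something like this (the DTO and DAL interface names are illustrative, not part of CSLA):

```csharp
using System;

// Hypothetical DTO and DAL interface - a clearly defined boundary
// between the business object and the data access layer.
public class CustomerDto
{
    public int Id;
    public string Name;
}

public interface ICustomerDal
{
    CustomerDto Fetch(int id);
    void Update(CustomerDto data);
}

[Serializable]
public class Customer
{
    // Private fields stay private - encapsulation is preserved.
    private int _id;
    private string _name;

    // The object loads itself from a DTO supplied by the DAL...
    internal void LoadFrom(CustomerDto data)
    {
        _id = data.Id;
        _name = data.Name;
    }

    // ...and hands the DAL a DTO when it is time to persist.
    internal CustomerDto ExtractData()
    {
        return new CustomerDto { Id = _id, Name = _name };
    }
}
```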

To be very clear then, CSLA enables separation of the business logic into one assembly and the data access code into a separate assembly.

However, it doesn’t force you to do this, and many people find it simpler to put the DAL code directly into the DataPortal_XYZ methods of their business classes. That’s fine – there’s still logical separation of concerns and logical layering – it just isn’t as explicit as putting that code in a separate assembly. Some people have the discipline to make that work, and if they do have that discipline then there’s nothing wrong with the approach imo.
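For example, a DataPortal_Fetch with inline data access might look like this (a sketch assuming CSLA 3.5-style types; the connection string and table schema are placeholders):

```csharp
using System;
using System.Data.SqlClient;
using Csla;

[Serializable]
public class Project : BusinessBase<Project>
{
    private Guid _id;
    private string _name = string.Empty;

    // Factory method - the UI never calls the DAL directly.
    public static Project GetProject(Guid id)
    {
        return DataPortal.Fetch<Project>(new SingleCriteria<Project, Guid>(id));
    }

    // DAL code placed directly in the DataPortal_XYZ method.
    // Still a logical data access layer, just not a separate assembly.
    private void DataPortal_Fetch(SingleCriteria<Project, Guid> criteria)
    {
        using (var cn = new SqlConnection("your connection string here"))
        {
            cn.Open();
            using (var cm = cn.CreateCommand())
            {
                cm.CommandText = "SELECT Id, Name FROM Project WHERE Id = @id";
                cm.Parameters.AddWithValue("@id", criteria.Value);
                using (var dr = cm.ExecuteReader())
                {
                    dr.Read();
                    _id = dr.GetGuid(0);
                    _name = dr.GetString(1);
                }
            }
        }
    }

    protected override object GetIdValue()
    {
        return _id;
    }
}
```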

2)

I have no problem writing business rules in code. I realize that some applications have rules that vary so rapidly or widely that the only real solution is to use a metadata-driven rules engine, and in that case CSLA isn’t a great match.

But let’s face it, most applications don’t change that fast. Most applications consist of business logic written in C#/VB/Java/etc. CSLA simply helps formalize what most people already do, by providing a standardized approach for implementing business and validation rules such that they are invoked efficiently and automatically as needed.

Also consider that CSLA’s approach separates the concept of a business rule from the object itself. You then link properties on an object to the rules that apply to that object. This linkage can be dynamic – metadata-driven. Though the rules themselves are written as code, you can use a table-driven scheme to link rules to properties, allowing for SaaS scenarios, etc.
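As a simple example, here is how rules are linked to a property in code (a sketch using CSLA 3.x-style validation APIs; check your version's documentation for exact signatures):

```csharp
using System;
using Csla;
using Csla.Validation;

[Serializable]
public class Customer : BusinessBase<Customer>
{
    private string _name = string.Empty;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name != value)
            {
                _name = value;
                PropertyHasChanged("Name");
            }
        }
    }

    // Rules are written as code, but the linkage to properties
    // happens here - and this linkage could just as easily be
    // driven from a metadata table.
    protected override void AddBusinessRules()
    {
        ValidationRules.AddRule(CommonRules.StringRequired, "Name");
        ValidationRules.AddRule(
            CommonRules.StringMaxLength,
            new CommonRules.MaxLengthRuleArgs("Name", 50));
    }

    protected override object GetIdValue()
    {
        return _name;
    }
}
```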

3)

This is an inaccurate assumption. CSLA .NET requires a strong separation between the UI and business layers, and allows for a very clear separation between the business and data access layers, and you can obviously achieve separation between the data access and data storage layers.

This means that you can easily have UI specialists that know little or nothing about OO design or other business layer concepts. In fact, when using WPF it is possible for the UI to only have UI-specific code – the separation is cleaner than is possible with Windows Forms or Web Forms thanks to the improvements in data binding.

Also, when using ASP.NET MVC (in its present form at least), the separation is extremely clear. Because the CSLA-based business objects implement all business logic, the view and controller are both very trivial to create and maintain. A controller method is typically just the couple of lines of code necessary to call the object’s factory and connect it to the view, or to call the MVC helper to copy data from the postback into the object and have the object save itself. I’m really impressed with the MVC framework when used in conjunction with CSLA .NET.
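A sketch of such a controller (hypothetical class and factory names, assuming ASP.NET MVC 1.0-era APIs, with the postback values copied by hand rather than by a model-binding helper):

```csharp
using System;
using System.Web.Mvc;

// ProjectEdit is an illustrative CSLA-style business class with a
// GetProject factory and a Save method - not a shipped type.
public class ProjectController : Controller
{
    public ActionResult Edit(Guid id)
    {
        // Call the business object's factory and hand it to the view.
        var project = ProjectEdit.GetProject(id);
        return View(project);
    }

    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Edit(Guid id, FormCollection form)
    {
        var project = ProjectEdit.GetProject(id);
        // Copy postback values into the object (a model-binding
        // helper could do this), then let the object save itself.
        project.Name = form["Name"];
        project = project.Save();
        return RedirectToAction("Index");
    }
}
```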

And it means that you can have data access specialists that only focus on ADO.NET, LINQ to SQL, EF, nHibernate or whatever. In my experience this is quite rare – very few developers are willing to be pigeonholed into such a singularly uninteresting aspect of software – but perhaps your experiences have been different.

Obviously it is always possible to have database experts who design and implement physical and logical database designs.

4)

I entirely agree that the DTO design pattern is incredibly valuable when building services. But no one pattern is a silver bullet and all patterns have both positive and negative consequences. It is the responsibility of professional software architects, designers and developers to use the appropriate patterns at the appropriate times to achieve the best end results.

CSLA .NET enables, but does not require, the concept of mobile objects. This concept is incredibly powerful, and is in use by a great many developers. Anyone passing disconnected ADO Recordsets, DataSets, or hashtables/dictionaries/lists across the network is using a form of mobile objects. CSLA simply wraps a pre-existing feature of .NET and makes it easier for you to pass your own rich objects across the network.

Obviously only the object’s field values travel across the network. This means that a business object consumes no more network bandwidth than a DTO. But mobile objects provide a higher level of transparency in that the developer can work with essentially the same object model, and the same behaviors, on either side of the network.
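Stripped to a sketch (an illustrative class, not CSLA itself), the essence of mobile objects is just this: mark the class [Serializable] so its field values can cross the wire, and deploy the same assembly on both sides so the behavior is available everywhere.

```csharp
using System;

// Only the two decimal fields travel across the network - no more
// bandwidth than a DTO carrying the same values. The behavior
// (Total) comes from the assembly loaded on both client and server.
[Serializable]
public class Invoice
{
    private decimal _subtotal;
    private decimal _taxRate;

    public Invoice(decimal subtotal, decimal taxRate)
    {
        _subtotal = subtotal;
        _taxRate = taxRate;
    }

    // The same calculation is available on either side of the wire.
    public decimal Total
    {
        get { return _subtotal * (1 + _taxRate); }
    }
}
```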

Is this appropriate for all scenarios? No. Decisions about whether the pattern fits a given scenario or application should be based on serious consideration of its positive and negative consequences. Like any pattern, mobile objects have both types of consequence.

If you look at my blog over the past few years, I’ve frequently discussed the pros and cons of using a pure service-oriented approach vs an n-tier approach. Typically my n-tier arguments pre-suppose the use of mobile objects, and there are some discussions explicitly covering mobile objects.

The DTO pattern is a part of any service-oriented approach, virtually by definition. Though it is quite possible to manipulate your XML messages directly, most people find that unproductive and prefer to use a DTO as an intermediary – which makes sense for productivity even if it isn’t necessarily ideal for performance or control.

The DTO pattern can be used for n-tier approaches as well, but it is entirely optional. And when compared to other n-tier techniques involving things like the DataSet or mobile objects the DTO pattern’s weaknesses become much more noticeable.

The mobile object pattern is not useful for any true service-oriented scenario (note that I’m not talking about web services here, but rather true message-based SOA). This is because your business objects are your internal implementation and should never be directly exposed as part of your external contract. That sort of coupling between your external interface contract and your internal implementation is always bad – and is obviously inappropriate when using DTOs as well. DTOs can comprise part of your external contract, but should never be part of your internal implementation.
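To make that concrete, here is a sketch of keeping the boundary clean (hypothetical service and DTO names; the Customer factory is illustrative): the DTO is the external contract, and the business object stays behind it.

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// The DTO is the external contract...
[DataContract]
public class CustomerData
{
    [DataMember] public int Id;
    [DataMember] public string Name;
}

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    CustomerData GetCustomer(int id);
}

// ...while the business object remains an internal implementation
// detail that never leaks into the service contract.
public class CustomerService : ICustomerService
{
    public CustomerData GetCustomer(int id)
    {
        // Hypothetical factory on an internal business object.
        var customer = Customer.GetCustomer(id);
        // Map the internal object to the contract DTO at the boundary.
        return new CustomerData { Id = customer.Id, Name = customer.Name };
    }
}
```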

The mobile object pattern is very useful for n-tier scenarios because it enables some very powerful application models. Most notably, as implemented in CSLA, it allows the application to switch between a 1-, 2- and 3-tier physical deployment merely by changing a configuration file. The UI, business and data developers do not need to change any code or worry about the details – assuming they’ve followed the rules for building proper CSLA-based business objects.
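As a sketch, that configuration switch looks something like this (the key and proxy names follow CSLA 3.x conventions, but verify them against your version's documentation):

```xml
<!-- Sketch of CSLA-style data portal configuration. -->
<configuration>
  <appSettings>
    <!-- With no proxy setting the data portal runs locally,
         in-process: a 1- or 2-tier physical deployment. -->

    <!-- Uncomment to switch to a 3-tier deployment over WCF;
         no UI, business, or data access code changes. -->
    <!--
    <add key="CslaDataPortalProxy"
         value="Csla.DataPortalClient.WcfProxy, Csla" />
    -->
  </appSettings>
</configuration>
```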