Rockford Lhotka

 Saturday, March 4, 2006

Here's another status update on the books.

I am done with the C# book, and it is still on schedule to be available at the end of March.

I am in the final revision stages with the VB book, so it is still on schedule to be available mid-April.

Since posting the beta code, people have pointed out a couple of bugs (which I've fixed), one bug that can't be fixed without breaking the book (so it will be fixed later; it isn't a show-stopper anyway), and a few feature requests (which will obviously come later).

I expect to be able to put the final version 2.0 code for both books online around March 13.

I'm also working with Magenic to have them host a new online forum. The plan is to use Community Server 2.0, hosted on a server at Magenic, though the address it will be available through isn't settled at this point.

So the next few weeks look to be exciting (anticipated):

March 13: CSLA .NET 2.0 code available

End of March: Expert C# 2005 Business Objects available

End of March: New CSLA .NET forums available

Mid-April: Expert VB 2005 Business Objects available
Saturday, March 4, 2006 11:51:10 AM (Central Standard Time, UTC-06:00)
 Wednesday, March 1, 2006

I was recently asked whether I thought it was a good idea to avoid using the Validation events provided by Microsoft (in the UI), in favor of putting the validation logic into a set of objects. I think the answer to the question (to some degree) depends on whether you are doing Windows or Web development.


With Windows development, Windows Forms provides all the plumbing you need to put all your validation in the business objects and still have a very rich, expressive UI - with virtually no code in the UI at all. This is particularly true in .NET 2.0, but is quite workable in .NET 1.1 as well.
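Much of that Windows Forms plumbing keys off the IDataErrorInfo interface: a bound ErrorProvider asks the object for its broken-rule text and displays it next to the offending control, with no validation code in the form. Here's a minimal sketch of the idea; the Customer class and its rule are hypothetical examples, not code from the book:

```csharp
using System.ComponentModel;

// Hypothetical business object whose validation surfaces through data binding.
public class Customer : IDataErrorInfo
{
    private string _name = string.Empty;

    public string Name
    {
        get { return _name; }
        set { _name = value; }
    }

    // IDataErrorInfo: object-level error text.
    public string Error
    {
        get
        {
            return (this["Name"].Length > 0) ? "Customer has invalid data" : string.Empty;
        }
    }

    // IDataErrorInfo: per-property error text. Windows Forms data binding
    // calls this indexer, and a bound ErrorProvider shows the result.
    public string this[string columnName]
    {
        get
        {
            if (columnName == "Name" && _name.Trim().Length == 0)
                return "Name is required";
            return string.Empty;
        }
    }
}
```

In the form you just point an ErrorProvider's DataSource at the binding source; the red error icon appears and disappears as the object's rules break and un-break, which is why the UI itself can stay essentially code-free.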


With Web development life isn't as nice. Of course there's no way to run real code in the browser, so you are stuck with JavaScript. While the real authority for any and all business logic must reside on the web server (because browsers are easily compromised), many web applications duplicate validation into the browser to give the user a better experience. While this is expensive and unfortunate, it is life in the world of the Web.


(Of course what really happens with a lot of Web apps is that the validation is only put into the browser - which is horrible, because it is too easy to bypass. That is simply a flawed approach to development...)


At least with ASP.NET there are the validation controls, which simplify the process of creating and maintaining the duplicate validation logic in the browser. You are still left to manually keep the logic in sync, but at least it doesn't require hand-coding JavaScript in most cases.
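A single validator control covers both halves of that duplication: it emits the client-side JavaScript for you and re-runs the same check on the server when the page tests Page.IsValid. A minimal sketch (the control IDs are illustrative):

```aspx
<asp:TextBox ID="NameTextBox" runat="server" />
<asp:RequiredFieldValidator ID="NameRequired" runat="server"
    ControlToValidate="NameTextBox"
    ErrorMessage="Name is required"
    EnableClientScript="true" />
```

The server-side re-check is what saves you from the bypass problem: even if the browser-side script is disabled or circumvented, checking Page.IsValid in the postback handler still catches the bad input.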



Obviously Windows Forms is an older and more mature technology (or at least flows from an older family of technologies), so it is no surprise that it allows you to do more things with less effort. But in most cases the effort to create Web Forms interfaces isn't bad either.


In any case, I do focus greatly on keeping code out of the UI. There's nothing more expensive than a line of code in the UI - because you _know_ it has a half-life of about 1-2 years. Everyone is rewriting their ASP.NET 1.0 UI code to ASP.NET 2.0. Everyone is tweaking their Windows Forms 1.0 code for 2.0. And all of it is junk when WinFX comes out, since WPF is intended to replace both Windows and Web UI development in most cases. Thus code in the UI is expensive, because you'll need to rewrite it in less than 2 years in most cases.


Code in a business object, on the other hand, is far less expensive because most business processes don't change nearly as fast as the UI technologies provided by our vendors... As long as your business objects conform to the basic platform interfaces for data binding, they tend to flow forward from one UI technology to the next. For instance, WPF uses the same interfaces as Windows Forms, so reusing the same objects from Windows Forms behind WPF turns out to be pretty painless. You just redesign the UI and away you go.


A co-worker at Magenic has already taken my ProjectTracker20 sample app and created a WPF interface for it – based on the same set of CSLA .NET 2.0 business objects as the Windows Forms and Web Forms interfaces. Very cool!


So ultimately, I strongly believe that validation (and all other business logic) should be done in a set of business objects. That’s much of the focus in my upcoming Expert VB 2005 and C# 2005 Business Objects books. While you might opt to duplicate some of the validation in the UI for a rich user experience, that’s merely an unfortunate side-effect of the immature (and stagnant) state of the HTML world.

Wednesday, March 1, 2006 3:44:33 PM (Central Standard Time, UTC-06:00)

This recent MSDN article talks about SPOIL: Stored Procedure Object Interface Layer.

This is an interesting and, as I see it, generally good idea. Unfortunately this team, like most of Microsoft, apparently just doesn't understand the concept of data hiding in OO. SPOIL lets you use your object's properties as the data elements for a stored procedure call, which is great as long as you have only public read/write properties. But data hiding means you will have some private fields that simply aren't exposed as public read/write properties. If SPOIL supported using fields as the data elements for a stored procedure call it would be totally awesome!

The same is true for LINQ. It works against public read/write properties, which means it is totally useless if you want to use it to load "real" objects that employ basic concepts like encapsulation and data hiding. Oh sure, you can use LINQ (well, dlinq really) to load a DTO (data transfer object - an object with only public read/write properties and no business logic) and then copy the data from the DTO into your real object. Or you could try to use the DTO as the "data container" inside your real object rather than using private fields. But frankly those options introduce complexity that should be simply unnecessary...

While it is true that loading private fields requires reflection, Microsoft could solve this. They do own the CLR after all... It is surely within their power to provide a truly good solution to the problem: one that supports data mapping and also allows for key OO concepts like encapsulation and data hiding.
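To make the reflection point concrete, here's roughly what a data-mapping tool has to do today to load a private field directly, bypassing the public interface. The helper and field names are illustrative:

```csharp
using System;
using System.Reflection;

public static class FieldLoader
{
    // Sets a private instance field on the target object by name,
    // without going through any public property.
    public static void SetPrivateField(object target, string fieldName, object value)
    {
        FieldInfo field = target.GetType().GetField(
            fieldName, BindingFlags.Instance | BindingFlags.NonPublic);
        if (field == null)
            throw new ArgumentException("No such field: " + fieldName);
        field.SetValue(target, value);
    }
}
```

This works, but it is slow and fragile compared to what Microsoft could provide inside the CLR itself, which is exactly the complaint: the platform owner could make field-level data mapping a first-class, supported feature.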

Wednesday, March 1, 2006 10:00:01 AM (Central Standard Time, UTC-06:00)
 Sunday, February 12, 2006

Every now and then someone asks what tools I use to write books and articles.


I write in Word. Typically the publisher provides a specific template with styles that are to be used, so when they get to the last stage and need to do layout they can easily change the styles to get the proper appearance on the printed page.


Graphics, figures and screen-shots are, I find, the most frustrating part of the process in many ways. I use Snagit for capturing screen images into TIFF files, which isn’t too bad. But for creating other graphics I use a combination of Visio, PowerPoint, Corel Draw, MS Paint, screen shots and Snagit – along with Acrobat and (more recently) PDF Converter to generate PDF docs containing the figures. Not being graphically oriented, I find the whole process arcane and frustrating – especially as I’ve often had to redo figures a couple of times because they are “blurry” or something – typically due to various resolution issues.


As an aside, this is what scares me about WPF (Avalon), since all of us programmers are going to be forced to learn all this arcane graphics stuff just to be competent at even basic application development. Personally I think that this could derail WPF adoption overall – at least until a large set of stock, good-looking, controls come into being from either Microsoft or third parties.


Microsoft seems to have this deluded idea that business sponsors are going to pay for graphic designers to build the UI – which I think is highly unlikely, given that they typically won’t even pay for decent testing… Who’d pay to make it pretty when they won’t even pay to make sure it actually works?!?


But back to the tools.


All the writing is done in Word. The final stage of reviewing however, occurs in PDF form. The publisher does the final layout, resulting in a PDF which will ultimately be sent to the printer. But I have to go through each chapter in PDF form to catch any final issues (typos, graphics issues, etc). I annotate the PDF files and send them back, so the layout people can make the changes and recreate the PDF.


I also use Microsoft Project. Not for the writing itself, but to schedule the process. Before committing to a writing schedule I create a project of my life. I put fixed-date tasks for all my travel, client commitments, vacations and anything else that I know will consume time during the writing process. Then I put in floating tasks for every chapter and have Project level my life (so to speak). This gives me at least a reasonable projection of how many calendar days it will take to do each chapter.


That’s pretty much it :)

Sunday, February 12, 2006 9:09:42 PM (Central Standard Time, UTC-06:00)
 Tuesday, February 7, 2006

A few people have asked how the book is coming along, so here’s an update.


I touch each chapter a minimum of four times: to write it (AU), to revise it based on tech review comments (TR), to revise it based on copyedit questions (CE) and to do a final review after layout/typesetting (TS).


I am writing both the VB and C# books “concurrently”. What this really means is that I’m writing the book with one language, then skimming back through to change all language keywords, code blocks and diagrams to match the other language.


To practice what I preach (which is that you should be competent, if not fluent, in at least two languages) I am doing the book in C# and then converting to VB. It takes around 8 hours per chapter to do that conversion, 12 if there are a lot of diagrams to convert (code is easy – the damn graphics are the hard part…).


So, here’s the status as of this evening:

[The per-chapter status table didn’t survive extraction; the chapter names are gone. The remaining entries show the front matter and one other item at the AU stage, a few chapters through TS, six chapters through CE, and the rest through TR.]

People have also asked how much I expect the CSLA .NET 2.0 public beta code to change between now and the book’s release at the end of March. Chapters 2-5 cover the framework, and as you can see those chapters are into the final editing stages. As such, I certainly don’t anticipate much change.


While I’ve made every effort to keep the VB and C# code in sync, there may be minor tweaks to the code as I roll through the VB chapters 2-5. But I’ve used both in projects and at conferences like VS Live last week, and both pass my unit tests, so those changes should be cosmetic, not functional.


In other words, the beta is pretty darn close to the final code that’ll be provided for download with the book.

Tuesday, February 7, 2006 9:05:15 PM (Central Standard Time, UTC-06:00)
 Thursday, February 2, 2006

It is fairly common for people to focus so much on the CSLA .NET “feature list” that they forget that the purpose of the framework is merely to enable the implementation of good OO designs. Many times I’ll get questions or comments asking why CSLA .NET “only” supports certain types of objects, and why it doesn’t support others.


The thing is, CSLA .NET doesn't dictate your object model. All it does is enable a set of 10 or so stereotypes (in CSLA .NET 2.0), providing base classes to simplify the implementation of those stereotypes:


- Editable root
- Editable child
- Editable root collection
- Editable child collection
- Read-only root
- Read-only child
- Read-only root collection
- Read-only child collection
- Name-value list
- Command
- Process


CSLA .NET provides base classes to minimize the code a business developer must write to implement this set of common stereotypes. But the fact is that most systems will have objects that fit into other stereotypes, and that's great! CSLA .NET doesn’t stop you from implementing those objects, and it may help you.


If you have objects that don’t fit the stereotypes listed above, you may still find that specific CSLA .NET features are useful. For instance, if your object must move between the client and application server, the data portal is extremely valuable. This is true even for objects that don’t actually use “data” (as in a database), but may have other reasons for moving between client and server. In other cases, an object may benefit from the Security functionality in CSLA .NET, such as authorization.


The point is that you can pick and choose the features that are of use for your particular object stereotype. If you’ll be creating multiple objects that fit this new stereotype, I recommend creating your own base class to support their creation (not necessarily by altering the Csla project itself, as that complicates updates). As much as possible, I’ve tried to keep the framework open so you can extend it by creating your own base classes – either from scratch, or inheriting from something like Csla.Core.BusinessBase, or Csla.Core.ReadOnlyBindingList.
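As a sketch of that kind of extension, here's a hypothetical stereotype base class layered on top of the framework rather than built by altering the Csla project. All type and member names here are illustrative, and it assumes the 2.0 beta's BusinessBase<T> shape (including the GetIdValue override and PropertyHasChanged change notification):

```csharp
using System;

// A hypothetical base class for a new stereotype. Concrete business classes
// inherit CSLA's plumbing plus whatever shared behavior the stereotype needs.
[Serializable]
public abstract class SwitchableBase<T> : Csla.BusinessBase<T>
    where T : SwitchableBase<T>
{
    protected SwitchableBase()
    { }

    // Behavior common to every object of the new stereotype lives here.
}

[Serializable]
public class ProjectNote : SwitchableBase<ProjectNote>
{
    private string _text = string.Empty;

    public string Text
    {
        get { return _text; }
        set
        {
            if (_text != value)
            {
                _text = value;
                PropertyHasChanged("Text"); // CSLA 2.0 change notification
            }
        }
    }

    // Required by BusinessBase in 2.0; used for object identity comparisons.
    protected override object GetIdValue()
    {
        return _text;
    }
}
```

The concrete class gets n-level undo, rule tracking, data binding support and the data portal for free, while the intermediate base class carries the new stereotype's behavior in exactly one place.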


The key point is that you should develop your object model based on behavioral object design principles. Then you should determine how (and if) those objects map into the existing CSLA .NET stereotypes, thus identifying where you can leverage the pre-existing base classes and tap into the functionality they provide. For objects that simply don’t fit into one of these stereotypes, you can decide what (if any) CSLA .NET functionality those objects should use and go from there.


In short, the starting point should be your object model, not the “feature list” of CSLA .NET itself.

Thursday, February 2, 2006 2:06:08 PM (Central Standard Time, UTC-06:00)
 Monday, January 30, 2006

A public beta of CSLA .NET 2.0 is now available at

The Expert C# 2005 Business Objects and Expert VB 2005 Business Objects books are still due out around the end of March. That is also the point at which the framework will "RTM".

The public beta should be reasonably stable. I am nearly done with technical reviews of the chapters, and so am basically done with the framework updates. The ProjectTracker sample application is still under review at this point.

Monday, January 30, 2006 2:44:08 AM (Central Standard Time, UTC-06:00)
 Tuesday, January 24, 2006

Next week at VS Live in San Francisco I’ll release a public beta of CSLA .NET 2.0.


This will include both VB 2005 and C# 2005 editions of the framework code, along with a sample application in both languages, showing how the framework can be used to create Windows, Web and Web service interfaces on top of a common set of business objects.


Those business objects are substantially more powerful than their CSLA .NET 1.x counterparts, while preserving the same architectural concepts and benefits. These objects leverage new .NET 2.0 features such as generics. They also bring forward features from CSLA .NET 1.5, but in a more integrated and elegant manner. There are additional features as well, including support for authorization rules, a more flexible data portal implementation and more.


Watch for the release of the beta next week.


Click here to see Jim Fawcette's blog entry about the upcoming beta release.

Tuesday, January 24, 2006 1:43:25 PM (Central Standard Time, UTC-06:00)
 Monday, January 23, 2006

[Warning, the following is a bit of a rant...]


Custom software development is too darn hard. As an industry, we spend more time discussing “plumbing” issues like Java vs .NET or .NET Remoting vs Web Services than we do discussing the design of the actual business functionality itself.


And even when we get to the business functionality, we spend most of our time fighting with the OO design tools, the form designer tools that “help” create the UI, the holes in data binding, the data access code (transactions, concurrency, field/column mapping) and the database design tools.


Does the user care, or get any benefit out of any of this? No. They really don’t. Oh sure, we convince ourselves that they do, or that they’d “care if they knew”. But they really don’t.


The users want a system that does something useful. I think Chris Stone gets it right in this editorial.


I think the underlying problem is that just building business software to solve users’ problems is ultimately boring. Most of us get into computers to express our creative side, not to assemble components into working applications.


If we wanted to do widget assembly, there’s a whole lot of other jobs that’d provide more visceral satisfaction. Carpentry, plumbing (like with pipes and water), cabinet making and many other jobs provide the ability to assemble components to build really nice things. And those jobs have a level of creativity too, just like assembling programs does in the computer world.


But they don’t offer the level of creativity and expression you get by building your own client/server, n-tier, SOA framework. Using data binding isn’t nearly as creative or fun as building a replacement for data binding…


I think this is the root of the issue. The computer industry is largely populated by people who want to build low-level stuff, not solve high-level business issues. And even if you go with the conventional wisdom that Mort (the business-focused developer) outnumbers the Elvis/Einstein framework-builders 5 to 1, there are still enough Elvis/Einstein types in every organization to muck things up for the Morts.


Have you tried building an app with VB3 (yes, Visual Basic 3.0) lately? A couple of years ago I dusted that relic off and tried it. VB3 programs are fast. I mean blindingly fast! It makes sense: they were designed for the hardware of 1993… More importantly, a data entry screen built in VB3 is every bit as functional to the user as a data entry screen built with today’s trendy technologies.


Can you really claim that your Windows Forms 2.0, ASP.NET 2.0 or Flash UI makes the user’s data entry faster or more efficient? Do you save them keystrokes? Seconds of time?


While I think the primary fault lies with us, as software designers and developers, there’s no doubt that the vendors feed into this as well.


We’ve had remote-procedure-call technology for a hell of a long time now. But the vendors keep refining it every few years: got to keep the hype going. Web services are merely the latest in a very long sequence of technologies that all allow you to call a procedure/component/method/service on another computer. Does using Web services make the user more efficient than DCOM? Does it save them time or keystrokes? No.


But we justify all these things by saying it will save us time. They’ll make us more efficient. So ask your users. Do your users think you are doing a better job, that you are more efficient and responsive, that you are giving them better software than you were before Web Services? Before .NET? Before Java?


Probably not. My guess: users don’t have a clue that the technology landscape has changed out from under them over the past 5 years. They see the same software, with the same mismatch against the business needs and the same inefficient data entry mechanisms they’ve seen for at least the past 15 years…


No wonder they offshore the work. We (at least in the US and Europe) have had a very long time to prove that we can do better work, that all this investment in tools and platforms will make our users’ lives better. Since most of what we’ve done hasn’t lived up to that hype, can it be at all surprising that our users feel they can get the same crappy software at a tiny fraction of the price by offshoring?


Recently my friend Kathleen Dollard made the comment that all of Visual Basic (and .NET in general) should be geared toward Mort. I think she’s right. .NET should be geared toward making sure that the business developer is exceedingly productive and can provide tremendous business value to their organizations, with minimal time and effort.


If our tools did what they were supposed to do, the Elvis/Einstein types could back off and just let the majority of developers use the tools – without wasting all that time building new frameworks. If our tools did what they were supposed to do, the Mort types would be so productive there’d be no value in offshoring. Expressing business requirements in software would be more efficiently done by people who work in the business, who understand the business, language and culture; and who have tools that allow them to build that software rapidly, cheaply and in a way that actually meets the business need.

Monday, January 23, 2006 11:41:13 AM (Central Standard Time, UTC-06:00)