Rockford Lhotka

 Sunday, September 4, 2005
We're a few local (ATL) ASP.NET programmers who have created a central datastore, data collection web interfaces, and a processing system for missing/safe person data on people affected by Hurricane Katrina.
The idea caught on quickly... we've only got 2 guys working on this full time and a third helping out when he can. Since going online we've been contacted by LOTS of groups who want to work with us. We're overwhelmed and need help... we've got plans for a mobile device interface to our data, and are working with a call center fielding calls from disaster victims (the call center needs some special enhancements to the interface that we can't get to fast enough). We're being listed as the central survivors database on the emergency web terminals being deployed in disaster zones.
We've got people on the ground at shelters trying to gather lists and transmit them to our datastore electronically. We're also contacting almost 50 other websites which have lists or data collection systems... they are starting to use our MSP file spec and transmit data to us.
If you could get the word out that we are looking for volunteer programmers who can hit the ground running and take ownership of some of the special projects we have, WE WOULD APPRECIATE IT... but there are people out there looking for loved ones who would appreciate it even more.
Current experience we need:
  • .net mobile framework using our data classes to collect and process data
  • string data parsing experience... there's a lot of data in forum posts we can't use, but we could triple our data if we could parse it into useful fields
  • Mass emailing system... as our system processes incoming bulk data, it makes matches between searchers and safe persons, flagging them as needed contact via email. We need a system to read from a database table containing these contact requests and send them out
  • any general ASP.NET (we're using VB) experience to help build special lookups and interfaces
I hate to make such a request on a holiday weekend, but we are swamped.
Please spread the word if you can.
Developer, The Katrina Data Project
Sunday, September 4, 2005 2:48:46 PM (Central Standard Time, UTC-06:00)
 Monday, August 29, 2005

I just got back from a week’s vacation/holiday in Great Britain and I feel very refreshed.


And that’s good, given that just before going to the UK I wrapped up the first draft of Chapter 1 in the new book Billy Hollis and I are writing. As you have probably gathered by now, this book uses DataSet objects rather than my preferred use of business objects.


I wanted to write a book using the DataSet because I put a lot of time and energy into lobbying Microsoft to make certain enhancements to the way the objects work and how Visual Studio works with them. Specifically I wanted a way to use DataSet objects as a business layer – both in 2-tier and n-tier scenarios.


Also, I wanted to write a book using Windows Forms rather than the web. This reflects my bias of course, but also reflects the reality that intelligent/smart client development is making a serious comeback as businesses realize that deployment is no longer the issue it was with COM and that development of a business application in Windows Forms is a lot less expensive than with the web.


The book is geared toward professional developers, so we assume the reader has a clue. The expectation is that if you are a professional business developer (a Mort) that uses VB6, Java, VB.NET, C# or whatever – that you’ll be able to jump in and be productive without us explaining the trivial stuff.


So Chapter 1 jumps in and creates the sample app to be used throughout the book. The chapter leverages all the features Microsoft has built into the new DataSet and its Windows Forms integration – thus showing the good, the bad and the ugly all at once.


Using partial classes you really can embed most of your validation and other logic into the DataTable objects. When data is changed at a column or row level you can act on that changed data. As you validate the data you can provide text indicating why a value is invalid.
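The pattern being described — react to each column change, validate, and attach error text to the offending column — can be modeled outside of .NET too. Here's an illustrative Python sketch of the idea (this is a model of the pattern, not the DataTable API itself):

```python
class ValidatedRow:
    """Models the DataTable pattern: run validation whenever a column
    changes, and record error text explaining why a value is invalid."""

    def __init__(self):
        self._data = {}
        self.column_errors = {}   # column name -> error text

    def __setitem__(self, column, value):
        self._data[column] = value
        self._column_changed(column, value)   # the "ColumnChanged" hook

    def _column_changed(self, column, value):
        # Per-column business rule, run on every change; the error text
        # is what a UI error indicator would display to the user.
        if column == "Name" and not value:
            self.column_errors[column] = "Name is required"
        else:
            self.column_errors.pop(column, None)

    @property
    def has_errors(self):
        return bool(self.column_errors)
```

The key point is that the rule and its error text live with the row, so any UI bound to the row can surface the message without duplicating the logic.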


The bad part at the moment is that there are bugs that prevent your error text from properly flowing back to the UI (through the ErrorProvider control or DataGridView) in all cases. In talking to the product team I believe that my issues with the ErrorProvider will be resolved, but that some of my DataGridView issues won’t be fixed (the problems may be a “feature” rather than a bug…). Fortunately I was able to figure out a (somewhat ugly) workaround to make the DataGridView actually work like it should.


The end result is that Chapter 1 shows how you can create a DataSet from a database, then write your business logic in each DataTable. Then you can create a basic Windows Forms UI with virtually no code. It is really impressive!!


But then there’s another issue. Each DataTable comes with a strongly-typed TableAdapter. The TableAdapter is a very nice object that handles all the I/O for the DataTable – understanding how to get the data, fill the DataTable and then update the DataTable into the database. Better still, it includes atomic methods to insert, update and delete rows of data directly – without the need for a DataTable at all. Very cool!


Unfortunately there are no hooks in the TableAdapter by which you can apply business logic when the Insert/Update/Delete methods are called. The end result is that any validation or other business logic is pushed into the UI. That’s terrible!! And yet that’s the way my Chapter 1 works at the moment…


This functionality obviously isn’t going to change in .NET or Visual Studio at this stage of the game, meaning that the TableAdapter is pretty useless as-is.


(to make it worse, the TableAdapter code is in the same physical file as the DataTable code, which makes n-tier implementations seriously hard)


Being a framework kind of guy, my answer to these issues is a framework. Basically, the DataTable is OK, but the TableAdapter needs to be hidden behind a more workable layer of code. What I’m working through at the moment is how much of that code is a framework and how much is created via code-generation (or by hand – ugh).


But what’s really frustrating is that Microsoft could have solved the entire issue by simply declaring and raising three events from their TableAdapter code so it was possible to apply business logic during the insert/update/delete operations… Grumble…
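The fix being wished for is small: a thin wrapper (or three events) in front of the adapter so business logic gets a chance to run — or veto — each operation before it hits the database. A rough Python sketch of that wrapper idea (illustrative only; the real TableAdapter is generated .NET code):

```python
class TableAdapterWrapper:
    """Wraps a raw adapter and raises before-insert/update/delete
    notifications so business logic can validate (or veto) each
    operation -- the hooks the generated TableAdapter lacks."""

    def __init__(self, adapter):
        self._adapter = adapter
        self.inserting = []   # event handlers: fn(row); raise to veto
        self.updating = []
        self.deleting = []

    def _raise_event(self, handlers, row):
        for handler in handlers:
            handler(row)      # a handler raises ValueError to reject the row

    def insert(self, row):
        self._raise_event(self.inserting, row)
        return self._adapter.insert(row)

    def update(self, row):
        self._raise_event(self.updating, row)
        return self._adapter.update(row)

    def delete(self, row):
        self._raise_event(self.deleting, row)
        return self._adapter.delete(row)
```

With this in place, the UI calls the wrapper instead of the adapter, and validation stays in the business layer where it belongs.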


The major bright point out of all this is that I know business objects solve all these issues in a superior manner. Digging into the DataSet world merely broadens my understanding of how business objects make life better.


Though to be fair, the flip side is that creating simple forms to edit basic data in a grid is almost infinitely easier with a DataTable than with an object. Microsoft really nailed the trivial case with the new features - and that has its own value. While frustrating when trying to build interesting forms, the DataTable functionality does mean you can whip out the boring maintenance screens with just a few minutes work each.


Objects on the other hand, make it comparatively easy to build interesting forms, but require more work than I'd like for building the boring maintenance screens...

Monday, August 29, 2005 9:37:15 AM (Central Standard Time, UTC-06:00)
 Monday, August 15, 2005

Listen to my latest interview on .NET Rocks! where I discuss CSLA .NET present and future.

Monday, August 15, 2005 1:38:43 PM (Central Standard Time, UTC-06:00)
 Thursday, August 11, 2005

Like any author who writes practical, pragmatic content I am constantly torn. Torn between showing how to write maintainable code vs fast code. Between distilling the essence of an idea vs showing a complete solution that might obscure that essence.

Just at the moment I'm building a demo application, including its database. Do I create the best database I can, hopefully showing good database design techniques and subsequently showing how to write an app against an "ideal" database? Or do I create the database to look more like the ones I see when I go to clients - so it will have good parts and some parts that are obviously ill-designed. This latter approach allows me to show how to write an app against what I believe to be a more realistic database.

I'm opting for the latter approach. Yet sitting here right now, I know that I'll get lots of emails (some angry) berating me for creating and/or using such a poor database in a demo. "Demos should show the right approach" and so forth. Of course if I were to use a more ideal database design I'd get comments at conferences (some angry) because my demo app "isn't realistic" and only works in "a perfect world".

See, authors can't win. All we can do is choose the sub-group from which we're going to get nasty emails...

But that's OK. The wide diversity of viewpoints in our industry is one of our collective strengths. Pragmatists vie against idealists, performance-hounds vie against those focused on maintainability. Somewhere in the middle is reality – the cold, hard reality that none of us have the time to write performant, maintainable code using perfect implementations of all best practices and known design patterns. Somewhere in the middle are those hard choices each of us makes to balance the cost of development vs the cost of licensing vs the cost of hardware vs the cost of maintenance vs the cost of performance.

I think ultimately that this is why computer books don’t sell in the numbers you’d expect. If there are 7+ million developers, why does a good selling computer book sell around 20,000 copies?

(The exceptions being end-user books and theory books. End-user books because end users just want the answers, and theory books because they often rise above the petty bickering of the real world. Every faction can interpret theory books to say what they like to hear, so everyone likes that kind of book. Theory books virtually always “reinforce” everyone’s different world views.)

I think that’s the reason “good selling” books do so poorly, though. Only a subset or faction within the computer industry will agree with any given book. That faction tends to buy the book. Other factions might buy a few copies, but they get a bad taste in their mouths when reading it, so they don’t recommend or propagate the book. Instead they find other books that do agree with their world view.

Note that I’m not complaining. Not at all!

I’m merely observing that C# people won’t (as a general rule) buy a VB book, and Java people won’t buy a C# book. OO people won’t buy a book on DataSet usage, and people who love DataSets won’t waste money on an OO book. People who love the super-complex demos from Microsoft really hate books that use highly distilled examples, while people who just want the essence of a solution really hate highly complex examples (and thus the books that use them).

As I write the 2nd editions of my Business Objects books I’m simultaneously writing both VB and C# editions. Several people have asked whether it wouldn’t be better to interleave code samples, have both languages in the book or something so that I don’t produce two books. But publishers have tried this. And the reality is that C# people don’t like to buy books that contain any VB code (and vice versa). Mixed language books simply don’t sell as well as single-language books. And like it or not, the idea behind publishing books is to sell them – so we do what sells.

But that solution doesn’t work for realistic vs idealistic database designs – which is my current dilemma. You can’t really write a book twice – once with an ideal database and once with a more realistic one. Nor can you really double the size (and thus cost) of a book to fit both ideas into it at once.

So I’m settling for the only compromise I can find. Parts of my database are pretty well designed. Other tables are obviously very poor. Thus some of my forms are trivial to create because they come almost directly from the “ivory tower”, while other forms rely on more complex and less ideal techniques because they come from the ugly world of reality.

Maybe this way I can get nasty emails from every group :)

Thursday, August 11, 2005 1:54:09 PM (Central Standard Time, UTC-06:00)
 Tuesday, August 9, 2005

Just to put questions to rest and reassure anyone who is concerned, I am actively working on CSLA .NET 2.0 and still intend to release 2nd editions of both the Expert VB and C# Business Objects books next spring - probably March or April.

A few people have asked if this is still the case, since my web site post was a few months ago and I haven't updated it. But it turns out that the post remains pretty accurate and I really don't have much more to say :)

CSLA .NET 2.0 will use generics - primarily for collections. It will support the new databinding, primarily because the new databinding uses the same interfaces as today (yea!), with the exception that ASP.NET Web Forms databinding will require some UI helper classes because it sadly isn't as transparent as Windows Forms.

The big change will be that the DataPortal will support multiple transports (most likely including remoting, asmx and enterprise services). This sets the stage for transparent support of Indigo (now Windows Communication Foundation, WCF) when it comes out later. But it also allows choice between today's three primary RPC technologies in a very transparent manner.

The other big change from the .NET 1.x book versions is that CSLA .NET 2.0 and the books will cover the functionality of CSLA .NET version 1.5 and beyond. In other words, most of the functionality of 1.5 will roll forward, including the RuleManager, context transfer (for globalization), exception handling and so forth. The one big change here is that I don't intend to support in-place sorting of collections, instead opting for a sortable view approach more akin to the way the DataView works against a DataTable. This will be more powerful, simpler and (I think) faster.
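The sortable-view idea is simple to illustrate: rather than reordering the collection itself, a separate view object presents the items in sorted order while the underlying collection stays untouched. A minimal sketch of the concept in Python (illustrative only — not CSLA .NET's actual design):

```python
class SortedView:
    """A read-only, sorted view over a live collection -- the
    DataView-style alternative to sorting the collection in place."""

    def __init__(self, source, key, reverse=False):
        self._source = source      # the underlying list, never mutated
        self._key = key
        self._reverse = reverse

    def __iter__(self):
        # Sort order is computed on demand, so the view automatically
        # reflects items added to or removed from the source.
        return iter(sorted(self._source, key=self._key,
                           reverse=self._reverse))

    def __len__(self):
        return len(self._source)
```

Because the source is never reordered, multiple views with different sort keys can coexist over the same collection — one reason this approach is more powerful than in-place sorting.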

I am also trying to preserve as much backward compatibility as possible while adding the new capabilities. For instance, I intend on keeping BusinessCollectionBase and adding a new BusinessListBase which uses generics. That said, there's no doubt that some changes to existing code will be required - I just don't know exactly what yet...

Of course my primary goal with the original books wasn't to promote CSLA .NET specifically as much as to demonstrate the process (and some specific solutions) of creating a framework in .NET to support distributed OO. That remains my goal, and I think the lack of earth-shattering change helps with this. A good framework shields business applications from changes in the underlying platform. In .NET 2.0 data binding changes, but we largely don't care. Generics get added, and we can use or ignore them without penalty. Indigo will show up and we will be shielded from its changes. ADO.NET 2.0 has some incompatibilities with 1.x and we are largely shielded from those changes. On the whole I am pretty pleased with how little change is actually required moving from .NET 1.x to 2.0.


Tuesday, August 9, 2005 9:26:18 PM (Central Standard Time, UTC-06:00)
 Friday, August 5, 2005

The FCC has decided to allow phone companies to screw us consumers over just like cable companies do. They really should have gone the other way and forced cable to be more like DSL.

It will be interesting to see what my particular phone company does, as I've been with the same ISP for many years. Now I could be forced to switch to my phone company. Odds of them allowing static IP addresses are probably about as good as with cable - which is to say not good...

In short, the FCC probably just cost me several hundred dollars a year either in buying a hosting service for all my domains and sites, or in buying my phone company's corporate level service which is a lot more expensive than my current ISP.

If it comes to that though, at least I'll get faster service. Our cable company provides a lot faster connectivity than my DSL, and if I'm going to have to pay super-high rates to get a static IP address I'd rather go with cable and get the faster speeds...

While the current government might be against raising taxes, they certainly seem to be happy to help corporations get more of our money...

Update: Here's an actual news article on the topic as well.

Friday, August 5, 2005 11:01:43 AM (Central Standard Time, UTC-06:00)

Ted Neward believes that “distributed objects” are, and always have been, a bad idea, and John Cavnar-Johnson tends to agree with him.


I also agree. "Distributed objects" are bad.


Shocked? You shouldn’t be. The term “distributed objects” is most commonly used to refer to one particular type of n-tier implementation: the thin client model.


I discussed this model in a previous post, and you’ll note that I didn’t paint it in an overly favorable light. That’s because the model is a very poor one.


The idea of building a true object-oriented model on a server, where the objects never leave that server is absurd. The Presentation layer still needs all the data so it can be shown to the user and so the user can interact with it in some manner. This means that the “objects” in the middle must convert themselves into raw data for use by the Presentation layer.


And of course the Presentation layer needs to do something with the data. The ideal is that the Presentation layer has no logic at all, that it is just a pass-through between the user and the business objects. But the reality is that the Presentation layer ends up with some logic as well – if only to give the user a half-way decent experience. So the Presentation layer often needs to convert the raw data into some useful data structures or objects.


The end result with “distributed objects” is that there’s typically duplicated business logic (at least validation) between the Presentation and Business layers. The Presentation layer is also unnecessarily complicated by the need to put the data into some useful structure.


And the Business layer is complicated as well. Think about it. Your typical OO model includes a set of objects designed using OOD sitting on top of an ORM (object-relational mapping) layer. I typically call this the Data Access layer. That Data Access layer then interacts with the real Data layer.


But in a “distributed object” model, there’s the need to convert the objects’ data back into raw data – often quasi-relational or hierarchical – so it can be transferred efficiently to the Presentation layer. This is really a whole new logical layer very akin to the ORM layer, except that it maps between the Presentation layer’s data structures and the objects rather than between the Data layer’s structures and the objects.


What a mess!


Ted is absolutely right when he suggests that “distributed objects” should be discarded. If you are really stuck on having your business logic “centralized” on a server then service-orientation is a better approach. Using formalized message-based communication between the client application and your service-oriented (hence procedural, not object-oriented) server application is a better answer.


Note that the terminology changed radically! Now you are no longer building one application, but rather you are building at least two applications that happen to interact via messages. Your server doesn't pretend to be object-oriented, but rather is service-oriented - which is a code phrase for procedural programming. This is a totally different mindset from “distributed objects”, but it is far better.


Of course another model is to use mobile objects or mobile agents. This is the model promoted in my Business Objects books and enabled by CSLA .NET. In the mobile object model your Business layer exists on both the client machine (or web server) and application server. The objects physically move between the two machines – running on the client when user interaction is required and running on the application server to interact with the database.


The mobile object model allows you to continue to build a single application (rather than 2+ applications with SO), but overcomes the nasty limitations of the “distributed object” model.

Friday, August 5, 2005 9:12:29 AM (Central Standard Time, UTC-06:00)
 Tuesday, July 26, 2005
Here's an article that appears to provide a really good overview of the whole mobile agent/object concept. I've only skimmed through it, but the author appears to have a good grasp on the concepts and portrays them with some good diagrams.
Monday, July 25, 2005 11:16:49 PM (Central Standard Time, UTC-06:00)
 Sunday, July 24, 2005

Mike has requested my thoughts on 3-tier and the web – a topic I avoided in my previous couple entries (1 and 2) because I don’t find it as interesting as a smart/intelligent client model. But he’s right, the web is widely used and a lot of poor people are stuck building business software in that environment, so here’s the extension of the previous couple entries into the web environment.


In my view the web server is an application server, pure and simple. Also, in the web world it is impractical to run the Business layer on the client because the client is a very limited environment. This is largely why the web is far less interesting.


To discuss things in the web world I break the “Presentation” layer into two parts – Presentation and UI. This new Presentation layer is purely responsible for display and input to/from the user – it is the stuff that runs on the browser terminal. The UI layer is responsible for all actual user interaction – navigation, etc. It is the stuff that runs on the web server: your aspx pages in .NET.


In web applications most people consciously (or unconsciously) duplicate most validation code into the Presentation layer so they can get it to run in the browser. This is expensive to create and maintain, but it is an unfortunate evil required to have a half-way decent user experience in the web environment. You must still have that logic in your actual Business layer of course, because you can never trust the browser - it is too easily bypassed (Greasemonkey anyone?). This is just the way it is on the web, and will be until we get browsers that can run complete code solutions in .NET and/or Java (that's sarcasm btw).


On the server side, the web server IS an application server. It fills the exact same role of the mainframe or minicomputer over the past 30 years of computing. For "interactive" applications, it is preferable to run the UI layer, Business layer and Data Access layer all on the web server. This is the simplest (and thus cheapest) model, and provides the best performance[1]. It can also provide very good scalability because it is relatively trivial to create a web farm to scale out to many servers. By creating a web farm you also get very good fault tolerance at a low price-point. Using ISA as a reverse proxy above the web farm you can get good security.


In many organizations the reverse proxy idea isn’t acceptable (not being a security expert I can’t say why…) and so they have a policy saying that the web server is never allowed to interact directly with the database server – thus forcing the existence of an application server that at a minimum runs the Data Access layer. Typically this application server is behind a second firewall. While this security approach hurts performance (often by as much as 50%), it is relatively easily achieved with CSLA .NET or similar architecture/frameworks.


In other situations people prefer to put the Business layer and Data Access layer on the application server behind the second firewall. This means that the web server only runs the UI layer. Any business processing, validation, etc. must be deferred across the network to the application server. This has a much higher impact on performance (in a bad way).


However, this latter approach can have a positive scalability impact in certain applications. Specifically applications where there’s not much interactive content, but instead there’s a lot of read-only content. Most read-only content (by definition) has no business logic and can often be served directly from the UI layer. In such applications the IO load for the read-only content can be quite enough to keep the web server very busy. By offloading all business processing to an application server overall scalability may be improved.


Of course this only really works if the interactive (OLTP) portions of the application are quite limited in comparison to the read-only portions.


Also note that this latter approach suffers from the same drawbacks as the thin client model discussed in my previous post. The most notable problem is that you must come up with a way to do non-chatty communication between the UI layer and the Business layer, without compromising either layer. This is historically very difficult to pull off. What usually happens is that the “business objects” in the Business layer require code to externalize their state (data) into a tabular format such as a DataSet so the UI layer can easily use the data. Of course externalizing object state breaks encapsulation unless it is done with great care, so this is an area requiring extra attention. The typical end result is not objects in a real OO sense, but rather “objects” comprised of a set of atomic, stateless methods. At this point you don’t have objects at all – you have an API.


In the case of CSLA .NET, I apply the mobile object model to this environment. I personally believe it makes things better since it gives you the flexibility to run some of your business logic on the web application server and some on the pure application server as appropriate. Since the Business layer is installed on both the web and application servers, your objects can run in either place as needed.


In short, to make a good web app it is almost required that you compromise the integrity of your layers and duplicate some business logic into the Presentation layer. It sucks, but that's life in the wild world of the web. If you can put your UI, Business and Data Access layers on the web application server that’s best. If you can’t (typically due to security) then move only the Data Access layer and keep both UI and Business layers on the web application server. Finally, if you must put the Business layer on a separate application server I prefer to use a mobile object model for flexibility, but recognize that a pure API model on the application server will scale higher and is often required for applications with truly large numbers of concurrent users (like 2000+).



[1] As someone in a previous post indirectly noted, there’s a relationship between performance and scalability. Performance is the response time of the system for a user. Scalability is what happens to performance as the number of users and/or transactions is increased.

Sunday, July 24, 2005 8:47:26 AM (Central Standard Time, UTC-06:00)