Rockford Lhotka

 Monday, August 29, 2005

I just got back from a week’s vacation/holiday in Great Britain and I feel very refreshed.

 

And that’s good, given that just before going to the UK I wrapped up the first draft of Chapter 1 in the new book Billy Hollis and I are writing. As you have probably gathered by now, this book uses DataSet objects rather than my preferred use of business objects.

 

I wanted to write a book using the DataSet because I put a lot of time and energy into lobbying Microsoft to make certain enhancements to the way the objects work and how Visual Studio works with them. Specifically I wanted a way to use DataSet objects as a business layer – both in 2-tier and n-tier scenarios.

 

Also, I wanted to write a book using Windows Forms rather than the web. This reflects my bias of course, but also reflects the reality that intelligent/smart client development is making a serious comeback as businesses realize that deployment is no longer the issue it was with COM and that development of a business application in Windows Forms is a lot less expensive than with the web.

 

The book is geared toward professional developers, so we assume the reader has a clue. The expectation is that if you are a professional business developer (a Mort) who uses VB6, Java, VB.NET, C# or whatever, you’ll be able to jump in and be productive without us explaining the trivial stuff.

 

So Chapter 1 jumps in and creates the sample app to be used throughout the book. The chapter leverages all the features Microsoft has built into the new DataSet and its Windows Forms integration – thus showing the good, the bad and the ugly all at once.

 

Using partial classes you really can embed most of your validation and other logic into the DataTable objects. When data is changed at a column or row level you can act on that changed data. As you validate the data you can provide text indicating why a value is invalid.
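
To make that concrete, here’s a rough sketch of the pattern (my illustration, not code from the book; with a Visual Studio 2005 typed DataSet this logic would go into a partial class for the generated DataTable, but a plain subclass keeps the sketch self-contained, and the column name is made up):

```csharp
using System.Data;

// Validation embedded in the table itself: react to column-level changes and
// record why a value is invalid so the UI can display it later.
public class CustomerTable : DataTable
{
    public CustomerTable()
    {
        Columns.Add("Name", typeof(string));
    }

    protected override void OnColumnChanging(DataColumnChangeEventArgs e)
    {
        base.OnColumnChanging(e);

        if (e.Column.ColumnName == "Name")
        {
            string proposed = e.ProposedValue as string;
            if (string.IsNullOrEmpty(proposed))
                e.Row.SetColumnError(e.Column, "Name is required");
            else
                e.Row.SetColumnError(e.Column, string.Empty);
        }
    }
}
```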

 

The bad part at the moment is that there are bugs that prevent your error text from properly flowing back to the UI (through the ErrorProvider control or DataGridView) in all cases. In talking to the product team I believe that my issues with the ErrorProvider will be resolved, but that some of my DataGridView issues won’t be fixed (the problems may be a “feature” rather than a bug…). Fortunately I was able to figure out a (somewhat ugly) workaround to make the DataGridView actually work like it should.

 

The end result is that Chapter 1 shows how you can create a DataSet from a database, then write your business logic in each DataTable. Then you can create a basic Windows Forms UI with virtually no code. It is really impressive!!

 

But then there’s another issue. Each DataTable comes with a strongly-typed TableAdapter. The TableAdapter is a very nice object that handles all the I/O for the DataTable – understanding how to get the data, fill the DataTable and then update the DataTable into the database. Better still, it includes atomic methods to insert, update and delete rows of data directly – without the need for a DataTable at all. Very cool!
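
For anyone who hasn’t played with the designer yet, typical usage looks something like this (all the names here are hypothetical, and the exact signature of the Insert method depends on your table’s columns):

```csharp
// Fill and update through the typed DataTable...
CustomersTableAdapter adapter = new CustomersTableAdapter();
NorthwindDataSet.CustomersDataTable customers = new NorthwindDataSet.CustomersDataTable();

adapter.Fill(customers);           // load rows from the database
customers[0].City = "London";      // edit via the strongly-typed row
adapter.Update(customers);         // persist all pending changes

// ...or use the atomic methods directly, with no DataTable involved at all.
adapter.Insert("ALFKI", "Alfreds Futterkiste", "Berlin");
```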

 

Unfortunately there are no hooks in the TableAdapter by which you can apply business logic when the Insert/Update/Delete methods are called. The end result is that any validation or other business logic is pushed into the UI. That’s terrible!! And yet that’s the way my Chapter 1 works at the moment…

 

This functionality obviously isn’t going to change in .NET or Visual Studio at this stage of the game, meaning that the TableAdapter is pretty useless as-is.

 

(to make it worse, the TableAdapter code is in the same physical file as the DataTable code, which makes n-tier implementations seriously hard)

 

Being a framework kind of guy, my answer to these issues is a framework. Basically, the DataTable is OK, but the TableAdapter needs to be hidden behind a more workable layer of code. What I’m working through at the moment is how much of that code is a framework and how much is created via code-generation (or by hand – ugh).
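
To give a feel for what I mean, here is a rough sketch of such a layer (my illustration only, not the framework itself; the generated types are hypothetical). The point is simply that the TableAdapter is no longer called directly, so there’s a place for business logic to run before the data I/O happens:

```csharp
using System;
using System.Data;

public class CustomerData
{
    private CustomersTableAdapter _adapter = new CustomersTableAdapter();

    public void Save(NorthwindDataSet.CustomersDataTable customers)
    {
        foreach (DataRow row in customers.Rows)
        {
            // The hook the TableAdapter doesn't offer: apply business logic
            // per row before anything reaches the database.
            if ((row.RowState == DataRowState.Added ||
                 row.RowState == DataRowState.Modified) && row.HasErrors)
            {
                throw new InvalidOperationException(
                    "Row failed validation: " + row.RowError);
            }
        }
        _adapter.Update(customers);
    }
}
```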

 

But what’s really frustrating is that Microsoft could have solved the entire issue by simply declaring and raising three events from their TableAdapter code so it was possible to apply business logic during the insert/update/delete operations… Grumble…

 

The major bright point out of all this is that I know business objects solve all these issues in a superior manner. Digging into the DataSet world merely broadens my understanding of how business objects make life better.

 

Though to be fair, the flip side is that creating simple forms to edit basic data in a grid is almost infinitely easier with a DataTable than with an object. Microsoft really nailed the trivial case with the new features - and that has its own value. While frustrating when trying to build interesting forms, the DataTable functionality does mean you can whip out the boring maintenance screens with just a few minutes work each.

 

Objects on the other hand, make it comparatively easy to build interesting forms, but require more work than I'd like for building the boring maintenance screens...

Monday, August 29, 2005 9:37:15 AM (Central Standard Time, UTC-06:00)
 Monday, August 15, 2005

Listen to my latest interview on .NET Rocks! where I discuss CSLA .NET present and future.

Monday, August 15, 2005 1:38:43 PM (Central Standard Time, UTC-06:00)
 Thursday, August 11, 2005

Like any author who writes practical, pragmatic content I am constantly torn. Torn between showing how to write maintainable code vs fast code. Between distilling the essence of an idea vs showing a complete solution that might obscure that essence.

Just at the moment I'm building a demo application, including its database. Do I create the best database I can, hopefully showing good database design techniques and subsequently showing how to write an app against an "ideal" database? Or do I create the database to look more like the ones I see when I go to clients - so it will have good parts and some parts that are obviously ill-designed. This latter approach allows me to show how to write an app against what I believe to be a more realistic database.

I'm opting for the latter approach. Yet sitting here right now, I know that I'll get lots of emails (some angry) berating me for creating and/or using such a poor database in a demo. "Demos should show the right approach" and so forth. Of course if I were to use a more ideal database design I'd get comments at conferences (some angry) because my demo app "isn't realistic" and only works in "a perfect world".

See, authors can't win. All we can do is choose the sub-group from which we're going to get nasty emails...

But that's OK. The wide diversity of viewpoints in our industry is one of our collective strengths. Pragmatists vie against idealists, performance-hounds vie against those focused on maintainability. Somewhere in the middle is reality – the cold, hard reality that none of us have the time to write performant, maintainable code using perfect implementations of all best practices and known design patterns. Somewhere in the middle are those hard choices each of us makes to balance the cost of development vs the cost of licensing vs the cost of hardware vs the cost of maintenance vs the cost of performance.

I think ultimately that this is why computer books don’t sell in the numbers you’d expect. If there are 7+ million developers, why does a good selling computer book sell around 20,000 copies?

(The exceptions being end-user books and theory books. End-user books because end users just want the answers, and theory books because they often rise above the petty bickering of the real world. Every faction can interpret theory books to say what they like to hear, so everyone likes that kind of book. Theory books virtually always “reinforce” everyone’s different world views.)

I think the reason “good selling” books do so poorly, though, is that only a subset or faction within the computer industry will agree with any given book. That faction tends to buy the book. Other factions might buy a few copies, but they get a bad taste in their mouths when reading it, so they don’t recommend or propagate the book. Instead they find other books that do agree with their world view.

Note that I’m not complaining. Not at all!

I’m merely observing that C# people won’t (as a general rule) buy a VB book, and Java people won’t buy a C# book. OO people won’t buy a book on DataSet usage, and people who love DataSets won’t waste money on an OO book. People who love the super-complex demos from Microsoft really hate books that use highly distilled examples, while people who just want the essence of a solution really hate highly complex examples (and thus the books that use them).

As I write the 2nd editions of my Business Objects books I’m simultaneously writing both VB and C# editions. Several people have asked whether it wouldn’t be better to interleave code samples, have both languages in the book or something so that I don’t produce two books. But publishers have tried this. And the reality is that C# people don’t like to buy books that contain any VB code (and vice versa). Mixed-language books simply don’t sell as well as single-language books. And like it or not, the idea behind publishing books is to sell them – so we do what sells.

But that solution doesn’t work for realistic vs idealistic database designs – which is my current dilemma. You can’t really write a book twice – once with an ideal database and once with a more realistic one. Nor can you really double the size (and thus cost) of a book to fit both ideas into it at once.

So I’m settling for the only compromise I can find. Parts of my database are pretty well designed. Other tables are obviously very poor. Thus some of my forms are trivial to create because they come almost directly from the “ivory tower”, while other forms rely on more complex and less ideal techniques because they come from the ugly world of reality.

Maybe this way I can get nasty emails from every group :)

Thursday, August 11, 2005 1:54:09 PM (Central Standard Time, UTC-06:00)
 Tuesday, August 9, 2005

Just to put questions to rest and reassure anyone who is concerned, I am actively working on CSLA .NET 2.0 and still intend to release 2nd editions of both the Expert VB and C# Business Objects books next spring - probably March or April.

A few people have asked if this is still the case, since my web site post was a few months ago and I haven't updated it. But it turns out that the post remains pretty accurate and I really don't have much more to say :)

CSLA .NET 2.0 will use generics - primarily for collections. It will support the new databinding, primarily because the new databinding uses the same interfaces as today (yea!), with the exception that ASP.NET Web Forms databinding will require some UI helper classes because it sadly isn't as transparent as Windows Forms.

The big change will be that the DataPortal will support multiple transports (most likely including remoting, asmx and enterprise services). This sets the stage for transparent support of Indigo (now Windows Communication Foundation, WCF) when it comes out later. But it also allows choice between today's three primary RPC technologies in a very transparent manner.

The other big change from the .NET 1.x book versions is that CSLA .NET 2.0 and the books will be covering CSLA .NET version 1.5 plus. In other words, most of the functionality of 1.5 will roll forward, including the RuleManager, context transfer (for globalization), exception handling and so forth. The one big change here is that I don't intend to support in-place sorting of collections, instead opting for a sortable view approach more akin to the way the DataView works against a DataTable. This will be more powerful, simpler and (I think) faster.
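
For those who haven’t used the DataView approach I’m referring to, the idea is simply that sorting lives in a view over the data rather than in the data itself (customersTable and dataGridView1 are assumed to exist already):

```csharp
// Sort via a view; the underlying DataTable's row order is untouched.
DataView view = new DataView(customersTable);
view.Sort = "Name DESC";
dataGridView1.DataSource = view;
```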

I am also trying to preserve as much backward compatibility as possible while adding the new capabilities. For instance, I intend on keeping BusinessCollectionBase and adding a new BusinessListBase which uses generics. That said, there's no doubt that some changes to existing code will be required - I just don't know exactly what yet...

Of course my primary goal with the original books wasn't to promote CSLA .NET specifically as much as to demonstrate the process (and some specific solutions) of creating a framework in .NET to support distributed OO. That remains my goal, and I think the lack of earth-shattering change helps with this. A good framework shields business applications from changes in the underlying platform. In .NET 2.0 data binding changes, but we largely don't care. Generics get added, and we can use or ignore them without penalty. Indigo will show up and we will be shielded from its changes. ADO.NET 2.0 has some incompatibilities with 1.x and we are largely shielded from those changes. On the whole I am pretty pleased with how little change is actually required moving from .NET 1.x to 2.0.

 

Tuesday, August 9, 2005 9:26:18 PM (Central Standard Time, UTC-06:00)
 Friday, August 5, 2005

The FCC has decided to allow phone companies to screw us consumers over just like cable companies do. They really should have gone the other way and forced cable to be more like DSL.

It will be interesting to see what my particular phone company does, as I've been with the same ISP for many years. Now I could be forced to switch to my phone company. Odds of them allowing static IP addresses are probably about as good as with cable - which is to say not good...

In short, the FCC probably just cost me several hundred dollars a year, either in buying a hosting service for all my domains and sites, or in buying my phone company's corporate-level service, which is a lot more expensive than my current ISP.

If it comes to that though, at least I'll get faster service. Our cable company provides a lot faster connectivity than my DSL, and if I'm going to have to pay super-high rates to get a static IP address I'd rather go with cable and get the faster speeds...

While the current government might be against raising taxes, they certainly seem to be happy to help corporations get more of our money...

Update: Here's an actual news article on the topic as well.

Friday, August 5, 2005 11:01:43 AM (Central Standard Time, UTC-06:00)

Ted Neward believes that “distributed objects” are, and always have been, a bad idea, and John Cavnar-Johnson tends to agree with him.

 

I also agree. "Distributed objects" are bad.

 

Shocked? You shouldn’t be. The term “distributed objects” is most commonly used to refer to one particular type of n-tier implementation: the thin client model.

 

I discussed this model in a previous post, and you’ll note that I didn’t paint it in an overly favorable light. That’s because the model is a very poor one.

 

The idea of building a true object-oriented model on a server, where the objects never leave that server is absurd. The Presentation layer still needs all the data so it can be shown to the user and so the user can interact with it in some manner. This means that the “objects” in the middle must convert themselves into raw data for use by the Presentation layer.

 

And of course the Presentation layer needs to do something with the data. The ideal is that the Presentation layer has no logic at all, that it is just a pass-through between the user and the business objects. But the reality is that the Presentation layer ends up with some logic as well – if only to give the user a half-way decent experience. So the Presentation layer often needs to convert the raw data into some useful data structures or objects.

 

The end result with “distributed objects” is that there’s typically duplicated business logic (at least validation) between the Presentation and Business layers. The Presentation layer is also unnecessarily complicated by the need to put the data into some useful structure.

 

And the Business layer is complicated as well. Think about it. Your typical OO model includes a set of objects designed using OOD sitting on top of an ORM (object-relational mapping) layer. I typically call this the Data Access layer. That Data Access layer then interacts with the real Data layer.

 

But in a “distributed object” model, there’s the need to convert the objects’ data back into raw data – often quasi-relational or hierarchical – so it can be transferred efficiently to the Presentation layer. This is really a whole new logical layer very akin to the ORM layer, except that it maps between the Presentation layer’s data structures and the objects rather than between the Data layer’s structures and the objects.

 

What a mess!

 

Ted is absolutely right when he suggests that “distributed objects” should be discarded. If you are really stuck on having your business logic “centralized” on a server then service-orientation is a better approach. Using formalized message-based communication between the client application and your service-oriented (hence procedural, not object-oriented) server application is a better answer.

 

Note that the terminology changed radically! Now you are no longer building one application, but rather you are building at least two applications that happen to interact via messages. Your server doesn't pretend to be object-oriented, but rather is service-oriented - which is a code phrase for procedural programming. This is a totally different mindset from “distributed objects”, but it is far better.

 

Of course another model is to use mobile objects or mobile agents. This is the model promoted in my Business Objects books and enabled by CSLA .NET. In the mobile object model your Business layer exists on both the client machine (or web server) and application server. The objects physically move between the two machines – running on the client when user interaction is required and running on the application server to interact with the database.

 

The mobile object model allows you to continue to build a single application (rather than 2+ applications with SO), but overcomes the nasty limitations of the “distributed object” model.

Friday, August 5, 2005 9:12:29 AM (Central Standard Time, UTC-06:00)
 Tuesday, July 26, 2005
Here's an article that appears to provide a really good overview of the whole mobile agent/object concept. I've only skimmed through it, but the author appears to have a good grasp on the concepts and portrays them with some good diagrams.
Monday, July 25, 2005 11:16:49 PM (Central Standard Time, UTC-06:00)
 Sunday, July 24, 2005

Mike has requested my thoughts on 3-tier and the web – a topic I avoided in my previous couple entries (1 and 2) because I don’t find it as interesting as a smart/intelligent client model. But he’s right, the web is widely used and a lot of poor people are stuck building business software in that environment, so here’s the extension of the previous couple entries into the web environment.

 

In my view the web server is an application server, pure and simple. Also, in the web world it is impractical to run the Business layer on the client because the client is a very limited environment. This is largely why the web is far less interesting.

 

To discuss things in the web world I break the “Presentation” layer into two parts – Presentation and UI. This new Presentation layer is purely responsible for display and input to/from the user – it is the stuff that runs on the browser terminal. The UI layer is responsible for all actual user interaction – navigation, etc. It is the stuff that runs on the web server: your aspx pages in .NET.

 

In web applications most people consciously (or unconsciously) duplicate most validation code into the Presentation layer so they can get it to run in the browser. This duplication is expensive to create and maintain, but it is an unfortunate evil required to have a half-way decent user experience in the web environment. You must still have that logic in your actual Business layer of course, because you can never trust the browser - it is too easily bypassed (Greasemonkey anyone?). This is just the way it is on the web, and will be until we get browsers that can run complete code solutions in .NET and/or Java (that's sarcasm btw).

 

On the server side, the web server IS an application server. It fills the exact same role of the mainframe or minicomputer over the past 30 years of computing. For "interactive" applications, it is preferable to run the UI layer, Business layer and Data Access layer all on the web server. This is the simplest (and thus cheapest) model, and provides the best performance[1]. It can also provide very good scalability because it is relatively trivial to create a web farm to scale out to many servers. By creating a web farm you also get very good fault tolerance at a low price-point. Using ISA as a reverse proxy above the web farm you can get good security.

 

In many organizations the reverse proxy idea isn’t acceptable (not being a security expert I can’t say why…) and so they have a policy saying that the web server is never allowed to interact directly with the database server – thus forcing the existence of an application server that at a minimum runs the Data Access layer. Typically this application server is behind a second firewall. While this security approach hurts performance (often by as much as 50%), it is relatively easily achieved with CSLA .NET or similar architecture/frameworks.

 

In other situations people prefer to put the Business layer and Data Access layer on the application server behind the second firewall. This means that the web server only runs the UI layer. Any business processing, validation, etc. must be deferred across the network to the application server. This has a much higher impact on performance (in a bad way).

 

However, this latter approach can have a positive scalability impact in certain applications. Specifically applications where there’s not much interactive content, but instead there’s a lot of read-only content. Most read-only content (by definition) has no business logic and can often be served directly from the UI layer. In such applications the IO load for the read-only content can be quite enough to keep the web server very busy. By offloading all business processing to an application server overall scalability may be improved.

 

Of course this only really works if the interactive (OLTP) portions of the application are quite limited in comparison to the read-only portions.

 

Also note that this latter approach suffers from the same drawbacks as the thin client model discussed in my previous post. The most notable problem is that you must come up with a way to do non-chatty communication between the UI layer and the Business layer, without compromising either layer. This is historically very difficult to pull off. What usually happens is that the “business objects” in the Business layer require code to externalize their state (data) into a tabular format such as a DataSet so the UI layer can easily use the data. Of course externalizing object state breaks encapsulation unless it is done with great care, so this is an area requiring extra attention. The typical end result is not objects in a real OO sense, but rather “objects” comprised of a set of atomic, stateless methods. At this point you don’t have objects at all – you have an API.
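
To illustrate the distinction, the end state I’m describing tends to look something like this (a deliberately bare sketch, not anyone’s real API):

```csharp
using System.Data;

// Atomic, stateless methods shuttling tabular state between the layers -
// an API rather than an object model.
public class CustomerFacade
{
    public DataSet GetCustomer(int id)
    {
        DataSet state = new DataSet();
        // ...load the customer's data into tables in 'state' here...
        return state;
    }

    public void SaveCustomer(DataSet state)
    {
        // ...validate and persist the tabular state here...
    }
}
```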

 

In the case of CSLA .NET, I apply the mobile object model to this environment. I personally believe it makes things better since it gives you the flexibility to run some of your business logic on the web application server and some on the pure application server as appropriate. Since the Business layer is installed on both the web and application servers, your objects can run in either place as needed.

 

In short, to make a good web app you are almost required to compromise the integrity of your layers and duplicate some business logic into the Presentation layer. It sucks, but it's life in the wild world of the web. If you can put your UI, Business and Data Access layers on the web application server that’s best. If you can’t (typically due to security) then move only the Data Access layer and keep both UI and Business layers on the web application server. Finally, if you must put the Business layer on a separate application server I prefer to use a mobile object model for flexibility, but recognize that a pure API model on the application server will scale higher and is often required for applications with truly large numbers of concurrent users (like 2000+).

 

 

[1] As someone in a previous post indirectly noted, there’s a relationship between performance and scalability. Performance is the response time of the system for a user. Scalability is what happens to performance as the number of users and/or transactions is increased.

Sunday, July 24, 2005 8:47:26 AM (Central Standard Time, UTC-06:00)
 Saturday, July 23, 2005

In my last post I talked about logical layers as compared to physical tiers. It may be the case that that post (and this one) are too obvious or basic. But I gotta say that I consistently am asked about these topics at conferences, user groups and via email. The reality is that none of this is all that obvious or clear to the vast majority of people in our industry. Even for those that truly grok the ideas, there’s far from universal agreement on how an application should be layered or how those layers should be deployed onto tiers.

 

In one comment on my previous post Magnus points out that my portrayal of the application server merely as a place for the Data Access layer flies in the face of Magnus’ understanding of n-tier models. Rather, Magnus (and many other people) is used to putting both the Business and Data Access layers on the application server, with only the Presentation layer on the client workstation (or presumably the web server in the case of a web application, though the comment didn’t make that clear).

 

The reality is that there are three primary models to consider in the smart/rich/intelligent client space. There are loose analogies in the web world as well, but personally I don’t find that nearly as interesting, so I’m going to focus on the intelligent client scenarios here.

 

Also one quick reminder – tiers should be avoided. This whole post assumes you’ve justified using a 3-tier model to obtain scalability or security as per my previous post. If you don’t need 3-tiers, don’t use them – and then you can safely ignore this whole entry :)

 

There’s the thick client model, where the Presentation and Business layers are on the client, and the Data Access is on the application server. Then there’s the thin client model where only the Presentation layer is on the client, with the Business and Data Access layers on the application server. Finally there’s the mobile object model, where the Presentation and Business layers are on the client, and the Business and Data Access layers are on the application server. (Yes, the Business layer is in two places) This last model is the one I discuss in my Expert VB.NET and C# Business Objects books and which is supported by my CSLA .NET framework.

 

The benefit to the thick client model is that we are able to provide the user with a highly interactive and responsive user experience, and we are able to fully exploit the resources of the client workstation (specifically memory and CPU). At the same time, having the Data Access layer on the application server gives us database connection pooling. This is a very high scaling solution, with a comparatively low cost, because we are able to exploit the strengths of the client, application and database servers very effectively. Moreover, the user experience is very good and development costs are relatively low (because we can use the highly productive Windows Forms technology).

 

The drawback to the thick client model is a loss of flexibility – specifically when it comes to process-oriented tasks. Most applications have large segments of OLTP (online transaction processing) functionality where a highly responsive and interactive user experience is of great value. However, most applications also have some important segments of process-oriented tasks that don’t require any user interaction. In most cases these tasks are best performed on the application server, or perhaps even directly in the database server itself. This is because process-oriented tasks tend to be very data intensive and non-interactive, so the closer we can do them to the database the better. In a thick client model there’s no natural home for process-oriented code near the database – the Business layer is way out on the client after all…

 

Another perceived drawback to the thick client is deployment. I dismiss this however, given .NET’s no-touch deployment options today, and ClickOnce coming in 2005. Additionally, any intelligent client application requires deployment of our code – the Presentation layer at least. Once you solve deployment of one layer you can deploy other layers as easily, so this whole deployment thing is a non-issue in my mind.

 

In short, the thick client model is really nice for interactive applications, but quite poor for process-oriented applications.

 

The benefit to the thin client model is that we have greater control over the environment into which the Business and Data Access layers are deployed. We can deploy them onto large servers, multiple servers, across disparate geographic locations, etc. Another benefit to this model is that it has a natural home for process-oriented code, since the Business layer is already on the application server and thus is close to the database.

 

Unfortunately history has shown that the thin client model is severely disadvantaged compared to the other two models. The first disadvantage is scalability in relationship to cost.  With either of the other two models as you add more users you intrinsically add more memory and CPU to your overall system, because you are leveraging the power of the client workstation. With a thin client model all the processing is on the servers, and so client workstations add virtually no value at all – their memory and CPU is wasted. Any scalability comes from adding larger or more numerous server hardware rather than by adding cheaper (and already present) client workstations.

 

The other key drawback to the thin client model is the user experience. Unless you are willing to make “chatty” calls from the thin Presentation layer to the Business layer across the network on a continual basis (which is obviously absurd), the user experience will not be interactive or responsive. By definition the Business layer is on a remote server, so the user’s input can’t be validated or processed without first sending it across the network. The end result is roughly equivalent to the mainframe user experiences users had with 3270 terminals, or the experience they get on the web in many cases. Really not what we should expect from an “intelligent” client…

 

Of course deployment remains a potential concern in this model, because the Presentation layer must still be deployed to the client. Again, I dismiss this as a main issue any longer due to no-touch deployment and ClickOnce.

 

In summary, the thin client model is really nice for process-oriented (non-interactive) applications, but is quite inferior for interactive applications.

 

This brings us to the mobile object model. You’ll note that neither the thick client nor thin client model is optimal, because almost all applications have some interactive and some non-interactive (process-oriented) functionality. Neither of the two “purist” models really addresses both requirements effectively. This is why I am such a fan of the mobile object (or mobile agent, or distributed OO) model, as it provides a compromise solution. I find this idea so compelling that it is the basis for my books.

 

The mobile object model literally has us deploy the Business layer to both the client and application server. Given no-touch deployment and/or ClickOnce this is quite practical to achieve in .NET (and, interestingly enough, in Java). Coupled with .NET’s ability to pass objects across the network by value (another ability shared with Java), all the heavy lifting to make this concept work is actually handled by .NET itself, leaving us to merely enjoy the benefits.
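
The mechanics are worth a quick illustration: mark the business class as serializable and .NET can copy the object’s state to the other machine, while the behavior travels because the same assembly is deployed on both ends. A minimal, hypothetical example:

```csharp
using System;

[Serializable]
public class CustomerEdit
{
    private string _name = string.Empty;

    public string Name
    {
        get { return _name; }
        set
        {
            // The same rule runs wherever the object happens to be executing -
            // client workstation or application server.
            if (value == null || value.Length == 0)
                throw new ArgumentException("Name is required");
            _name = value;
        }
    }
}
```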

 

The end result is that the client has the Presentation and Business layers, meaning we get all the benefits of the thick client model. The user experience is responsive and highly interactive. Also we are able to exploit the power of the client workstation, offering optimal scalability at a low cost point.

 

But where this gets really exciting is the flexibility offered. Since the Business layer also runs on the application server, we have all the benefits of the thin client model. Any process-oriented tasks can be performed by objects running on the application server, meaning that all the power of the thin client model is at our disposal as well.

 

The drawback to the mobile object approach is complexity. Unless you have a framework to handle the details of moving an object to the client and application server as needed this model can be hard to implement. However, given a framework that supports the concept the mobile object approach is no more complex than either the thick or thin client models.

 

In summary, the mobile object model is great for both interactive and non-interactive applications. I consider it a “best of both worlds” model and CSLA .NET is specifically designed to make this model comparatively easy to implement in a business application.

 

At the risk of being a bit esoteric, consider the broader possibilities of a mobile object environment. Really a client application or an application server (Enterprise Services or IIS) are merely hosts for our objects. Hosts provide resources that our objects can use. The client “host” provides access to the user resource, while a typical application server “host” provides access to the database resource. In some applications you can easily envision other hosts such as a batch processing server that provides access to a high powered CPU resource or a large memory resource.

 

Given a true mobile object environment, objects would be free to move to a host that offers the resources an object requires at any point in time. This is very akin to grid computing. In the mobile object world objects maintain both data and behavior and merely move to physical locations in order to access resources. Raw data never moves across the network (except between the Data Access and Data Storage layers), because data without context (behavior) is meaningless.

 

Of course some very large systems have been built following both the thick client and thin client models. It would be foolish to say that either is fatally flawed. But it is my opinion that neither is optimal, and that a mobile object approach is the way to go.

Saturday, July 23, 2005 11:12:25 AM (Central Standard Time, UTC-06:00)