Rockford Lhotka

 Thursday, July 21, 2005

I am often asked whether n-tier (where n>=3) is always the best way to go when building software.


Of course the answer is no. In fact, it is more likely that n-tier is not the way to go!


By the way, this isn’t the first time I’ve discussed this topic – you’ll find previous entries on this blog and an article where I’ve covered much of the same material. Of course I also cover it rather a lot in my Expert VB.NET and C# Business Objects books.


Before proceeding further however, I need to get some terminology out of the way. There’s a huge difference between logical tiers and physical tiers. Personally I typically refer to logical tiers as layers and physical tiers as tiers to avoid confusion.


Logical layers are merely a way of organizing your code. Typical layers include Presentation, Business and Data – the same as the traditional 3-tier model. But when we’re talking about layers, we’re only talking about the logical organization of code. Nothing is implied about where those layers run: they might run on different computers, in different processes on a single computer, or all within a single process on a single computer. All we are doing is discussing a way of organizing code into a set of layers defined by specific function.


Physical tiers however, are only about where the code runs. Specifically, tiers are places where layers are deployed and where layers run. In other words, tiers are the physical deployment of layers.


Why do we layer software? Primarily to gain the benefits of logical organization and grouping of like functionality. Translated to tangible outcomes, logical layers offer reuse, easier maintenance and shorter development cycles. In the final analysis, proper layering of software reduces the cost to develop and maintain an application. Layering is almost always a wonderful thing!


Why do we deploy layers onto multiple tiers? Primarily to obtain a balance between performance, scalability, fault tolerance and security. While there are various other reasons for tiers, these four are the most common. The funny thing is that it is almost impossible to get optimum levels of all four attributes – which is why it is always a trade-off between them.


Tiers imply process and/or network boundaries. A 1-tier model has all the layers running in a single memory space (process) on a single machine. A 2-tier model has some layers running in one memory space and other layers in a different memory space. At the very least these memory spaces exist in different processes on the same computer, but more often they are on different computers. Likewise, a 3-tier model has two boundaries. In general terms, an n-tier model has n-1 boundaries.


Crossing a boundary is expensive. A call across a process boundary on the same machine is on the order of 1000 times slower than the same call within a single process, and a call made across a network is slower still. It should be obvious, then, that the more boundaries you have, the slower your application will run, because each boundary compounds the performance cost.
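To make that cost concrete, here is a rough in-process simulation (a hedged sketch – names and mechanism are invented for illustration): the "remote" version of a call pays to serialize its arguments and result, as any cross-process or cross-network call must, and a real boundary adds context switches and network latency on top of that.

```csharp
using System;
using System.Diagnostics;
using System.Text.Json;

public class BoundaryCostDemo
{
    // A plain in-process call.
    public static int Add(int a, int b) => a + b;

    // The same call with marshaling overhead simulated: arguments and
    // result are serialized and deserialized, as a remoting layer would do.
    public static int AddAcrossBoundary(int a, int b)
    {
        int[] args = JsonSerializer.Deserialize<int[]>(
            JsonSerializer.Serialize(new[] { a, b }));
        int result = Add(args[0], args[1]);
        return JsonSerializer.Deserialize<int>(JsonSerializer.Serialize(result));
    }

    static void Main()
    {
        const int n = 100_000;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++) Add(i, 1);
        TimeSpan direct = sw.Elapsed;

        sw.Restart();
        for (int i = 0; i < n; i++) AddAcrossBoundary(i, 1);
        TimeSpan marshaled = sw.Elapsed;

        Console.WriteLine($"direct:    {direct.TotalMilliseconds:F1} ms");
        Console.WriteLine($"marshaled: {marshaled.TotalMilliseconds:F1} ms");
    }
}
```

Even this simulation, which never leaves the process, shows a large gap between the two timings; an actual process or network hop widens it dramatically.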


Worse, boundaries add raw complexity to software design, network infrastructure, manageability and overall maintainability of a system. In short, the more tiers in an application, the more complexity there is to deal with – which directly increases the cost to build and maintain the application.


This is why, in general terms, tiers should be minimized. Tiers are not a good thing; they are a necessary evil required to obtain certain levels of scalability, fault tolerance or security.


As a good architect you should be dragged kicking and screaming into adding tiers to your system. But there really are good arguments and reasons for adding tiers, and it is important to accommodate them as appropriate.


The reality is that almost all systems today are at least 2-tier. Unless you are using an Access or dBase style database your Data layer is running on its own tier – typically inside of SQL Server, Oracle or DB2. So for the remainder of my discussion I’ll primarily focus on whether you should use a 2-tier or 3-tier model.


If you look at the CSLA .NET architecture from my Expert VB.NET and C# Business Objects books, you’ll immediately note that it has a construct called the DataPortal which is used to abstract the Data Access layer from the Presentation and Business layers. One key feature of the DataPortal is that it allows the Data Access layer to run in-process with the business layer, or in a separate process (or machine) all based on a configuration switch. It was specifically designed to allow an application to switch between a 2-tier or 3-tier model as a configuration option – with no changes required to the actual application code.
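The idea can be sketched as follows. This is not actual CSLA code – the interface, class and configuration key names here are invented for illustration – but it shows how a single configuration value can select between an in-process Data Access layer and a proxy to a remote one, with no change to application code:

```csharp
using System;
using System.Collections.Generic;

// The business layer codes only against this abstraction.
public interface IDataAccess
{
    string Fetch(int id);
}

// 2-tier mode: the Data Access layer runs in-process.
public class LocalDataAccess : IDataAccess
{
    public string Fetch(int id) => "customer-" + id;
}

// 3-tier mode: in a real deployment this proxy would forward the call
// to an application server (via .NET Remoting in the CSLA 1.x era).
public class RemoteDataAccessProxy : IDataAccess
{
    public string Fetch(int id) => "customer-" + id;
}

public static class DataPortal
{
    // Stand-in for an appSettings lookup; the key name is hypothetical.
    public static readonly Dictionary<string, string> Config =
        new Dictionary<string, string> { { "PortalMode", "local" } };

    public static IDataAccess GetDataAccess()
    {
        return Config["PortalMode"] == "remote"
            ? (IDataAccess)new RemoteDataAccessProxy()
            : new LocalDataAccess();
    }
}
```

Flipping the "PortalMode" value is all it takes to move the Data Access layer from the client tier to an application server tier – the rest of the application never knows the difference.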


But even so, the question remains whether to configure an application for 2 or 3 tiers.


Ultimately this question can only be answered by doing a cost-benefit analysis for your particular environment. You need to weigh the additional complexity and cost of a 3-tier deployment against the benefits it might bring in terms of scalability, fault tolerance or security.


Scalability flows primarily from the ability to get database connection pooling. In CSLA .NET the Data Access layer is entirely responsible for all interaction with the database, which means it opens and closes all database connections. If the Data Access layer for all users runs on a single machine, then the database connections for all users can be pooled. (This does assume, of course, that all users employ the same database connection string, including the same database user id – that’s a prerequisite for connection pooling in the first place.)
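A minimal sketch of the pooling mechanics (illustrative types only, not ADO.NET’s actual pool implementation) shows why the exact connection string matters: the pool is keyed by that string, so only identical strings can share physical connections.

```csharp
using System;
using System.Collections.Generic;

public class PooledConnection
{
    public string ConnectionString { get; }
    public PooledConnection(string cs) => ConnectionString = cs;
}

public class ConnectionPool
{
    // One pool of idle connections per distinct connection string.
    private readonly Dictionary<string, Stack<PooledConnection>> _pools =
        new Dictionary<string, Stack<PooledConnection>>();

    public int PhysicalConnectionsOpened { get; private set; }

    public PooledConnection Open(string connectionString)
    {
        if (_pools.TryGetValue(connectionString, out var idle) && idle.Count > 0)
            return idle.Pop();            // reuse an idle pooled connection

        PhysicalConnectionsOpened++;      // otherwise open a new physical one
        return new PooledConnection(connectionString);
    }

    public void Close(PooledConnection conn)
    {
        if (!_pools.TryGetValue(conn.ConnectionString, out var idle))
            _pools[conn.ConnectionString] = idle = new Stack<PooledConnection>();
        idle.Push(conn);                  // returned to the pool, kept open
    }
}
```

Two users presenting the same string can serially reuse one physical connection; change so much as the user id in the string and a second physical connection must be opened, because the strings no longer match.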


The scalability proposition is quite different for web and Windows presentation layers.


In a web presentation the Presentation and Business layers are already running on a shared server (or server farm). So if the Data Access layer also runs on the same machine database connection pooling is automatic. In other words, the web server is an implicit application server, so there’s really no need to have a separate application server just to get scalability in a web setting.


In a Windows presentation the Presentation and Business layers (at least with CSLA .NET) run on the client workstation, taking full advantage of the memory and CPU power available on those machines. If the Data Access layer is also deployed to the client workstations then there’s no real database connection pooling, since each workstation connects to the database directly. By employing an application server to run the Data Access layer all workstations offload that behavior to a central machine where database connection pooling is possible.


The big question with Windows applications is at what point to use an application server to gain scalability. Obviously there’s no objective answer, since it depends on the I/O load of the application, pre-existing load on the database server and so forth. In other words it is very dependent on your particular environment and application. This is why the DataPortal concept is so powerful, because it allows you to deploy your application using a 2-tier model at first, and then switch to a 3-tier model later if needed.


There’s also the possibility that your Windows application will be deployed to a Terminal Services or Citrix server rather than to actual workstations. Obviously this approach totally eliminates the massive scalability benefits of utilizing the memory and CPU of each user’s workstation, but it does have the upside of reducing deployment cost and complexity. I am not an expert on either server environment, but it is my understanding that each user session has its own database connection pool on the server, thus acting the same as if each user had their own separate workstation. If this is actually the case, then an application server would provide a benefit by enabling database connection pooling. However, if I’m wrong and all user sessions share database connections across the entire Terminal Services or Citrix server, then an application server would offer no more scalability benefit here than it does in a web application (which is to say virtually none).


Fault tolerance is a bit more complex than scalability. Achieving real fault tolerance requires examination of all failure points that exist between the user and the database – and of course the database itself. And if you want to be complete, you must also consider the user to be a failure point, especially when dealing with workflow, process-oriented or service-oriented systems.


In most cases adding an application server to either a web or Windows environment doesn’t improve fault tolerance. Rather it merely makes it more expensive because you have to make the application server fault tolerant along with the database server, the intervening network infrastructure and any client hardware. In other words, fault tolerance is often less expensive in a 2-tier model than in a 3-tier model.


Security is also a complex topic. For many organizations however, security often comes down to protecting access to the database. From a software perspective this means restricting the code that interacts with the database and providing strict controls over the database connection strings or other database authentication mechanisms.


Security is a case where 3-tier can be beneficial. By putting the Data Access layer onto its own application server tier we isolate all code that interacts with the database onto a central machine (or server farm). More importantly, only that application server needs to have the database connection string or the authentication token needed to access the database server. No web server or Windows workstation needs the keys to the database, which can help improve the overall security of your application.


Of course we must always remember that switching from 2-tier to 3-tier decreases performance and increases complexity (cost). So any benefits from scalability or security must be sufficient to outweigh these costs. It all comes down to a cost-benefit analysis.


Thursday, July 21, 2005 3:51:17 PM (Central Standard Time, UTC-06:00)
 Tuesday, July 19, 2005

Magenic (my employer) bested five other finalists in this category, which recognizes partners that are developing and implementing innovative technical applications for clients using one or more Microsoft products. The company was chosen out of an international field of top Microsoft Partners for delivering market-leading customer solutions built on Microsoft technology. The awards were distributed at a ceremony July 9 in Minneapolis at the Microsoft Worldwide Partner Conference. Sandy White and Paul Fridman accepted the award on behalf of Magenic.

Awards were presented in a number of categories, with winners chosen from a pool of more than 1,800 entrants worldwide. Magenic was recognized for superior Technology Innovation in the Custom Development Solutions category. The Custom Development Solutions Award recognizes the year’s top partners in providing custom-developed solutions to customers that require value-added capabilities. Magenic won this award by developing solutions that optimize business opportunities for its customers through custom technology.

The Microsoft Partner Program Awards recognize Microsoft partners that have developed and delivered exceptional Microsoft-based solutions during the past year. With Microsoft’s recognition that Magenic’s applications are at the forefront of their industries, clients know that Magenic has proven commitment and expertise when delivering solutions based on Microsoft technologies. This recognition identifies Magenic as the most skilled partner in its custom application development areas.


Tuesday, July 19, 2005 8:00:34 AM (Central Standard Time, UTC-06:00)
 Saturday, July 2, 2005

In a reply to a previous entry on the Mort persona, Dan B makes this comment:


I've written about this before, but I'll say it again - I think the dilemma VB faces is the dichotomy between being taken seriously as a modern OO language and the need to carry along the Morts. It's the challenge of balancing the need to distance itself from some of those VB6 carryovers with the need to keep those millions of high-school, hobbyist, etc. developers buying the product.


My previous post wasn't really about VB as such, but more about the "Mort" persona. That persona exists and isn't going anywhere. There are a whole lot of Morts out there, and more entering the industry all the time. Most developers are professional business developers and thus most developers fit the Mort persona. That's just fact.


Whether this large group of people chooses to congregate around VB, C#, Java, Powerbuilder or some other tool doesn't matter in the slightest.


What does matter from a vendor's perspective (such as Microsoft) is that this is the single largest developer demographic, and so it makes a hell of a lot of sense to have a tool that caters to the pragmatic and practical focus of the Mort persona.


If this is VB that's awesome and I am happy. But if enough Morts move to C#, then C# will be forced to accommodate the priorities and requirements of the Mort persona. Microsoft has proven time and time again that they are very good at listening to their user base, and so whatever tool attracts the overwhelming population of Morts will ultimately conform to their desires.


Don’t believe me? Why does C# 2005 have edit-and-continue? Because so many Morts went from VB to C# and they voted very loudly and publicly to get e&c put into their new adopted language. I know a great many Elvis/Einstein people who think the whole e&c thing was a waste of time and money – but they’ve already lost control. And this is just the beginning.


In other words, for those Elvis and Einstein personas who evangelize C# my words are cautionary. You are outnumbered 5 to 1, and if Mort comes a-calling you will almost instantly lose control of C# and you'll probably feel like you need a new home.


The irony is that you’ll have brought this doom on yourselves by telling the vast majority of developers that the only way to get your respect is to use semi-colons. The reality is that the only way to get your respect would be a fundamental change in worldview, from pragmatic and practical to elegant and engineered – and frankly that’s just not going to happen.


Most people are in this industry only partially because of technology. They are driven by the desire to solve business problems and to help their organizations be better and stronger. It is a small subset that is primarily driven by the love of technology.


If this ratio is changing at all, it is changing away from technology. Tools like Biztalk and concepts like software factories and domain-specific languages are all about abstracting the technology to further enable people who are primarily driven by the business issues and the passion to solve them.


But I don’t see this as hopeless. As one of my fellow RDs mentioned to me a few weeks ago, in Visual Studio 2005 C++ is finally a first-class .NET language. To paraphrase her view, Mort can have VB or C# or both, because the real geeks (the Elvis/Einstein types) can and will just go back to C++ and be happy. But the truly wise geeks will use both where appropriate.

Saturday, July 2, 2005 11:09:31 AM (Central Standard Time, UTC-06:00)
 Thursday, June 23, 2005
Take the MIT Weblog Survey
Thursday, June 23, 2005 11:51:11 AM (Central Standard Time, UTC-06:00)
 Tuesday, June 14, 2005
This looks like quite the software development competition - check it out!
Tuesday, June 14, 2005 10:20:35 AM (Central Standard Time, UTC-06:00)
 Monday, June 13, 2005

Mort is the Microsoft “persona” often associated with VB, and for various reasons “Mort” has unfortunately become an insult.


But in reality, Mort is the business developer. You know, like the 3 million or so VB6 developers that have been programming for anywhere from 5-20 years and have a pretty good clue about how software is built. Mort is not a newbie or a hobbyist. Mort is a professional[1] business developer. In fact, the Mort persona represents the vast majority of developers. There are around 3-5 VB developers for every C++/C# developer out there. And an increasing number of C# developers are “Morts” as well - you don't stop being a Mort just because you change programming languages after all.


Morts are the highly productive majority of developers who build business systems day in and day out. These business developers typically build systems for 1 to 1000 users. Systems that are of critical importance to their business. Systems that are linked to the very lifeblood of the companies for which these people work.


Mort is the heart and soul of the Microsoft platform. Mort is the reason Microsoft development is pervasive in virtually all small to mid-sized companies, and why it is lurking in the shadows everywhere you look even in the Fortune 100. These are the business developers that won't say no, that won't give up and who refuse to spend weeks or months on over-thinking J2EE or COM+ architectures when they can have their software up and saving money in a few days.


These are the people who never left the "smart client" and so aren’t "coming back to it" in a revolution. Long-time business developers are the people who saw the web for the terminal-based monstrosity it is and never left the productivity of Windows itself. These are the true Microsoft loyalists.


They aren't the uber-geeks. They aren't in it for the love of technology nearly as much as for the love of helping their end users and their companies. They are pragmatic and focused on just getting stuff done and running and saving money.


It isn’t like quality doesn’t count. Quality is critical, but also relative to the task at hand. In most cases, adequate software that’s deployed in a couple months is infinitely superior to exquisitely designed and tested software that’s deployed in a couple years.


A very large number of these “Mort” business developers are still using VB6 and have yet to move to .NET. Whether these business developers stay in VB or move to C# doesn't really matter to me a whole lot. Speaking as a geek I think that what’s important is that they move to .NET, because it is a far superior platform to classic Windows development. But is this actually important to the business developers themselves?


The fact is that the majority of business developers aren't going to change the way they work due to a new language or even a new platform. If .NET can't give them the high levels of productivity of VB6 they won't move. If Microsoft can't convince mainstream business developers that they can switch to .NET quickly and easily, and gain serious, pragmatic benefit by doing so, they'll never move. Nor should they. If .NET doesn’t make their job easier what would be the point?


Personally I am convinced that Visual Studio 2005 (with its attendant new VB and C# languages and related tools) is the tipping point. Not from a geek perspective (though there’s cool stuff there too), but from a pragmatic get-it-done perspective.


The new data access features in ADO.NET and in Windows Forms are truly the best stuff Microsoft has ever done in this area. The levels of productivity for building business applications in Windows Forms are unmatched by any technology I’ve seen.


The new and updated Windows Forms controls and the streamlined nature of Windows development brings back memories of VB6. Yes, Windows Forms is still a new forms engine when compared to VB6, but finally we can honestly make the claim that it is easier and more powerful than its VB6 predecessor. We can sincerely show that a business application can be written faster and with less code in VB 2005 than in VB6.


Things like the new SplitContainer, the FlowLayoutPanel, the ToolStrip (the ToolStrip is my new favorite toy btw), the new DataGridView and other controls are the keys to serious productivity. Couple them with the easy way you create template projects, forms and classes and you almost immediately have a highly consistent and productive development environment to match or exceed VB6.


At Tech Ed last week Microsoft announced that VS 2005 and SQL Server 2005 will be released around the week of Nov 8, 2005. If you are one of the very large number of business developers who’ve been holding off on .NET, I understand. But I strongly suggest you look at VS 2005 and VB 2005, because I’m betting you are going to love what you see!



[1] As in sports, a professional is someone who makes their living by doing something. In this context, a professional business developer is someone who makes their living by building business software.

Monday, June 13, 2005 4:45:12 PM (Central Standard Time, UTC-06:00)
 Thursday, June 9, 2005

Several of the Tech Ed speakers (myself included) have donated an hour of our time to charity, and you can help!

Visit the auction and bid for your speaker of choice.

The idea is that you'll get an hour of valuable time you can use to get questions answered or problems solved, and the money will go to a good cause.

Thursday, June 9, 2005 4:43:09 PM (Central Standard Time, UTC-06:00)