Rockford Lhotka's Blog


 Thursday, February 14, 2013

In a recent email thread I ended up writing a lengthy bit of content summarizing some of my thoughts around the idea of automatically projecting js code into an HTML 5 (h5js) browser app.

Another participant in the thread mentioned that he’s a strong proponent of separation of concerns, and in particular keeping the “model” separate from data access. In his context the “model” is basically a set of data container or DTO objects. My response:

-----------------------------

I agree about separation of concerns at the lower levels.

I am a firm believer in domain-focused business objects though, and in the use of “real” OOD, which largely eliminates the need for add-on hacks like a viewmodel.

In other words, apps should have clearly defined logical layers. I use this model:

Interface
Interface control
Business
Data access
Data storage

This model works for pretty much everything: web apps, smart client apps, service apps, workflow tasks (apps), etc.

The key is that the business layer consists of honest-to-god real life business domain objects. These are designed using OOD so they reflect the requirements of the user scenario, not the database design.

If you have data-centric objects, they’ll live in the Data access layer. And that’s pretty common when using an ORM like EF, where the tools help you create data-centric types. That’s very useful – then all you need to do is use object:object mapping (OOM) to get the data from the data-centric objects into the more meaningful business domain objects.
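
To make that mapping concrete, here is a minimal sketch of the idea. The CustomerDto and CustomerEdit types, their properties, and the “preferred customer” rule are all invented for illustration – they don’t come from any particular codebase:

// Data-centric type produced by the data access layer / ORM (hypothetical).
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string CreditRating { get; set; }
}

// Behavior-focused domain object shaped by the use case, not the table.
public class CustomerEdit
{
    public int Id { get; private set; }
    public string Name { get; private set; }
    public bool IsPreferred { get; private set; }

    // OOM: copy data from the data-centric object into the domain object,
    // applying whatever business meaning the use case calls for.
    internal static CustomerEdit FromDto(CustomerDto dto)
    {
        return new CustomerEdit
        {
            Id = dto.Id,
            Name = dto.Name,
            IsPreferred = dto.CreditRating == "A"
        };
    }
}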

At no point should any layer talk to the database other than the Data access layer. And at no point should the Interface/Interface control layers interact with anything except the Business layer.

Given all that, the question with smart client web apps (as I’ve taken to calling these weird h5js/.NET hybrids) is whether you are using a service-oriented architecture or an n-tier architecture. This choice must be made _first_ because it impacts every other decision.

The service-oriented approach says you are creating a system composed of multiple apps. In our discussion this would be the smart client h5js app and the server-side service app. SOA mandates that these apps don’t trust each other, and that they communicate through loosely coupled and clearly defined interface contracts. That allows the apps to version independently. And the lack of trust means that data flowing from the consuming app (h5js) to the service app isn’t trusted – which makes sense given how easy it is to hack anything running in the browser. In this world each app should (imo) consist of a series of layers such as those I mentioned earlier.

The n-tier approach says you are creating one app with multiple layers, and those layers might be deployed on different physical tiers. Because this is one app, the layers can and should have reasonable levels of trust between them. As a result you shouldn’t feel the need to re-run business logic just because the data flowed from one layer/tier to another (completely different from SOA).

N-tier can be challenging because you typically have to decide where to physically put the business layer: on the client to give the user a rich and interactive experience, or on the server for more control and easier maintenance. In the case of my CSLA .NET framework I embraced the concept of _mobile objects_ where the business layer literally runs on the client AND on the server, allowing you to easily run business logic where most appropriate. Sadly this requires that the same code can actually run on the client and server, which isn’t the case when the client and server are disparate platforms (e.g. h5js and .NET).

This idea of projecting server-side business domain objects into the client fits naturally into the n-tier world. This has been an area of deep discussion for months within the CSLA dev team – how to make it practical to translate the rich domain business behaviors into js without imposing a major burden of writing js alongside C#.

CSLA objects have a very rich set of rules and behaviors that ideally would be automatically projected into a js business layer for use by the smart client h5js Interface and Interface control layers. I love this idea – but the trick is to make it possible such that there’s not a major new burden for developers.

This idea of projecting server-side business domain objects into the client is a less natural fit for a service-oriented system, because there’s a clear and obvious level of coupling between the service app and the h5js app (given that parts of the h5js app are literally generated from the service app). I’m not sure this is a total roadblock, but you have to go into this recognizing that such an approach compromises the primary purpose of SOA, which is loose coupling between the apps in the system…

Thursday, February 14, 2013 10:39:23 AM (Central Standard Time, UTC-06:00)
 Monday, May 07, 2012

There are three fairly popular presentation layer design patterns that I collectively call the “M” patterns: MVC, MVP, and MVVM. This is because they all have an “M” standing for “Model”, plus some other constructs.

The thing with all of these “M” patterns is that for typical developers the patterns are useless without a framework. Using the patterns without a framework almost always leads to confusion, complication, high costs, frustration, and ultimately despair.

These are just patterns after all, not implementations. And they are big, complex patterns that include quite a few concepts that must work together correctly to enable success.

You can’t sew a fancy dress just because you have a pattern. You need appropriate tools, knowledge, and experience. The same is true with these complex “M” patterns.

And if you want to repeat the process of sewing a fancy dress over and over again (efficiently), you need specialized tooling for this purpose. In software terms this is a framework.

Trying to do something like MVVM without a framework is a huge amount of work. Tons of duplicate code, reinventing the wheel, and retraining people to think differently.

At least with a framework you avoid the duplicate code and hopefully don’t have to reinvent the wheel – allowing you to focus on retraining people. The retraining part is generally unavoidable, but a framework provides plumbing code and structure, making the process easier.

You might ask yourself why the MVC pattern only became popular in ASP.NET a few short years ago. The pattern has existed since (at least) the mid-1990s, and yet few people used it, and even fewer used it successfully. This includes people on other platforms too, at least up to the point that those platforms included well-implemented MVC frameworks.

Strangely, MVC only started to become mainstream in the Microsoft world when ASP.NET MVC showed up. This is a comprehensive framework with tooling integrated into Visual Studio. As a result, typical developers can just build models, views, and controllers. Prior to that point they also had to build everything the MVC framework does – which is a lot of code. And not just a lot of code, but code that has absolutely nothing to do with business value, and only relates to the implementation of the pattern itself.

We’re in the same situation today with MVVM in WPF, Silverlight, Windows Phone, and Windows Runtime (WinRT in Windows 8). If you want to do MVVM without a framework, you will have to build everything a framework would do – which is a lot of code that provides absolutely no direct business value.

Typical developers really do want to focus on building models, views, and viewmodels. They don’t want to have to build weak-reference-based event routers, navigation models, view abstractions, and all the other things a framework must do. In fact, most developers probably can’t build those things, because they aren’t platform/framework wonks. It takes a special kind of passion (or craziness) to learn the deep, highly specialized techniques and tricks necessary to build a framework like this.
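
To give a feel for what that plumbing looks like, here is a minimal sketch of one such piece – a weak-reference message router of the sort an MVVM framework typically supplies so viewmodels can communicate without leaking memory. This is illustrative only, not the implementation used by any particular framework:

using System;
using System.Collections.Generic;

// Minimal weak-reference messenger: subscribers are held via WeakReference
// so a forgotten unsubscribe does not keep a viewmodel alive.
public static class Messenger
{
    private class Subscription
    {
        public Type MessageType;
        public WeakReference Subscriber;
        public Action<object, object> Handler;
    }

    private static readonly List<Subscription> _subscriptions = new List<Subscription>();

    public static void Subscribe<TMessage>(object subscriber, Action<object, TMessage> handler)
    {
        _subscriptions.Add(new Subscription
        {
            MessageType = typeof(TMessage),
            Subscriber = new WeakReference(subscriber),
            Handler = (s, m) => handler(s, (TMessage)m)
        });
    }

    public static void Publish<TMessage>(TMessage message)
    {
        foreach (var sub in _subscriptions.ToArray())
        {
            if (sub.MessageType != typeof(TMessage)) continue;
            var target = sub.Subscriber.Target;
            if (target == null)
                _subscriptions.Remove(sub);   // the subscriber was collected; drop it
            else
                sub.Handler(target, message);
        }
    }
}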

What I really wish would happen is for Microsoft to build an MVVM framework comparable to ASP.NET MVC. Embed it into the .NET/XAML support for WinRT/Metro, and include tooling in VS so we can right-click and add views and viewmodels. Ideally this would be an open, iterative process like ASP.NET MVC has been – so after a few years the framework reflects the smartest thoughts from Microsoft and from the community at large.

In the meantime, Caliburn Micro appears to be the best MVVM framework out there – certainly the most widely used. Probably followed by various implementations using PRISM, and then MVVM Light, and some others.

Monday, May 07, 2012 2:01:55 PM (Central Standard Time, UTC-06:00)
 Tuesday, November 08, 2011

Disclaimer: I know nothing. The following is (hopefully) well educated speculation on my part. Time will tell whether I’m right.

I really like Silverlight. I’ve been a strong proponent of Silverlight since 2007 when I rushed to port CSLA .NET to the new platform.

In fact, Magenic provided me with a dev and test team to make that transition happen, because we all saw the amazing potential of Silverlight.

And it has been a good few years.

But let’s face reality. Microsoft has invested who-knows-how-much money to build WinRT, and no matter how you look at it, WinRT is the replacement for Win32. That means all the stuff that runs on Win32 is “dead”. This includes Silverlight, Windows Forms, WPF, console apps – everything.

(this is partially in answer to Mary-Jo’s article on Silverlight 5)

I wouldn’t be surprised if Silverlight 5 was the last version. I also wouldn’t be surprised if .NET 4.5 was the last version for the Win32 client, and that future versions of .NET were released for servers and Azure only.

Before you panic though, remember that VB6 has been “dead” for well over a decade. It died at the PDC in 1999, along with COM. But you still use VB6 and/or COM? Or at least you know organizations who do? How can that be when it is dead??

That’s my point. “dead” isn’t really dead.

Just how long do you think people (like me and you) will continue to run Win32-based operating systems and applications? At least 10 years, and many will probably run 15-20 years into the future. This is the rate of change that exists in the corporate world. At least that’s been my observation for the past couple decades.

Microsoft supports their technologies for 10 years after a final release. So even if SL5 is the end (and they haven’t said it is), that gives us 10 years of supported Silverlight usage. The same for the other various .NET and Win32 technologies.

That’s plenty of time for Microsoft to get WinRT mature, and to allow us to migrate to that platform over a period of years.

I don’t expect WinRT 1.0 (the Windows 8 version) to be capable of replacing Win32 or .NET. I rather expect it to be pretty crippled in many respects. Much like VB 1.0 (and 2.0), .NET 1.0 and 1.1, Silverlight 1 and 2, etc.

But Windows 9 or Windows 10 (WinRT 2.0 or 3.0) should be quite capable of replacing Win32 and .NET and Silverlight.

If we assume Win8 comes out in 2012, and that Microsoft does a forced march release of 9 and 10 every two years, that means 2016 will give us WinRT 3.0. And if we hold to the basic truism that Microsoft always gets it right on their third release, that’ll be the one to target.

I think it is also reasonable to expect that Win9 and Win10 will probably continue to have the “blue side” (see my Windows 8 dev platform post), meaning Win32, .NET, and Silverlight will continue to be released and therefore supported over that time. They may not change over that time, but they’ll be there, and they’ll be supported – or so goes my theory.

This means that in 2016 the clock might really start for migration from Win32/.NET/Silverlight to WinRT.

Yes, I expect that a lot of us will build things for WinRT sooner than 2016. I certainly hope so, because it looks like a lot of fun!

But from a corporate perspective, where things move so slowly, this is probably good news. Certain apps can be ported sooner, but big and important apps can move slowly over time.

What to do in the meantime? Between now and 2016?

Focus on XAML, and on n-tier or SOA async server access as architectural models.

Or focus on HTML 5 (soon to be HTML 6 fwiw, and possibly HTML 7 by 2016 for all we know).

I’m focusing on XAML, creating a CSLA 4 version 4.5 release that supports .NET 4.5 on servers, Azure, Windows (Win32), and Windows (WinRT). And Silverlight 5 of course.

In fact, the plan is for a version 4.3 release to support Silverlight 5, then version 4.5 with support for .NET 4.5 and WinRT.

I suspect that you can use Silverlight or WPF as a bridge to WinRT. The real key is architecture.

  1. An n-tier architecture is fine, as long as the data access layer is running on a server, and the client uses async calls to interact with the server. WinRT requires a lot of async, at a minimum for all server interactions. Silverlight forces you to adopt this architecture already, so it is a natural fit. WPF doesn’t force the issue, but you can choose to do “the right thing” (there’s a rough sketch of the async call shape after this list).
  2. You can also build your client applications to be “edge applications” – on the edge of a service-oriented system. This is a less mature technology area, and it is more costly. But it is also a fine architecture for environments composed of many disparate applications that need to interact as a loosely coupled system. Again, all service interactions by the edge applications (the ones running on the clients) must be async.
  3. Or you can build “hybrid solutions”, where individual applications are built using n-tier architectures (with async server calls). And where some of those applications also expose service interfaces so they can participate as part of a broader service-oriented system.
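
As a rough illustration of the async requirement called out in option 1, here is the general shape a client-to-server call takes with the async/await support in .NET 4.5. The endpoint URL and proxy class are invented for the example:

using System.Net.Http;
using System.Threading.Tasks;

public class CustomerProxy
{
    // All server interaction is awaited, so the UI thread is never blocked --
    // the pattern Silverlight already forces and WinRT requires.
    public async Task<string> GetCustomerJsonAsync(int id)
    {
        using (var client = new HttpClient())
        {
            // Hypothetical service endpoint, for illustration only.
            return await client.GetStringAsync(
                "http://example.com/api/customers/" + id);
        }
    }
}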

I favor option 3. I don’t like to accept the cost and performance ramifications of SOA when building an application, so I’d prefer to use a faster and cheaper n-tier architecture. At the same time, many applications do need to interact with each other, and the requirement to create “application mashups” through edge applications happens from time to time. So building my n-tier applications to have dual interfaces (XAML and JSON for example) is a perfect compromise.

The direct users of my application get n-tier performance and maintainability. And the broader organization can access my slower-moving, standards-based, contractual service interface. It is the best of both worlds.

So do I care if Silverlight 5 is the last version of Silverlight?

Only if WPF continues to evolve prior to us all moving to WinRT. If WPF continues to evolve, I would expect Silverlight to, at a minimum, keep up. Otherwise Microsoft has led a lot of people down a dead-end path, and that’s a serious betrayal of trust.

But if my suspicions are correct, we won’t see anything but bug fixes for WPF or Silverlight for many years. I rather expect that these two technologies just became the next Windows Forms. You’ll notice that WinForms hasn’t had anything but bug fixes for 6 years, right? The precedent is there for a UI technology to be “supported, stable, and stagnant” for a very long time, and this is my expectation for WPF/SL.

And if that’s the case, then I don’t care at all about a Silverlight 6 release. We can use WPF/SL in their current form, right up to the point that WinRT is stable and capable enough to act as a replacement for today’s Win32/.NET applications.

Tuesday, November 08, 2011 8:51:12 PM (Central Standard Time, UTC-06:00)
 Monday, October 03, 2011

People often ask “Why should I use MVVM? It seems like it complicates things.”

The most common reason hard-core developers put forward for MVVM is that “it enables unit testing of presentation code”.

Although that can be true, I don’t think that’s the primary reason people should invest in MVVM.

I think the primary reason is that it protects your presentation code against some level of change to the XAML. It makes your code more maintainable, and will help it last longer.

For example, build a WPF app with code-behind. Now try to move it to Silverlight without changing any code (only XAML). Pretty hard, huh?

Or, build a Silverlight app with code-behind. Now have a user experience designer rework your XAML to look beautiful. Your app won’t build anymore? They changed numerous XAML types and there are now compiler errors? Oops…

Looking forward, try taking any WPF or Silverlight app that has code-behind and moving it to WinRT (Windows 8) without changing any code (only XAML – and the XAML will need to change). Turns out to be nearly impossible, doesn’t it?

And yet, I have lots of CSLA .NET application code that uses MVVM to keep the presentation code cleanly separated from the XAML. Examples where the exact same code runs behind WPF and Silverlight XAML. I’m not quite yet to the point of having CSLA working on WinRT, but I fully expect the exact same code to run on Windows 8, just with a third set of XAML.

To me, that is the power and value of MVVM. Your goal should be no code-behind, viewmodel code that doesn’t rely on specific XAML types, and an abstract set of viewmodel methods that can be bound to arbitrary UI events.
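
As a rough, framework-agnostic sketch of that goal, here is a viewmodel with no code-behind and no references to XAML types. The class and member names are illustrative; a real CSLA-based viewmodel would typically build on the framework’s viewmodel base classes rather than raw INotifyPropertyChanged:

using System.ComponentModel;

// No reference to any window, page, control, or other XAML type --
// just bindable state and verbs the XAML can invoke.
public class CustomerViewModel : INotifyPropertyChanged
{
    private string _name;
    public string Name
    {
        get { return _name; }
        set { _name = value; OnPropertyChanged("Name"); }
    }

    // An abstract verb; the XAML binds a UI event (button click, gesture,
    // whatever) to this method instead of a Click handler in code-behind.
    public void Save()
    {
        // call into the business layer (the Model) here
    }

    public event PropertyChangedEventHandler PropertyChanged;
    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}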

Yes, MVVM is an investment. But it will almost certainly pay for itself over time, as you maintain your app, and move it from one flavor of XAML to another.

Does MVVM mean you can completely avoid changing code when the XAML changes? No. But it is a whole lot closer than any other technique I’ve seen in my 24+ year career.

Monday, October 03, 2011 9:59:08 AM (Central Standard Time, UTC-06:00)
 Friday, April 01, 2011

The Minnesota chapter of IASA is holding an IT architect training event in May. Details:

Course Summary:

Foundation Core Skills - The Key Distinguishing Factor for IT Architects

  • Business Technology Strategy - Identified by IASA members and global thought leaders as the core value proposition of any architect and the key skill set in our profession, business technology strategy ensures immediate business value from technology strategy. It ensures that your organization will succeed in converting technology and IT from a liability into an asset.

Foundation Supporting Skills

  • Design - Architects commonly use design skills to create solutions to problems identified while developing technology strategy.
  • Quality Attributes - Quality attributes represent cross-cutting concerns in technology solutions, such as performance, security, manageability, etc., that must be considered across the entire enterprise technology strategy space.
  • IT Environment - Technology strategy must include a general knowledge of the IT space, including application development, operations, infrastructure, data/information management, quality assurance, and project management. The IT Environment skills prepare an architect for the IT side of a technology strategist’s job function.
  • Human Dynamics - Much of an architect’s daily role is working with other stakeholders to understand and define the technology strategy of the organization. The human dynamics skills represent the means for working with others in the organization, including situational awareness, politics, communication, and leadership.
Friday, April 01, 2011 8:12:16 AM (Central Standard Time, UTC-06:00)
 Monday, November 08, 2010

Listen to an interview where I talk about CSLA 4, UnitDriven and a lot of things related to Silverlight, WPF and Windows Phone (WP7) development.

Pluralcast 28 : Talking Business and Objectification with Rocky Lhotka

This was recorded in October at the Patterns and Practices Symposium in Redmond.

Monday, November 08, 2010 10:56:59 AM (Central Standard Time, UTC-06:00)
 Wednesday, October 27, 2010

In the past week I’ve had a couple people mention that CSLA .NET is ‘heavyweight’. Both times it was in regard to the fact that CSLA 4 now works on WP7.

I think the train of thought is that CSLA 4 supports .NET and Silverlight, but that phones are so … small. How can this translate?

But I’ve had people suggest that CSLA is too heavyweight for small to medium app development on Windows too.

As you can probably guess, I don’t usually think about CSLA as being ‘heavyweight’, and I feel comfortable using it for small, medium and large apps, and even on the phone. However, the question bears some thought – hence this blog post.

I think there are perhaps three things to consider:

  1. Assembly size
  2. Runtime footprint
  3. Conceptual surface area

CSLA is a small framework in terms of assembly size – weighing in at around 300k (slightly more for .NET, less for SL and WP7). This is smaller than many UI component libraries or other frameworks in general, so I feel pretty good about the lightweight nature of the actual assemblies.

The runtime footprint is more meaningful though, especially if we’re talking about the phone. It is a little hard to analyze this, because it varies a lot depending on what parts of CSLA you use.

The most resource-intensive feature is the ability to undo changes to an object graph, because that triggers a snapshot of the object graph – obviously consuming memory. Fortunately this feature is entirely optional, and on the phone it is not clear you’d implement the type of Cancel button this feature is designed to support. If you don’t use the feature, it doesn’t consume resources.
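
For what it’s worth, the optional nature of that feature shows up in how it is used. Roughly (and glossing over details of the actual CSLA API), a Cancel button only costs memory if you opt into the snapshot by calling BeginEdit; CustomerEdit here stands in for any CSLA business class (one is sketched a bit further down):

public static class CancelExample
{
    public static void EditWithCancelSupport(CustomerEdit customer)
    {
        customer.BeginEdit();      // snapshot the object graph -- this is the memory cost
        customer.Name = "A new name";
        customer.CancelEdit();     // roll the graph back to the snapshot

        // If you never call BeginEdit (likely on the phone, where a Cancel
        // button rarely makes sense), no snapshot is taken at all.
    }
}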

The other primary area of resource consumption is where business rules are associated with domain object types. This can get intense for applications with lots and lots of business rules, and objects with lots and lots of properties. However, the phone has serious UI limitations due to screen size, and it is pretty unrealistic to think that you are going to allow a user to edit an object with 100 properties via a single form on the phone…

Of course if you did decide to create a scrolling edit form so a user could interact with a big object like this, it doesn’t really matter if you use CSLA or not – you are going to have a lot of code to implement the business logic and hook it into your object so the logic runs as properties change, etc.

There’s this theory I have, that software has an analogy to the Conservation of Energy Principle (which says you can neither create nor destroy energy). You can neither create nor destroy the minimum logic necessary to solve a business problem. In other words, if your business problem requires lots of properties with lots of rules, you need those properties and rules – regardless of which technology or framework you are using.

The CSLA 4 business rule system is quite spare – lightweight – at least given the functionality it provides in terms of running rules as properties change and tracking the results of those rules for display to the user.
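
To make that a bit more concrete, here is roughly what the per-type rule association and the managed property syntax look like in CSLA 4 – a hedged sketch from memory, so check the CSLA documentation for the exact namespaces and signatures:

using System;
using Csla;
using Csla.Rules.CommonRules;

[Serializable]
public class CustomerEdit : BusinessBase<CustomerEdit>
{
    public static readonly PropertyInfo<string> NameProperty =
        RegisterProperty<string>(c => c.Name);
    public string Name
    {
        get { return GetProperty(NameProperty); }
        set { SetProperty(NameProperty, value); }
    }

    protected override void AddBusinessRules()
    {
        base.AddBusinessRules();
        // Rules are associated with the type, run as the property changes,
        // and their results are tracked for display to the user.
        BusinessRules.AddRule(new Required(NameProperty));
        BusinessRules.AddRule(new MaxLength(NameProperty, 50));
    }
}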

The conceptual surface area topic is quite meaningful to me – for any framework or tool or pattern. Developers have a lot to keep track of – all the knowledge about their business, their personal lives, their relationships with co-workers, their development platform, the operating system they use, their network topology, how to interact with their IT department, multiple programming languages, multiple UI technologies, etc. Everything I just listed, and more, comes with a lot of concepts – conceptual surface area.

Go pick up a new technology. How do you learn to use it? You start by learning the concepts of the technology, and (hopefully) relating those concepts to things you already know. Either by comparison or contrast or analogy. Technologies with few concepts (or few new concepts) are easy to pick up – which is why it is easy to switch between C# and VB – they are virtually identical in most respects. But it is harder to switch from Windows Forms to Web Forms, because there are deep and important conceptual differences at the technology, architecture and platform levels.

I think large conceptual surface areas are counterproductive. Which is why, while I love patterns in general, I think good frameworks use complex patterns behind the scenes, and avoid (as much as possible) the requirement that every developer internalize every pattern. Patterns are a major avenue for conceptual surface area bloat.

CSLA has a fairly large conceptual surface area. Larger than I’d like, but as small as I’ve been able to maintain. CSLA 4 is, I think, the best so far, in that it pretty much requires a specific syntax for class and property implementations – and you have to learn that – but it abstracts the vast majority of what’s going on behind that syntax, which reduces the surface area compared to older versions of the framework.

Still, when people ask me what’s going to be the hardest part of getting up to speed with CSLA, my answer is that there are two things:

  1. Domain-driven, behavior-focused object design
  2. Learning the concepts and coding practices to use the framework itself

The first point is what it is. That has less to do with CSLA than with the challenges of learning good OOD/OOP in general. Generally speaking, most devs don’t do OO design, and those that try tend to create object models that are data-focused, not behavior-focused. It is partly the fault of tooling and partly a matter of education, I think. So it becomes an area of serious ramp-up before you can really leverage CSLA.

The second point is an area where CSLA could be considered ‘heavyweight’ – in its basic usage it is pretty easy (I think), but as you dive deeper and deeper, it turns out there are a lot of concepts that support advanced scenarios. You can use CSLA to create simple apps, and it can be helpful; but it also supports extremely sophisticated enterprise app scenarios and they obviously have a lot more complexity.

I can easily see where someone tasked with building a small to mid-size app, and who’s not already familiar with CSLA, would find CSLA very counter-productive in the short-term. I think they’d find it valuable in the long-term because it would simplify their maintenance burden, but that can be hard to appreciate during initial development.

On the other hand, for someone familiar with CSLA it is a lot harder to build even simple apps without the framework, because you end up re-solving problems CSLA already solved. Every time I go to build an app without CSLA it is so frustrating, because I end up re-implementing all this stuff that I know is already there for the taking if I could only use CSLA…

.NET is great – but it is a general-purpose framework – so while it gives you all the tools to do almost anything, it is up to you (or another framework) to fill in the gaps between the bits and pieces in an elegant way so you can get down to the business of writing business code. So while CSLA has a non-trivial conceptual surface area, that surface area exists because CSLA solves these problems – problems that exist and must be solved one way or another.

In summary, from an assembly size and runtime size perspective, I don’t think CSLA is heavyweight – which is why it works nicely on WP7. But from a conceptual surface area perspective, I think there’s an argument to be made that CSLA is a very comprehensive framework, and has the heavyweight depth that comes along with that designation.

Wednesday, October 27, 2010 9:58:30 AM (Central Standard Time, UTC-06:00)
 Wednesday, September 01, 2010

Foundation 101/102 IASA Training will be hosted at Magenic Technologies on October 4th through October 8th.

The foundations coursework and certification expose candidates to the “awareness” level of the skills matrix, then take the students through real-world application of those skills at the project level in a 4+ day series of workshops covering a year-long lifecycle: business justification/selection of projects, creating the architecture, managing through delivery, and then maturing the engagement model of the architecture team.

Finally a full half-day is spent on preparation for the CITA-Foundation certification exam.

You can register for the course at IASA’s home site, www.iasahome.org, under the “Twin Cities” location.

Group discounts are available.

Course Rationale

Heard of IASA and the Certified IT Architect – Professional (CITA-P) certification process? Interested in going through the process? Have you mapped your skill set and experience to the IASA Skills Matrix? This one-week course introduces the IASA skills matrix that is validated by the CITA-P certification, and provides a self-analysis tool to evaluate your current skill level against the IT Architect Body of Knowledge:

IT Architecture Body of Knowledge  (ITABoK)

Pillar 1: Business Technology

Pillar 2: IT Environment

Pillar 3: Quality Attributes

Pillar 4: Human Dynamics

Pillar 5: System Design

IASA

The IASA vision is the professionalization of IT architecture, and our model is built on the proven paths that successful professions (doctors, lawyers, building architects, etc.) have taken historically; our career path model and supporting education/certification come from practicing architects working as a professional association, not from individuals in a back room. IASA has built a global community of over 60,000 in readership, with thousands of contributors from multiple countries, industries and specializations. From that community, we have built an extensive and inclusive skills taxonomy and methods for teaching and evaluating the capabilities within the taxonomy.

www.iasahome.org

Wednesday, September 01, 2010 4:12:47 PM (Central Standard Time, UTC-06:00)
 Sunday, August 22, 2010

My personal area of focus is on application architecture, obviously around the .NET platform, though most of the concepts, patterns and techniques apply to any mature development platform.

Application architecture is all about defining the standards, practices and patterns that bring consistency across all development efforts on a platform. It is not the same as application design – architecture spans applications, while design is applied to every specific application. Any relevant architecture will enable a broad set of applications, and therefore must enable multiple possible designs.

It is also the case that application architecture, in my view, includes “horizontal” and “vertical” concepts.

Horizontal concepts apply to any and all applications and are largely orthogonal to any specific application design. These concepts include guidelines around authentication, authorization, integration with operational monitoring systems, logging, tracing, etc.

Vertical concepts cover the actual shape of applications, including concepts like layered application structure, what presentation layer design patterns to use (MVC, MVVM, etc), how the presentation layer interacts with the business layer, how the business layer is constructed (object-oriented, workflow, function libraries, etc), how the data access layer is constructed, whether broad patterns like DI/IoC are used and so forth.

In today’s world, an application architecture must at least encompass the possibility of n-tier and service-oriented architectures. Both horizontal and vertical aspects of the architecture must be able to account for n-tier and SOA application models, because both of these models are required to create the applications necessary to support any sizable suite of enterprise application requirements.

It is quite possible to define application architectures at a platform-neutral level. And in a large organization this can be valuable. But in my view, this is all an academic (and essentially useless) endeavor unless the architectures are taken to another level of detail specific to a given platform (such as .NET).

This is because the architecture must be actually relevant to on-the-ground developers or it is just so much paper. Developers have hard goals and deadlines, and they usually want to get their work done in time to get home for their kid’s soccer games. Abstract architectures just mean more reading and essentially detract from the developers’ ability to get their work done.

Concrete architectures might be helpful to developers – at least there’s some chance of relevance in day to day work.

But to really make an architecture relevant, the architect group must go beyond concepts, standards and guidelines. They need to provide tools, frameworks and other tangible elements that make developers’ lives easier.

Developers are like electricity/water/insert-your-analogy-here, in that they take the path of least resistance. If an architecture makes things harder, they’ll bypass it. If an architecture (usually along with tools/frameworks/etc) makes things easier, they’ll embrace it.

Architecture by itself is just paper – concepts – nothing tangible. So it is virtually impossible for architecture alone to make developers’ lives easier. But codify an architecture into a framework, add some automated tooling, pre-built components and training – and all of a sudden it becomes easier to do the right thing by following the architecture than to do the wrong thing by ignoring it.

This is the recipe for winning when it comes to application architecture: be pragmatic, be comprehensive and above all, make sure the easiest path for developers is the right path – the path defined by your architecture.

Sunday, August 22, 2010 9:55:06 AM (Central Standard Time, UTC-06:00)
 Monday, August 09, 2010

This is a nice list of things a software architect should do to be relevant and accepted by developers in the org.

Monday, August 09, 2010 8:04:27 AM (Central Standard Time, UTC-06:00)
 Monday, July 19, 2010

I was just at a family reunion and heard a joke about the comedian’s retirement home. A guy walks into the common room and hears one old guy shout out “19”, and everyone laughs. Across the room another guy shouts out “54” and everyone laughs even harder. The guy turns to his guide and asks “What is going on?”. The guide replies “These guys know all the same jokes, and at their age it takes too long to tell them, so they just assigned them all numbers.” The guy smiles, and shouts out “92”, which results in just a few grudging chuckles. “What’d I do wrong?” he asks the guide. The guide replies “Some people can tell a joke, some people can’t.”

That made me think about patterns (yes, I know, I’m a geek).

I like design patterns. Who wouldn’t? They are a formalized description of a solution to a specific problem. If you have that problem, then having a formally described solution seems like a dream come true.

Perhaps more importantly patterns are a language short-cut. If everyone in a conversation understands a pattern, the pattern (and often its problem) can be discussed merely by using the pattern name, which saves an immense amount of time as opposed to describing the actual problem and solution in detail.

Of course the “formalized description” is prose. Human language. And therefore it is ambiguous and open to interpretation. The descriptions must be human-readable, because any pattern worth ink and paper transcends any specific platform or programming language. Describing a “pattern” in Java or C# is silly – because that makes it far too likely that it isn’t really a broad pattern, but is simply a practice that happens to work in a given language or on a given platform.

But this ambiguity leads to trouble. Not unlike the comedian’s retirement home, patterns are a short-cut language for some really complex concepts, and often even more complex implementations. While everyone might have a basic comprehension of “inversion of control”, I can guarantee you that saying IoC doesn’t evoke the same concept, implementation or emotional response in everyone who hears it.

Pattern zealots often forget (or overlook) the fact that every pattern has consequences – bad as well as good. Some people get attached to a pattern because it helped them at some point, and they just assume that pattern will always have a positive or beneficial result. But that’s simply not true. Sometimes the negative consequences of a pattern outweigh the positive – it is all very dependent on the specific problem domain and environment.

Soft things like staffing levels, skill sets, attitudes and time frames all enter into the real world environment. Add the reality that any given problem almost certainly has several patterns that provide solutions – for different variations of the problem – and it becomes clear that no one pattern is always “good”.

It should come as no surprise then, that patterns are often misused – in several different ways.

My pet peeve is when a pattern is applied because someone likes the pattern, not because the application actually has the problem the pattern would solve. I often see people using IoC, for example, because it is trendy, not because they actually need the flexibility provided by the pattern. They use a container to create instances of objects that they will never swap out for other implementations. What a waste – they’ve accepted all the negative consequences of the pattern for absolutely no benefit since they don’t have the problem the pattern would solve. Is this the fault of IoC? Of course not, IoC is a powerful pattern.
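
A quick sketch of the smell I mean, using a deliberately hypothetical IContainer interface rather than any real container API:

using System;

// Hypothetical container interface, purely for illustration.
public interface IContainer
{
    void Register<TService, TImplementation>() where TImplementation : TService, new();
    TService Resolve<TService>();
}

public interface IInvoiceCalculator { decimal Total(decimal subtotal); }

// The only implementation there will ever be.
public class InvoiceCalculator : IInvoiceCalculator
{
    public decimal Total(decimal subtotal) { return subtotal * 1.07m; }
}

public static class Example
{
    public static void Run(IContainer container)
    {
        // All the ceremony of registration and resolution...
        container.Register<IInvoiceCalculator, InvoiceCalculator>();
        var calc = container.Resolve<IInvoiceCalculator>();

        // ...when nothing will ever be swapped in, so this would do the same job:
        // var calc = new InvoiceCalculator();

        Console.WriteLine(calc.Total(100m));
    }
}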

It is the fault of what I call the “Pattern Of The Year” (POTY) syndrome. When a pattern becomes really popular and trendy, it becomes the POTY. And everyone wants to go to the POTY. If you need the POTY, you should go. But if you don’t need the POTY, it is really a little silly (if not creepy) for you to go to the POTY…

In short: only use a pattern if you have the problem it solves, and the positive consequences outweigh the negative consequences.

Perhaps the most common misuse of patterns is failure to actually understand the pattern or its implementation. To stick with IoC as an example, it is pretty common for a development team to completely misunderstand the pattern or the framework that implements the pattern. Sure, some architect or lead developer “got it” (or so we hope) which is why the team is using the pattern – but you can find apps where numerous competing containers are created, each initialized differently.

I always thought Apple BASIC spaghetti code was the worst thing possible – but misuse of certain design patterns quickly creates a mess that is an order of magnitude worse than anything people wrote back in the early 80’s…

In short: if you use a pattern, make sure your entire team understands the pattern and your implementation of the pattern.

As I mentioned earlier, most problems can be solved by more than one pattern. Any truly interesting problem almost certainly has multiple solutions, each with different good/bad consequences and various subtle differences in outcome. It is not uncommon for the best solution to be a combination of a few more basic patterns.

As an example, the CSLA data portal is a combination of around six basic design patterns that work together in concert to solve the problem space the data portal targets. I’m not saying the data portal is a design pattern, but it is a solution for a problem that came into being by combining several complementary patterns.

A few years after I created the data portal, various other design patterns were formalized that describe other solutions to this same problem space. Some are similar, some are not. If you look into each solution, it is clear that each one is actually a different combination of some lower level design patterns, working together to solve the problem.

The thing is, every pattern you bring into your solution (or every pattern brought in by a higher level pattern) comes with its own consequences. You need to be careful to minimize the negative consequences of all those patterns so the overall balance is toward the positive.

In short: don’t be afraid to combine simple or basic design patterns together to solve a bigger problem, but be aware of the negative consequences of every pattern you bring into play.

Having introduced this concept of “low level” vs “high level” patterns, I’m going to follow that a bit further. Most of the patterns in the original GoF book are what I’d call low level patterns. They stand alone and have little or no dependency on each other. Each one solves a very narrow and clear problem and has very clear good/bad consequences.

Of course that was 15 years ago, and since then people have applied the pattern concept to more complex and bigger problem spaces. The resulting solutions (patterns) very often build on other patterns. In other words we’re raising the level of abstraction by building on previous abstractions. And that’s a fine thing.

But it is really important to understand that ultimately patterns are implemented, and the implementations of patterns are often far messier than the abstract thought models provided by the patterns themselves. Even that is OK, but there’s a meta-consequence that flows out of this: complexity.

As you start to use higher level patterns, and their implementations, you can easily become locked into not only the implementation of the pattern you wanted, but also the implementations of the lower level patterns on which the implementation is built.

Again I’ll use IoC to illustrate my point. If you want IoC you’ll almost certainly use a pre-existing implementation. And once you pick that framework, you are stuck with it. You won’t want to use more than one IoC framework, because then you’d have multiple containers, each configured differently and each competing for the attention of every developer. The result is a massive increase in complexity, which means a reduction in maintainability and a corresponding increase in cost.

Now suppose you pick some higher level pattern, perhaps a portal or gateway, that is implemented using IoC. If you want that implementation of the gateway pattern, you must also accept a dependency on its authors’ choice of IoC framework.

People often ask me whether (or when will) CSLA .NET will incorporate Enterprise Library, log4net, Unity, Castle/Windsor, <insert your framework here>. I try very, very, very hard to avoid any such dependencies, because as soon as I pick any one of these, I make life really hard for everyone out there who didn’t choose that other framework.

CSLA 3.8 has a dependency on a simple data structure framework, and even that was a continual nightmare. I can hardly express how happy I am that I was able to get rid of that dependency for CSLA 4. Not that the data structure framework was bad – it does a great job – but the complexity introduced by the dependency was just nasty.

In short: be aware of the complexity introduced as high level patterns force you to accept dependencies on lower level patterns and implementations.

The final topic I’d like to cover flows from a conversation I had with Ward Cunningham a few years ago. We were talking about patterns and the “pattern movement”, and how it has become a little warped over time as people actively look for ways to apply patterns, rather than the patterns being used because they are the natural answer to a problem.

It is kind of like a carpenter who spends a lot of money buying some really nice new power tool, and then tries to use that power tool for every part of the construction process – even if that means being less efficient or increasing the complexity of the job – just to use the tool.

Obviously I’d never want to hire such a carpenter to work on my house!!

Yet I’ve seen developers and architects get so fascinated by specific patterns, frameworks or technologies that they do exactly that: increase the complexity of simple problem domains specifically so they can use their new toy concept.

In this conversation Ward suggested that there are different levels of understanding or mastery of patterns. At the most basic level are people just learning what patterns are, followed by people who “get” a pattern and actively seek opportunities to use that pattern. But at higher levels of mastery are people who just do their job and (often without a conscious thought) apply patterns as necessary.

Carpenters don’t think twice about when and how to construct a staircase or put together a 2x6” wall frame. These are common design patterns, but they are natural solutions to common problems.

In short: strive for “pattern mastery” where you are not fixated on the pattern, but instead are just solving problems with natural solutions, such that the pattern “disappears” into the fabric of the overall solution.

The pattern movement has been going on for at least 15 years in our industry. And over that time I think it has been far more beneficial than destructive.

But that doesn’t mean the misuse isn’t real. Especially as a consultant, you walk into many organizations and see horrible misuse of design patterns – the results being higher complexity, lower maintainability and higher cost of development and maintenance.

I think it is important that we continually strive to make patterns be a common abstract language for complex problems and solutions. And I think it is important that we continually educate everyone on development teams about the patterns and implementations we bring into our applications.

But most importantly, I think we need to always make conscious choices, not choices based on trends or fads or because somebody on the team is in love with pattern X or framework Y or technology Z.

  1. Use a pattern because you have the problem it solves
  2. Only use a pattern if the good consequences outweigh the bad (and remember that every pattern has negative consequences)
  3. Use patterns and implementations only if the entire team understands them
  4. Use the simplest pattern that solves your problem
  5. Don’t be afraid to combine several simple patterns to solve a complex problem
  6. Be aware of (and consciously accept) the consequences of any low level patterns that come with most high level pattern implementations
  7. Strive for “pattern mastery”, where you are solving problems with natural solutions, not looking for ways to apply any specific pattern

Eventually maybe we’ll all be in a software development retirement home and we can shout things like “Memento” and “Channel adaptor” and everyone will chuckle with fond memories of how those patterns made our lives easier as we built the software on which the world runs.

Monday, July 19, 2010 11:03:04 AM (Central Standard Time, UTC-06:00)
 Wednesday, June 02, 2010

I am working on a video series discussing the use of the MVVM design pattern, plus a “zero code-behind” philosophy with CSLA .NET version 4.

I’ve decided to put the intro video out there for free. It is a good video, covering the basics of the pattern, my approach to the pattern and how a CSLA-based Model works best in the pattern.

http://www.lhotka.net/files/MvvmIntro.wmv

The rest of the series will be demo-based, covering the specifics of implementation. I’ll add another blog post when that’s available for purchase, but I thought I’d get this intro online now so people can make use of it.

Wednesday, June 02, 2010 12:36:58 PM (Central Standard Time, UTC-06:00)
 Friday, May 07, 2010

There’s a lot of understandable buzz about the iPad, including this well-written post that I think conveys something important.

As a Microsoft-oriented technical evangelist I’m now almost constantly asked what I think of the iPad. I’ve spent perhaps a total of 20 minutes with one, so I’m not an expert, but I can confidently say that I found it very pleasant to use, and I think it is a great v1 product.

(As an aside, many people also call it a “Kindle killer”, which I absolutely think it is NOT. It is too heavy, the screen is shiny and backlit, and the battery doesn’t last 1-2 weeks. It was almost inconceivable to me that anything could replace real books – but the Kindle did – and the iPad certainly doesn’t compete with real paper or the Kindle.)

I think the iPad reveals something very, very important. As does the iPhone, the Android phones and the upcoming Windows Phone 7: most users don’t need a “computer”.

Not a computer in the traditional sense, the way we as software designer/developers think about it.

Given the following:

  • “Instant” on
  • Primarily touch-based UI for the OS
  • Apps (and an OS) that are designed for touch through and through (and no non-touch apps)
  • Light weight
  • Good battery life
  • Good networking (including home LAN, corporate domain, network printing, etc)
  • Portable peripherals, and standard connectors (USB, Firewire, ESATA, etc)
  • Docking station capability

I submit that your typical user doesn’t need a traditional computer. Sure, there are the “knowledge workers” in accounting, who push computers harder than developers do, but they aren’t a typical user either.

From what I can see, a typical user spends a lot of time:

  • reading and composing email
  • using specialized line of business apps, mostly doing data entry and data viewing/analysis
  • browsing the web
  • playing lightweight casual games (solitaire, Flash-based games, etc)
  • using consumer apps like birthday card makers
  • organizing and viewing pictures and home videos
  • creating simple art projects with drawing apps, etc

None of these things require anything like the new i7 quad core (w/ hyperthreading – so 8 way) laptop Magenic is rolling out to all its consultants. Most users just don’t need that kind of horsepower, and would gladly trade it to get better battery life and more intuitive apps.

Which (finally) brings me to the real point of this post: today’s apps suck (just ask David Platt).

David talks a lot about why software sucks. But I want to focus on one narrow area: usability, especially in a world where touch is the primary model, and keyboard/mouse is secondary.

I have a Windows 7 tablet, which I like quite a lot. But it is far, far, far less usable than the iPad for most things. Why? It really isn’t because of Windows, which can be configured to be pretty touch-friendly. It is because of the apps.

Outlook, for example, is absolutely horrible. Trying to click on a message in the inbox, or worse, trying to click on something in the ribbon – that’s crazy. I’m a big guy, and I have big fingers. I typically touch the wrong thing more often than the right thing…

Web browsers are also horrible. Their toolbars are too small as well. But web pages are as much to blame – all those text links crammed together in tables and lists – it is nearly impossible to touch the correct link to navigate from page to page. Sure, I can zoom in and out, but that’s just a pain.

The web page thing is one area where the iPad is just as bad as anything else. It isn’t the fault of the devices (Windows or iPad), it is the fault of the web page designers. And it really isn’t their fault either, because their primary audience is keyboard/mouse computer users…

And that’s the challenge we all face. If the traditional computing form factor is at its end, and I suspect it is, then we’re in for an interesting ride over the next 5-7 years. I don’t think there’s been as big a user interaction transition since we moved from green-screen terminals to the Windows GUI keyboard/mouse world.

Moving to a world that is primarily touch is going to affect every app we build in pretty fundamental ways. When click targets need to be 2-4 times bigger than they are today, our beautiful high-resolution screens start to seem terribly cramped. And these battery-conserving end user devices don’t have that high of resolution to start with, so that makes space really cramped.

And that means interaction metaphors must change, and UI layouts need to be more dynamic. That’s the only way to really leverage this limited space and retain usability.

For my part, I think Microsoft is in a great place in this regard. Several years ago they introduced WPF and XAML, which are awesome tools for addressing these UI requirements. More recently they streamlined those concepts by creating Silverlight – lighter weight and more easily deployed, but with the same UI power.

I’d be absolutely shocked if we don’t see some sort of Silverlight-based tablet/slate/pad/whatever device in the relatively near future. And I’d be shocked if we don’t see the iPad rapidly evolve based on user feedback.

I really think we’re entering a period of major transition in terms of what it means to be a “computer user”, and this transition will have a deep impact on how we design and develop software for these computing appliances/devices.

And it all starts with recognizing that the type of UI we’ve been building since the early 1990’s is about to become just as obsolete as the green-screen terminal UIs from the 1980’s.

It took about 5 years for most organizations to transition from green-screen to GUI. I don’t think the iPad alone is enough to start the next transition, but I think it is the pebble that will start the avalanche. Once there are other devices (most notably some Silverlight-based device – in my mind at least), then the real change will start, because line of business apps will shift, as will consumer apps.

I’m looking forward to the next few years – I think it is going to be a wild ride!

Friday, May 07, 2010 10:16:26 AM (Central Standard Time, UTC-06:00)
 Thursday, May 06, 2010
Thursday, May 06, 2010 11:30:45 AM (Central Standard Time, UTC-06:00)
 Thursday, February 19, 2009

Setting up any type of n-tier solution requires the creation of numerous projects in the solution, along with appropriate references, configuration and so forth. Doing this with a Silverlight application is complicated a bit because Silverlight and .NET projects are slightly different (they use different compilers, runtimes, etc). And sharing code between Silverlight and .NET projects complicates things a bit more, because the same physical code files are typically shared between two different projects in the solution.

CSLA .NET for Silverlight makes it relatively easy to create powerful n-tier applications that do share some code between the Silverlight client and the .NET server(s). Even though CSLA .NET does solve a whole host of issues for you, the reality is that the solution still needs to be set up correctly.

Here are the basic steps required to set up an n-tier CSLA .NET for Silverlight solution:

  1. Create a new Silverlight application project
    1. Have Visual Studio create a web application for the Silverlight project
  2. Add a new Silverlight Class Library project (this is your business library)
  3. Add a new .NET Class Library project (this is your business library)
  4. Use the Project Properties window to set the Silverlight and .NET Class Library projects to use the same namespace and assembly name
  5. Remove the Class1 files from the Silverlight and .NET Class Library projects
  6. (optional) Add a .NET Class Library project to contain the data access code
  7. Set up references
    1. The Silverlight application should reference Csla.dll (for Silverlight) and the Silverlight Class Library
    2. The Silverlight Class Library (business) should reference Csla.dll (for Silverlight)
    3. The ASP.NET Web application should reference Csla.dll (for .NET), the .NET Class Library (business) and the .NET Class Library (data)
    4. The .NET Class Library (data) should reference Csla.dll (for .NET) and the .NET Class Library (business)
    5. The .NET Class Library (business) should reference Csla.dll (for .NET)
  8. Add your business classes to the .NET Class Library (business)
    1. Link them to the Silverlight Class Library (business)
    2. Use compiler directives (#if SILVERLIGHT) or partial classes to create Silverlight-only or .NET-only code in each Class Library (see the sketch after this list)
  9. Configure the data portal
    1. Add a WcfPortal.svc file to the ASP.NET web application to define an endpoint for the Silverlight data portal
    2. Add a <system.serviceModel> element to web.config in the ASP.NET web application to configure the endpoint for the Silverlight data portal
    3. Add any connection string or other configuration values needed on the server to the web.config file
    4. Add a ServiceReferences.ClientConfig file to the Silverlight application and make sure it has an endpoint named BasicHttpBinding_IWcfPortal pointing to the server (a configuration sketch follows this list)
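To make steps 9.2 and 9.4 concrete, here is a rough configuration sketch. The CSLA service and contract type names are from memory of the CSLA .NET 3.6 templates and the address is a placeholder, so treat this as a starting point and compare against the CSLA samples for your exact version:

<!-- web.config in the ASP.NET web application (server) -->
<system.serviceModel>
  <services>
    <service name="Csla.Server.Hosting.Silverlight.WcfPortal">
      <endpoint address=""
                binding="basicHttpBinding"
                contract="Csla.Server.Hosting.Silverlight.IWcfPortal" />
    </service>
  </services>
</system.serviceModel>

<!-- ServiceReferences.ClientConfig in the Silverlight application (client) -->
<configuration>
  <system.serviceModel>
    <client>
      <endpoint name="BasicHttpBinding_IWcfPortal"
                address="http://localhost:1234/WcfPortal.svc"
                binding="basicHttpBinding"
                contract="Csla.WcfPortal.IWcfPortal" />
    </client>
  </system.serviceModel>
</configuration>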

This isn’t the simplest option for creating a CSLA .NET for Silverlight solution, nor the most complex. You could use CSLA .NET for Silverlight to create a client-only application (that’s the simplest), or a 4-tier application where there is not only a web server in the DMZ, but also a separate application server behind a second firewall. However, I do think the model I’ve shown in this blog post is probably the most common scenario, which is why it is the one I chose to outline.

Thursday, February 19, 2009 4:24:58 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Monday, January 12, 2009

The Microsoft Patterns and Practices group has a new Application Architecture Guide available.

Architecture, in my view, is primarily about restricting options.

Or to put it another way, it is about making a set of high level, often difficult, choices up front. The result of those choices is to restrict the options available for the design and construction of a system, because the choices place a set of constraints and restrictions around what is allowed.

When it comes to working in a platform like Microsoft .NET, architecture is critical. This is because the platform provides many ways to design and implement nearly anything you’d like to do. There are around 9 ways to talk to a database – from Microsoft, not counting the myriad 3rd party options. The number of ways to build web apps continues to grow, etc. The point I’m making is that if you just throw the entire .NET framework at a dev group you’ll get a largely random result that may or may not actually meet the short, medium and long-term needs of your business.

Developing an architecture first allows you to rationally evaluate the various options, discard those that don’t fit the business and application requirements and only allow use of those that do meet the needs.

An interesting side-effect of this process is that your developers may disagree. They may only see short-term issues, or purely technical concerns, and may not understand some of the medium/long term issues or broader business concerns. And that’s OK. You can either say “buck up and do what you are told”, or you can try to educate them on the business issues (recognizing that not all devs are particularly business-savvy). But in the end, you do need some level of buy-in from the devs or they’ll work against the architecture, often to the detriment of the overall system.

Another interesting side-effect of this process is that an ill-informed or disconnected architect might create an architecture that is really quite impractical. In other words, the devs are right to be up in arms. This can also lead to disaster. I’ve heard it said that architects shouldn’t code, or can’t code. If your application architect can’t code, they are in trouble, and your application probably is too. On the other hand, if they don’t know every nuance of the C# compiler, that’s probably good too! A good architect can’t afford to be that deep into any given tool, because they need more breadth than a hard-core coder can achieve.

Architects live in the intersection between business and technology.

As such they need to be able to code, and to have productive meetings with business stakeholders – often both in the same day. Worse, they need to have some understanding of all the technology options available from their platform – and the Microsoft platform is massive and complex.

Which brings me back to the Application Architecture Guide. This guide won’t solve all the challenges I’m discussing. But it is an invaluable tool in any .NET application architect’s arsenal. If you are, or would like to be, an architect or application designer, you really must read this book!

Monday, January 12, 2009 11:29:13 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Friday, December 12, 2008

My previous blog post (Some thoughts on Windows Azure) has generated some good comments. I was going to answer with a comment, but I wrote enough that I thought I’d just do a follow-up post.

Jamie says “Sounds akin to Architecture:"convention over configuration" I'm wholly in favour of that.”

Yes, this is very much a convention over configuration story, just at an arguably larger scope than we’ve seen thus far. Or at least showing a glimpse of that larger scope. Things like Rails are interesting, but I don't think they are as broad as the potential we're seeing here.

It is my view that the primary job of an architect is to eliminate choices. To create artificial restrictions within which applications are created. Basically to pre-make most of the hard choices so each application designer/developer doesn’t need to.

What I’m saying with Azure, is that we’re seeing this effect at a platform level. A platform that has already eliminated most choices, and that has created restrictions on how applications are created – thus ensuring a high level of consistency, and potential portability.

Jason is confused by my previous post, in that I say Azure has a "lock-in problem", but that the restricted architecture is a good thing.

I understand your confusion, as I probably could have been more clear. I do think Windows Azure has a lock-in problem - it doesn't scale down into my data center, or onto my desktop. But the concept of a restricted runtime is a good one, and I suspect that concept may outlive this first run at Azure. A restricted architecture (and runtime to support it) doesn't have to cause lock-in at the hosting level. Perhaps at the vendor level - but those of us who live in the Microsoft space put that concern behind us many years ago.

Roger suggests that most organizations may not have the technical savvy to host the Azure platform in-house.

That may be a valid point – the current Azure implementation might be too complex for most organizations to administer. Microsoft didn't build it as a server product, so they undoubtedly made implementation choices that create complexity for hosting. This doesn’t mean the idea of a restricted runtime is bad. Nor does it mean that someone (possibly Microsoft) couldn't create such a restricted runtime that could be deployed within an organization, or in the cloud. Consider that there is a version of Azure that runs on a developer's workstation already - so it isn't hard to imagine a version of Azure that I could run in my data center.

Remember that we’re talking about a restricted runtime, with a restricted architecture and API. Basically a controlled subset of .NET. We’re already seeing this work – in the form of Silverlight. Silverlight is largely compatible with .NET, even though they are totally separate implementations. And Moonlight demonstrates that the abstraction can carry to yet another implementation.

Silverlight has demonstrated that most business applications only use 5 of the 197 megabytes in the .NET 3.5 framework download to build the client. Just how much is really required to build the server parts? A different 5 megabytes? 10? Maybe 20 tops?

If someone had a defined runtime for server code, like Silverlight is for the client, I think it becomes equally possible to have numerous physical implementations of the same runtime. One for my laptop, one for my enterprise servers and one for Microsoft to host in the cloud. Now I can write my app in this .NET subset, and I can not only scale out, but I can scale up or down too.

That’s where I suspect this will all end up, and spurring this type of thinking is (to me) the real value of Azure in the short term.

Finally, Justin rightly suggests that we can use our own abstraction layer to be portable to/from Azure even today.

That's absolutely true. What I'm saying is that I think Azure could light the way to a platform that already does that abstraction.

Many, many years ago I worked on a project to port some software from a Prime to a VAX. It was only possible because the original developers had (and I am not exaggerating) abstracted every point of interaction with the OS. Everything that wasn't part of the FORTRAN-77 spec was in an abstraction layer. I shudder to think of the expense of doing that today - of abstracting everything outside the C# language spec - basically all of .NET - so you could be portable.

So what we need, I think, is this server equivalent to Silverlight. Azure is not that - not today - but I think it may start us down that path, and that'd be cool!

Friday, December 12, 2008 11:33:03 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Thursday, December 11, 2008

At PDC, Microsoft announced Windows Azure - their Windows platform for cloud computing. There's a lot we don't know about Azure, including the proper way to pronounce the word. But that doesn't stop me from being both impressed and skeptical about what I do know.

In the rest of this post, as I talk about Azure, I'm talking about the "real Azure". One part of Azure is the idea that Microsoft would host my virtual machine in their cloud - but to me that's not overly interesting, because that's already a commodity market (my local ISP does this, Amazon does this - who doesn't do this??). What I do think is interesting is the more abstract Azure platform, where my code runs in a "role" and has access to a limited set of pre-defined resources. This is the Azure that forces the use of a scale-out architecture.

I was impressed by just how much Microsoft was able to show and demo at the PDC. For a fledgling technology that is only partially defined, it was quite amazing to see end-to-end applications built on stage, from the UI to business logic to data storage. That was unexpected and fun to watch.

I am skeptical as to whether anyone will actually care. I think whether people care will primarily be determined by the price point Microsoft sets. But it will also be determined by how Microsoft addresses the "lock-in question".

I expect the pricing to end up in one of three basic scenarios:

  1. Azure is priced for the Fortune 500 (or 100) - in which case the vast majority of us won't care about it
  2. Azure is priced for the small to mid-sized company space (SMB) - in which case quite a lot of us might be interested
  3. Azure is priced to be accessible for hosting your blog, or my blog, or www.lhotka.net or other small/personal web sites - in which case the vast majority of us may care a lot about it

These aren't entirely mutually exclusive. I think 2 and 3 could both happen, but I think 1 is by itself. Big enterprises have such different needs than people or SMB, and they do business in such different ways compared to most businesses or people, that I suspect we'll see Microsoft either pursue 1 (which I think would be sad) or 2 and maybe 3.

But there's also the lock-in question. If I built my application for Azure, Microsoft has made it very clear that I will not be able to run my app on my servers. If I need to downsize, or scale back, I really can't. Once you go to Azure, you are there permanently. I suspect this will be a major sticking point for many organizations. I've seen quotes by Microsoft people suggesting that we should all factor our applications into "the part we host" and "the part they host". But even assuming we're all willing to go to that work, and introduce that complexity, this still means that part of my app can never run anywhere but on Microsoft's servers.

I suspect this lock-in issue will be the biggest single roadblock to adoption for most organizations (assuming reasonable pricing - which I think is a given).

But I must say that even if Microsoft doesn't back down on this, and even if that does block the success of "Windows Azure" as we see it now, that's probably OK.

Why?

Because I think the biggest benefit to Azure is one important concept: an abstract runtime.

If you or I write a .NET web app today, what are the odds that we can just throw it on a random Windows Server 200x box and have it work? Pretty darn small. The same is true for apps written for Unix, Linux or any other platform.

The reason apps can't just "run anywhere" is because they are built for a very broad and ill-defined platform, with no consistently defined architecture. Sure you might have an architecture. And I might have one too. But they are probably not the same. The resulting applications probably use different parts of the platform, in different ways, with different dependencies, different configuration requirements and so forth. The end result is that a high level of expertise is required to deploy any one of our apps.

This is one reason the "host a virtual machine" model is becoming so popular. I can build my app however I choose. I can get it running on a VM. Then I can deploy the whole darn VM, preconfigured, to some host. This is one solution to the problem, but it is not very elegant, and it is certainly not very efficient.

I think Azure (whether it succeeds or fails) illustrates a different solution to the problem. Azure defines a limited architecture for applications. It isn't a new architecture, and it isn't the simplest architecture. But it is an architecture that is known to scale. And Azure basically says "if you want to play, you play my way". None of this random use of platform features. There are few platform features, and they all fit within this narrow architecture that is known to work.

To me, this idea is the real future of the cloud. We need to quit pretending that every application needs a "unique architecture" and realize that there are just a very few architectures that are known to work. And if we pick a good one for most or all apps, we might suffer a little in the short term (to get onto that architecture), but we win in the long run because our apps really can run anywhere. At least anywhere that can host that architecture.

Now in reality, it is not just the architecture. It is a runtime and platform and compilers and tools that support that architecture. Which brings us back to Windows Azure as we know it. But even if Azure fails, I suspect we'll see the rise of similar "restricted runtimes". Runtimes that may leverage parts of .NET (or Java or whatever), but disallow the use of large regions of functionality, and disallow the use of native platform features (from Windows, Linux, etc). Runtimes that force a specific architecture, and thus ensure that the resulting apps are far more portable than the typical app is today.

Thursday, December 11, 2008 6:31:31 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Wednesday, December 03, 2008

One of the topic areas I get asked about frequently is authorization. Specifically role-based authorization as supported by .NET, and how to make that work in the "real world".

I get asked about this because CSLA .NET (for Windows and Silverlight) follows the standard role-based .NET model. In fact, CSLA .NET rests directly on the existing .NET infrastructure.

So what's the problem? Why doesn't the role-based model work in the real world?

First off, it is important to realize that it does work for some scenarios. It isn't bad for coarse-grained models where users are authorized at the page or form level. ASP.NET directly uses this model for its authorization, and many people are happy with that.

But it doesn't match the requirements of a lot of organizations in my experience. Many organizations have a slightly more complex structure that provides better administrative control and manageability.


Whether a user can get to a page/form, or can view a property or edit a property is often controlled by a permission, not a role. In other words, users are in roles, and a role is essentially a permission set: a list of permissions the role has (or doesn't have).

This doesn't map real well into the .NET IPrincipal interface, which only exposes an IsInRole() method. Finding out if the user is in a role isn't particularly useful, because the application really needs to call some sort of HasPermission() method.

In my view the answer is relatively simple.

The first step is understanding that there are two concerns here: the administrative issues, and the runtime issues.

At administration time the concepts of "user", "role" and "permission" are all important. Admins will associate permissions with roles, and roles with users. This gives them the kind of control and manageability they require.

At runtime, when the user is actually using the application, the roles are entirely meaningless. However, if you consider that IsInRole() can be thought of as "HasPermission()", then there's a solution. When you load the .NET principal with a list of "roles", you really load it with a list of permissions. So when your application asks "IsInRole()", it does it like this:

bool result = currentPrincipal.IsInRole(requiredPermission);

Notice that I am "misusing" the IsInRole() method by passing in the name of a permission, not the name of a role. But that's ok, assuming that I've loaded my principal object with a list of permissions instead of a list of roles. Remember, the IsInRole() method typically does nothing more than determine whether the string parameter value is in a list of known values. It doesn't really matter if that list of values are "roles" or "permissions".

And since, at runtime, no one cares about roles at all, there's no sense loading them into memory. This means the list of "roles" can instead be a list of "permissions".
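As a minimal sketch of that idea, using the standard .NET GenericPrincipal (GetPermissionsForUser is a hypothetical data access call that returns the merged permission list described below):

// load the permission list where the "roles" normally go
string username = "jsmith";
string[] permissions = GetPermissionsForUser(username);

var identity = new System.Security.Principal.GenericIdentity(username);
var principal =
  new System.Security.Principal.GenericPrincipal(identity, permissions);
System.Threading.Thread.CurrentPrincipal = principal;

// later, anywhere in the application:
bool canEdit = principal.IsInRole("EditCustomer"); // really "has permission?"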

The great thing is that many people store their users, roles and permissions in some sort of relational store (like SQL Server). In that case it is a simple JOIN statement to retrieve all permissions for a user, merging all the user's roles together to get that list, and not returning the actual role values at all (because they are only useful at admin time).

Wednesday, December 03, 2008 11:19:01 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Monday, July 28, 2008

In preparation for the next public release of CSLA Light (CSLA .NET for Silverlight), I'm going to do a series of posts describing the basic process of creating a CSLA Light application. This first post will provide a high-level overview of the project structure and some concepts.

First, it is important to realize that CSLA Light will support three primary physical architectures: a 3-tier (or SOA) model, a 3-tier mobile objects model and a 4-tier mobile objects model. At all times CSLA Light follows the same n-layer logical architecture of CSLA .NET, where the application has the following layers:

  1. Presentation/UI
  2. Business
  3. Data Access
  4. Data Storage

The primary goal of CSLA Light and CSLA .NET is to support the creation of the Business layer in such a way that the interface points to the Presentation/UI layer and to the Data Access layer are clearly defined.

To a small degree, CSLA Light will assist in the creation of the Presentation/UI layer by providing some custom Silverlight controls that solve common issues and reduce UI code.

Like CSLA .NET, CSLA Light is not an ORM and does not really care how you talk to your database. It simply defines a set of CRUD operators - locations where you should invoke your Data Access layer to retrieve and update data.

3-Tier or SOA Model

If you use CSLA Light in a 3-tier (or SOA) model, your physical tiers are:

  1. Silverlight
  2. Service
  3. Database

You can treat these as tiers, or as separate applications where you use SOA concepts for communication between the applications. That's really up to you. In this case the focus of CSLA Light is entirely on the Silverlight tier. In that Silverlight tier, you'll have the following logical layers:

  1. Presentation/UI
  2. Business
  3. Data Access/Service Facade

To do this, you configure the CSLA Light data portal to run in local mode and it calls the DataPortal_XYZ methods right there in the Silverlight client tier. You implement the DataPortal_XYZ methods to invoke remote services in the Service tier. CSLA Light doesn't care what those services look like or how they are implemented. They could be asmx, WCF, RSS, JSON, RESTful, SOAP-based - it just doesn't matter. If you can call it from Silverlight, you can use it.

The CSLA Light data portal is designed to be asynchronous, because any calls to remote services in Silverlight will be an asynchronous operation. In other words, the data portal is designed to support you as you call these remote services asynchronously.

The only requirement is that by the time your async DataPortal_XYZ method completes, the object's data has been created, retrieved, updated or deleted.

This is the simplest of the CSLA Light architectures (assuming the services and database already exist) because you just create one project, a Silverlight Application, and all your code goes into that project. Or perhaps you create two projects, a Silverlight Application for the UI and a Silverlight Class Library for your business objects, but even that is pretty straightforward.

3-Tier Mobile Objects Model

If you use CSLA Light in a 3-tier mobile objects model, your physical tiers are:

  1. Silverlight
  2. Data portal server
  3. Database

This is more like a traditional CSLA .NET model, because the data portal is used to communicate between the Business layer on the Silverlight tier and the Business layer on the Data portal server tier. Your business objects literally move back and forth between those two tiers - just like they do between the client and server tiers in a CSLA .NET 3-tier model.

One interesting bit of information here though, is that your business objects are not just moving between tiers, but they are moving between platforms. On Silverlight they are CSLA Light objects in the Silverlight runtime, and on the Data portal server they are CSLA .NET objects in the .NET runtime. The CSLA Light data portal makes this possible, and the result is really very cool, because most of your business object code is written one time and yet runs in both locations!

The Data Access layer only exists on the Data portal server, in .NET. So the Silverlight tier relies on the data portal, and the .NET server, for all create, fetch, update and delete behaviors. Also, that data access code is not even deployed to the client.

However, the client does have most business, validation and authorization logic. So the user experience is rich, interactive and immediate - just like a WPF or Windows Forms user experience.

This architecture is a little more complex, and requires that you create several projects. The following figure shows a simple solution with the required projects:

[figure: Solution Explorer showing the four projects in the SimpleApp solution]

Here's an explanation of the projects:

  • SimpleAppWeb: web app hosting the Silverlight xap file and the data portal service
  • Library.Client: Silverlight Class Library containing business classes
  • Library.Server: .NET Class Library containing business classes
  • SimpleApp: Silverlight Application containing the Silverlight UI application

The big thing to notice here is that the CustomerEdit.vb class exists in both Library.Client and Library.Server. In Library.Server it is a linked file, meaning that the project merely links to the file that exists in Library.Client. You can tell this by the different icon glyph in Solution Explorer.

The reason for this is that CustomerEdit.vb contains code for a business object that moves between the Silverlight client and the .NET server. It contains code that compiles in both environments.

You can then use compiler directives or partial classes to implement code in that class that is Silverlight-only or .NET-only. I started a thread on the forum about which is better - people seem to prefer partial classes, though compiler directives are certainly a clean solution as well.
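As a rough illustration of the two techniques (the class and members shown here are illustrative, not part of the sample app, and the DataPortal_Fetch signature follows the CSLA 3.6 style):

// CustomerEdit.cs - the shared file, linked into both class libraries
public partial class CustomerEdit : Csla.BusinessBase<CustomerEdit>
{
  // properties, rules, etc. that compile on both platforms go here

#if !SILVERLIGHT
  // Option 1: compiler directive - .NET-only code inside the shared file
  private void DataPortal_Fetch(Csla.SingleCriteria<CustomerEdit, int> criteria)
  {
    // call the data access layer here
  }
#endif
}

// CustomerEdit.Server.cs - Option 2: a partial class file that exists only in
// the .NET class library, so it never compiles into the Silverlight project
public partial class CustomerEdit
{
  // server-only members go here
}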

4-Tier Mobile Objects Model

If you use CSLA Light in a 4-tier mobile objects model, your physical tiers are:

  1. Silverlight
  2. Data portal server (web)
  3. Data portal server (application)
  4. Database

From a Silverlight perspective this is exactly the same as the 3-tier model. The difference is purely in how the CSLA .NET data portal is configured on the web server (#2). In this case, the CSLA .NET data portal is configured to be in remote mode, so all data portal calls on machine 2 are relayed to machine 3.

This is totally normal CSLA .NET behavior, and doesn't change how anything else works. But it can be a useful physical architecture when the web server (#2), which is exposed to the Internet so the Silverlight client can interact with it, is not allowed to talk to the database. In other words, you could have an external firewall between 1 and 2, and then an internal firewall between 2 and 3. So machine 2 is in a DMZ, and machine 3 is the only one allowed to talk to the database.

What I'm describing is a pretty standard web deployment model for CSLA .NET - but with a CSLA Light client out there, instead of a simple web page.

As I continue this series I'll dig into the code of this simple application, showing how to create the business class, configure the data portal and set up the Silverlight UI project.

Monday, July 28, 2008 10:26:08 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Thursday, June 12, 2008

I've been engaged in a discussion around CSLA .NET on an MSDN architecture forum - you can see the thread here. I put quite a bit of time/effort into one of my replies, and I wanted to repost it here for broader distribution:

I don’t think it is really fair to relate CSLA .NET to CSLA Classic. I totally rewrote the framework for .NET (at least 4 times actually – trying different approaches/concepts), and the only way in which CSLA .NET relates to CSLA Classic is through some of the high level architectural goals around the use of mobile objects and the minimization of UI code.

The use of LSet in VB5/6, btw, was the closest approximation to the concept of a union struct from C or Pascal or a common block from FORTRAN possible in VB. LSet actually did a memcopy operation, and so wasn’t as good as a union struct, but was radically faster than any other serialization option available in VB at the time. So while it was far from ideal, it was the best option available back then.

Obviously .NET provides superior options for serialization through the BinaryFormatter and NetDataContractSerializer, and CSLA .NET makes use of them. To be fair though, a union struct would still be radically faster :)

Before I go any further, it is very important to understand the distinction between ‘layers’ and ‘tiers’. Clarity of wording is important when having this kind of discussion. I discuss the difference in Chapter 1 of my Expert 2005 Business Objects book, and in several blog posts – perhaps this one is best:

http://www.lhotka.net/weblog/MiddletierHostingEnterpriseServicesIISDCOMWebServicesAndRemoting.aspx

The key thing is that a layer is a logical separation of concerns, and a tier directly implies a process or network boundary. Layers are a design constraint, tiers are a deployment artifact.

How you layer your code is up to you. Many people, including myself, often use assemblies to separate layers. But that is really just a crutch – a reminder to have discipline. Any clear separation is sufficient. But you are absolutely correct, in that a great many developers have trouble maintaining that discipline without the clear separation provided by having different code in different projects (assemblies).

1)

CSLA doesn’t group all layers into a single assembly. Your business objects belong in one layer – often one assembly – and so all your business logic (validation, calculation, data manipulation, authorization, etc) are in that assembly.

Also, because CSLA encourages the use of object-oriented design and programming, encapsulation is important. And other OO concepts like data hiding are encouraged. This means that the object must manage its own fields. Any DAL will be working with data from the object’s fields. So the trick is to get the data into and out of the private fields of the business object without breaking encapsulation. I discussed the various options around this issue in my previous post.

Ultimately the solution in most cases is for the DAL to provide and consume the data through some clearly defined interface (ADO.NET objects or DTOs) so the business object can manage its own fields, and can invoke the DAL to handle the persistence of the data.

To be very clear then, CSLA enables separation of the business logic into one assembly and the data access code into a separate assembly.

However, it doesn’t force you to do this, and many people find it simpler to put the DAL code directly into the DataPortal_XYZ methods of their business classes. That’s fine – there’s still logical separation of concerns and logical layering – it just isn’t as explicit as putting that code in a separate assembly. Some people have the discipline to make that work, and if they do have that discipline then there’s nothing wrong with the approach imo.

2)

I have no problem writing business rules in code. I realize that some applications have rules that vary so rapidly or widely that the only real solution is to use a metadata-driven rules engine, and in that case CSLA isn’t a great match.

But let’s face it, most applications don’t change that fast. Most applications consist of business logic written in C#/VB/Java/etc. CSLA simply helps formalize what most people already do, by providing a standardized approach for implementing business and validation rules such that they are invoked efficiently and automatically as needed.

Also consider that CSLA’s approach separates the concept of a business rule from the object itself. You then link properties on an object to the rules that apply to that object. This linkage can be dynamic – metadata-driven. Though the rules themselves are written as code, you can use a table-driven scheme to link rules to properties, allowing for SaaS scenarios, etc.
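For example, in CSLA 3.x terms the linkage is made in AddBusinessRules(). The rule itself is code, but nothing stops you from driving this property-to-rule mapping from a table instead of hard-coding it (the property name here is just an example):

protected override void AddBusinessRules()
{
  // link a rule method to a property - this mapping could be read from metadata
  ValidationRules.AddRule(
    Csla.Validation.CommonRules.StringRequired, "Name");
}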

3)

This is an inaccurate assumption. CSLA .NET requires a strong separation between the UI and business layers, and allows for a very clear separation between the business and data access layers, and you can obviously achieve separation between the data access and data storage layers.

This means that you can easily have UI specialists that know little or nothing about OO design or other business layer concepts. In fact, when using WPF it is possible for the UI to only have UI-specific code – the separation is cleaner than is possible with Windows Forms or Web Forms thanks to the improvements in data binding.

Also, when using ASP.NET MVC (in its present form at least), the separation is extremely clear. Because the CSLA-based business objects implement all business logic, the view and controller are both very trivial to create and maintain. A controller method is typically just the couple lines of code necessary to call the object’s factory and connect it to the view, or to call the MVC helper to copy data from the postback into the object and to have the object save itself. I’m really impressed with the MVC framework when used in conjunction with CSLA .NET.
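To give a feel for that, here is a rough sketch of such a controller. ProjectEdit and its factory/Save members are hypothetical CSLA-style business class members, and UpdateModel is the helper name from later MVC builds (the names shifted a bit across the previews), so treat this as illustrative only:

using System.Web.Mvc;

public class ProjectController : Controller
{
  // connect the business object's factory to the view
  public ActionResult Edit(int id)
  {
    var project = ProjectEdit.GetProject(id);
    return View(project);
  }

  // copy the postback values into the object and have it save itself
  public ActionResult Save(int id, FormCollection postedValues)
  {
    var project = ProjectEdit.GetProject(id);
    UpdateModel(project);
    project = project.Save();
    return RedirectToAction("Edit", new { id = id });
  }
}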

And it means that you can have data access specialists that only focus on ADO.NET, LINQ to SQL, EF, nHibernate or whatever. In my experience this is quite rare – very few developers are willing to be pigeonholed into such a singularly uninteresting aspect of software – but perhaps your experiences have been different.

Obviously it is always possible to have database experts who design and implement physical and logical database designs.

4)

I entirely agree that the DTO design pattern is incredibly valuable when building services. But no one pattern is a silver bullet and all patterns have both positive and negative consequences. It is the responsibility of professional software architects, designers and developers to use the appropriate patterns at the appropriate times to achieve the best end results.

CSLA .NET enables, but does not require, the concept of mobile objects. This concept is incredibly powerful, and is in use by a great many developers. Anyone passing disconnected ADO Recordsets, or DataSets or hashtables/dictionaries/lists across the network uses a form of mobile objects. CSLA simply wraps a pre-existing feature of .NET and makes it easier for you to pass your own rich objects across the network.

Obviously only the object’s field values travel across the network. This means that a business object consumes no more network bandwidth than a DTO. But mobile objects provide a higher level of transparency in that the developer can work with essentially the same object model, and the same behaviors, on either side of the network.

Is this appropriate for all scenarios? No. Decisions about whether the pattern is appropriate for any scenario or application should be based on serious consideration of the positive and negative consequences of the pattern. Like any pattern, the mobile object pattern has both types of consequence.

If you look at my blog over the past few years, I’ve frequently discussed the pros and cons of using a pure service-oriented approach vs an n-tier approach. Typically my n-tier arguments pre-suppose the use of mobile objects, and there are some discussions explicitly covering mobile objects.

The DTO pattern is a part of any service-oriented approach, virtually by definition. Though it is quite possible to manipulate your XML messages directly, most people find that unproductive and prefer to use a DTO as an intermediary – which makes sense for productivity even if it isn’t necessarily ideal for performance or control.

The DTO pattern can be used for n-tier approaches as well, but it is entirely optional. And when compared to other n-tier techniques involving things like the DataSet or mobile objects the DTO pattern’s weaknesses become much more noticeable.

The mobile object pattern is not useful for any true service-oriented scenario (note that I’m not talking about web services here, but rather true message-based SOA). This is because your business objects are your internal implementation and should never be directly exposed as part of your external contract. That sort of coupling between your external interface contract and your internal implementation is always bad – and is obviously inappropriate when using DTOs as well. DTOs can comprise part of your external contract, but should never be part of your internal implementation.

The mobile object pattern is very useful for n-tier scenarios because it enables some very powerful application models. Most notably, the way it is done in CSLA, it allows the application to switch between a 1-, 2- and 3-tier physical deployment merely by changing a configuration file. The UI, business and data developers do not need to change any code or worry about the details – assuming they’ve followed the rules for building proper CSLA-based business objects.

Thursday, June 12, 2008 8:58:29 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Tuesday, June 10, 2008

I've been prototyping various aspects of CSLA Light (CSLA for Silverlight) for some time now. Enough to be confident that a decent subset of CSLA functionality will work just fine in Silverlight - which is very exciting!

The primary area of my focus is serialization of object graphs, and I've blogged about this before. This one issue is directly on the critical path, because a resolution is required for the data portal, object cloning and n-level undo.

And I've come to a final decision regarding object serialization: I'm not going to try and use reflection. Silverlight turns out to have some reasonable support for reflection - enough for Microsoft to create a subset of the WCF DataContractSerializer. Unfortunately it isn't enough to create something like the BinaryFormatter or NetDataContractSerializer, primarily due to the limitations around reflecting against non-public fields.

One option I considered is to say that only business objects with public read-write properties are allowed. But that's a major constraint on OO design, and still doesn't resolve issues around calling a property setter to load values into the object - because property setters typically invoke authorization and validation logic.

Another option I considered is to actually use reflection. I discussed this in a previous blog post - because you can make it work as long as you insert about a dozen lines of code into every class you write. But I've decided this is too onerous and bug-prone. So while reflection could be made to work, I think the cost is too high.

Another option is to require that the business developer create a DTO (data transfer object) for each business object type. And all field values would be stored in this DTO rather than in normal fields. While this is a workable solution, it imposes a coding burden not unlike that of using the struct concepts from my CSLA Classic days in the 1990's. I'm not eager to repeat that model...

Yet another option is to rely on the concept of managed backing fields that I introduced in CSLA .NET 3.5. In CSLA .NET 3.5 I introduced the idea that you could choose not to declare backing fields for your properties, and that you could allow CSLA to manage the values for you in something called the FieldManager. Conceptually this is similar to the concept of a DependencyProperty introduced by Microsoft for WF and WPF.
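For anyone who hasn't seen managed backing fields, this is the general shape of such a property in CSLA .NET 3.5 (the RegisterProperty overloads have varied between versions, so check the templates for your version; CustomerEdit is just an example class name):

public class CustomerEdit : Csla.BusinessBase<CustomerEdit>
{
  // no private backing field - the value lives in the FieldManager
  private static Csla.PropertyInfo<string> NameProperty =
    RegisterProperty<string>(typeof(CustomerEdit), new Csla.PropertyInfo<string>("Name"));

  public string Name
  {
    get { return GetProperty<string>(NameProperty); }
    set { SetProperty<string>(NameProperty, value); }
  }
}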

The reason I introduced managed backing fields is that I didn't expect Silverlight to have reflection against private fields at all. I was excited when it turned out to have a level of reflection, but now that I've done all this research and prototyping, I've decided it isn't useful in the end. So I'm returning to my original plan - using managed backing fields to avoid the use of reflection when serializing business objects.

The idea is relatively simple. The FieldManager stores the property values in a dictionary (it is actually a bit more complex than that for performance reasons, but conceptually it is a dictionary). Because of this, it is entirely possible to write code to loop through the values in the field manager and to copy them into a well-defined data contract (DTO). In fact, it is possible to define one DTO that can handle any BusinessBase-derived object, another for any BusinessListBase-derived object and so forth. Basically one DTO per CSLA base class.

The MobileFormatter (the serializer I'm creating) can simply call Serialize() and Deserialize() methods on the CSLA base classes (defined by an IMobileObject interface that is implemented by BusinessBase, etc.) and the base class can get/set its data into/out of the DTO supplied by the MobileFormatter.

In the end, the MobileFormatter will have one DTO for each business object in the object graph, all in a single list of DTOs. The DataContractSerializer can then be used to convert that list of DTOs into an XML byte stream, as shown here:

[figure: the object graph serialized into a list of DTOs, then into an XML byte stream]

The XML byte stream can later be deserialized into a list of DTOs, and then into a clone of the original object graph, as shown here:

[figure: the XML byte stream deserialized into a list of DTOs, then into a clone of the original object graph]

Notice that the object graph shape is preserved (something the DataContractSerializer in Silverlight can't do at all), and that the object graph is truly cloned.

This decision does impose an important constraint on business objects created for CSLA Light, in that they must use managed backing fields. Private backing fields will not be supported. I prefer not to impose constraints, but this one seems reasonable because the alternatives are all worse than this particular constraint.

My goal is to allow you to write your properties, validation rules, business rules and authorization rules exactly one time, and to have that code run on both the Silverlight client and on your web/app server. To have that code compile into the Silverlight runtime and the .NET runtime. To have CSLA .NET and CSLA Light provide the same set of public and protected members so you get the same CSLA services in both environments.

By restricting CSLA Light to only support managed backing fields, I can accomplish that goal without imposing requirements for extra coding behind every business object, or the insertion of arcane reflection code into every business class.

Tuesday, June 10, 2008 8:01:50 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Wednesday, April 23, 2008

I spent some time over the past few days using my prototype Silverlight serializer to build a prototype Silverlight data portal. It is still fairly far from complete, but at least I've proved out the basic concept and uncovered some interesting side-effects of living in Silverlight.

The good news is that the basic concept of the data portal works. Defining objects that physically move between the Silverlight client and a .NET web server is practical, and works in a manner similar to the pure .NET data portal.

The bad news is that it can't work exactly like the pure .NET data portal, and the technique does require some manual effort when creating the business assemblies (yes, plural).

The approach I'm taking involves having two business assemblies (VS projects) that share many of the same code files. Suppose you want to have a Person object move between the client and server. You need Person in a Silverlight class library and in a .NET class library. This means two projects are required, even if they have the same code file.

Visual Studio makes this reasonable, because you can create the file in one project (say the Silverlight class library) and then Add Existing Item and use the Link feature to get that same file included into a .NET class library project.

I also make the class be a partial class, so I can add extra code to the .NET class library implementation. The result is:

BusinessLibrary.Client (Silverlight class library)
  -> Person.cs

BusinessLibrary.Server (.NET class library)
  -> Person.cs (linked from BusinessLibrary.Client)
  -> Person.Server.cs

One key thing is that both projects build a file called BusinessLibrary.dll. Also, because Person.cs is a shared file, it obviously has the same namespace. This is all very important, because the serializer requires that the fully qualified type name ("namespace.type,assembly") be the same on client and server. In my case it is "BusinessLibrary.Person,BusinessLibrary".

The Person.Server.cs file contains the server-only parts of the Person class - it is just the rest of the partial class. The only catch here is that it cannot define any fields, because that would confuse the serializer since those fields wouldn't exist on the client. Well, actually it could define fields, as long as they were marked as NonSerialized.

Of course you could also have a partial Person.Client.cs in the Silverlight class library - though I haven't found a need for that just yet.

One thing I'm debating is whether the .NET side of the data portal should just directly delegate Silverlight calls into the "real" data portal - effectively acting as a passive router between Silverlight and the .NET objects. OR the .NET side of the data portal could invoke specific methods (like Silverlight_Create(), Silverlight_Update(), etc) so the business developer can include code to decide whether the calls should be processed on the server at all.

The first approach is simple, and certainly makes for a compelling story because it works very much like CSLA today. The Silverlight client gets/updates objects in a very direct manner.

The second approach is a little more complex, but might be better because I'm not sure you should blindly trust anything coming from the Silverlight client. You can make a good argument that Silverlight is always outside the trust boundary of your server application, so blindly passing calls from the client through the data portal may not be advisable.

Either way, what's really cool is that the original .NET data portal remains fully intact. This means that the following two physical deployment scenarios are available:

Silverlight -> Web server -> database
Silverlight -> Web server -> App server -> database

Whether the web/app server is in 2- or 3-tier configuration is just a matter of how the original .NET data portal (running on the web server) is configured. I think that's awesome, as it easily enables two very common web server configurations.

The big difference in how the Silverlight data portal works as compared to the .NET data portal is on the client. In Silverlight you should never block the main UI thread, which means calls to the server should be asynchronous. Which means the UI code can't just do this:

var person = Person.GetPerson(123);

That sort of synchronous call would block the UI thread and lock the browser. Instead, my current approach requires the UI developer to write code like this:

var dp = new Csla.DataPortal();
dp.FetchCompleted +=
  new EventHandler<Csla.DataPortalResult<Person>>(dp_FetchCompleted);
dp.BeginFetch<Person>(new SingleCriteria<int>(123));

with a dp_FetchCompleted() method like:

private void dp_FetchCompleted(object sender, Csla.DataPortalResult<Person> e)
{
  if (e.Error != null)
  {
    // e.Error is an exception - deal with the issue
  }
  else
  {
    // e.Object is your result - use it
  }
}

So the UI code is more cumbersome than in .NET, but it follows the basic service-calling technique used in any current Silverlight code, and I don't think it is too bad. It isn't clear how to make this any simpler really.

Wednesday, April 23, 2008 8:08:36 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Saturday, April 12, 2008

Someone on the CSLA .NET discussion forum recently asked what new .NET 3.5 features I used in CSLA .NET 3.5. The poster noted that there are a lot of new features in .NET 3.5, which is true. They also included some .NET 3.0 features as "new", though really those features have now been around for 15 months or so and were addressed in CSLA .NET 3.0. CSLA .NET 3.0 already added support for WCF, WPF and WF, so those technologies had very little impact on CSLA .NET 3.5.

My philosophy is to use new technologies only if they provide value to me and my work. In the case of CSLA .NET this is extended slightly, such that I try to make sure CSLA .NET also supports new technologies that might be of value to people who use CSLA .NET.

While .NET 3.5 has a number of new technologies at various levels (UI, data, languages), many of them required no changes to CSLA to support. I like to think this is because I'm always trying to look into the future as I work on CSLA, anticipating at least some of what is coming so I can make the transition smoother. For example, this is why CSLA .NET 2.0 introduced a provider model for the data portal - because I knew WCF was coming in a couple years and I wanted to be ready.

Since CSLA .NET already supported data binding to WPF, Windows Forms and Web Forms, there was no real work to do at the UI level for .NET 3.5. I actually removed Csla.Wpf.Validator because WPF now directly supplies that behavior, but I really didn't add anything for UI support because it is already there.

Looking forward beyond 3.5, it is possible I'll need to add support for ASP.NET MVC because that technology eschews data binding in favor of other techniques to create the view - but it is too early to know for sure what I'll do in that regard.

Since CSLA .NET has always abstracted the business object concept from the data access technique you choose, it automatically supported LINQ to SQL (and will automatically support ADO.NET EF too). No changes were required to do that, though I did add Csla.Data.ContextManager to simplify the use of L2S data context objects (as a companion to the new Csla.Data.ConnectionManager for raw ADO.NET connections). And I enhanced Csla.Data.DataMapper to have some more powerful mapping options that may be useful in some L2S or EF scenarios.
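For example, ContextManager lets several data access methods share one L2S data context within a logical operation. A sketch (MyDataContext and the "MyDatabase" connection string name are hypothetical, and the member names are from memory, so verify against the Csla.Data documentation):

using (var mgr = Csla.Data.ContextManager<MyDataContext>.GetManager("MyDatabase"))
{
  // reuses an existing data context if one is already open for this key
  var data = mgr.DataContext.Customers.Single(c => c.Id == 123);
  // load the business object's fields from data here
}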

LINQ to Objects did require some work. Technically this too was optional, but I felt it was critical, and so there is now "LINQ to CSLA" functionality provided in 3.5 (thanks to my colleague Aaron Erickson). The primary feature of this is creating a synchronized view of a BusinessListBase list when you do a non-projection query, which means you can data bind the results of a non-projection query and allow the user to add/remove items from the query result and those changes are also reflected in the original list. As a cool option, LINQ to CSLA also implements indexed queries against lists, so if you are doing many queries against the same list object you should look into this as a performance booster!
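For example, a non-projection query like this one (customers here is a hypothetical BusinessListBase-derived collection) returns a synchronized, bindable view of the original list:

var activeCustomers =
  from c in customers
  where c.IsActive
  select c;

// bind activeCustomers to a grid; items the user adds or removes through
// that view are also added to or removed from the original customers list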

So all that's left are some of the language enhancements that exist to support LINQ. And I do use some of them - mostly type inference (which I love). But I didn't go through the entire body of existing code to use the new language features. The risk of breaking functionality that has worked for 6-7 years is way too high! I can't see where anyone would choose to take such a risk with a body of code, but especially one like CSLA that is used by thousands of people world-wide.

That means I used some of the new language features in new code, and in code I had to rework anyway. And to be honest, I use those features sparingly and where I thought they helped.

I think trying to force new technologies/concepts/patterns into code is a bad idea. If a given pattern or technology obviously saves code/time/money or has other clear benefits then I use it, but I try never to get attached to some idea such that I force it into places where it doesn't fit with my overall goals.

Saturday, April 12, 2008 3:11:52 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Thursday, January 24, 2008

A reader recently sent me an email asking why the PTWebService project in the CSLA .NET ProjectTracker reference app has support for using the data portal to talk to an application server. His understanding was that web services were end points, and that they should just talk to the database directly. Here's my answer:

A web service is just like any regular web application. The only meaningful difference is that it exposes XML instead of HTML.

 

Web applications often need to talk to application servers. While you are correct that it is ideal to build web apps in a 2-tier model, there are valid reasons (mostly around security) why organizations choose to build them using a 3-tier model.

 

The most common scenario (probably used by 40% of all organizations) is to put a second firewall between the web server and any internal servers. The web server is then never allowed to talk to the database directly (for security reasons), and instead is required to talk through that second firewall to an app server, which talks to the database.

 

The data portal in CSLA .NET helps address this issue, by allowing a web application to talk to an app server using several possible technologies. Since different organizations allow different technologies to penetrate that second firewall this flexibility is important.

 

Additionally, the data portal exists for other scenarios, like Windows Forms or WPF applications. It would be impractical for me to create different versions of CSLA for different types of application interface. In fact that would defeat the whole point of the framework – which is to provide a consistent and unified environment for building business layers that support all valid interface types (Windows, Web, WPF, Workflow, web service, WCF service, console, etc).

Thursday, January 24, 2008 8:01:56 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Friday, December 21, 2007

I recently received this email:

Thank you very much for your insightful articles concerning 2 vs. 3 tier models.  It’s very refreshing to hear a view point that I’m aligned with.  Here at work I’m dealing with network Nazi’s who believe there is no cost of a middle tier and that there is only huge security rewards to reap.  I use your articles to support my point but I’m still not getting tremendously far.  I have yet to have anyone explain to me exactly how that middle tier is going to really add a significant enough amount of security that will warrant the high price to pay for employing a performance blasting middle-man.

There are two scenarios, though they are similar.

In the web, adding a middle tier has a very high cost, because the web server is already an app server. It already does database connection pooling across all users on that server. Adding an app server just adds overhead.

The only exception here is where the web server is serving up a lot of static content and only some dynamic content. In that case, moving the data access to another machine may be beneficial because it can allow the web server to focus more on delivering the static content. It is important to realize that the dynamic content will still be delivered more slowly due to overhead, but the overall web site may work better.

In Windows, adding a middle tier has a cost because the data needs to make two network hops to get to the user. Each network hop has a tangible cost. It would be foolish to ignore the cost of moving data over the wire, and the cost of serializing/deserializing the data on each end of the wire. These are very real, measurable costs.

In both cases the middle tier means an extra machine to license, administer, maintain, power and cool. It is an extra point of failure, extra potential for network/CPU/memory/IO contention, etc. These costs come with every server you add into the mix. Anyone who’s ever been a manager or higher in an IT organization has a very good understanding of these costs, because they impact the budget at the end of every year.

However, the security benefits of a middle tier are real.

In a 2-tier web model the database credentials are on the web server. Even if they are encrypted they are there on that machine. A would-be hacker could get them by cracking into that one machine.

Switching to a 3-tier model moves the database credentials onto the middle tier and off the web server. Now the web server has credentials to the app server, but not the database. A would-be hacker must crack first the web server, then the app server to get those credentials.

In a 2-tier Windows model the database credentials are on each client workstation. Even if they are encrypted they are there on those machines. A would-be hacker could get them by sitting at that machine - all it takes is a little social engineering and they're in. More likely, an employee may get the credentials and use Excel or some other common tool to directly access the database, bypassing your application. Oh the havoc they could wreak!

Switching to a 3-tier model moves the database credentials onto the middle tier and off the client workstations. Now the workstations have credentials to the app server, but not the database. A would-be hacker must crack into the app server to get those credentials. And end users are almost automatically shut out, because they would have to be a hacker to get to the app server to get the database credentials.

Friday, December 21, 2007 10:55:19 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Tuesday, November 13, 2007

I always encourage people to go with 2-tier deployments if possible, because more tiers increases complexity and cost.

But it is important to recognize when the value of 3-tier outweighs that complexity/cost. The primary three motivators are:

  1. Security – using an app server allows you to shift the database credentials from clients to the server, adding security because a would-be hacker (or savvy end-user) would need to hack into the app server to get those credentials.
  2. Network infrastructure limitations – sometimes a 3-tier solution can help offload processing from CPU-bound or memory-bound machines, or it can help minimize network traffic over slow or congested links. Direct database communication is a chatty dialog, and using an app server allows a single blob of data to go across the slow link, and the chatty dialog to occur across a fast link, so 3-tier can result in a net win for performance if the client-to-server link is slow or congested.
  3. Scalability – using an app server provides for database connection pooling. The trick here, is that if you don’t need it, you’ll harm performance by having an app server, so this is only useful if database connections are taxing your database server. Given the state of modern database hardware/software, scalability isn’t a big deal for the vast majority of applications anymore.

The CSLA .NET data portal is of great value here, because it allows you to switch from 2-tier to 3-tier with a configuration change - no code changes required*. You can deploy in the cheaper and simpler 2-tier model to start with, and if one of these motivators later justifies the complexity of moving to 3-tier, you can make that move with relative ease.

* disclaimer: if you plan to make such a move, I strongly suggest testing in 3-tier mode during development. While the data portal makes this 2-tier to 3-tier switch seamless, it only works if the code in the business objects follows all the rules around serialization and layer separation. Testing in 3-tier mode helps ensure that none of those rules get accidentally broken during development.
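
To make that concrete, here is a rough sketch of the shape a business object takes when it relies on the data portal for location transparency. The class, criteria and member names are hypothetical, and the base-class details follow roughly the CSLA 2.x conventions from the Expert 2005 Business Objects books, so treat it as an illustration rather than version-exact code:

using System;
using Csla;

[Serializable]
public class Project : BusinessBase<Project>
{
  private Guid _id = Guid.NewGuid();
  private string _name = string.Empty;

  public string Name
  {
    get { return _name; }
  }

  // Required by BusinessBase<T> in the CSLA 2.x era.
  protected override object GetIdValue()
  {
    return _id;
  }

  [Serializable]
  private class Criteria
  {
    public Guid Id;
    public Criteria(Guid id) { Id = id; }
  }

  private Project()
  { /* force use of the factory method */ }

  public static Project GetProject(Guid id)
  {
    // Whether DataPortal_Fetch runs in-process (2-tier) or on a remote
    // application server (3-tier) is decided by configuration (for
    // example, appSettings keys such as CslaDataPortalProxy), not by
    // any change to this code.
    return DataPortal.Fetch<Project>(new Criteria(id));
  }

  private void DataPortal_Fetch(Criteria criteria)
  {
    // Database access goes here; this method runs wherever the data
    // portal routed the call.
    _id = criteria.Id;
    _name = "Sample project";
  }
}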

Tuesday, November 13, 2007 4:37:50 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 

I was recently asked whether CSLA incorporates thread synchronization code to marshal calls or events back onto the UI thread when necessary. The short answer is no, but the question deserves a bit more discussion to understand why it isn't the business layer's job to handle such details.

The responsibility for cross-thread issues resides with the layer that started the multi-threading.

So if the business layer internally utilizes multi-threading, then that layer must abstract the concept from other layers.

But if the UI layer utilizes multi-threading (which is more common), then it is the UI layer's job to abstract that concept.

It is unrealistic to build a reusable business layer around one type of multi-threading model and expect it to work in other scenarios. Were you to use Windows Forms components for thread synchronization, you'd be out of luck in the ASP.NET world, for example.
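
To illustrate, here is a minimal Windows Forms sketch of the UI layer owning that concern. The form and member names are hypothetical, and the "business layer" call is just a placeholder:

using System;
using System.ComponentModel;
using System.Windows.Forms;

public class CustomerForm : Form
{
  private readonly BackgroundWorker _worker = new BackgroundWorker();

  public CustomerForm()
  {
    // The UI layer started the background work, so the UI layer is
    // responsible for getting the result back onto the UI thread.
    _worker.DoWork += OnDoWork;                    // runs on a background thread
    _worker.RunWorkerCompleted += OnWorkCompleted; // marshaled back to the UI thread
  }

  protected override void OnLoad(EventArgs e)
  {
    base.OnLoad(e);
    _worker.RunWorkerAsync();
  }

  private void OnDoWork(object sender, DoWorkEventArgs e)
  {
    // Call into the business layer here; it neither knows nor cares
    // which thread it is running on.
    e.Result = "customer data placeholder";
  }

  private void OnWorkCompleted(object sender, RunWorkerCompletedEventArgs e)
  {
    // Safe to touch controls and data binding here.
    Text = (string)e.Result;
  }
}

The same idea applies with raw threads and Control.Invoke, or with SynchronizationContext in other UI technologies - the point is simply that the marshaling code lives in the UI layer, not the business layer.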

So CSLA does nothing in this regard, at the business object level anyway. Nor does a DataSet, or the entity objects from ADO.NET EF, etc. It isn't the business/entity layer's problem.

CSLA does do some multi-threading, specifically in the Csla.Wpf.CslaDataProvider control, because it supports the IsAsynchronous property. But even there, WPF data binding does the abstraction, so there are no threading issues in either the business or UI layers.

Tuesday, November 13, 2007 11:09:54 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Monday, November 05, 2007

I was recently asked how to get the JSON serializer used for AJAX programming to serialize a CSLA .NET business object.

From an architectural perspective, it is better to view an AJAX (or Silverlight) client as a totally separate application. That code is running in an untrusted and terribly hackable location (the user’s browser – where they could have created a custom browser to do anything to your client-side code – my guess is that there’s a whole class of hacks coming that most people haven’t even thought about yet…)

So you can make a strong argument that objects from the business layer should never be directly serialized to/from an untrusted client (code in the browser). Instead, you should treat this as a service boundary – a semantic trust boundary between your application on the server, and the untrusted application running in the browser.

What that means in terms of coding is that you don’t want to do some sort of field-level serialization of your objects from the server. That would be terrible in an untrusted setting. To put it another way - the BinaryFormatter, NetDataContractSerializer or any other field-level serializer shouldn't be used for this purpose.

If anything, you want to do property-level serialization (like the XmlSerializer or DataContractSerializer or JSON serializer). But really you don’t want to do this either, because you don’t want to couple your service interface to your internal implementation that tightly. This is, fundamentally, the same issue I discuss in Chapter 11 of Expert 2005 Business Objects as I talk about SOA.

Instead, what you really want to do (from an architectural purity perspective anyway) is define a service contract for use by your AJAX/Silverlight client. A service contract has two parts: operations and data. The data part is typically defined using formal data transfer objects (DTOs). You can then design your client app to work against this service contract.

In your server app’s UI layer (the aspx pages and webmethods) you translate between the external contract representation of data and your internal usage of that data in business objects. In other words, you map from the business objects in/out of the DTOs that flow to/from the client app. The JSON serializer (or other property-level serializers) can then be used to do the serialization of these DTOs - which is what those serializers are designed to do.
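
As a rough sketch of that mapping - all type names here are hypothetical, and the ASP.NET AJAX JavaScriptSerializer is just one serializer you might use:

using System.Web.Script.Serialization;

// Stand-in for a richer business object with rules and behavior.
public class Customer
{
  private readonly int _id;
  private readonly string _name;

  public Customer(int id, string name)
  {
    _id = id;
    _name = name;
  }

  public int Id { get { return _id; } }
  public string Name { get { return _name; } }
}

// The DTO defines the "data" half of the service contract - this is
// what gets serialized to JSON, not the business object itself.
public class CustomerDto
{
  public int Id { get; set; }
  public string Name { get; set; }
}

public static class CustomerContractMapper
{
  // Map from the internal business object to the external contract,
  // then serialize the DTO for the AJAX client.
  public static string ToJson(Customer customer)
  {
    CustomerDto dto = new CustomerDto();
    dto.Id = customer.Id;
    dto.Name = customer.Name;
    return new JavaScriptSerializer().Serialize(dto);
  }
}

The reverse direction works the same way: deserialize the inbound JSON into a DTO, validate it, and then apply it to the business object through its normal properties and rules.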

This is a long-winded way of saying that you really don’t want to do JSON serialization directly against your objects. You want to JSON serialize some DTOs, and copy data in/out of those DTOs from your business objects – much the way the SOA stuff works in Chapter 11.

Monday, November 05, 2007 11:53:58 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Wednesday, February 07, 2007

I get a lot of questions about the new ADO.NET Entity Framework and LINQ and how these technologies interact with CSLA .NET. I've discussed this in a few other blog posts, but the question recently came up on the CSLA .NET forum and I thought I'd share my answer:

They are totally compatible, but it is important to remember what they are for.

Both ADO.NET EF and LINQ work with entity objects - objects designed primarily as data containers. These technologies are all about making it an easy and intuitive process to get data into and out of databases (or other data stores) into entity objects, and then to reshape those entity objects in memory.

CSLA .NET is all about creating business objects - objects designed primarily around the responsibilities and behaviors defined by a business use case. It is all about making it easy to build use case-derived objects that have business logic, validation rules and authorization rules. Additionally, CSLA .NET helps create objects that support a rich range of UI-supporting behaviors, such as data binding and n-level undo.

It is equally important to remember what these technologies are not.

ADO.NET EF and LINQ are not well-suited to creating a rich business layer. While it is technically possible to use them in this manner, it is already clear that the process will be very painful for anything but the most trivial of applications. This is because the resulting entity objects are data-centric, and don't easily match up to business use cases - at least not in any way that makes any embedded business logic maintainable or easily reusable.

CSLA .NET is not an object-relational mapping technology. I have very specifically avoided ORM concepts in the framework, in the hopes that someone (like Microsoft) would eventually provide an elegant and productive solution to the problem. Obviously solutions do exist today: raw ADO.NET, the DAAB, nHibernate, Paul Wilson's ORM mapper, LLBLgen and more. Many people use these various technologies behind CSLA .NET, and that's awesome.

So looking forward, I see a bright future. One where the DataPortal_XYZ methods either directly make use of ADO.NET EF and LINQ, or call a data access layer (DAL) that makes use of those technologies to build and return entity objects.

Either way, you can envision this future where the DP_XYZ methods primarily interact with entity objects, deferring all the actual persistence work off to EF/LINQ code. If Microsoft lives up to the promise with EF and LINQ, this model should seriously reduce the complexity of data access, resulting in more developer productivity - giving us more time to focus on the important stuff: object-oriented design ;) .
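
Sketching that shape - all of the names below are hypothetical, and the DAL body is a stub standing in for the eventual EF/LINQ query:

using System;

// Data-centric entity object produced by the DAL (which might use
// LINQ or ADO.NET EF internally).
public class OrderData
{
  public Guid Id { get; set; }
  public string Customer { get; set; }
}

public static class OrderDal
{
  public static OrderData Fetch(Guid id)
  {
    // The EF/LINQ query would live here; stubbed out for the sketch.
    OrderData data = new OrderData();
    data.Id = id;
    data.Customer = "Sample customer";
    return data;
  }
}

// Skeleton of the business object's persistence method: it asks the
// DAL for an entity object and maps that data into its own fields,
// keeping the persistence technology out of the business layer.
public class Order
{
  private Guid _id;
  private string _customer = string.Empty;

  public Guid Id { get { return _id; } }
  public string Customer { get { return _customer; } }

  private void DataPortal_Fetch(Guid id)
  {
    OrderData data = OrderDal.Fetch(id);
    _id = data.Id;
    _customer = data.Customer;
  }
}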

Wednesday, February 07, 2007 12:08:20 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Sunday, February 04, 2007

It seems that people are always looking for The Next Big Thing™, which isn’t surprising. What I do find surprising is that they always assume that TNBT must somehow destroy the Previous Big Thing™, which in the field of software development was really object-orientation (at least for most people).

 

Over the past few years Web Services and then their reincarnation as SOA or just SO were going to eliminate the use of OO. As more and more people have actually implemented and consumed services I think it has become abundantly clear that SOA is merely a repackaging of EAI from the 90’s and EDI from before that. Useful, but hardly of the sweeping impact of OO, at least not in the near term.

 

Though to be fair, OO took more than 20 years before it became remotely “mainstream”, and even today only around 30% of developers (based on my informal, but broad, polling over the past couple years) apply OOD in their daily work. So you could question whether OO is “mainstream” even today, when the vast majority of developers use procedural programming techniques.

 

Even though the majority of .NET and Java developers don’t actually use OO themselves, both the .NET and Java platforms are OO throughout. So there’s no doubt that OO has had an incredible impact on our industry, and that it has shaped the major platforms and development styles used by almost everyone.

 

So perhaps in 15-20 years SO really will have the sweeping impact that OO has had on software development. And I’m not sure that would be a bad thing, but it isn’t a near-term concern.

 

However, I was recently confronted by a (so-called) newcomer: workflow. Yes, now workflow (WF) is going to kill OO, or so I’ve been told. In fact, through indirect channels, my challenger suggested that the CSLA .NET framework is obsolete going forward because it is based on these “old-school” OO concepts.

 

Obviously I beg to differ :)

 

It is true that some of the concepts I employ in CSLA .NET are quite old, but I am not ready to throw OO into the “old-school” bucket just yet... The idea is somewhat ironic however, because WF is being put up as The Next Big Thing™. Depressingly, there's virtually no technology today that isn't a rehash of something from 20 years ago. That's more true of SOA and WF than OO. Remember that SOA is just a repackaging of message-based architecture, only with XML, and workflow is an extension of procedural design and the use of flowcharts. At least OO can claim to be younger than either of those two root technologies.

 

Personally, I very much doubt the use of objects as rich behavioral entities will go away any time soon. Sure, for non-interactive tasks procedural technologies like WF might be better than OO (though that's debatable). And certainly for connecting disparate systems you need to use loosely coupled architectures, of which SOA is one (but not the only one).

 

But the question remains: how do you create rich, interactive GUI applications in a productive and yet maintainable manner?

 

Putting your logic in a workflow reduces interactivity. Putting your logic behind a set of services reduces both interactivity and performance. Putting your logic in the UI is "VB3 programming". So what's left? For interactive apps you need a business layer that can run as close to the user as possible and using business objects is a particularly good way to do exactly that.

 

Neither WF nor WCF/SOA are, to me, competitors to CSLA. Rather, I see them as entirely complementary.

 

Perhaps ADO.NET EF and/or LINQ are competitors - that depends on what they look like as they stabilize over the next several months. Everything I’ve seen thus far leads me to believe these technologies are complementary as well. (And Scott Guthrie agrees – check out his recent interview on DNR) Neither of them addresses the issues around support for data binding, or centralization of business logic in a formal business layer.

 

But when it comes to SOA I still think it is a “fad”. It is either MTS done over with XML - which is what 95% of the people out there are doing with "services", or it is EDI/EAI with XML - which is a good thing and a move forward, but which is too complex and has too much overhead for use in a typical application. I think that most likely in 5-10 years we'll have a new acronym for it – just adding to the EDI/EAI/SOA list.

 

Just like "outsourcing" became ASP which became SaaS. The idea doesn't die, it just gets renamed so more money can be made by selling the same stuff over again - often to people who really don't need/want it.

 

The point is, our industry is cyclical. And we're heading into a period where "procedural programming" is once again reaching a peak of usage. This time under the names SOA and workflow. But if you step back from the hype and look at it, what's proposed is that we create a bunch of standalone code blocks that exchange formatted data. A set of procedures that exchange parameters. Dress it up how you like, that's what is being proposed by both the SOA and WF crowds.

 

And there's nothing wrong with that. Procedural design is well understood.

 

And it could actually work this time around if we don't cheat. I still maintain that the reason procedural programming "failed" was because we cheated and used common blocks and global variables. Why? Because it was too inefficient and required too much code to formally package up all the data and send it as parameters.

 

If we can stomach the efficiency and coding costs this time around - packing the data into XML messages - then I see no reason why we can't make procedural programming work in many cases. And by naming it something trendy, like SOA and/or workflow, we can avoid the negative backlash that would almost certainly occur if people realized they were abandoning OO to return to procedural programming.

 

Having spent the first third of my career doing procedural programming, I wouldn't necessarily mind going back. In some ways it really was easier than using objects - though there are some very ugly traps that people will have to rediscover again on this go-round. Most notably the trap that a procedure can't be reused without introducing fragility into the system, not due to syntactic coupling or API changes, but due to semantic coupling (about which I've blogged).

 

This is true for procedures and any other "reusable" code block like a workflow activity or a service. They all suffer the same limitation - and it is one that neither WSDL nor compilers can help solve...

 

Anyway, kind of went on a rant here - but I find it amusing that, in some circles at least, OO is viewed as the "legacy" technology in the face of services or workflow, which are even more legacy than OO ;)

Sunday, February 04, 2007 1:03:57 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Wednesday, January 17, 2007

I recently had an email discussion where I was describing why I needed to solve the problem I described in this article for WPF and this article for Windows Forms.

 

In both cases the issue is that data binding doesn’t refresh the value from the data source after it updates the data source from the UI. This means that any changes to the value that occur in the property set code aren’t reflected in the UI.

 

The question he posed to me was whether it was a good idea to have a property set block actually change the value. In most programming models, goes the thought, assigning a property to a value can’t result in that property value changing. So any changes to the value that occur in the set block of a property are counter-intuitive, and so you simply shouldn’t change the value in the setter code.

 

Here’s my response:

 

The idea of a setter (which is really just a mutator method by another name) changing a value doesn't (or shouldn't) seem counter-intuitive at all.

 

If we were talking about assigning a value to a public field I’d agree entirely. But we are not. Instead we’re talking about assigning a value to a property, and that’s very different.

 

If all we wanted were public fields, we wouldn't need the concept of "property" at all. The concept of "property" is merely a formalization of the following:

 

  1. public fields are bad
  2. private fields are exposed through an accessor method
  3. private fields are changed through a mutator method
  4. creating and using accessor/mutator methods is awkward without a standard mechanism

 

So the concept of "property" exists to standardize and formalize the idea that we need controlled access to private fields, and a standard way to change their value through a mutator method.

 

Consider the business rule that says a document id must follow a certain form - like SOP433. The first three characters must be alpha and upper case, the last three must be numeric. This is an incredibly common scenario for document, product, customer and other user-entered id values.

 

Only a poor UI would force the user to actually enter upper case values. The user should be able to type what they want, and the software will fix it.

 

But putting the upper case rule in the UI is bad, because code in the UI isn't reusable, and tends to become obsolete very rapidly as technology and/or the UI design changes. There's nothing more expensive over the life of an application than a line of code in the UI. So while it is possible to implement this rule in a validation control, in JavaScript, in a button click event handler - none of those are good solutions to the real problem.

 

Yet if that rule is placed purely in the backend system, then the user can't get any sort of interactive response. The form must be "posted" or "transmitted" to the backend before the processing can occur. Users want to immediately see the value be upper case or they get nervous.

 

So then we're stuck. Many people implement the rule twice. Once in the UI to make the user happy, and once in the backend, which is the real rule implementation. And then they try to keep those rules in sync forever - the result being an expensive, unreliable and hard to maintain system.

 

I've watched this cycle occur for 20 years now, and it is the same time after time. And it sucks.

 

This, right here, is why VB got such a bad name through the 1990’s. The VB forms designer made it way too easy to write all the logic in the UI, and without any other clear alternative that's what happened. The resulting applications are very fragile and are impossible to upgrade to the next technology (like .NET). Today, as we talk, many thousands of lines of code are being written in Windows Forms and Web Forms in exactly the same way. Those poor people will have a hell of a time upgrading to WPF, because none of their code is reusable.

 

What's needed is one location for this rule. Business objects offer a workable solution here. If the object implements the rule, and the object runs on the client workstation, then (without code in the UI) the user gets immediate response and the rule is satisfied. And the rule is reusable, because the object is reusable - in a way that UI code never can be (or at least never has been).

 

That same object, with that same interactive rule, can be used behind Windows Forms, Web Forms, WPF and even a web services interface. The rule is always applied, because it is right there in the object. And for interactive UIs it is immediate, because it is in the field's mutator method (the property setter).
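
Here is a minimal sketch of that kind of rule living in a setter - plain C#, no framework, with a hypothetical class name and illustrative rule handling:

using System;
using System.Text.RegularExpressions;

public class Document
{
  private string _documentId = string.Empty;

  // The mutator normalizes and checks the value, so the rule lives in
  // exactly one place; any UI bound to this property picks up the
  // fixed-up value when data binding refreshes it.
  public string DocumentId
  {
    get { return _documentId; }
    set
    {
      string newValue = (value ?? string.Empty).Trim().ToUpperInvariant();
      if (!Regex.IsMatch(newValue, @"^[A-Z]{3}\d{3}$"))
        throw new ArgumentException(
          "Document id must be three letters followed by three digits (e.g. SOP433).");
      _documentId = newValue;
    }
  }
}

In a framework like CSLA the format check would more likely flow through the validation rules mechanism rather than an exception, but the principle is the same: the mutator is where the behavior belongs.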

 

So in my mind the idea of changing a value in a setter isn't counter-intuitive at all - it is the obvious design purpose behind the property setter (mutator). Any other alternative is really just a ridiculously complex way of implementing public fields. And worse, it leaves us where we've been for 20+ years, with duplicate code and expensive, unreliable software.

Wednesday, January 17, 2007 3:45:30 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Wednesday, January 03, 2007

I recently received an email that included this bit:

 

“You are killing me. I wrote a rather scathing review of your Professional Business Objects with C# on Amazon.com and on my own blog. However, recently I read a transcription of an ARCast with Ron Jacobs where you talked about business objects. I believe I agreed with everything you said. What are you trying to do to me?

 

I am just a regular shmoe developer, who preaches when listened to about the joys and benefits of OO design for the common business application. I feel too many develop every application like it is just babysitting a database. Every object’s purpose is for the CRUD of data in a table. I have developed great disdain for companies, development teams, and senior developers who perpetuate this problem. I felt Expert C# 2005 Business Object perpetuates this same kind of design, thus the 3 star rating on Amazon for your book.

 

In the ARCast you mentioned a new book coming out. I am hoping it is the book I have been looking for. If I wrote it myself it would be titled something like, “Business Objects in the Real World.” It would address the problems of data-centric design and how some objects truly are just for managing data and others conduct a business need explained by an expert. These two objects would be pretty different and possibly even use a naming convention to explicitly differentiate the two. For example I don’t want my “true” business objects with getters, setters, isDirty flags or anything else that might make them invalid and popping runtime errors when trying to conduct their business.

 

Anyway, I could ramble on (It’s my nature). However, I just want to drop you a line and say there is a real disconnect between us, but at the same time, I wanted to show everyone at my office what you were saying in your interview. You were backing me up! However, just months ago I was using your writings to explain what is wrong with software development. We seem to be on the same page, or close anyway, but maybe coming to different conclusions. I guess that’s what is bothering me.”

 

The following is my response, which I thought I’d share here on my blog because I ended up rambling on more than I’d planned, and I thought it might be interesting/useful to someone:

 

I would guess that the disconnect may flow from our experiences during our careers - what we've seen work and not work over time.

 

For my part, I've become very pragmatic. The greatest ideas in the world tend to fall flat in real life because people don't get them, or they have too high a complexity or cost barrier to gain their benefit. For a very long time OO itself fit this category. I remember exploring OO concepts in the late 80's and it was all a joke. The costs were ridiculously high, and the benefits entirely theoretical.

 

Get into the mid-90's and components show up, making some elements of OO actually useful in real life. But even then RAD was so powerful that the productivity barrier/differential between OO and RAD was ridiculously high. I spent untold amounts of time and effort trying to reconcile these two to allow the use of OO and RAD both - but with limited success. The tools and technologies simply didn't support both concepts - at least not without writing your own frameworks for everything and ignoring all the pre-existing (and established) RAD tools in existence.

 

Fortunately by this time I'd established both good contacts and a good reputation within key Microsoft product teams. Much of the data binding support in .NET (Windows Forms at least) is influenced by my constant pressure to treat objects as peers to recordset/resultset/dataset constructs. I continue to apply this pressure, because things are still not perfect - and with WPF they have temporarily slid backwards somewhat. But I know people on that team too, and I think they'll improve things as time goes on.

 

In the meantime Java popularizes the idea of ORM - but solutions exist for only the most basic scenarios - mapping table data into entity objects. While they claim to address the impedance mismatch problem, they really don't, because they aren't mapping into real OO designs, but rather into data-centric object models. Your disdain for today’s ORM tools must be boundless :)

 

For better or worse, Microsoft is following that lead in the next version of ADO.NET - and I don't totally blame them; a real solution is complex enough that it is hard to envision, much less implement. However, here too I hold out hope, because the current crop of "ORM" tools are starting to create nice enough entity objects that it may become possible to envision a tool that maps from entity objects to OO objects using a metadata scheme between the two. This is an area I've been spending some time on of late, and I think there's some good potential here.

 

Through all this, I've been working primarily with mainstream developers (“Mort”). Developers who do this as a job, not as an all-consuming passion. Developers who want to go home to their families, their softball games, their real lives. Who don't want to master "patterns" and "philosophies" like Agile or TDD; but rather they just want to do their job with a set of tools that help them do the right thing.

 

I embrace that. This makes me diametrically opposed to the worldviews of a number of my peers who would prefer that the tools do less, so as to raise the bar and drive the mainstream developers out of the industry entirely. But that, imo, is silly. I want mainstream developers to have access to frameworks and tools that help guide them toward doing the right thing - even if they don't take the time to understand the philosophy and patterns at work behind the scenes.

 

I don't remember when I did the ARCast interview, but I was either referring to the 2005 editions of my business objects books which came out in April/May, or to the ebook I'm finishing now, which covers version 2.1 of my CSLA .NET framework. Odds are it is the former, and it isn't the book you are looking for - though you might enjoy Chapter 6.

 

In general I think you can take a couple approaches to thinking about objects.

 

One approach, and I think the right one, is to realize that all objects are behavior-driven and have a single responsibility. Sometimes that responsibility is to merely act as a data container (DTO or entity object). Other times it is to act as a rich binding source that implements validation, authorization and other behaviors necessary to enable the use of RAD tools. Yet other times it is to implement pure, non-interactive business behavior (though this last area is being partially overrun by workflow technologies like WF).

 

Another way to think about objects is to say there are different kinds of object, with different design techniques for each. So entity objects are designed quasi-relationally, process objects are designed somewhat like a workflow, etc. I personally think this is a false abstraction that misses the underlying truth, which is that all objects must be designed around responsibility and behavior as they fit into a use case and architecture.

 

But sticking with the pure responsibility/behavior concept, CSLA .NET helps address a gaping hole. ORM tools (and similar tools) help create entity objects. Workflow is eroding the need for pure process objects. But there remains this need for rich objects that support a RAD development experience for interactive apps. And CSLA .NET helps fill this gap by making it easier for a developer to create objects that implement rich business behaviors and also directly support Windows Forms, Web Forms and WPF interfaces – leveraging the existing RAD capabilities provided by .NET and Visual Studio.

 

Whether a developer (mis)uses CSLA .NET to create data-centric objects, or follows my advice and creates responsibility-driven objects is really their choice. But either way, I think good support for the RAD capabilities of the environment is key to attaining high levels of productivity when building interactive applications.

Wednesday, January 03, 2007 10:40:32 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Tuesday, September 05, 2006

Several people have asked me about my thoughts on the Microsoft .NET 3.0 Workflow Foundation (WF) technology.

 

My views certainly don’t correspond with Microsoft's official line. But the “official line” comes from the WF marketing team, and they'll tell you that WF is the be-all-and-end-all, and that's obviously silly. Microsoft product teams are always excited about their work, which is good and makes sense. We all just need to apply an "excitement filter" to anything they say, bring it back to reality and decide what really works for us. ;)

 

Depending on who you talk to, WF should be used to do almost anything and everything. It can drive your UI, replace your business layer, orchestrate your processes and workflow, manage your data access and solve world hunger…

 

My view on WF is a bit more modest:

 

Most applications have a lot of highly interactive processes - where users edit, view and otherwise interact with the system. These applications almost always also have some non-interactive processes - where the user initiates an action, but then a sequence of steps are followed without the user's input, and typically without even telling the user about each step.

 

Think about an inventory system. There's lots of interaction as the user adds products, updates quantities, moves inventory around, changes cost/price data, etc. Then there's almost always a point at which a "pick list" gets generated so someone can go into the warehouse and actually get the stuff so it can be shipped or used or whatever. Generating a pick list is a non-trivial task, because it requires looking at demand (orders, etc.), evaluating what products to get, where they are and ordering the list to make the best use of the stock floor personnel's time. This is a non-interactive process.

 

Today we all write these non-interactive processes in code. Maybe with a set of objects working in concert, but more often as a linear or procedural set of code. If a change is needed to the process, we have to alter the code itself, possibly introducing unintended side-effects, because there's little isolation between steps.

 

Personally I think this is where WF fits in. It is really good at helping you create and manage non-interactive processes.

 

Yes, you have to think about those non-interactive processes in a different way to use WF. But it is probably worth it, because in the end you'll have divided each process into a set of discrete, autonomous steps. WF itself will invoke each step in order, and you have the pleasure (seriously!) of creating each step as an independent unit of code.

 

From an OO design perspective it is almost perfect, because each step is a use case that can be designed and implemented in isolation - which is a rare and exciting thing!

 

Note that getting to this point really does require rethinking of the non-interactive process. You have to break the process down into a set of discrete steps, ensuring that each step has very clearly defined inputs and outputs, and the implementation of each step must itself ensure that any prerequisites are met, because it can't know in what order things will eventually occur.
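
As a sketch of what one of those steps might look like as a WF custom activity - the activity and property names are hypothetical, and a real activity would more likely expose its inputs as dependency properties so they can be bound in the designer:

using System;
using System.Workflow.ComponentModel;

// One discrete, autonomous step: a clearly defined input (OrderId), a
// clearly defined output (PickListId), and no assumptions about what
// ran before it.
public class GeneratePickListActivity : Activity
{
  private Guid _orderId;
  private Guid _pickListId;

  public Guid OrderId
  {
    get { return _orderId; }
    set { _orderId = value; }
  }

  public Guid PickListId
  {
    get { return _pickListId; }
  }

  protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
  {
    // Check prerequisites explicitly rather than assuming ordering.
    if (_orderId == Guid.Empty)
      throw new InvalidOperationException("OrderId must be set before this step runs.");

    // The real work (evaluating demand, locating stock, sequencing the
    // pick list) would go here.
    _pickListId = Guid.NewGuid();

    return ActivityExecutionStatus.Closed;
  }
}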

 

The great thing (?) about this design process is that the decomposition necessary to pull it off is exactly the same stuff universities were teaching 25 years ago to COBOL and FORTRAN students. This is procedural programming "done right". To me though, the cool thing is that each "procedure" now becomes a use case, and so we're finally in a position to exploit the power of procedural AND object-oriented design and programming! (and yes, I am totally serious)

 

So in the end, I think that most applications have a place for WF, because most applications have one or more of these non-interactive processes. The design effort is worth it, because the end result is a more flexible and maintainable process within your application.

Tuesday, September 05, 2006 11:24:24 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Wednesday, May 03, 2006

Microsoft is holding an architect contest at Tech Ed this year. Click here for details.

Wednesday, May 03, 2006 8:44:59 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Monday, March 13, 2006

Somebody woke up on the wrong side of the bed, fortunately it wasn't me this time :)

This is a nice little rant about the limitations of OO-as-a-religion. I think you could substitute any of the following for OO and the core of the rant would remain quite valid:

  • procedural programming/design
  • modular programming/design
  • SOA/SO/service-orientation
  • Java
  • .NET

Every concept computer science has developed over the past many decades came into being to solve a problem. And for the most part each concept did address some or all of that problem. Procedures provided reuse. Modules provided encapsulation. Client/server provided scalability. And so on...

The thing is that very few of these new concepts actually obsolete any previous concepts. For instance, OO doesn't eliminate the value of procedural reuse. In fact, using the two in concert is typically the best of both worlds.

Similarly, SO is a case where a couple ideas ran into each other while riding bicycles. "You got messaging in my client/server!" "No! You got client/server in my messaging!" "Hey! This tastes pretty good!" SO is good stuff, but doesn't replace client/server, messaging, OO, procedural design or any of the previous concepts. It merely provides an interesting lens through which we can view these pre-existing concepts, and perhaps some new ways to apply them.

Anyway, I enjoyed the rant, even though I remain a staunch believer in the power of OO.

Monday, March 13, 2006 11:44:39 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Wednesday, March 01, 2006

I was recently asked whether I thought it was a good idea to avoid using the Validation events provided by Microsoft (in the UI), in favor of putting the validation logic into a set of objects. I think the answer to the question (to some degree) depends on whether you are doing Windows or Web development.

 

With Windows development, Windows Forms provides all the plumbing you need to put all your validation in the business objects and still have a very rich, expressive UI - with virtually no code in the UI at all. This is particularly true in .NET 2.0, but is quite workable in .NET 1.1 as well.

 

With Web development life isn't as nice. Of course there's no way to run real code in the browser, so you are stuck with JavaScript. While the real authority for any and all business logic must reside on the web server (because browsers are easily compromised), many web applications duplicate validation into the browser to give the user a better experience. While this is expensive and unfortunate, it is life in the world of the Web.

 

(Of course what really happens with a lot of Web apps is that the validation is only put into the browser - which is horrible, because it is too easy to bypass. That is simply a flawed approach to development...)

 

At least with ASP.NET there are the validation controls, which simplify the process of creating and maintaining the duplicate validation logic in the browser. You are still left to manually keep the logic in sync, but at least it doesn't require hand-coding JavaScript in most cases.

 

 

Obviously Windows Forms is an older and more mature technology (or at least flows from an older family of technologies), so it is no surprise that it allows you to do more things with less effort. But in most cases the effort to create Web Forms interfaces isn't bad either.

 

In any case, I do focus greatly on keeping code out of the UI. There's nothing more expensive than a line of code in the UI - because you _know_ it has a half-life of about 1-2 years. Everyone is rewriting their ASP.NET 1.0 UI code to ASP.NET 2.0. Everyone is tweaking their Windows Forms 1.0 code for 2.0. And all of it is junk when WinFX comes out, since WPF is intended to replace both Windows and Web UI development in most cases. Thus code in the UI is expensive, because you'll need to rewrite it in less than 2 years in most cases.

 

Code in a business object, on the other hand, is far less expensive because most business processes don't change nearly as fast as the UI technologies provided by our vendors... As long as your business objects conform to the basic platform interfaces for data binding, they tend to flow forward from one UI technology to the next. For instance, WPF uses the same interfaces as Windows Forms, so reusing the same objects from Windows Forms behind WPF turns out to be pretty painless. You just redesign the UI and away you go.

 

A co-worker at Magenic has already taken my ProjectTracker20 sample app and created a WPF interface for it – based on the same set of CSLA .NET 2.0 business objects as the Windows Forms and Web Forms interfaces. Very cool!

 

So ultimately, I strongly believe that validation (and all other business logic) should be done in a set of business objects. That’s much of the focus in my upcoming Expert VB 2005 and C# 2005 Business Objects books. While you might opt to duplicate some of the validation in the UI for a rich user experience, that’s merely an unfortunate side-effect of the immature (and stagnant) state of the HTML world.

Wednesday, March 01, 2006 3:44:33 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 

This recent MSDN article talks about SPOIL: Stored Procedure Object Interface Layer.

This is an interesting, and generally good idea as I see it. Unfortunately this team, like most of Microsoft, apparently just doesn't understand the concept of data hiding in OO. SPOIL allows you to use your object's properties as data elements for a stored procedure call, which is great as long as you only have public read/write properties. But data hiding requires that you will have some private fields that simply aren't exposed as public read/write properties. If SPOIL supported using fields as data elements for a stored procedure call it would be totally awesome!

The same is true for LINQ. It works against public read/write properties, which means it is totally useless if you want to use it to load "real" objects that employ basic concepts like encapsulation and data hiding. Oh sure, you can use LINQ (well, dlinq really) to load a DTO (data transfer object - an object with only public read/write properties and no business logic) and then copy the data from the DTO into your real object. Or you could try to use the DTO as the "data container" inside your real object rather than using private fields. But frankly those options introduce complexity that should be simply unnecessary...

While it is true that loading private fields requires reflection - Microsoft could solve this. They do own the CLR after all... It is surely within their power to provide a truly good solution to the problem, one that supports data mapping and also allows for key OO concepts like encapsulation and data hiding.
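
For illustration, here is the kind of reflection-based field loading that is required today; the type and field names are hypothetical:

using System;
using System.Reflection;

public class Customer
{
  // Private field, deliberately not exposed through a public setter.
  private string _name = string.Empty;

  public string Name
  {
    get { return _name; }
  }
}

public static class FieldLoader
{
  // Write a value directly into a private field - the only option a
  // mapping tool has if it wants to respect encapsulation and data
  // hiding, and noticeably slower than a direct property assignment.
  public static void SetField(object target, string fieldName, object value)
  {
    FieldInfo field = target.GetType().GetField(
      fieldName, BindingFlags.Instance | BindingFlags.NonPublic);
    if (field == null)
      throw new ArgumentException("No field named " + fieldName);
    field.SetValue(target, value);
  }
}

// Usage:
//   Customer c = new Customer();
//   FieldLoader.SetField(c, "_name", "ACME Corp");
//   // c.Name is now "ACME Corp"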

Wednesday, March 01, 2006 10:00:01 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Monday, January 23, 2006

[Warning, the following is a bit of a rant...]

 

Custom software development is too darn hard. As an industry, we spend more time discussing “plumbing” issues like Java vs .NET or .NET Remoting vs Web Services than we do discussing the design of the actual business functionality itself.

 

And even when we get to the business functionality, we spend most of our time fighting with the OO design tools, the form designer tools that “help” create the UI, the holes in data binding, the data access code (transactions, concurrency, field/column mapping) and the database design tools.

 

Does the user care, or get any benefit out of any of this? No. They really don’t. Oh sure, we convince ourselves that they do, or that they’d “care if they knew”. But they really don’t.

 

The users want a system that does something useful. I think Chris Stone gets it right in this editorial.

 

I think the underlying problem is that just building business software to solve users’ problems is ultimately boring. Most of us get into computers to express our creative side, not to assemble components into working applications.

 

If we wanted to do widget assembly, there’s a whole lot of other jobs that’d provide more visceral satisfaction. Carpentry, plumbing (like with pipes and water), cabinet making and many other jobs provide the ability to assemble components to build really nice things. And those jobs have a level of creativity too – just like assembling programs does in the computer world.

 

But they don’t offer the level of creativity and expression you get by building your own client/server, n-tier, SOA framework. Using data binding isn’t nearly as creative or fun as building a replacement for data binding…

 

I think this is the root of the issue. The computer industry is largely populated by people who want to build low-level stuff, not solve high-level business issues. And even if you go with conventional wisdom; that Mort (the business-focused developer) outnumbers the Elvis/Einstein framework-builders 5 to 1, there are still enough Elvis/Einstein types in every organization to muck things up for the Morts.

 

Have you tried building an app with VB3 (yes, Visual Basic 3.0) lately? A couple years ago I dusted that relic off and tried it. VB3 programs are fast. I mean blindingly fast! It makes sense, they were designed for the hardware of 1993… More importantly, a data entry screen built in VB3 is every bit as functional to the user as a data entry screen built with today’s trendy technologies.

 

Can you really claim that your Windows Forms 2.0, ASP.NET 2.0 or Flash UI makes the user’s data entry faster or more efficient? Do you save them keystrokes? Seconds of time?

 

While I think the primary fault lies with us, as software designers and developers, there’s no doubt that the vendors feed into this as well.

 

We’ve had remote-procedure-call technology for a hell of a long time now. But the vendors keep refining it every few years: got to keep the hype going. Web services are merely the latest incarnation of a very long sequence of technologies that allow you to call a procedure/component/method/service on another computer. Does using Web services make the user more efficient than DCOM? Does it save them time or keystrokes? No.

 

But we justify all these things by saying it will save us time. They’ll make us more efficient. So ask your users. Do your users think you are doing a better job, that you are more efficient and responsive, that you are giving them better software than you were before Web Services? Before .NET? Before Java?

 

Probably not. My guess: users don’t have a clue that the technology landscape changed out from under them over the past 5 years. They see the same software, with the same mis-match against the business needs and the same inefficient data entry mechanisms they’ve seen for at least the past 15 years…

 

No wonder they offshore the work. We (at least in the US and Europe) have had a very long time to prove that we can do better work, that all this investment in tools and platforms will make our users’ lives better. Since most of what we’ve done hasn’t lived up to that hype, can it be at all surprising that our users feel they can get the same crappy software at a tiny fraction of the price by offshoring?

 

Recently my friend Kathleen Dollard made the comment that all of Visual Basic (and .NET in general) should be geared toward Mort. I think she’s right. .NET should be geared toward making sure that the business developer is exceedingly productive and can provide tremendous business value to their organizations, with minimal time and effort.

 

If our tools did what they were supposed to do, the Elvis/Einstein types could back off and just let the majority of developers use the tools – without wasting all that time building new frameworks. If our tools did what they were supposed to do, the Mort types would be so productive there’d be no value in offshoring. Expressing business requirements in software would be more efficiently done by people who work in the business, who understand the business, language and culture; and who have tools that allow them to build that software rapidly, cheaply and in a way that actually meets the business need.

Monday, January 23, 2006 11:41:13 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Friday, August 05, 2005

Ted Neward believes that “distributed objects” are, and always have been, a bad idea, and John Cavnar-Johnson tends to agree with him.

 

I also agree. "Distributed objects" are bad.

 

Shocked? You shouldn’t be. The term “distributed objects” is most commonly used to refer to one particular type of n-tier implementation: the thin client model.

 

I discussed this model in a previous post, and you’ll note that I didn’t paint it in an overly favorable light. That’s because the model is a very poor one.

 

The idea of building a true object-oriented model on a server, where the objects never leave that server is absurd. The Presentation layer still needs all the data so it can be shown to the user and so the user can interact with it in some manner. This means that the “objects” in the middle must convert themselves into raw data for use by the Presentation layer.

 

And of course the Presentation layer needs to do something with the data. The ideal is that the Presentation layer has no logic at all, that it is just a pass-through between the user and the business objects. But the reality is that the Presentation layer ends up with some logic as well – if only to give the user a half-way decent experience. So the Presentation layer often needs to convert the raw data into some useful data structures or objects.

 

The end result with “distributed objects” is that there’s typically duplicated business logic (at least validation) between the Presentation and Business layers. The Presentation layer is also unnecessarily complicated by the need to put the data into some useful structure.

 

And the Business layer is complicated as well. Think about it. Your typical OO model includes a set of objects designed using OOD sitting on top of an ORM (object-relational mapping) layer. I typically call this the Data Access layer. That Data Access layer then interacts with the real Data layer.

 

But in a “distributed object” model, there’s the need to convert the objects’ data back into raw data – often quasi-relational or hierarchical – so it can be transferred efficiently to the Presentation layer. This is really a whole new logical layer very akin to the ORM layer, except that it maps between the Presentation layer’s data structures and the objects rather than between the Data layer’s structures and the objects.

 

What a mess!

 

Ted is absolutely right when he suggests that “distributed objects” should be discarded. If you are really stuck on having your business logic “centralized” on a server then service-orientation is a better approach. Using formalized message-based communication between the client application and your service-oriented (hence procedural, not object-oriented) server application is a better answer.

 

Note that the terminology changed radically! Now you are no longer building one application, but rather you are building at least two applications that happen to interact via messages. Your server doesn't pretend to be object-oriented, but rather is service-oriented - which is a code phrase for procedural programming. This is a totally different mindset from “distributed objects”, but it is far better.

 

Of course another model is to use mobile objects or mobile agents. This is the model promoted in my Business Objects books and enabled by CSLA .NET. In the mobile object model your Business layer exists on both the client machine (or web server) and application server. The objects physically move between the two machines – running on the client when user interaction is required and running on the application server to interact with the database.

 

The mobile object model allows you to continue to build a single application (rather than 2+ applications with SO), but overcomes the nasty limitations of the “distributed object” model.

Friday, August 05, 2005 9:12:29 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Sunday, July 24, 2005

Mike has requested my thoughts on 3-tier and the web – a topic I avoided in my previous couple entries (1 and 2) because I don’t find it as interesting as a smart/intelligent client model. But he’s right, the web is widely used and a lot of poor people are stuck building business software in that environment, so here’s the extension of the previous couple entries into the web environment.

 

In my view the web server is an application server, pure and simple. Also, in the web world it is impractical to run the Business layer on the client because the client is a very limited environment. This is largely why the web is far less interesting.

 

To discuss things in the web world I break the “Presentation” layer into two parts – Presentation and UI. This new Presentation layer is purely responsible for display and input to/from the user – it is the stuff that runs on the browser terminal. The UI layer is responsible for all actual user interaction – navigation, etc. It is the stuff that runs on the web server: your aspx pages in .NET.

 

In web applications most people consciously (or unconsciously) duplicate most validation code into the Presentation layer so they can get it to run in the browser. This is expensive to create and maintain, but it is an unfortunate evil required to have a half-way decent user experience in the web environment. You must still have that logic in your actual Business layer of course, because you can never trust the browser - it is too easily bypassed (Greasemonkey anyone?). This is just the way it is on the web, and will be until we get browsers that can run complete code solutions in .NET and/or Java (that's sarcasm btw).

 

On the server side, the web server IS an application server. It fills the exact same role of the mainframe or minicomputer over the past 30 years of computing. For "interactive" applications, it is preferable to run the UI layer, Business layer and Data Access layer all on the web server. This is the simplest (and thus cheapest) model, and provides the best performance[1]. It can also provide very good scalability because it is relatively trivial to create a web farm to scale out to many servers. By creating a web farm you also get very good fault tolerance at a low price-point. Using ISA as a reverse proxy above the web farm you can get good security.

 

In many organizations the reverse proxy idea isn’t acceptable (not being a security expert I can’t say why…) and so they have a policy saying that the web server is never allowed to interact directly with the database server – thus forcing the existence of an application server that at a minimum runs the Data Access layer. Typically this application server is behind a second firewall. While this security approach hurts performance (often by as much as 50%), it is relatively easily achieved with CSLA .NET or similar architecture/frameworks.

 

In other situations people prefer to put the Business layer and Data Access layer on the application server behind the second firewall. This means that the web server only runs the UI layer. Any business processing, validation, etc. must be deferred across the network to the application server. This has a much higher impact on performance (in a bad way).

 

However, this latter approach can have a positive scalability impact in certain applications. Specifically applications where there’s not much interactive content, but instead there’s a lot of read-only content. Most read-only content (by definition) has no business logic and can often be served directly from the UI layer. In such applications the IO load for the read-only content can be quite enough to keep the web server very busy. By offloading all business processing to an application server overall scalability may be improved.

 

Of course this only really works if the interactive (OLTP) portions of the application are quite limited in comparison to the read-only portions.

 

Also note that this latter approach suffers from the same drawbacks as the thin client model discussed in my previous post. The most notable problem is that you must come up with a way to do non-chatty communication between the UI layer and the Business layer, without compromising either layer. This is historically very difficult to pull off. What usually happens is that the “business objects” in the Business layer require code to externalize their state (data) into a tabular format such as a DataSet so the UI layer can easily use the data. Of course externalizing object state breaks encapsulation unless it is done with great care, so this is an area requiring extra attention. The typical end result is not objects in a real OO sense, but rather “objects” comprised of a set of atomic, stateless methods. At this point you don’t have objects at all – you have an API.

 

In the case of CSLA .NET, I apply the mobile object model to this environment. I personally believe it makes things better since it gives you the flexibility to run some of your business logic on the web application server and some on the pure application server as appropriate. Since the Business layer is installed on both the web and application servers, your objects can run in either place as needed.

 

In short, to make a good web app you are almost required to compromise the integrity of your layers and duplicate some business logic into the Presentation layer. It sucks, but it’s life in the wild world of the web. If you can put your UI, Business and Data Access layers on the web application server that’s best. If you can’t (typically due to security) then move only the Data Access layer and keep both UI and Business layers on the web application server. Finally, if you must put the Business layer on a separate application server I prefer to use a mobile object model for flexibility, but recognize that a pure API model on the application server will scale higher and is often required for applications with truly large numbers of concurrent users (like 2000+).

 

 

[1] As someone in a previous post indirectly noted, there’s a relationship between performance and scalability. Performance is the response time of the system for a user. Scalability is what happens to performance as the number of users and/or transactions is increased.

Sunday, July 24, 2005 8:47:26 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Saturday, July 23, 2005

In my last post I talked about logical layers as compared to physical tiers. It may be the case that that post (and this one) are too obvious or basic. But I gotta say that I consistently am asked about these topics at conferences, user groups and via email. The reality is that none of this is all that obvious or clear to the vast majority of people in our industry. Even for those that truly grok the ideas, there’s far from universal agreement on how an application should be layered or how those layers should be deployed onto tiers.

 

In one comment on my previous post Magnus points out that my portrayal of the application server merely as a place for the Data Access layer flies in the face of Magnus’ understanding of n-tier models. Rather, Magnus (and many other people) is used to putting both the Business and Data Access layers on the application server, with only the Presentation layer on the client workstation (or presumably the web server in the case of a web application, though the comment didn’t make that clear).

 

The reality is that there are three primary models to consider in the smart/rich/intelligent client space. There are loose analogies in the web world as well, but personally I don’t find that nearly as interesting, so I’m going to focus on the intelligent client scenarios here.

 

Also one quick reminder – tiers should be avoided. This whole post assumes you’ve justified using a 3-tier model to obtain scalability or security as per my previous post. If you don’t need 3-tiers, don’t use them – and then you can safely ignore this whole entry :)

 

There’s the thick client model, where the Presentation and Business layers are on the client, and the Data Access is on the application server. Then there’s the thin client model where only the Presentation layer is on the client, with the Business and Data Access layers on the application server. Finally there’s the mobile object model, where the Presentation and Business layers are on the client, and the Business and Data Access layers are on the application server. (Yes, the Business layer is in two places) This last model is the one I discuss in my Expert VB.NET and C# Business Objects books and which is supported by my CSLA .NET framework.

 

The benefit to the thick client model is that we are able to provide the user with a highly interactive and responsive user experience, and we are able to fully exploit the resources of the client workstation (specifically memory and CPU). At the same time, having the Data Access layer on the application server gives us database connection pooling. This is a very high scaling solution, with a comparatively low cost, because we are able to exploit the strengths of the client, application and database servers very effectively. Moreover, the user experience is very good and development costs are relatively low (because we can use the highly productive Windows Forms technology).

 

The drawback to the thick client model is a loss of flexibility – specifically when it comes to process-oriented tasks. Most applications have large segments of OLTP (online transaction processing) functionality where a highly responsive and interactive user experience is of great value. However, most applications also have some important segments of process-oriented tasks that don’t require any user interaction. In most cases these tasks are best performed on the application server, or perhaps even directly in the database server itself. This is because process-oriented tasks tend to be very data intensive and non-interactive, so the closer we can do them to the database the better. In a thick client model there’s no natural home for process-oriented code near the database – the Business layer is way out on the client after all…

 

Another perceived drawback to the thick client is deployment. I dismiss this however, given .NET’s no-touch deployment options today, and ClickOnce coming in 2005. Additionally, any intelligent client application requires deployment of our code – the Presentation layer at least. Once you solve deployment of one layer you can deploy other layers as easily, so this whole deployment thing is a non-issue in my mind.

 

In short, the thick client model is really nice for interactive applications, but quite poor for process-oriented applications.

 

The benefit to the thin client model is that we have greater control over the environment into which the Business and Data Access layers are deployed. We can deploy them onto large servers, multiple servers, across disparate geographic locations, etc. Another benefit to this model is that it has a natural home for process-oriented code, since the Business layer is already on the application server and thus is close to the database.

 

Unfortunately history has shown that the thin client model is severely disadvantaged compared to the other two models. The first disadvantage is scalability relative to cost. With either of the other two models, as you add more users you intrinsically add more memory and CPU to your overall system, because you are leveraging the power of the client workstation. With a thin client model all the processing is on the servers, and so client workstations add virtually no value at all – their memory and CPU are wasted. Any scalability comes from adding larger or more numerous server hardware rather than by adding cheaper (and already present) client workstations.

 

The other key drawback to the thin client model is the user experience. Unless you are willing to make “chatty” calls from the thin Presentation layer to the Business layer across the network on a continual basis (which is obviously absurd), the user experience will not be interactive or responsive. By definition the Business layer is on a remote server, so the user’s input can’t be validated or processed without first sending it across the network. The end result is roughly equivalent to the mainframe experience users had with 3270 terminals, or the experience they get on the web in many cases. Really not what we should expect from an “intelligent” client…

 

Of course deployment remains a potential concern in this model, because the Presentation layer must still be deployed to the client. Again, I no longer consider this a major issue, due to no-touch deployment and ClickOnce.

 

In summary, the thin client model is really nice for process-oriented (non-interactive) applications, but is quite inferior for interactive applications.

 

This brings us to the mobile object model. You’ll note that neither the thick client nor thin client model is optimal, because almost all applications have some interactive and some non-interactive (process-oriented) functionality. Neither of the two “purist” models really addresses both requirements effectively. This is why I am such a fan of the mobile object (or mobile agent, or distributed OO) model, as it provides a compromise solution. I find this idea so compelling that it is the basis for my books.

 

The mobile object model literally has us deploy the Business layer to both the client and the application server. Given no-touch deployment and/or ClickOnce this is quite practical to achieve in .NET (and in Java, interestingly enough). Coupled with .NET’s ability to pass objects across the network by value (another ability shared with Java), all the heavy lifting to make this concept work is actually handled by .NET itself, leaving us to merely enjoy the benefits.
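
As a rough illustration of what “pass by value” means here (this is not actual CSLA .NET code), any type marked as serializable can be copied across a .NET Remoting boundary, so the object’s data and behavior travel together to whichever machine it runs on:

    using System;

    // Hedged sketch: a [Serializable] type can be passed by value across
    // a .NET Remoting boundary, so its state and behavior move together
    // between the client and the application server.
    [Serializable]
    public class Invoice
    {
        private decimal _amount;

        public decimal Amount
        {
            get { return _amount; }
            set
            {
                if (value < 0)
                    throw new ArgumentException("Amount cannot be negative");
                _amount = value;
            }
        }

        // Behavior travels with the data - this method runs wherever the
        // object currently resides (client or application server).
        public decimal CalculateTax(decimal rate)
        {
            return _amount * rate;
        }
    }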

 

The end result is that the client has the Presentation and Business layers, meaning we get all the benefits of the thick client model. The user experience is responsive and highly interactive. Also we are able to exploit the power of the client workstation, offering optimal scalability at a low cost point.

 

But where this gets really exciting is the flexibility offered. Since the Business layer also runs on the application server, we have all the benefits of the thin client model. Any process-oriented tasks can be performed by objects running on the application server, meaning that all the power of the thin client model is at our disposal as well.

 

The drawback to the mobile object approach is complexity. Unless you have a framework to handle the details of moving an object to the client and application server as needed this model can be hard to implement. However, given a framework that supports the concept the mobile object approach is no more complex than either the thick or thin client models.

 

In summary, the mobile object model is great for both interactive and non-interactive applications. I consider it a “best of both worlds” model and CSLA .NET is specifically designed to make this model comparatively easy to implement in a business application.

 

At the risk of being a bit esoteric, consider the broader possibilities of a mobile object environment. Really a client application or an application server (Enterprise Services or IIS) are merely hosts for our objects. Hosts provide resources that our objects can use. The client “host” provides access to the user resource, while a typical application server “host” provides access to the database resource. In some applications you can easily envision other hosts such as a batch processing server that provides access to a high powered CPU resource or a large memory resource.

 

Given a true mobile object environment, objects would be free to move to a host that offers the resources an object requires at any point in time. This is very akin to grid computing. In the mobile object world objects maintain both data and behavior and merely move to physical locations in order to access resources. Raw data never moves across the network (except between the Data Access and Data Storage layers), because data without context (behavior) is meaningless.

 

Of course some very large systems have been built following both the thick client and thin client models. It would be foolish to say that either is fatally flawed. But it is my opinion that neither is optimal, and that a mobile object approach is the way to go.

Saturday, July 23, 2005 11:12:25 AM (Central Standard Time, UTC-06:00)
 Thursday, July 21, 2005

I am often asked whether n-tier (where n>=3) is always the best way to go when building software.

 

Of course the answer is no. In fact, it is more likely that n-tier is not the way to go!

 

By the way, this isn’t the first time I’ve discussed this topic – you’ll find previous blog entries on this blog and an article at www.devx.com where I’ve covered much of the same material. Of course I also cover it rather a lot in my Expert VB.NET and C# Business Objects books.

 

Before proceeding further however, I need to get some terminology out of the way. There’s a huge difference between logical tiers and physical tiers. Personally I typically refer to logical tiers as layers and physical tiers as tiers to avoid confusion.

 

Logical layers are merely a way of organizing your code. Typical layers include Presentation, Business and Data – the same as the traditional 3-tier model. But when we’re talking about layers, we’re only talking about logical organization of code. In no way is it implied that these layers might run on different computers, or in different processes on a single computer, or even in a single process on a single computer. All we are doing is discussing a way of organizing code into a set of layers defined by specific function.

 

Physical tiers, however, are only about where the code runs. Specifically, tiers are places where layers are deployed and where layers run. In other words, tiers are the physical deployment of layers.
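
To make the distinction tangible, here is a hypothetical sketch (all names invented) of layers expressed as separate namespaces or assemblies. Nothing in this code says anything about where each piece runs – that is a tier decision made at deployment time:

    // Logical layers as code organization only - no deployment implied.
    namespace MyApp.Presentation
    {
        public class CustomerForm
        {
            // UI code only; talks exclusively to the Business layer.
        }
    }

    namespace MyApp.Business
    {
        public class Customer
        {
            // Domain rules and behavior live here.
        }
    }

    namespace MyApp.DataAccess
    {
        public class CustomerData
        {
            // All database interaction is confined to this layer.
        }
    }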

 

Why do we layer software? Primarily to gain the benefits of logical organization and grouping of like functionality. Translated to tangible outcomes, logical layers offer reuse, easier maintenance and shorter development cycles. In the final analysis, proper layering of software reduces the cost to develop and maintain an application. Layering is almost always a wonderful thing!

 

Why do we deploy layers onto multiple tiers? Primarily to obtain a balance between performance, scalability, fault tolerance and security. While there are various other reasons for tiers, these four are the most common. The funny thing is that it is almost impossible to get optimum levels of all four attributes – which is why it is always a trade-off between them.

 

Tiers imply process and/or network boundaries. A 1-tier model has all the layers running in a single memory space (process) on a single machine. A 2-tier model has some layers running in one memory space and other layers in a different memory space. At the very least these memory spaces exist in different processes on the same computer, but more often they are on different computers. Likewise, a 3-tier model has two boundaries. In general terms, an n-tier model has n-1 boundaries.

 

Crossing a boundary is expensive. It is on the order of 1000 times slower to make a call across a process boundary on the same machine than to make the same call within the same process. If the call is made across a network it is even slower. It is very obvious then, that the more boundaries you have the slower your application will run, because each boundary has a geometric impact on performance.

 

Worse, boundaries add raw complexity to software design, network infrastructure, manageability and overall maintainability of a system. In short, the more tiers in an application, the more complexity there is to deal with – which directly increases the cost to build and maintain the application.

 

This is why, in general terms, tiers should be minimized. Tiers are not a good thing; they are a necessary evil required to obtain certain levels of scalability, fault tolerance or security.

 

As a good architect you should be dragged kicking and screaming into adding tiers to your system. But there really are good arguments and reasons for adding tiers, and it is important to accommodate them as appropriate.

 

The reality is that almost all systems today are at least 2-tier. Unless you are using an Access or dBase style database your Data layer is running on its own tier – typically inside of SQL Server, Oracle or DB2. So for the remainder of my discussion I’ll primarily focus on whether you should use a 2-tier or 3-tier model.

 

If you look at the CSLA .NET architecture from my Expert VB.NET and C# Business Objects books, you’ll immediately note that it has a construct called the DataPortal which is used to abstract the Data Access layer from the Presentation and Business layers. One key feature of the DataPortal is that it allows the Data Access layer to run in-process with the Business layer, or in a separate process (or on a separate machine), all based on a configuration switch. It was specifically designed to allow an application to switch between a 2-tier or 3-tier model as a configuration option – with no changes required to the actual application code.
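
The following is only a rough sketch of the idea – the real DataPortal is considerably more sophisticated, and the config key shown here is invented – but it illustrates how a configuration switch can select an in-process or remote implementation without touching the calling code:

    using System;
    using System.Configuration;

    // Illustrative sketch only - not the actual CSLA .NET DataPortal.
    public interface IDataPortal
    {
        object Fetch(Type objectType, object criteria);
    }

    public class LocalPortal : IDataPortal
    {
        public object Fetch(Type objectType, object criteria)
        {
            // Run the Data Access layer in-process (2-tier deployment).
            return null; // placeholder
        }
    }

    public class RemotePortal : IDataPortal
    {
        public object Fetch(Type objectType, object criteria)
        {
            // Forward the call to an application server (3-tier deployment).
            return null; // placeholder
        }
    }

    public static class PortalFactory
    {
        public static IDataPortal GetPortal()
        {
            // Hypothetical appSettings key - a single config change switches
            // the application between 2-tier and 3-tier deployment.
            string mode = ConfigurationManager.AppSettings["PortalMode"];
            if (mode == "Remote")
                return new RemotePortal();
            return new LocalPortal();
        }
    }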

 

But even so, the question remains whether to configure an application for 2 or 3 tiers.

 

Ultimately this question can only be answered by doing a cost-benefit analysis for your particular environment. You need to weigh the additional complexity and cost of a 3-tier deployment against the benefits it might bring in terms of scalability, fault tolerance or security.

 

Scalability flows primarily from the ability to get database connection pooling. In CSLA .NET the Data Access layer is entirely responsible for all interaction with the database. This means it opens and closes all database connections. If the Data Access layer for all users is running on a single machine, then all database connections for all users can be pooled. (This does assume, of course, that all users employ the same database connection string, including the same database user id – that’s a prerequisite for connection pooling in the first place.)
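
For example, as long as the Data Access layer always uses the identical connection string, ADO.NET pools the connections automatically on the machine where that code runs – which is exactly why centralizing the Data Access layer on one application server maximizes pooling. A minimal sketch (the connection string and table are hypothetical):

    using System.Data;
    using System.Data.SqlClient;

    public static class CustomerData
    {
        // Hypothetical connection string - the same value (and the same
        // database user) for every caller, which is what enables pooling.
        private const string ConnectionString =
            "Data Source=dbserver;Initial Catalog=MyAppDb;Integrated Security=SSPI;";

        public static DataTable GetCustomer(int id)
        {
            using (SqlConnection cn = new SqlConnection(ConnectionString))
            using (SqlCommand cmd = new SqlCommand(
                "SELECT Id, Name FROM Customers WHERE Id = @id", cn))
            {
                cmd.Parameters.AddWithValue("@id", id);
                cn.Open();
                DataTable result = new DataTable();
                result.Load(cmd.ExecuteReader());
                return result; // the connection returns to the pool on dispose
            }
        }
    }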

 

The scalability proposition is quite different for web and Windows presentation layers.

 

In a web presentation the Presentation and Business layers are already running on a shared server (or server farm). So if the Data Access layer also runs on the same machine database connection pooling is automatic. In other words, the web server is an implicit application server, so there’s really no need to have a separate application server just to get scalability in a web setting.

 

In a Windows presentation the Presentation and Business layers (at least with CSLA .NET) run on the client workstation, taking full advantage of the memory and CPU power available on those machines. If the Data Access layer is also deployed to the client workstations then there’s no real database connection pooling, since each workstation connects to the database directly. By employing an application server to run the Data Access layer all workstations offload that behavior to a central machine where database connection pooling is possible.

 

The big question with Windows applications is at what point to use an application server to gain scalability. Obviously there’s no objective answer, since it depends on the IO load of the application, pre-existing load on the database server and so forth. In other words it is very dependent on your particular environment and application. This is why the DataPortal concept is so powerful, because it allows you to deploy your application using a 2-tier model at first, and then switch to a 3-tier model later if needed.

 

There’s also the possibility that your Windows application will be deployed to a Terminal Services or Citrix server rather than to actual workstations. Obviously this approach totally eliminates the massive scalability benefits of utilizing the memory and CPU of each user’s workstation, but it does have the upside of reducing deployment cost and complexity. I am not an expert on either server environment, but it is my understanding that each user session has its own database connection pool on the server, thus acting the same as if each user had their own separate workstation. If this is actually the case, then an application server would provide a benefit through database connection pooling. However, if I’m wrong and all user sessions share database connections across the entire Terminal Services or Citrix server, then having an application server would offer no more scalability benefit here than it does in a web application (which is to say virtually none).

 

Fault tolerance is a bit more complex than scalability. Achieving real fault tolerance requires examination of all failure points that exist between the user and the database – and of course the database itself. And if you want to be complete, you must also consider the user to be a failure point, especially when dealing with workflow, process-oriented or service-oriented systems.

 

In most cases adding an application server to either a web or Windows environment doesn’t improve fault tolerance. Rather it merely makes it more expensive because you have to make the application server fault tolerant along with the database server, the intervening network infrastructure and any client hardware. In other words, fault tolerance is often less expensive in a 2-tier model than in a 3-tier model.

 

Security is also a complex topic. For many organizations however, security often comes down to protecting access to the database. From a software perspective this means restricting the code that interacts with the database and providing strict controls over the database connection strings or other database authentication mechanisms.

 

Security is a case where 3-tier can be beneficial. By putting the Data Access layer onto its own application server tier we isolate all code that interacts with the database onto a central machine (or server farm). More importantly, only that application server needs to have the database connection string or the authentication token needed to access the database server. No web server or Windows workstation needs the keys to the database, which can help improve the overall security of your application.

 

Of course we must always remember that switching from 2-tier to 3-tier decreases performance and increases complexity (cost). So any benefits from scalability or security must be sufficient to outweigh these costs. It all comes down to a cost-benefit analysis.

 

Thursday, July 21, 2005 3:51:17 PM (Central Standard Time, UTC-06:00)
 Tuesday, May 03, 2005

Over the past few weeks I've given a number of presentations on service-oriented design and how it relates to both OO and n-tier models. One constant theme has been my recommendation that service methods use a request/response or just request method signature:

 

   response = f(request)

or

   f(request)

 

"request" and "response" are both messages, defined by a type or schema (with a little "s", not necessarily XSD).

 

The actual procedure, f, is then implemented as a Message Router, routing each call to an appropriate handler method depending on the specific type of the request message.
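
In .NET terms the shape looks something like the following sketch (the type names are invented, and as discussed below, getting the derived message types expressed in the service’s schema is where the real work lies):

    using System;
    using System.Web.Services;
    using System.Xml.Serialization;

    // Illustrative sketch of a message-based service with a message router.
    public abstract class RequestBase { }
    public abstract class ResponseBase { }

    public class GetCustomerRequest : RequestBase { public int CustomerId; }
    public class GetCustomerResponse : ResponseBase { public string Name; }

    [WebService(Namespace = "http://example.com/messages")]
    public class MessageService : WebService
    {
        // Single endpoint: response = f(request).
        // The [XmlInclude] attributes tell the serializer about the concrete
        // message types - this is the "multiple schemas on one parameter"
        // issue discussed below.
        [WebMethod]
        [XmlInclude(typeof(GetCustomerRequest))]
        [XmlInclude(typeof(GetCustomerResponse))]
        public ResponseBase Execute(RequestBase request)
        {
            // Message router: dispatch on the concrete request type.
            if (request is GetCustomerRequest)
                return HandleGetCustomer((GetCustomerRequest)request);
            throw new NotSupportedException("Unknown request message");
        }

        private GetCustomerResponse HandleGetCustomer(GetCustomerRequest request)
        {
            GetCustomerResponse response = new GetCustomerResponse();
            response.Name = "Sample customer"; // placeholder handler logic
            return response;
        }
    }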

 

In concept this is an ideal model, as it helps address numerous issues around decoupling the client and service, and around versioning of a service over time. I discussed many of these issues in my article on SOA Covenants.

 

Of course the devil is in the details, or in this case the implementation. Today's web service technologies don't make it particularly easy to attach multiple schemas to a given parameter - which we must do to the "request" parameter in order to allow our single endpoint to accept multiple request messages.

 

One answer is to manually create the XSD and/or WSDL. Personally I find that answer impractical. Angle-brackets weren't meant for human creation or consumption, and programmers shouldn't have to see (much less type) things like XSD or WSDL.

 

Fortunately there's another answer. In this blog entry, Microsoft technology expert Bill Wagner shows how to implement "method overloading" in asmx using C# to do the heavy lifting.

 

While the outcome isn't totally seamless or perfect, it is pretty darn good. In my mind Bill's answer is a decent compromise between getting the functionality we need, and avoiding the manual creation of cryptic WSDL artifacts.

 

The end result is that it is totally possible to construct flexible, maintainable and broadly useful web services using VB.NET or C#: web services that follow a purely message-based approach, implement a message router on the server, and thus provide a good story for both versioning and multiple message formats.

 

Indigo, btw, offers a similar solution to what Bill talks about. In the Indigo situation however, it appears that we’ll have more fine-grained control over the generation of the public schema through the use of the DataContract idea, combined with inheritance in our VB or C# code.
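
For what it’s worth, here is a very rough sketch of how that DataContract idea combined with inheritance might look, based on pre-release Indigo material – the attribute names and details may well change before it ships:

    using System.Runtime.Serialization;

    // Speculative sketch based on pre-release Indigo; details may change.
    [DataContract]
    [KnownType(typeof(GetCustomerRequest))]
    public class RequestMessage
    {
        [DataMember]
        public string CorrelationId;
    }

    // Inheritance in the VB or C# code drives the shape of the public schema.
    [DataContract]
    public class GetCustomerRequest : RequestMessage
    {
        [DataMember]
        public int CustomerId;
    }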

Tuesday, May 03, 2005 10:15:53 AM (Central Standard Time, UTC-06:00)
 Monday, April 18, 2005

Indigo is Microsoft’s code name for the technology that will bring together the functionality in today’s .NET Remoting, Enterprise Services, Web services (including WSE) and MSMQ. Of course knowing what it is doesn’t necessarily tell us whether it is cool, compelling and exciting … or rather boring.

 

Ultimately beauty is in the eye of the beholder. Certainly the Indigo team feels a great deal of pride in their work and they paint this as a very big and compelling technology.

 

Many technology experts I’ve talked to outside of Microsoft are less convinced that it is worth getting all excited.

 

Personally, I must confess that I find Indigo to be a bit frustrating. While it should provide some absolutely critical benefits, in my view it is merely laying the groundwork for the potential of something actually exciting to follow a few years later.

 

Why do I say this?

 

Well, consider what Indigo is again. It is a technology that brings together a set of existing technologies. It provides a unified API model on top of a bunch of concepts and tools we already have. To put it another way, it lets us do what we can already do, but in a slightly more standardized manner.

 

If you are a WSE user, Indigo will save you tons of code. But that’s because WSE is experimental stuff and isn’t refined to the degree Remoting, Enterprise Services or Web services are. If you are using any of those technologies, Indigo won’t save you much (if any) code – it will just subtly alter the way you do the things you already do.

 

Looking at it this way, it doesn’t sound all that compelling, really, does it?

 

But consider this. Today’s technologies are a mess. We have at least five different technologies for distributed communication (Remoting, ES, Web services, MSMQ and WSE). Each technology shines in different ways, so each is appropriate in different scenarios. This means that to be a competent .NET architect/designer you must know all five reasonably well. You need to know the strengths and weaknesses of each, and you must know how easy or hard they are to use and to potentially extend.

 

Worse, you can’t expect to easily switch between them. Several of these options are mutually exclusive.

 

But the final straw (in my mind) is this: the technology you pick locks you into a single architectural world-view. If you pick Web services or WSE you are accepting the SOA world view. Sure you can hack around that to do n-tier or client/server, but it is ugly and dangerous. Similarly, if you pick Enterprise Services you get a nice set of client/server functionality, but you lose a lot of flexibility. And so forth.

 

Since the architectural decisions are so directly and irrevocably tied to the technology, we can’t actually discuss architecture. We are limited to discussing our systems in terms of the technology itself, rather than the architectural concepts and goals we’re trying to achieve. And that is very sad.

 

By merging these technologies into a single API, Indigo may allow us to elevate the level of dialog. Rather than having inane debates between Web services and Remoting, we can have intelligent discussions about the pros and cons of n-tier vs SOA. We can apply rational thought as to how each distributed architecture concept applies to the various parts of our application.

 

We might even find that some parts of our application are n-tier, while others require SOA concepts. Due to the unified API, Indigo should allow us to actually do both where appropriate, without irrational debates over protocol, since Indigo natively supports concepts for both n-tier and SOA.

 

Now this is compelling!

 

As compelling as it is to think that we can start having more intelligent and productive architectural discussions, that isn’t the whole of it. I am hopeful that Indigo represents the groundwork for greater things.

 

There are a lot of very hard problems to solve in distributed computing. Unfortunately our underlying communications protocols never seem to stay in place long enough for anyone to really address the more interesting problems. Instead, for many years now we’ve just watched as vendors reinvent the concept of remote procedure calls over and over again: RPC, IIOP, DCOM, RMI, Remoting, Web services, Indigo.

 

That is frustrating. It is frustrating because we never really move beyond RPC. While there’s no doubt that Indigo is much easier to use and more clear than any previous RPC scheme, it is also quite true that Indigo merely lets us do what we could already do.

 

What I’m hoping (perhaps foolishly) is that Indigo will be the end. That we’ll finally have an RPC technology that is stable and flexible enough that it won’t need to be replaced so rapidly. And being stable and flexible, it will allow the pursuit of solutions to the harder problems.

 

What are those problems? They are many, and they include semantic meaning of messages and data. They include distributed synchronization primitives and concepts. They include standardization and simplification of background processing – making it as easy and natural as synchronous processing is today. They include identity and security issues, management of long-running processes, simplification of compensating transactions and many other issues.

 

Maybe Indigo represents the platform on which solutions to these and other problems can finally be built. Perhaps in another 5 years we can look back and say that Indigo was the turning point that finally allowed us to really make distributed computing a first-class concept.

Monday, April 18, 2005 10:13:06 PM (Central Standard Time, UTC-06:00)