Thursday, October 28, 2004

Pat Helland reminds me (us?) that technology is fun, but there are much bigger and more important things in life.

Not only did his blog entry bring tears to my eyes, but it reminded me that discussions of VB vs C# or Java vs .NET are intellectually enjoyable, but they pale next to things like Bush vs Kerry, or my mother's recovery from last week's hip surgery, or the excitement of my son at Best Buy last night when he bought the expansion for Star Wars Galaxies. Those things have real and tangible meaning and value.

Whether we program with or without semi-colons, or with a J or an N, really doesn't matter at all in comparison...

Thursday, October 28, 2004 4:46:29 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Thursday, October 21, 2004

This is actually a shameless plug regarding me being quoted in the press...

In the very same Enterprise Architect magazine whose article I maligned in my previous blog entry, I am actually quoted in a different article. Check the Editor's Note and you'll find that Jim Fawcette does a good job of paraphrasing a statement I made about SOA during a discussion panel at VS Live in Orlando this fall.

Thursday, October 21, 2004 9:24:39 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 

In the just-published Enterprise Architect magazine is an article titled “SOA: Debunking 3 Common Myths”. I generally agree with the first and third myths. The second is problematic on several levels.

The myth in question is that SOA implies a distributed system. The author explains that SOA does not imply a distributed system, but rather is merely a message-based architecture of providers and consumers. So far I’m in agreement. SOA can be done in a single process. Whether that’s wise or not is another discussion, but it certainly can be done.

Then the author goes on to suggest that a good architecture should include a Locator service so service providers can be moved to other machines if needed. Again, so far so good. This kind of flexibility is very important.

But then we get to my point of disagreement. The closing statement is that “services (and SOA) do not imply any distribution semantics”. Say what?

If you are going to actually have the option of running a service on another machine, then you must design it to follow distributed semantics from day one. You must design it with coarse-grained, message-based interfaces. You must ensure that the service provider never trusts the service consumer and vice versa. You must ensure that there is no shared state between provider and consumer - including that they can't share a database or any other common resource.

In short, you must design the service provider and consumer with the assumption that they are running on separate machines. Then, having written them with distributed semantics, you can choose to collapse the physical deployment so they run in the same process on the same machine.
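To make the distinction concrete, here's a hypothetical sketch (the types and members are invented purely for illustration) contrasting a chatty, fine-grained interface with a coarse-grained, message-based one:

```vb
' Fine-grained and chatty: three round trips per update, and the
' provider must hold state between calls - only workable in-process.
Public Interface ICustomerChatty
  Sub SetName(ByVal name As String)
  Sub SetCreditLimit(ByVal limit As Decimal)
  Sub Save()
End Interface

' Coarse-grained: each call carries a complete, serializable message,
' shares no state with the consumer, and works the same whether the
' provider runs in-process or on another machine.
<Serializable()> _
Public Class UpdateCustomerRequest
  Public CustomerId As Integer
  Public Name As String
  Public CreditLimit As Decimal
End Class

<Serializable()> _
Public Class UpdateCustomerResponse
  Public Success As Boolean
  Public Message As String
End Class

Public Interface ICustomerService
  Function UpdateCustomer( _
    ByVal request As UpdateCustomerRequest) As UpdateCustomerResponse
End Interface
```

Designed this way, collapsing the physical deployment becomes a hosting decision rather than a redesign.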

Failure to view SOA as a distributed architecture is a big mistake. In SOA there’s the absolute implication that the provider and consumer may be distributed. Philosophically we are designing for a distributed scenario. The fact that we might not be distributed in some cases is immaterial. SOA means you always design using distribution semantics.

Thursday, October 21, 2004 12:48:58 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 

I’ve been involved in yet another discussion about whether people should use Remoting (vs DCOM/Enterprise Services or ASMX/WSE), and I thought I’d share some more of my thoughts in this regard.


One key presupposition in this discussion is often that we’re talking about communication across service boundaries. The train of thought goes that Microsoft (at least the Indigo team) seems to indicate that the only cross-network communication we’ll ever do in the future is across service boundaries. In short, n-tier is dead and will never happen. Anywhere n-tier would exist should be treated as crossing a service (and thus trust) boundary.


I consider that assumption to be fundamentally flawed.


Just because the Indigo team sees a future entirely composed of services doesn't mean that it's the correct answer. When MSMQ first came out, the MSMQ team was telling everyone that all communication should be done through MSMQ. Filter through the absurd bias of the product teams and apply some basic judgment and it is clear that not all cross-network communication will be service-oriented.


The idea of crossing trust boundaries between parts of my application incurs a terribly high cost in terms of duplicate logic and overhead in converting data into and out of messages. The complexity and overhead are tremendous and make no sense for communication between what would have been trusted layers in my application…


In a world where service-orientation exists, but is not the only story, there's a big difference between creating an n-layer/n-tier application and creating a set of applications that work together via services. They are totally different things.


I totally agree that neither Remoting nor DCOM/ES are appropriate across service boundaries. Those technologies are not appropriate between applications in general.


However, I obviously do not for a minute buy the hype that service-orientation will obviate the need for n-tier architectures. And between tiers of your application, ASMX/WSE is a very limited and poor choice. As an example, ASMX/WSE uses the highly limited XmlSerializer, which is virtually useless for transferring objects by value. Between tiers of your application either Remoting or DCOM/ES are good choices.


Application tiers are just application layers that happen to be separated across a network boundary. By definition they are tightly coupled, because they are just layers of the same application. There are many techniques you can use here, all of which tend to be tightly coupled because you are inside the same service and trust boundary even though you are crossing tiers.


I’ve personally got no objection to using DCOM/ES between tiers running on different servers (like web server to app server). The deployment and networking complexity incurred is relatively immaterial in such scenarios.


But if you are creating rich or smart client tiers that talk to an app server then the loss of no-touch deployment and the increased networking complexity incurred by DCOM/ES is an overwhelming negative. In most cases this complexity and cost is far higher than the (rather substantial) performance benefit. In this scenario Remoting is usually the best option.


Of course, if you can’t or won’t use Remoting in this case, you can always hack ASMX/WSE to simulate Remoting to get the same result (with a small performance penalty). You can wrap both ends of the process manually so you can use the BinaryFormatter and actually get pass-by-value semantics with ASMX/WSE. The data on the wire is a Base64 byte array, but there's no value in being "open" between layers of your own application anyway so it doesn't matter.
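As a sketch of that wrapping technique (the helper names here are invented, and error handling is omitted), each end converts between the object graph and a byte array with the BinaryFormatter; ASMX then carries the array as Base64 on the wire automatically. This assumes the types being transferred are marked <Serializable()>:

```vb
Imports System.IO
Imports System.Runtime.Serialization.Formatters.Binary

Public Class BinaryTransfer

  ' Serialize any <Serializable()> object graph into a byte array
  ' suitable for returning from (or passing into) an ASMX web method.
  Public Shared Function ToBytes(ByVal graph As Object) As Byte()
    Dim buffer As New MemoryStream
    Dim formatter As New BinaryFormatter
    formatter.Serialize(buffer, graph)
    Return buffer.ToArray
  End Function

  ' Rehydrate the object graph on the other end of the wire,
  ' restoring true pass-by-value semantics.
  Public Shared Function FromBytes(ByVal data As Byte()) As Object
    Dim buffer As New MemoryStream(data)
    Dim formatter As New BinaryFormatter
    Return formatter.Deserialize(buffer)
  End Function

End Class
```

A web method can then simply be declared as returning Byte(), with the client calling FromBytes on the result to get the original object graph back.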


There's a performance hit due to the Base64 encoding, but it may be worth it to gain the benefits of WSE. And it may be worth it so you can say that you are conforming to recommendations against Remoting.


For better or worse, Remoting does some things that neither DCOM/ES nor ASMX/WSE do well. You have the choice of either using Remoting (with its commensurate issues) or of hacking to make DCOM or ASMX/WSE do what Remoting can do. Neither choice is particularly appealing, and hopefully Indigo will provide a better solution overall. Only time will tell.

Thursday, October 21, 2004 7:34:31 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Wednesday, October 20, 2004

Jason Bock has been trying strange stuff with generics.


I have as well, with the intent of addressing some uses for CSLA .NET 2.0. Here’s one important concept I’ve figured out:


Public Class BaseClass

  Public Function BaseAnswer() As Integer
    Return 42
  End Function

End Class


Public Class AddMethod(Of T As {New, AddMethod(Of T)})
  Inherits BaseClass

  Public Shared Function GetObject() As T
    Return New T
  End Function

End Class


Public Class StrangeMethod
  Inherits AddMethod(Of StrangeMethod)

  Public Function GetStrangeAnswer() As Integer
    Return 123321
  End Function

End Class


Public Class TestAddMethod

  Public Sub Test()
    Dim extendedBaseClass As StrangeMethod = StrangeMethod.GetObject
  End Sub

End Class


The primary benefits are:


1) The consumer (the Test method) doesn’t have to deal with generic syntax at all – yea!

2) The code inside the generic can be minimized – very little code needs to be in the AddMethod class. This is good, because debugging inside generics is limited compared to inside regular classes. Most code can be put into BaseClass rather than AddMethod, and AddMethod can include only code requiring the use of generics (unlike my trivial example here).

3) By keeping as much code as possible in BaseClass we gain better polymorphism. Generics aren’t polymorphic, so the fewer public methods in a generic the better. Putting public methods into a base class or interface is far preferable in most cases.

4) Note that AddMethod is constrained to only be used through inheritance. I thought this was a neat trick that helps enforce this particular model for using generics. AddMethod is only useful for creation of a subclass.

Wednesday, October 20, 2004 7:45:02 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Thursday, October 14, 2004

I'm listening to Pat Helland (a serious thought leader in the service-oriented space) speak and it has me seriously thinking.


I think that one fundamental difference between service-oriented thinking and n-tier is this:


We've spent years trying to make distributed systems look and feel like they run in a single memory space on a single computer. But they DON'T. Service-oriented thinking is all about recognizing that we really are running across multiple systems. It is recognition that you can’t trivialize, abstract or simplify this basic fact. It is acknowledgement that we just can’t abstract it away.


Things like distributed transactions are designed to make many computers look like one. Heck, CSLA .NET is all about making multiple computers look like a single hosting space for objects.


And these things only fit into a service-oriented setting inside a service, never between services. The whole point of service-orientation is that services are autonomous and thus don’t trust each other. Transactions (including work-arounds like optimistic concurrency) are high-trust operations and thus cannot exist between autonomous services – they can only exist inside them.


Personally I don’t think this rules out n-layer or n-tier applications, or n-layer and n-tier services. But I do think it reveals how the overall service-oriented approach may be fundamentally different from n-tier (or OO for that matter).

Thursday, October 14, 2004 1:12:59 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 

Earlier this week I spoke at the Vermont .NET user group in Burlington, VT. I count myself fortunate to have been in VT in early October, as the fall colors were out in their full splendor. Many parts of the world have their legends, and one of Vermont’s is the majesty of the fall colors. I have got to say that in this case the reality is at least as good as the hype, and probably better. I saw views that were simply breathtaking, and the day was cloudy! I can only imagine how impressive it would be under bright sunshine.


At the end of my presentation to the user group a gentleman (who’d driven all the way up from southern Connecticut!) asked whether I thought VB or C# was the better language. This is not an uncommon question, and it has been a perennial issue since .NET was in alpha release. Normally I tend to brush the question off with the typical Microsoft wishy-washy stuff about culture and freedom of choice, but it turns out that I’ve been giving this issue quite a lot of thought over the past few months and my views have been changing.


To fully understand my perspective, you need to realize that I spent many years working on DEC VAX computers using the VMS and then OpenVMS operating system.


It was, by the way, renamed to OpenVMS as a marketing response to the “open” Unix world. Yes, the same Unix world that is now being eaten alive by the more open Linux world. The humor to be found in our industry is often subtle yet truly impressive.


One of the key attributes of VMS was language neutrality. The operating system was written in assembly and FORTRAN, and there was some small benefit to using FORTRAN as a programming language over other languages. However, other languages such as Pascal and VAX Basic were adapted to the platform and were easily the equal of FORTRAN for virtually any task. In fact VAX Basic was often better, because it had taken language concepts from Pascal, FORTRAN and other languages and was basically a powerful hybrid with the best of all the other languages. All the OS-level power of FORTRAN with the elegance of Pascal.


As a side note, the only second-class language I ever encountered under VMS was C++. It turns out that C and C++ were/are way too tied to the stream-based Unix world-view to be good general use languages across multiple platforms. Things that were trivial in any other VMS language were insanely difficult in C++, especially as they related to any sort of IO operation. Since most programs do some IO, this made C++ really nasty…


For some years I was a manager in the IT department of a manufacturing company. At that time one of my key hiring requirements was that a developer had to know at least two languages. Knowledge of just one language was an absolute sign of immaturity and/or closed-mindedness which would lead to a dangerous lack of perspective.


Once Windows NT and Visual Basic came out I started migrating from VMS to Windows – basically following David Cutler (the creator of both VMS and Windows NT).


In many ways Windows NT lost a great deal of the language neutrality offered by VMS. The primary language of NT was C++, and the language warped the OS in some ways that made it more like Unix and less like VMS. The idea that FORTRAN or Pascal would be comparable to C++ in Windows was (and is) absurd. This lack of language neutrality certainly slowed the adoption of NT as a primary development target for the first several years of its existence. Just look back at the books and code of the early 90’s – writing even simple applications was ridiculously complex. Especially when compared to more mature environments like Unix or OpenVMS.


Enter Visual Basic (and a whole host of competing products). Vendors, including Microsoft, Borland, IBM and others rapidly realized that C++ would never be the vehicle for most people to develop software on Windows. Many languages/tools were created in the early 90’s to make Windows actually useful from a business development perspective. The common theme in all these products was abstraction. Since the Windows OS itself wasn’t language neutral, every one of these tools added an abstraction layer of some sort to “get rid” of the Windows OS and provide a platform that was more programmable by the language in question.


In the end three tools emerged as being dominant. In order of popularity they were Visual Basic, PowerBuilder and Delphi. While each of these had their own abstraction layers to make Windows programmable, there was no commonality between their abstractions. And C++ had grown its own abstraction too – in the form of MFC (and I should mention OWL as well, I suppose). Even so, C++ was not competitive with VB or PowerBuilder, though it may have been more popular than Delphi on the whole.


In an effort to shore up the Windows OS, Microsoft created COM. In many ways this was an attempt to provide some common programming model akin to what we had in OpenVMS in the first place. And COM had its good points – chief among them being that there was finally some common programming scheme that would work across languages. Sure, each language out there could mess up and create components that weren’t interoperable, but at least it was possible to interact between languages.


Really this is all we asked. Under OpenVMS too, it was possible to violate the common language calling standard and create libraries that couldn’t be used by any language but yours. Most people weren’t dense enough to do such a thing of course, because the benefits of cross-language usage were too high. Part of this, I think, flowed from the fact that most OpenVMS developers worked in multiple programming languages as a matter of course.


In Windows COM development in the mid to late 90’s most people didn’t intentionally write components that could only be used by C++ or VB or whatever. The benefits of cross-language usage were too high.


There were two exceptions to this. The first were developers that only worked in C++. They’d often create COM components that could only be used by C++ through sheer ignorance. The second were developers that only worked in VB. They’d often expose VB-only types (like collections) from their components, making life difficult or impossible for users of other languages. In both cases, the developers simply didn’t have the perspective of working in multiple languages and so they had no clue that they’d written inane code. Incompetence through ignorance.


But COM had its limitations in other areas. With the advent of Java the limitations of COM became painfully apparent by contrast, and Microsoft had to do something. They could have taken the Java route and created yet another platform that was language-focused like Windows or the JVM. Almost certainly this would have given rise to another round of VB/PowerBuilder-style tools that abstracted the platform into something that could support more productive business development. (You can see some of this belatedly starting in the Java world even today with things like JavaServer Faces, etc.)


Fortunately Microsoft decided to go with a language neutral approach. Though on its surface .NET is very unlike OpenVMS, it has the common trait of language neutrality. Like OpenVMS, the neutrality has its limits, but it is pretty darn good. Also like OpenVMS, .NET did create a set of second-class languages (ironically including C++).


So here we sit today, working with a language neutral platform that actually has multiple viable languages. Like the languages on VMS, some of the .NET languages have runtime libraries, but by and large the focus is less on the language than it is on the platform. To me this is familiar territory, and I must say that it makes me happy in many ways.


It also means that this silly language debate is rapidly losing my interest. As a corollary, I am beginning to think that my hiring criterion from my VMS days is valid on .NET: namely, that a prerequisite for hiring a developer is that they should know at least two languages. Only knowing one language (such as C# or VB) means that a developer has a seriously limited perspective and will be far less effective than one who knows more than one language. Such developers are likely to be incompetent through ignorance.


And I’m not talking only about VB or C# specifically here. COBOL.NET or J# or any other language is fine. The point is that a language gives a person a perspective on the platform, and having only one perspective is simply too limiting.


A while ago I posted a list of VB features that I think should also be in C#. I got numerous comments from C# developers who obviously had no perspective. Many of the comments showed that they simply didn’t understand what they are missing. I pity those people.


Likewise, there are features of C#, J# and COBOL.NET that VB should incorporate. People who live entirely in the VB world would likely disagree, and I pity them as well.


The whole idea behind having a language neutral platform is to have multiple languages that compete and try new and innovative ideas. The whole idea is to compare and contrast and take the better ideas and improve each language over time.


And for this discussion to be meaningful I think we need to accept the reality of “language families”. Knowing both C++ and C# is generally meaningless, as they are in the C family. C, C++, Java and C# follow the same fundamental philosophy and so limit the perception of people who never branch into the rest of the languages.


If you only know one language or language family then your ability to compare and contrast language features is severely restricted.


Now I’m not saying that absolute mastery of multiple languages is required. That makes no sense. I am very good at VB. I am competent in C#, and with a bit of brushing up I could probably do a good enough job in FORTRAN, Pascal, Modula-2, C, REXX, awk, DCL and a handful of other languages I’ve learned over the years.


The point is that knowledge is power, and in the case of languages this power comes in the form of perspective. And perspective provides flexibility of thought and improves your ability to do your job.


But the thing that scares me the most at this point is that C# is just VB with semi-colons. And VB is just C# without semi-colons. And Java is just C# with a different runtime library. How can we have language innovation when the majority-usage languages have such little variation between them? We desperately need some real innovation in at least one of these languages, because I can’t believe that C#, VB or Java are the best that the human race can come up with…


Wednesday, October 13, 2004 11:38:29 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Wednesday, October 13, 2004

Jason Bock, a fellow Magenic employee, has put together a web site with information about any and all .NET languages. Since I love programming languages and language innovations, I think this site is really cool!

Thanks Jason!


Wednesday, October 13, 2004 1:10:57 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Saturday, October 09, 2004

I try to keep this blog focused on interesting technology issues, but one particular site is just too interesting and useful at the moment (at least to people in the US), so I wanted to call it out. The site takes daily polling data from around the nation and feeds it into a US map showing how the electoral votes would fall out were the polls to correspond exactly to votes.

According to the site, two days ago neither candidate had the required 270 electoral votes. Subsequent to the most recent debate Kerry has 280 electoral votes, so the debates certainly have an impact on people's polling responses (and presumably voting responses come November).

Also, here's a graph over time showing the total electoral votes for each candidate.

Saturday, October 09, 2004 9:25:29 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Friday, October 08, 2004

The adoguy is wondering if the browser is a "client tier". I think the web world is a flip-flopper.

To many the browser is a terminal. Pure and simple. Using it for anything outside of the basic capabilities is non-standard and should be avoided. In this worldview the browser is no tier. It is just a terminal, and granting it “tier“ status would mean granting 3270 mainframe terminals that same status - thus saying that the COBOL guys from 30 years ago did n-tier development. Go there if you want, but I'm skeptical.

To many others the browser is a programming platform. The “new rich client”, against which we can and should target rich, interactive programs. In this worldview the browser is a place to run code. It is not a terminal, it is a programming platform just like Windows, Linux or anything else. In this worldview you bet the browser can be a tier. Heck, you could probably write your UI and business layers into the browser tier if you really wanted to...

Honestly I thought this debate died with the dot-bomb. The debate was a growing concern back in 2000 or so, and then it rather died off when the excess money evaporated at the end of the Clinton era boom.

Certainly some organizations have undertaken the massive pain and cost associated with targeting the browser as a programming platform (examples include Microsoft with Outlook Web Access, and as someone pointed out, Google with Gmail). But it is a losing battle for most of us, because there are no development tools.

If Microsoft wanted to support the browser as a programming platform, we'd have Internet Explorer project types in Visual Studio and the experience would be comparable to programming Windows Forms (though probably in JavaScript of all things).

But the only truly viable tool today (for developers who want productivity and reasonable development cost) is Flash. And with the recent advent of Laszlo it looks like Flash is seriously on the move as a community effort. Now if only there were a Laszlo equivalent that could be hosted in ASP.NET we'd be talking!!

So my take on it is this.

If you are willing to actually write UI code in JavaScript or Flash then you can consider the browser to be a tier. To be a tier it has to DO something beyond basic terminal functions. (and the ASP.NET validation controls just can't count as enough to elevate it to tier status)

But if you are like most people, the browser is a terminal - a glorified, colorful mainframe terminal just like we've had for decades. In this worldview it can't be considered a tier because no meaningful code runs closer to the user than your web server.

Friday, October 08, 2004 10:32:33 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Monday, October 04, 2004

MSDN has taken to organizing a week of webcasts on ASP.NET. Last year it was their most popular event, and probably will be again this year.

As part of the event I am presenting a webcast on implementation of custom authentication and authorization in ASP.NET.

Click here to register for the event.

Monday, October 04, 2004 8:29:45 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Saturday, October 02, 2004

A couple of people appreciated my mentioning Vernor Vinge's books in a previous post, so I thought I'd toss out another author I've been enjoying quite a lot lately: Alastair Reynolds.

His books are reasonably hard science-fiction, meaning that he sticks (mostly) to hard science. The exceptions are granted based on extrapolations from today's knowledge of nanotechnology and various theories of physics floating around out there.

The books are also relatively dark, painting a very cool but not entirely comfortable picture of the future. Somewhat like cyberpunk meets hard SF in outer space.

The books are Revelation Space, Chasm City, Redemption Ark and Absolution Gap.

Three of the books form a trilogy. Chasm City is a related book in the same universe, but isn't part of the trilogy. However, it is probably best to read them in the order listed above, since Chasm City tends to make some of Redemption Ark more meaningful.

Unlike Vinge's books, I wouldn't say that these provide any great insight into SOA :)  They are, however, a very good read.

I'm also nearly done with Neal Stephenson's Quicksilver, the first book in his new trilogy. The other two are out as well. These are prequels to Cryptonomicon, which I still think is the best book I've ever read.

If you have any interest in the history and origins of modern scientific thought, the conflict between Catholicism, early Protestant churches, Islam and all the related politics of kings then Quicksilver is your cup of tea. How can you beat a book that uses Isaac Newton and his contemporaries as major characters?

My one warning – Cryptonomicon and its prequels are dense. Stephenson makes wonderful, even masterful use of the English language in his writing. But the density of information means that I haven’t whipped through any of these books like I do with most other books. There’s no skimming over paragraphs of description to get to the action, since Stephenson’s descriptions are as meaningful and interesting as any conflict or dialog.

You should also be aware that, from a literary perspective, these are cyberpunk books. While none of them have cyber, and precious little punk, the literary form of the books follows the typical cyberpunk approach of telling multiple, vaguely interrelated tales that all collapse together into a single thread in the last few pages. Not everyone likes this style, but I personally love it.

Saturday, October 02, 2004 8:59:19 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 
 Friday, October 01, 2004

It seems there is a debate going this week about O/R Mapping. As part of the discussion, one of the invited debaters made the following comment about CSLA .NET:


“…everyone just "loves" CSLA, afterall its got every feature imaginable, was created by one of the best "experts", is the subject of books and user groups -- and yes there are code gen templates for it. Now don't get me wrong, I think Rocky is a great speaker and teacher (and great in person too -- I have met him), but I think he would agree that CSLA is an example for teaching how to do something and should not really be the expected to be perfect out-of-the-box -- and its not. I've personally seen a project spend way too much time and effort doing things because they chose this route -- they had to constantly go and add/remove/modify pieces to their chosen framework -- and I've heard many others that have similar problems.” (link)

- Paul Wilson


It is not the most professionally phrased comment, but this gentleman has previously made similar comments in other forums as well, so I thought I should provide an answer.


I think it is important to note up front that Mr. Wilson sells an O/R Mapper tool, and so has some bias. It is also important to note that I have written numerous books on distributed object-oriented architecture, and so I have some bias.


CSLA .NET is intended to be a learning or teaching vehicle. It is often also useful in its own right as a development framework.


Different people write different types of books. For instance, Martin Fowler’s excellent Patterns of Enterprise Application Architecture came out at about the same time as my .NET Business Objects book. In many ways he and I are targeting the same demographic, but in different ways. Fowler’s book stays at a more abstract, conceptual level and teaches from there. My book is more pragmatic and applied, teaching from there.


As an aside, I find it wonderful how some of the patterns in Fowler’s book can be found in CSLA .NET. I think this goes to show the power of the design pattern concept – how patterns emerge in the wild and are documented to help provide a more common language we can all use in future discussions.


But back to books. We need books that give us broad theory and concepts. We also need reference-oriented books that give us basic usage of .NET, C# and VB. And we need books that try to span the gulf between the two. This is where I’ve tried to live for the most part – between the theory and reference content. To do this is challenging, and requires that the theory be distilled down into something concrete, but yet rich enough that it isn’t just more reference material.


The way I’ve chosen to do this is to pick a way of creating distributed object-oriented applications. Are there other ways? Oh yeah, tons. Are some of them good? Absolutely! Are some of them bad? Again, absolutely. Is CSLA .NET good? It depends.


Certainly I think CSLA .NET and the rest of the content in my business objects books are good for learning how to apply a lot of theory into a complex problem space. I think that many readers use ideas and concepts from my books in conjunction with ideas from Fowler’s books, the GoF book and lots of other sources.


A side-effect of writing a book on applied theory is that the theory does actually get applied. This means that the book includes a pretty functional framework that can help build distributed object-oriented applications. This framework, CSLA .NET, can be useful as-is in some scenarios. There’s a vibrant online forum for discussion of my books and CSLA .NET where you can find numerous examples where people have applied the framework to help solve their problems.


Many of those people have also modified the framework to better suit their needs or requirements. This is nothing but good. In several cases, modifications made by readers of the books have been applied back into the framework, which is why the framework is currently at version 1.4, where the book described version 1.0.


Does CSLA .NET fit every need? Of course not. No tool meets every need.


It is particularly good at creating performant, scalable line-of-business systems that involve a lot of data entry, business rules and relationships. It rewards people who build a good object-oriented business entity model, because it provides virtually unlimited flexibility in translating object data to arbitrary persistence targets, relational or non-relational (databases, etc.).
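To make that mapping flexibility concrete, here is a hypothetical sketch (deliberately not the actual CSLA .NET API – the class, method and column names are invented for illustration). The point is that the object owns its own persistence mapping, so the object model need not mirror the data model:

```csharp
using System;
using System.Collections.Generic;

public class Customer
{
    public string Name { get; private set; } = "";
    public string Phone { get; private set; } = "";

    // The object owns the mapping: here a data model that splits the
    // name into two columns is folded into a single Name property.
    // The row could come from a relational table, an XML file or a
    // service call -- the object doesn't care.
    public void Fetch(IDictionary<string, object> row)
    {
        Name = $"{row["first_name"]} {row["last_name"]}";
        Phone = (string)row["phone"];
    }
}
```

In the real framework the equivalent logic lives in the data-access methods the framework invokes on your behalf; the sketch only shows the shape of the idea.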


It is not particularly good at creating reporting systems, large batch-processing systems or systems that are very data-centric and have little business logic. There’s also a cost to the data-mapping flexibility I mentioned as a benefit: that flexibility means more work for the developer. If you have a non-complex data source, or you are willing to tie your object model to your data model, then CSLA .NET isn’t ideal, because it requires that you do more work than you need. If you want your object model to mirror your data model, use a DataSet; that’s what it is designed for.
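The DataSet contrast is worth seeing in code. A DataSet deliberately mirrors the data model: each DataTable and DataColumn corresponds one-to-one with a table and column in the store, with no mapping layer in between. A minimal illustration, using an in-memory table standing in for one a SqlDataAdapter would fill:

```csharp
using System;
using System.Data;

class Demo
{
    static void Main()
    {
        // The in-memory shape IS the database shape: two name columns
        // in the data model mean two name columns in the application.
        var customers = new DataTable("Customers");
        customers.Columns.Add("FirstName", typeof(string));
        customers.Columns.Add("LastName", typeof(string));
        customers.Rows.Add("Ada", "Lovelace");

        Console.WriteLine(customers.Rows[0]["LastName"]); // prints "Lovelace"
    }
}
```

That mirroring is exactly why a DataSet is the right tool when your object model and data model are the same thing, and the wrong tool when they diverge.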


But the real thing to keep in mind, above all else, is this: there is a set of functionality that must exist to build distributed object-oriented enterprise applications. This includes data persistence, UI design support, business logic, organization of layers and tiers, state management and more.


The issues that I address in my business objects books and CSLA .NET are addressed in every enterprise application. Whether formally or informally, whether through reusable or ad-hoc implementations – everything I address happens in every enterprise app. (And a whole lot more things I don't address are in every enterprise app!!)


The idea that CSLA .NET has “got every feature imaginable” seems to imply that it has extra stuff you don’t need. The fact is that you will address every issue it covers one way or another, plus numerous other issues. You might address them differently than I did, and that’s great. But you will address them.


You can address them in an ad-hoc manner in each app you write. You can address them through a framework, or you can address them through tools. You might address them through a combination of the above.


But I’ll say this: the ad-hoc approach is poor. You should opt for a solution that provides reuse and consistency. There are numerous ways to do this: CSLA .NET is one, and I imagine that Mr. Wilson’s O/R mapper tool is another. There are also various commercial frameworks and other O/R mappers out there. You have a lot of choices – which indicates that this is a healthy segment of our industry.


The fact is that at some point you need to make hard decisions within your organization that move from the theory of design patterns and architecture down to the practical and pragmatic level of code, and you only want to do that once. It is expensive, and it does nothing to solve your actual business needs. All this code that does data persistence, layering, tiering, data binding and whatever else is just plumbing. It is necessary, it is costly, it is time-consuming, and it provides no direct business value.


And be warned: if you buy or otherwise acquire a framework or tool set (CSLA .NET, O/R mappers, whatever), you will almost certainly need to modify, extend, add or remove bits and pieces to make it fit your needs. If you buy a black-box component of any sort, you’ll end up extending or modifying around the component; if you acquire one that includes source code, you can make those changes either inside or outside the component.


Obviously you can acquire a component that does less work for you – something targeted at a narrow task. In that case you might not need to modify it, but you must face the fact that you will need to write all the other stuff it doesn’t do. You can’t escape the requirement for all this functionality; every app has it. Whether you implement it in a reusable form or not is up to you!


So when it comes to frameworks, O/R mappers and all this other plumbing: do it once – build it, buy it, cobble together products to do it. Whatever suits your fancy.


Then get on with building business apps and providing actual value to your organization.

Friday, October 01, 2004 2:54:40 PM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  | 

I work for Magenic Technologies. Magenic is one of the nation’s leading software development and consulting companies focused on Microsoft technology, and I feel comfortable saying that we employ some of the best consultants in the industry.

Today Magenic announced that they acquired Empowered Software Solutions (ESS) in Chicago.

Personally I'm very excited about this event. I've known Keith, Tammy and Norm (the founders of ESS) for many years and it is very exciting to get to work with them and all the Chicago-based consultants. ESS has a great (and well-deserved) reputation in Chicago. That reputation will help spread Magenic's national presence into the Chicago area.


Friday, October 01, 2004 9:59:47 AM (Central Standard Time, UTC-06:00)  #    Disclaimer  |  Comments [0]  |