Wednesday, December 17, 2008



I have released version 3.6 of CSLA .NET for Windows and CSLA .NET for Silverlight.

This is a major update to CSLA .NET, not only because it supports Silverlight, but also because there are substantial enhancements to the Windows/.NET version.

To my knowledge, this is also the first release of a major development framework targeting the Silverlight platform.

Here are some highlights:

  • Share more than 90% of your business object code between Windows and Silverlight
  • Powerful new UI controls for WPF, Silverlight and Windows Forms
  • Asynchronous data portal, to enable object persistence on a background thread (required in Silverlight, optional in Windows)
  • Asynchronous validation rules
  • Enhanced indexing in LINQ to CSLA
  • Numerous performance enhancements
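
To give a feel for the asynchronous data portal, here is a minimal sketch of the calling pattern. To keep it self-contained, this uses an illustrative DemoPortal class rather than the actual CSLA .NET types; the real API differs in names and plumbing, but the idea is the same: you pass a callback, and persistence happens on a background thread instead of blocking (which is why Silverlight requires it).

```csharp
using System;
using System.Threading;

// Illustrative stand-in for an async data portal. The real CSLA 3.6 API
// differs, but the calling pattern is the same: the caller hands over a
// callback, and the fetch completes on a background thread.
public static class DemoPortal
{
    public static void BeginFetch<T>(Func<T> fetch, Action<T, Exception> callback)
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            try { callback(fetch(), null); }              // data access ran off the calling thread
            catch (Exception ex) { callback(default(T), ex); }
        });
    }
}

public class Program
{
    public static void Main()
    {
        var done = new ManualResetEvent(false);
        DemoPortal.BeginFetch(
            () => "customer-42",                          // simulated data access
            (result, error) =>
            {
                Console.WriteLine(error == null ? result : error.Message);
                done.Set();
            });
        done.WaitOne();                                   // block only so this demo can exit
    }
}
```

In a real UI the callback would update the form or page; the point is that the UI thread is never blocked while the object is fetched or saved.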

This version of CSLA .NET for Windows is covered in my new Expert C# 2008 Business Objects book (Expert VB 2008 Business Objects is planned for February 2009).

At this time there is no book covering CSLA .NET for Silverlight. However, most business object code (other than data access) is the same for Windows and Silverlight, so you will find the Expert C# 2008 Business Objects book very valuable for Silverlight as well. I am in the process of creating a series of training videos around CSLA .NET for Silverlight; watch for those in early 2009.

Version 3.6 marks the first time major development was done by a team of people. My employer, Magenic, has been a patron of CSLA .NET since the year 2000, but with this version they provided me with a development team and that is what enabled such a major version to be created in such a relatively short period of time. You can see a full list of contributors, but I specifically want to thank Sergey Barskiy, Justin Chase and Nermin Dibek because they are the primary contributors to 3.6. I seriously couldn't have asked for a better team!

This is also the first version where the framework code is only in C#. The framework can be used from any .NET language, including VB, C# and others, but the actual framework code is in C#.

There is a community effort underway to bring the VB codebase in sync with version 3.6, but at this time that code will not build. In any case, the official version of the framework is C#, and that is the version I recommend using for any production work.

In some ways version 3.6 is one of the largest releases of the CSLA .NET framework ever. If you are a Windows/.NET developer, this is a natural progression from previous versions of the framework, providing better features and capabilities. If you are a Silverlight developer, the value of CSLA .NET is even greater, because it provides an incredible array of features and productivity you won't find anywhere else today.

As always, if you are a new or existing CSLA .NET user, please join in the discussion on the CSLA .NET online forum.

Wednesday, December 17, 2008 4:13:21 PM (Central Standard Time, UTC-06:00)

Friday, December 12, 2008

My previous blog post (Some thoughts on Windows Azure) has generated some good comments. I was going to answer with a comment, but I wrote enough that I thought I’d just do a follow-up post.

Jamie says “Sounds akin to Architecture:"convention over configuration" I'm wholly in favour of that.”

Yes, this is very much a convention over configuration story, just at an arguably larger scope than we’ve seen thus far. Or at least showing a glimpse of that larger scope. Things like Rails are interesting, but I don't think they are as broad as the potential we're seeing here.

It is my view that the primary job of an architect is to eliminate choices. To create artificial restrictions within which applications are created. Basically to pre-make most of the hard choices so each application designer/developer doesn’t need to.

What I’m saying with Azure is that we’re seeing this effect at a platform level: a platform that has already eliminated most choices, and that has created restrictions on how applications are created – thus ensuring a high level of consistency, and potential portability.

Jason is confused by my previous post, in that I say Azure has a "lock-in problem", but that the restricted architecture is a good thing.

I understand your confusion, as I probably could have been more clear. I do think Windows Azure has a lock-in problem - it doesn't scale down into my data center, or onto my desktop. But the concept of a restricted runtime is a good one, and I suspect that concept may outlive this first run at Azure. A restricted architecture (and runtime to support it) doesn't have to cause lock-in at the hosting level. Perhaps at the vendor level - but those of us who live in the Microsoft space put that concern behind us many years ago.

Roger suggests that most organizations may not have the technical savvy to host the Azure platform in-house.

That may be a valid point – the current Azure implementation might be too complex for most organizations to administer. Microsoft didn't build it as a server product, so they undoubtedly made implementation choices that create complexity for hosting. But this doesn’t mean the idea of a restricted runtime is bad. Nor does it mean that someone (possibly Microsoft) couldn't create such a restricted runtime that could be deployed within an organization, or in the cloud. Consider that there is a version of Azure that runs on a developer's workstation already - so it isn't hard to imagine a version of Azure that I could run in my data center.

Remember that we’re talking about a restricted runtime, with a restricted architecture and API. Basically a controlled subset of .NET. We’re already seeing this work – in the form of Silverlight. Silverlight is largely compatible with .NET, even though they are totally separate implementations. And Moonlight demonstrates that the abstraction can carry to yet another implementation.

Silverlight has demonstrated that most business applications only use 5 of the 197 megabytes in the .NET 3.5 framework download to build the client. Just how much is really required to build the server parts? A different 5 megabytes? 10? Maybe 20 tops?

If someone had a defined runtime for server code, like Silverlight is for the client, I think it becomes equally possible to have numerous physical implementations of the same runtime. One for my laptop, one for my enterprise servers and one for Microsoft to host in the cloud. Now I can write my app in this .NET subset, and I can not only scale out, but I can scale up or down too.

That’s where I suspect this will all end up, and spurring this type of thinking is (to me) the real value of Azure in the short term.

Finally, Justin rightly suggests that we can use our own abstraction layer to be portable to/from Azure even today.

That's absolutely true. What I'm saying is that I think Azure could light the way to a platform that already does that abstraction.

Many, many years ago I worked on a project to port some software from a Prime to a VAX. It was only possible because the original developers had (and I am not exaggerating) abstracted every point of interaction with the OS. Everything that wasn't part of the FORTRAN-77 spec was in an abstraction layer. I shudder to think of the expense of doing that today - of abstracting everything outside the C# language spec - basically all of .NET - so you could be portable.

So what we need, I think, is this server equivalent to Silverlight. Azure is not that - not today - but I think it may start us down that path, and that'd be cool!

Friday, December 12, 2008 11:33:03 AM (Central Standard Time, UTC-06:00)

Thursday, December 11, 2008

At PDC, Microsoft announced Windows Azure - their Windows platform for cloud computing. There's a lot we don't know about Azure, including the proper way to pronounce the word. But that doesn't stop me from being both impressed and skeptical about what I do know.

In the rest of this post, as I talk about Azure, I'm talking about the "real Azure". One part of Azure is the idea that Microsoft would host my virtual machine in their cloud - but to me that's not overly interesting, because that's already a commodity market (my local ISP does this, Amazon does this - who doesn't do this??). What I do think is interesting is the more abstract Azure platform, where my code runs in a "role" and has access to a limited set of pre-defined resources. This is the Azure that forces the use of a scale-out architecture.

I was impressed by just how much Microsoft was able to show and demo at the PDC. For a fledgling technology that is only partially defined, it was quite amazing to see end-to-end applications built on stage, from the UI to business logic to data storage. That was unexpected and fun to watch.

I am skeptical as to whether anyone will actually care. I think whether people care will be determined primarily by the price point Microsoft sets, but also by how Microsoft addresses the "lock-in question".

I expect the pricing to end up in one of three basic scenarios:

  1. Azure is priced for the Fortune 500 (or 100) - in which case the vast majority of us won't care about it
  2. Azure is priced for the small to mid-sized company space (SMB) - in which case quite a lot of us might be interested
  3. Azure is priced to be accessible for hosting your blog, or my blog, or other small/personal web sites - in which case the vast majority of us may care a lot about it

These aren't entirely mutually exclusive. I think 2 and 3 could both happen, but I think 1 stands by itself. Big enterprises have such different needs, and do business in such different ways, than individuals or small businesses that I suspect we'll see Microsoft either pursue 1 (which I think would be sad) or 2 and maybe 3.

But there's also the lock-in question. If I built my application for Azure, Microsoft has made it very clear that I will not be able to run my app on my servers. If I need to downsize, or scale back, I really can't. Once you go to Azure, you are there permanently. I suspect this will be a major sticking point for many organizations. I've seen quotes by Microsoft people suggesting that we should all factor our applications into "the part we host" and "the part they host". But even assuming we're all willing to go to that work, and introduce that complexity, this still means that part of my app can never run anywhere but on Microsoft's servers.

I suspect this lock-in issue will be the biggest single roadblock to adoption for most organizations (assuming reasonable pricing - which I think is a given).

But I must say that even if Microsoft doesn't back down on this, and even if that does block the success of "Windows Azure" as we see it now, that's probably OK.


Because I think the biggest benefit to Azure is one important concept: an abstract runtime.

If you or I write a .NET web app today, what are the odds that we can just throw it on a random Windows Server 200x box and have it work? Pretty darn small. The same is true for apps written for Unix, Linux or any other platform.

The reason apps can't just "run anywhere" is because they are built for a very broad and ill-defined platform, with no consistently defined architecture. Sure you might have an architecture. And I might have one too. But they are probably not the same. The resulting applications probably use different parts of the platform, in different ways, with different dependencies, different configuration requirements and so forth. The end result is that a high level of expertise is required to deploy any one of our apps.

This is one reason the "host a virtual machine" model is becoming so popular. I can build my app however I choose. I can get it running on a VM. Then I can deploy the whole darn VM, preconfigured, to some host. This is one solution to the problem, but it is not very elegant, and it is certainly not very efficient.

I think Azure (whether it succeeds or fails) illustrates a different solution to the problem. Azure defines a limited architecture for applications. It isn't a new architecture, and it isn't the simplest architecture. But it is an architecture that is known to scale. And Azure basically says "if you want to play, you play my way". None of this random use of platform features. There are few platform features, and they all fit within this narrow architecture that is known to work.

To me, this idea is the real future of the cloud. We need to quit pretending that every application needs a "unique architecture" and realize that there are just a very few architectures that are known to work. And if we pick a good one for most or all apps, we might suffer a little in the short term (to get onto that architecture), but we win in the long run because our apps really can run anywhere. At least anywhere that can host that architecture.

Now in reality, it is not just the architecture. It is a runtime and platform and compilers and tools that support that architecture. Which brings us back to Windows Azure as we know it. But even if Azure fails, I suspect we'll see the rise of similar "restricted runtimes". Runtimes that may leverage parts of .NET (or Java or whatever), but disallow the use of large regions of functionality, and disallow the use of native platform features (from Windows, Linux, etc). Runtimes that force a specific architecture, and thus ensure that the resulting apps are far more portable than the typical app is today.

Thursday, December 11, 2008 6:31:31 PM (Central Standard Time, UTC-06:00)

Friday, December 05, 2008

CSLA .NET for Windows and CSLA .NET for Silverlight version 3.6 RC1 are now available for download.

The Expert C# 2008 Business Objects book went to the printer on November 24 and should be available very soon.

I'm working to have a final release of 3.6 as quickly as possible. The RC1 release includes a small number of bug fixes based on issues reported through the CSLA .NET forum. Barring any show-stopping bug reports over the next few days, I expect to release version 3.6 on December 15.

If you intend to use version 3.6, please download this release candidate and let me know if you encounter any issues. Thank you!

Friday, December 05, 2008 10:30:44 AM (Central Standard Time, UTC-06:00)

Wednesday, December 03, 2008

One of the topic areas I get asked about frequently is authorization. Specifically role-based authorization as supported by .NET, and how to make that work in the "real world".

I get asked about this because CSLA .NET (for Windows and Silverlight) follows the standard role-based .NET model. In fact, CSLA .NET rests directly on the existing .NET infrastructure.

So what's the problem? Why doesn't the role-based model work in the real world?

First off, it is important to realize that it does work for some scenarios. It isn't bad for coarse-grained models where users are authorized at the page or form level. ASP.NET directly uses this model for its authorization, and many people are happy with that.

But it doesn't match the requirements of a lot of organizations in my experience. Many organizations have a slightly more complex structure that provides better administrative control and manageability.


Whether a user can get to a page/form, or can view a property or edit a property is often controlled by a permission, not a role. In other words, users are in roles, and a role is essentially a permission set: a list of permissions the role has (or doesn't have).

This doesn't map very well onto the .NET IPrincipal interface, which only exposes an IsInRole() method for authorization checks. Finding out whether the user is in a role isn't particularly useful, because the application really needs to call some sort of HasPermission() method.

In my view the answer is relatively simple.

The first step is understanding that there are two concerns here: the administrative issues, and the runtime issues.

At administration time the concepts of "user", "role" and "permission" are all important. Admins will associate permissions with roles, and roles with users. This gives them the kind of control and manageability they require.

At runtime, when the user is actually using the application, the roles are entirely meaningless. However, if you consider that IsInRole() can be thought of as "HasPermission()", then there's a solution. When you load the .NET principal with a list of "roles", you really load it with a list of permissions. So when your application asks "IsInRole()", it does it like this:

bool result = currentPrincipal.IsInRole(requiredPermission);

Notice that I am "misusing" the IsInRole() method by passing in the name of a permission, not the name of a role. But that's OK, assuming I've loaded my principal object with a list of permissions instead of a list of roles. Remember, the IsInRole() method typically does nothing more than determine whether the string parameter value is in a list of known values. It doesn't really matter whether that list contains "roles" or "permissions".

And since, at runtime, no one cares about roles at all, there's no sense loading them into memory. This means the list of "roles" can instead be a list of "permissions".
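
A minimal sketch of this idea, using only the standard GenericIdentity and GenericPrincipal types (the permission names here are invented for illustration):

```csharp
using System;
using System.Security.Principal;

public class Program
{
    public static void Main()
    {
        // At login, load the principal's "roles" array with permissions,
        // merged from all of the user's roles by whatever back end you use.
        var identity = new GenericIdentity("jsmith");
        string[] permissions = { "Customer.View", "Customer.Edit", "Invoice.View" };
        var principal = new GenericPrincipal(identity, permissions);

        // At runtime the app asks about permissions, not roles, even though
        // the call is still IsInRole() - it is just a membership test.
        Console.WriteLine(principal.IsInRole("Customer.Edit"));   // True
        Console.WriteLine(principal.IsInRole("Invoice.Delete"));  // False
    }
}
```

The same principal works with any code that expects an IPrincipal, because nothing about IsInRole() cares what the strings actually mean.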

The great thing is that many people store their users, roles and permissions in some sort of relational store (like SQL Server). In that case a simple JOIN statement can retrieve all permissions for a user, merging all the user's roles together to build that list without returning the actual role values at all (because they are only useful at admin time).
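
For example, with a typical schema (the table and column names below are assumptions for illustration, not anything prescribed by CSLA .NET), the per-user permission list can be loaded with one query:

```csharp
public static class PermissionQueries
{
    // Merges the permissions of every role the user belongs to.
    // DISTINCT collapses permissions granted by more than one role,
    // and the role names themselves never leave the database.
    public const string LoadPermissionsForUser = @"
        SELECT DISTINCT p.PermissionName
        FROM UserRoles ur
        JOIN RolePermissions rp ON rp.RoleId = ur.RoleId
        JOIN Permissions p ON p.PermissionId = rp.PermissionId
        WHERE ur.UserName = @userName";
}
```

The resulting strings are exactly what you load into the principal's "roles" list at login.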

Wednesday, December 03, 2008 11:19:01 AM (Central Standard Time, UTC-06:00)