Rockford Lhotka

 Wednesday, December 17, 2008


I have released version 3.6 of CSLA .NET for Windows and CSLA .NET for Silverlight.

This is a major update to CSLA .NET, not only because it supports Silverlight, but also because there are substantial enhancements to the Windows/.NET version.

To my knowledge, this is also the first release of a major development framework targeting the Silverlight platform.

Here are some highlights:

  • Share more than 90% of your business object code between Windows and Silverlight
  • Powerful new UI controls for WPF, Silverlight and Windows Forms
  • Asynchronous data portal, to enable object persistence on a background thread (required in Silverlight, optional in Windows); a usage sketch follows this list
  • Asynchronous validation rules
  • Enhanced indexing in LINQ to CSLA
  • Numerous performance enhancements
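
To make the asynchronous data portal concrete, here is a minimal sketch of what a 3.6-style async fetch looks like from calling code. CustomerEdit and the helper class are hypothetical, and the exact BeginFetch/SingleCriteria signatures may differ slightly from the shipping framework, so treat this as illustrative rather than definitive:

using System;
using Csla;

[Serializable]
public class CustomerEdit : BusinessBase<CustomerEdit>
{
  // properties and DataPortal_XYZ methods omitted; placeholder only
}

public static class CustomerLoader
{
  // Starts a fetch on a background thread (required in Silverlight,
  // optional in Windows) and invokes one of the callbacks when it completes.
  public static void BeginLoad(int customerId,
    Action<CustomerEdit> loaded, Action<Exception> failed)
  {
    DataPortal.BeginFetch<CustomerEdit>(
      new SingleCriteria<CustomerEdit, int>(customerId),
      (sender, e) =>
      {
        if (e.Error != null)
          failed(e.Error);    // persistence failed
        else
          loaded(e.Object);   // the retrieved business object
      });
  }
}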

This version of CSLA .NET for Windows is covered in my new Expert C# 2008 Business Objects book (Expert VB 2008 Business Objects is planned for February 2009).

At this time there is no book covering CSLA .NET for Silverlight. However, most business object code (other than data access) is the same for Windows and Silverlight, so you will find the Expert C# 2008 Business Objects book very valuable for Silverlight as well. I am in the process of creating a series of training videos around CSLA .NET for Silverlight; watch for those in early 2009.

Version 3.6 marks the first time major development was done by a team of people. My employer, Magenic, has been a patron of CSLA .NET since the year 2000, but with this version they provided me with a development team, which is what enabled such a major version to be created in a relatively short period of time. You can see a full list of contributors, but I specifically want to thank Sergey Barskiy, Justin Chase and Nermin Dibek, because they are the primary contributors to 3.6. I seriously couldn't have asked for a better team!

This is also the first version where the framework code is only in C#. The framework can be used from any .NET language, including VB, C# and others, but the actual framework code is in C#.

There is a community effort underway to bring the VB codebase in sync with version 3.6, but at this time that code will not build. In any case, the official version of the framework is C#, and that is the version I recommend using for any production work.

In some ways version 3.6 is one of the largest releases of the CSLA .NET framework ever. If you are a Windows/.NET developer, this is a natural progression from previous versions of the framework, providing better features and capabilities. If you are a Silverlight developer, the value of CSLA .NET is even greater, because it provides an incredible array of features and productivity you won't find anywhere else today.

As always, if you are a new or existing CSLA .NET user, please join in the discussion on the CSLA .NET online forum.

Wednesday, December 17, 2008 4:13:21 PM (Central Standard Time, UTC-06:00)
 Friday, December 12, 2008

My previous blog post (Some thoughts on Windows Azure) has generated some good comments. I was going to answer with a comment, but I wrote enough that I thought I’d just do a follow-up post.

Jamie says, “Sounds akin to Architecture: 'convention over configuration'. I'm wholly in favour of that.”

Yes, this is very much a convention over configuration story, just at an arguably larger scope than we’ve seen thus far. Or at least showing a glimpse of that larger scope. Things like Rails are interesting, but I don't think they are as broad as the potential we're seeing here.

It is my view that the primary job of an architect is to eliminate choices. To create artificial restrictions within which applications are created. Basically to pre-make most of the hard choices so each application designer/developer doesn’t need to.

What I’m saying with Azure is that we’re seeing this effect at a platform level. A platform that has already eliminated most choices, and that has created restrictions on how applications are created – thus ensuring a high level of consistency, and potential portability.

Jason is confused by my previous post, in that I say Azure has a "lock-in problem", but that the restricted architecture is a good thing.

I understand your confusion, as I probably could have been more clear. I do think Windows Azure has a lock-in problem - it doesn't scale down into my data center, or onto my desktop. But the concept of a restricted runtime is a good one, and I suspect that concept may outlive this first run at Azure. A restricted architecture (and runtime to support it) doesn't have to cause lock-in at the hosting level. Perhaps at the vendor level - but those of us who live in the Microsoft space put that concern behind us many years ago.

Roger suggests that most organizations may not have the technical savvy to host the Azure platform in-house.

That may be a valid point – the current Azure implementation might be too complex for most organizations to administer. Microsoft didn't build it as a server product, so they undoubtedly made implementation choices that create complexity for hosting. This doesn’t mean the idea of a restricted runtime is bad. Nor does it mean that someone (possibly Microsoft) couldn't create such a restricted runtime that could be deployed within an organization, or in the cloud. Consider that there is a version of Azure that runs on a developer's workstation already - so it isn't hard to imagine a version of Azure that I could run in my data center.

Remember that we’re talking about a restricted runtime, with a restricted architecture and API. Basically a controlled subset of .NET. We’re already seeing this work – in the form of Silverlight. Silverlight is largely compatible with .NET, even though they are totally separate implementations. And Moonlight demonstrates that the abstraction can carry to yet another implementation.

Silverlight has demonstrated that most business applications only use 5 of the 197 megabytes in the .NET 3.5 framework download to build the client. Just how much is really required to build the server parts? A different 5 megabytes? 10? Maybe 20 tops?

If someone had a defined runtime for server code, like Silverlight is for the client, I think it becomes equally possible to have numerous physical implementations of the same runtime. One for my laptop, one for my enterprise servers and one for Microsoft to host in the cloud. Now I can write my app in this .NET subset, and I can not only scale out, but I can scale up or down too.

That’s where I suspect this will all end up, and spurring this type of thinking is (to me) the real value of Azure in the short term.

Finally, Justin rightly suggests that we can use our own abstraction layer to be portable to/from Azure even today.

That's absolutely true. What I'm saying is that I think Azure could light the way to a platform that already does that abstraction.
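
To illustrate the kind of abstraction Justin is describing, here is a minimal sketch; all names are hypothetical and nothing here is an Azure API. The application codes against an interface, and the local or cloud implementation is chosen at deployment time.

public interface ITableStorage
{
  void Save(string table, string key, string value);
  string Load(string table, string key);
}

// Backed by a local database or the file system.
public class LocalTableStorage : ITableStorage
{
  public void Save(string table, string key, string value) { /* write to SQL Server or disk */ }
  public string Load(string table, string key) { /* read it back */ return null; }
}

// Backed by cloud storage; only this class changes (or gets swapped out)
// when the app moves on or off the cloud platform.
public class CloudTableStorage : ITableStorage
{
  public void Save(string table, string key, string value) { /* call the cloud storage service */ }
  public string Load(string table, string key) { /* call the cloud storage service */ return null; }
}

The catch, as the next paragraph illustrates, is that today each of us has to build and maintain that abstraction layer ourselves.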

Many, many years ago I worked on a project to port some software from a Prime to a VAX. It was only possible because the original developers had (and I am not exaggerating) abstracted every point of interaction with the OS. Everything that wasn't part of the FORTRAN-77 spec was in an abstraction layer. I shudder to think of the expense of doing that today - of abstracting everything outside the C# language spec - basically all of .NET - so you could be portable.

So what we need, I think, is this server equivalent to Silverlight. Azure is not that - not today - but I think it may start us down that path, and that'd be cool!

Friday, December 12, 2008 11:33:03 AM (Central Standard Time, UTC-06:00)
 Thursday, December 11, 2008

At PDC, Microsoft announced Windows Azure - their Windows platform for cloud computing. There's a lot we don't know about Azure, including the proper way to pronounce the word. But that doesn't stop me from being both impressed and skeptical about what I do know.

In the rest of this post, as I talk about Azure, I'm talking about the "real Azure". One part of Azure is the idea that Microsoft would host my virtual machine in their cloud - but to me that's not overly interesting, because that's already a commodity market (my local ISP does this, Amazon does this - who doesn't do this??). What I do think is interesting is the more abstract Azure platform, where my code runs in a "role" and has access to a limited set of pre-defined resources. This is the Azure that forces the use of a scale-out architecture.

I was impressed by just how much Microsoft was able to show and demo at the PDC. For a fledgling technology that is only partially defined, it was quite amazing to see end-to-end applications built on stage, from the UI to business logic to data storage. That was unexpected and fun to watch.

I am skeptical as to whether anyone will actually care. I think whether people care will be determined primarily by the price point Microsoft sets. But I also think it will depend on how Microsoft addresses the "lock-in question".

I expect the pricing to end up in one of three basic scenarios:

  1. Azure is priced for the Fortune 500 (or 100) - in which case the vast majority of us won't care about it
  2. Azure is priced for the small to mid-sized company space (SMB) - in which case quite a lot of us might be interested
  3. Azure is priced to be accessible for hosting your blog, or my blog, or www.lhotka.net or other small/personal web sites - in which case the vast majority of us may care a lot about it

These aren't entirely mutually exclusive. I think 2 and 3 could both happen, but I think 1 is by itself. Big enterprises have such different needs than people or SMB, and they do business in such different ways compared to most businesses or people, that I suspect we'll see Microsoft either pursue 1 (which I think would be sad) or 2 and maybe 3.

But there's also the lock-in question. If I build my application for Azure, Microsoft has made it very clear that I will not be able to run my app on my own servers. If I need to downsize, or scale back, I really can't. Once you go to Azure, you are there permanently. I suspect this will be a major sticking point for many organizations. I've seen quotes by Microsoft people suggesting that we should all factor our applications into "the part we host" and "the part they host". But even assuming we're all willing to do that work, and introduce that complexity, this still means that part of my app can never run anywhere but on Microsoft's servers.

I suspect this lock-in issue will be the biggest single roadblock to adoption for most organizations (assuming reasonable pricing - which I think is a given).

But I must say that even if Microsoft doesn't back down on this, and even if that does block the success of "Windows Azure" as we see it now, that's probably OK.

Why?

Because I think the biggest benefit to Azure is one important concept: an abstract runtime.

If you or I write a .NET web app today, what are the odds that we can just throw it on a random Windows Server 200x box and have it work? Pretty darn small. The same is true for apps written for Unix, Linux or any other platform.

The reason apps can't just "run anywhere" is because they are built for a very broad and ill-defined platform, with no consistently defined architecture. Sure you might have an architecture. And I might have one too. But they are probably not the same. The resulting applications probably use different parts of the platform, in different ways, with different dependencies, different configuration requirements and so forth. The end result is that a high level of expertise is required to deploy any one of our apps.

This is one reason the "host a virtual machine" model is becoming so popular. I can build my app however I choose. I can get it running on a VM. Then I can deploy the whole darn VM, preconfigured, to some host. This is one solution to the problem, but it is not very elegant, and it is certainly not very efficient.

I think Azure (whether it succeeds or fails) illustrates a different solution to the problem. Azure defines a limited architecture for applications. It isn't a new architecture, and it isn't the simplest architecture. But it is an architecture that is known to scale. And Azure basically says "if you want to play, you play my way". None of this random use of platform features. There are few platform features, and they all fit within this narrow architecture that is known to work.

To me, this idea is the real future of the cloud. We need to quit pretending that every application needs a "unique architecture" and realize that there are just a very few architectures that are known to work. And if we pick a good one for most or all apps, we might suffer a little in the short term (to get onto that architecture), but we win in the long run because our apps really can run anywhere. At least anywhere that can host that architecture.

Now in reality, it is not just the architecture. It is a runtime and platform and compilers and tools that support that architecture. Which brings us back to Windows Azure as we know it. But even if Azure fails, I suspect we'll see the rise of similar "restricted runtimes". Runtimes that may leverage parts of .NET (or Java or whatever), but disallow the use of large regions of functionality, and disallow the use of native platform features (from Windows, Linux, etc). Runtimes that force a specific architecture, and thus ensure that the resulting apps are far more portable than the typical app is today.

Thursday, December 11, 2008 6:31:31 PM (Central Standard Time, UTC-06:00)
 Friday, December 5, 2008

CSLA .NET for Windows and CSLA .NET for Silverlight version 3.6 RC1 are now available for download.

The Expert C# 2008 Business Objects book went to the printer on November 24 and should be available very soon.

I'm working to have a final release of 3.6 as quickly as possible. The RC1 release includes a small number of bug fixes based on issues reported through the CSLA .NET forum. Barring any show-stopping bug reports over the next few days, I expect to release version 3.6 on December 15.

If you intend to use version 3.6, please download this release candidate and let me know if you encounter any issues. Thank you!

Friday, December 5, 2008 10:30:44 AM (Central Standard Time, UTC-06:00)
 Wednesday, December 3, 2008

One of the topic areas I get asked about frequently is authorization. Specifically role-based authorization as supported by .NET, and how to make that work in the "real world".

I get asked about this because CSLA .NET (for Windows and Silverlight) follows the standard role-based .NET model. In fact, CSLA .NET rests directly on the existing .NET infrastructure.

So what's the problem? Why doesn't the role-based model work in the real world?

First off, it is important to realize that it does work for some scenarios. It isn't bad for coarse-grained models where users are authorized at the page or form level. ASP.NET directly uses this model for its authorization, and many people are happy with that.

But it doesn't match the requirements of a lot of organizations in my experience. Many organizations have a slightly more complex structure that provides better administrative control and manageability.

Whether a user can get to a page/form, or can view a property or edit a property is often controlled by a permission, not a role. In other words, users are in roles, and a role is essentially a permission set: a list of permissions the role has (or doesn't have).

This doesn't map very well onto the .NET IPrincipal interface, which only exposes an IsInRole() method. Finding out whether the user is in a role isn't particularly useful, because the application really needs to call some sort of HasPermission() method.

In my view the answer is relatively simple.

The first step is understanding that there are two concerns here: the administrative issues, and the runtime issues.

At administration time the concepts of "user", "role" and "permission" are all important. Admins will associate permissions with roles, and roles with users. This gives them the kind of control and manageability they require.

At runtime, when the user is actually using the application, the roles are entirely meaningless. However, if you consider that IsInRole() can be thought of as "HasPermission()", then there's a solution. When you load the .NET principal with a list of "roles", you really load it with a list of permissions. So when your application asks "IsInRole()", it does it like this:

bool result = currentPrincipal.IsInRole(requiredPermission);

Notice that I am "misusing" the IsInRole() method by passing in the name of a permission, not the name of a role. But that's OK, assuming that I've loaded my principal object with a list of permissions instead of a list of roles. Remember, the IsInRole() method typically does nothing more than determine whether the string parameter value is in a list of known values. It doesn't really matter whether that list of values contains "roles" or "permissions".

And since, at runtime, no one cares about roles at all, there's no sense loading them into memory. This means the list of "roles" can instead be a list of "permissions".
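
Here is a minimal sketch of that idea, assuming a hypothetical PermissionPrincipal class: the principal is loaded with the user's permissions, and IsInRole() simply checks that list, so all the existing role-based infrastructure keeps working unchanged.

using System.Collections.Generic;
using System.Security.Principal;

public class PermissionPrincipal : IPrincipal
{
  private readonly HashSet<string> _permissions;

  public PermissionPrincipal(IIdentity identity, IEnumerable<string> permissions)
  {
    Identity = identity;
    _permissions = new HashSet<string>(permissions);
  }

  public IIdentity Identity { get; private set; }

  // The "role" parameter is really a permission name.
  public bool IsInRole(string role)
  {
    return _permissions.Contains(role);
  }
}

// Optional convenience so calling code reads naturally:
public static class PrincipalExtensions
{
  public static bool HasPermission(this IPrincipal principal, string permission)
  {
    return principal.IsInRole(permission);
  }
}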

The great thing is that many people store their users, roles and permissions in some sort of relational store (like SQL Server). In that case it is a simple JOIN statement to retrieve all permissions for a user, merging all the user's roles together to get that list, and not returning the actual role values at all (because they are only useful at admin time).
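
As a sketch of that query (the Users/UserRoles/RolePermissions/Permissions tables here are hypothetical; adjust to your own schema), the data access might look something like this:

using System.Collections.Generic;
using System.Data.SqlClient;

public static class PermissionRepository
{
  // Returns the distinct permission names for a user; the roles are joined
  // through but never returned, because they only matter at admin time.
  public static List<string> GetPermissions(string connectionString, int userId)
  {
    const string sql =
      @"SELECT DISTINCT p.PermissionName
        FROM UserRoles ur
        INNER JOIN RolePermissions rp ON rp.RoleId = ur.RoleId
        INNER JOIN Permissions p ON p.PermissionId = rp.PermissionId
        WHERE ur.UserId = @UserId";

    var result = new List<string>();
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
      command.Parameters.AddWithValue("@UserId", userId);
      connection.Open();
      using (var reader = command.ExecuteReader())
      {
        while (reader.Read())
          result.Add(reader.GetString(0));
      }
    }
    return result;
  }
}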

Wednesday, December 3, 2008 11:19:01 AM (Central Standard Time, UTC-06:00)
 Wednesday, November 19, 2008

As the Expert C# 2008 Business Objects book becomes available (now in alpha ebook form, and in paper by the end of this year), I'm getting more questions about what book to buy for different versions of CSLA .NET, and for different .NET technologies.

(Expert VB 2008 Business Objects should be out in February 2009. The one unknown with this effort is how quickly the team of volunteers can get the VB port of CSLA .NET 3.6 complete, as I don't think we can release the book until the code is also available for download.)

I do have a summary of book editions that is often helpful in understanding what book(s) cover which version of CSLA .NET. But I thought I'd slice and dice the information a little differently to help answer some of the current questions.

First, by version of CSLA .NET:

  • 3.6: Expert 2008 Business Objects
  • 3.5: no book
  • 3.0: Expert 2005 Business Objects, CSLA .NET Version 2.1 Handbook, and Using CSLA .NET 3.0
  • 2.1: Expert 2005 Business Objects and CSLA .NET Version 2.1 Handbook
  • 2.0: Expert 2005 Business Objects

Next, by .NET technology:

  • WPF: Expert 2008 Business Objects
  • WCF: Expert 2008 Business Objects
  • Silverlight: no book yet (Expert 2008 Business Objects gets you 80% there, though)
  • WF: Expert 2008 Business Objects and Using CSLA .NET 3.0 (for the WF example)
  • ASP.NET MVC: no book yet
  • ASP.NET Web Forms: Expert 2008 Business Objects
  • Windows Forms: Expert 2008 Business Objects and Using CSLA .NET 3.0 (for important data binding info)
  • LINQ to CSLA: Expert 2008 Business Objects
  • LINQ to SQL: Expert 2008 Business Objects
  • ADO.NET Entity Framework: Expert 2008 Business Objects (limited coverage)
  • .NET Remoting: Expert 2005 Business Objects
  • asmx Web Services: Expert 2005 Business Objects

As always, here are the locations to download CSLA .NET for Windows and to download CSLA .NET for Silverlight.

Wednesday, November 19, 2008 9:17:58 AM (Central Standard Time, UTC-06:00)
 Wednesday, November 12, 2008

You can now purchase the "alpha" of Expert C# 2008 Business Objects online as an ebook directly from the Apress web site. This gets you early access to the content, and access to the final ebook content when it is complete.

While their website indicates you can provide feedback during the writing process, the reality with this book is that I'm only three chapters from being completely done with the final review, so it is terribly close to being final already :)

Wednesday, November 12, 2008 8:12:05 AM (Central Standard Time, UTC-06:00)
 Wednesday, November 5, 2008

Magenic is sponsoring a webcast about CSLA .NET for Silverlight on November 13. The presenter is my colleague, Sergey Barskiy. Sergey is one of the lead developers on the CSLA .NET 3.6 project, and this should be a very educational webcast.

Wednesday, November 5, 2008 1:31:21 PM (Central Standard Time, UTC-06:00)
 Saturday, November 1, 2008

I'm giving four talks at VS Live in Dallas, plus participating in a community panel Tuesday evening. You can see my list of talks here, which includes a talk specifically focused on CSLA .NET (due to popular demand from attendees at previous VS Live shows).

I have a special promo code you can use when registering to get $300 off registration. Just use code SPLHO when registering online.

Saturday, November 1, 2008 10:54:48 AM (Central Standard Time, UTC-06:00)
 Friday, October 31, 2008

PDC 2008 was a lot of fun - a big show, with lots of announcements, lots of sessions and some thought-provoking content. I thought I'd throw out a few observations. Not really conclusions, as those take time and reflection, so just some observations.

Windows Azure, the operating system for the cloud, is intriguing. For a first run at this, the technology seems surprisingly complete and contains a pretty reasonable set of features. I can easily see how web sites, XML services and both data-centric and compute-centric processing could be built for this platform. For that matter, it looks like it would be perhaps a week's work to get my web site ported over to run completely in Azure.

The real question is whether that would even make sense, and that comes down to the value proposition. One big component of value is price. Like anyone else, I pay a certain amount to run my web site. Electricity, bandwidth, support time, hardware costs, software costs, etc. I've never really sorted out an exact cost, but it isn't real high on a per-month basis. And I could host on any number of .NET-friendly hosting services that have been around for years, and some of them are pretty inexpensive. So the question becomes whether Azure will be priced in such a way that it is attractive to me. If so, I'm excited about Azure!! If not, then I really don't care about Azure.

I suspect most attendees went through a similar thought process. If Microsoft prices Azure for "the enterprise," then 90% of the developers in the world simply don't care about Azure. But if Microsoft prices Azure for small to mid-size businesses, and for the very small players (like me), then 90% of the developers in the world should (I think) really be looking at this technology.

Windows 7 looks good to me. After the Tuesday keynote I was ready to install it now. As time goes by the urgency has faded a bit - Vista has stabilized nicely over the past 6-8 months and I really like it now. Windows 7 has some nice-sounding new features though. Probably the single biggest one is reduced system resource requirements. If Microsoft can deliver on that part of the promise I'll be totally thrilled. Though I really do want multi-monitor RDP and the ability to manage, mount (and even boot from) vhd files directly from the host OS.

In talking to friends of mine that work at Microsoft, my level of confidence in W7 is quite high. A couple of them have been running it for some time now, and while it is clearly pre-beta, they have found it to be a very satisfying experience. When I get back from all my travels I do think I'll have to buy a spare HD for my laptop and give it a try myself.

The Oslo modeling tools are also interesting, though they are more future-looking. Realistically this idea of model-driven development will require a major shift in how our industry thinks about and approaches custom software development. Such a massive shift will take many years to occur, regardless of whether the technology is there to enable it. It is admirable that Microsoft is taking such a gamble - building a set of tools and technologies for something that might become acceptable to developers in the murky future. Their gamble will pay off if we collectively decide that the world of 3GL development really is at an end and that we need to move to higher levels of abstraction. Of course we could decide to stick with what has (and hasn't) worked for 30+ years, in which case modeling tools will go the way of CASE.

But even if some of the really forward-looking modeling ideas never become palatable, many of the things Microsoft is doing to support modeling are immediately useful. Enhancements to Windows Workflow are a prime example, as is the M language. I've had a hard time getting excited about WF, because it has felt like a graphical way to do FORTRAN. But some of the enhancements to WF directly address my primary concerns, and I can see myself getting much more interested in WF in the relatively near future. And the ability of the M language to define other languages (create DSLs), where I can create my own output generator to create whatever I need - now that is really, really cool!

Once I get done with my book and all my fall travel, you can bet I'll be exploring the use of M to create a specialized language to simplify the creation of CSLA .NET business classes :)

There were numerous talks about .NET 4.0 and the future of C# and VB.

Probably the single biggest thing on the language front is that Microsoft has finally decided to sync VB and C# so they have feature parity. Enough of this back-and-forth with different features; the languages will now just move forward together. A few years ago I would have argued against this, because competition breeds innovation. But today I don't think it matters, because the innovation is coming from F#, Ruby, Python and various other languages and initiatives. Both VB and C# have such massive pre-existing code bases (all the code we've written) that they can't move rapidly or explore radical ideas - while some of these other languages are more free to do just that.

The framework itself has all sorts of changes and improvements. I spent less time looking at this than at Azure and Oslo though, so I honestly just don't have a lot to say on it right now. I look at .NET 4.0 and Visual Studio 2010 as being more tactical - things I'll spend a lot of time on over the next few months anyway - so I didn't see so much need to spend my time on it during PDC.

Finally, there were announcements around Silverlight and WPF. If anyone doubts that XAML is the future of the UI on Windows and (to some degree) the web, now is the time to wake up and smell the coffee. I'm obviously convinced Silverlight is going to rapidly become the default technology for building business apps, with WPF and Ajax as fallback positions, and everything at the PDC simply reinforced this viewpoint.

The new Silverlight and WPF toolkits provide better parity between the two XAML dialects, and show how aggressively Microsoft is working to achieve true parity.

But more important is the Silverlight intersection with Azure and Live Mesh. The fact that I can build smart client apps that totally host in Azure or the Mesh is compelling, and puts Silverlight a notch above WPF in terms of being the desired start-point for app development. Yes, I really like WPF, but even if it can host in Azure it probably won't host in Mesh, and in neither case will it be as clean or seamless.

So while I fully appreciate that WPF is good for that small percentage of business apps that need access to DirectX or rich client-side resources, I still think most business apps will work just fine with access to the monitor/keyboard/mouse/memory/CPU provided by Silverlight.

A couple people asked why I think Silverlight is better than Ajax. To me this is drop-dead simple. I can write a class in C# or VB that runs on the client in Silverlight. I can write real smart client applications that run in the browser. And I can run that exact same code on the server too. So I can give the user a very interactive experience, and then re-run that same code on the server because I don't trust the client.
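
As a trivial sketch of what I mean (the class and rule are hypothetical), the same source file can be compiled into both a Silverlight class library and a full .NET class library, for example by linking the file into both projects, so the identical rule runs in the browser for fast interactivity and again on the server where the client isn't trusted:

public static class OrderRules
{
  // Returns null when the order line is valid, otherwise an error message.
  // Exactly this code runs in the Silverlight client and on the server.
  public static string CheckQuantity(int quantity, int quantityOnHand)
  {
    if (quantity <= 0)
      return "Quantity must be greater than zero";
    if (quantity > quantityOnHand)
      return "Quantity exceeds the amount on hand";
    return null;
  }
}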

To do that in Ajax you'd either have to write your code twice (in C# and in Javascript), or you'd have to do tons of server calls to simulate the interactivity provided by Silverlight - and that obviously won't scale nearly the same as the more correct Silverlight solution.

To me it is a no-brainer - Ajax loses when it comes to building interactive business apps like order entry screens, customer maintenance screens, etc.

That's not to say Ajax has no home. The web and browser world is really good at displaying data, and Ajax makes data display more interesting than simple HTML. I strongly suspect that most "Silverlight" apps will make heavy use of HTML/Ajax for data display, but I just can't see why anyone would willingly choose to create data entry forms or other interactive parts of their app outside of Silverlight.

And that wraps up my on-the-flight-home summary of thoughts about PDC.

Next week I'm speaking at the Patterns and Practices Summit in Redmond, and then I'll be at Tech Ed EMEA in Barcelona. I'm doing a number of sessions at both events, but what's cool is that at each event I'm doing a talk specifically about CSLA .NET for Silverlight. And in December I'll be at VS Live in Dallas, where I'll also give a talk directly on using CSLA .NET.

Friday, October 31, 2008 7:10:16 PM (Central Standard Time, UTC-06:00)