Rockford Lhotka

 Wednesday, January 28, 2009

I just ran into an odd “feature” of Silverlight data binding that I assume must actually be a bug.

I have an object with a string property that has a public get, but private set:

public string Name
{
  get { return GetProperty(NameProperty); }
  private set { SetProperty(NameProperty, value); }
}

You would think that no code outside the business class could call the set block, because it is private. Certainly in .NET this would appear as a read-only property to any code outside the class.

But in Silverlight, data binding is perfectly capable of calling this code. Worse, the reflection PropertyInfo object for this property returns true for CanWrite, so this appears as a read-write property to any code.

I don’t think the problem is in the C# compiler, because if I write code that tries to set the property I get a compile-time error saying that’s not allowed.

Also, it isn’t a problem with reflection, because trying to set the value using the SetValue() method of a PropertyInfo object fails with the expected MethodAccessException (Silverlight reflection doesn’t allow you to manipulate private members). This, in particular, is weird, because if CanWrite returns true it should be safe to write to a property…
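
To make the behavior concrete, here is a minimal sketch of that reflection check, assuming a hypothetical Person class that declares the Name property shown above:

using System;
using System.Reflection;

public static class ReflectionCheck
{
  public static void Run()
  {
    var person = new Person();
    PropertyInfo prop = typeof(Person).GetProperty("Name");

    // Reports True, even though the set block is private
    Console.WriteLine(prop.CanWrite);

    try
    {
      // Reflection honors the private scope, so in Silverlight this throws
      prop.SetValue(person, "Rocky", null);
    }
    catch (MethodAccessException)
    {
      Console.WriteLine("SetValue failed as expected");
    }
  }
}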

Update: I just checked .NET (being the suspicious sort) and it turns out that CanWrite returns true for a property with a private set block in .NET too. And of course reflection won't set the property due to the scope issue, just like Silverlight. But WPF data binding also doesn't call the private set block, whereas Silverlight somehow cheats and does call it - so at least the scope of the issue is narrowed a bit.

Which raises the question: how is data binding bypassing the normal property scope protections so it can manipulate a private member? With the next question being: how can I do it too? :) 

Seriously, this is the first place I’ve found where Microsoft has code in Silverlight that totally bypasses the otherwise strict rules, and it is a bit worrisome (and a pain in the @$$).

Anyway, the workaround at the moment is to use the old-fashioned approach and create a separate mutator method:

public string Name
{
  get { return GetProperty(NameProperty); }
}

private void SetName(string value)
{
  SetProperty(NameProperty, value);
}

This works, because when there’s no set block at all the property is actually read-only to everyone, inside and outside the class. Kind of inconvenient for the other code inside the business class that wants to set the property, because that code must use this non-standard mutator method – but at least it works…

Wednesday, January 28, 2009 3:35:02 PM (Central Standard Time, UTC-06:00)
 Friday, January 23, 2009

For some years now there’s been this background hum about “domain specific languages,” or DSLs. Whether portrayed as graphical cartoon-to-code tools or as specialized textual constructs, DSLs are programming languages designed to abstract the concepts around a specific problem domain.

A couple weeks ago I delivered the “Lap around Oslo” talk for the MDC in Minneapolis. That was fun, because I got to demonstrate MSchema (a DSL for creating SQL tables and inserting data), and show how it can be used as a building-block to create a programming language that looks like this:

“Moving Pictures” by “Rush” is awesome!

It is hard to believe that this sentence is constructed using a programming language, but that’s the power of a DSL. That’s just fun!

I also got to demonstrate MService, a DSL for creating WCF services implemented using Windows Workflow (WF). The entire thing is textual, even the workflow definition. There’s no XML, no C#-class-as-a-service, nothing awkward at all. The entire thing is constructed using terms you’d use when speaking aloud about building a service.

A couple years ago I had a discussion with some guys in Microsoft’s connected systems division. My contention in that conversation was that a language truly centered on services/SOA would have a first-class construct for a service, and that C# and VB are poor languages for this because we have to use a class and fake the service construct through inheritance or interfaces.
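
For illustration, here is roughly what that faking looks like with WCF today – a sketch with hypothetical contract and class names:

using System.ServiceModel;

// The "service" is really just an interface contract plus a class that
// implements it - there is no first-class service construct in the language.
[ServiceContract]
public interface ICustomerService
{
  [OperationContract]
  string GetCustomerName(int id);
}

public class CustomerService : ICustomerService
{
  public string GetCustomerName(int id)
  {
    return "Customer " + id;
  }
}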

This MService DSL is exactly the kind of thing I was talking about, and in this regard DSLs are very cool!

So DSLs are fun, and they are cool. Why, then, might they be a bad idea?

If you’ve been in the industry long enough, you may have encountered companies that built their enterprise systems on in-house languages – often a variation on Basic, C or some other common language. Some hot-shot developer decided none of the existing languages at the time could quite fit the bill. Or some adventurous business person didn’t want the vendor lock-in that came with using VendorX’s compiler. So these companies built their own languages (usually interpreted, sometimes compiled), and they built entire enterprise systems on this one-off language.

As a consultant through the 1990s, I encountered a number of these companies. You might think this was rare, but it was surprisingly common. They were all in a bad spot, having a lot of software built on a language and tools known by absolutely no one outside the company. To hire a programmer, they had to con someone into learning a set of totally dead-end skills. And if a programmer left, the company lost not only domain knowledge, but a very large percentage of the global population of programmers who knew their technology.

How does this relate to DSLs?

Imagine you work at a bio-medical manufacturing company. Suppose some hot-shot developer falls in love with Microsoft’s M language and creates this really awesome programming language for bio-medical manufacturing software development. A language that abstracts concepts, and allows developers to write a line of this DSL instead of a page of C#. Suppose the company loves this DSL, and it spreads through all the enterprise systems.

Then fast-forward maybe 5 years. Now this company has major enterprise systems written in a language known only by the people who work there. To hire a programmer, they need to con someone into learning a set of totally dead-end skills. And if a programmer leaves, the company has not only lost domain knowledge, but also one of the few people in the world who know this DSL.

To me, this is the dark side of the DSL movement.

It is one thing for a vendor like Microsoft to use M to create a limited set of globally standard DSLs for things like creating services, configuring servers or other broad tasks. It is a whole different issue for individual companies to invent their own one-off languages.

Sure, DSLs can provide amazing levels of abstraction, and thus productivity. But that doesn’t come for free. I suspect this will become a major issue over the next decade, as tools like Oslo and related DSL concepts work their way into the mainstream.

Friday, January 23, 2009 5:30:57 PM (Central Standard Time, UTC-06:00)

One of my kid’s machines just died – hard drive crash. In the past, this has been a pain, because I’d have to reinstall the OS (including finding and installing all the drivers) and he’d have to reinstall all his games, find the keys, all that stuff. It could literally take days or weeks to get the computer back to normal.

However, a few months ago I picked up the HP Windows Home Server appliance. It does regular (at least weekly, if not daily) automatic image backups of all the machines in my house. I bought it because a couple colleagues of mine had machines crash and they were singing the praises of WHS in terms of getting themselves back online quickly and easily.

I am now officially joining the chorus!

Here’s what I did: Pop out the bad hard drive, and put in an empty new one. Boot off the system rescue CD, walk through a simple wizard, wait 90 minutes for the restore and that’s it – he’s totally up and running as though nothing happened. Better actually, because this new hard drive is 3x bigger than the original – WHS simply restored to the new, bigger, drive without a complaint.

I guess it can be a little more complex with nonstandard network or hard drive drivers (How to restore a PC from a WHS after hard drive fails), but even that doesn’t look too bad. But in my case, WHS found the hard drive and network card automatically, so it was a total no-brainer.

The thing is, I’m not used to computers acting like, or being, an appliance. But the HP WHS box really is an appliance – the kind of thing a regular home user could install. The machine comes with a fold-out instruction poster: 6 steps to install (things like “plug in power”, “plug in network”, “push the On button”, etc). And it does these automatic backups in a way that deals with increasing volumes of data by warning you BEFORE the server runs out of space (unlike Vista’s built-in backup, which is terrible).

Start running out of space? Just pop in a new hard drive – without even shutting down the server. I’ve added two since I got the box. All PCs should work this way!!

The backups appear to be very smart. I’m backing up numerous machines, and the total backups use less space than if you added all the backed-up content together. I assume they are using compression, but I also think they are doing smart things like not backing up Windows XP and Vista separately for each machine, because those are the same across numerous machines. As are many of the games my kids and I play.

What’s even better is that WHS does video and audio streaming. I’ve been putting all our media on the box, and watching it from the Xbox or media PC in other rooms.

There are more features, but I don’t want to sound like a spec sheet.

The point is that I’ve been entirely impressed by the simplicity and consumer-friendliness of this product since I took it out of the box (did I mention it is a really nice-looking mini-tower?). The fact that the computer restore feature works exactly as advertised is just further confirmation that it was a great purchase.

I seriously think that every home that has one or more computers with any data that shouldn’t be lost needs a WHS. Yes, that probably means you! ;)

Friday, January 23, 2009 3:35:40 PM (Central Standard Time, UTC-06:00)
 Thursday, January 15, 2009

If you read the comments on my blog at all, you may have periodically run across a multi-page diatribe from a guy calling himself “Rich” or “Tony”. This guy continually reposts the same comment on my blog (among others). The comment, at first glance, appears to be meaningful and valid – it discusses the merits of the Strangeloop AppScaler, a great product that is provided by the company of a friend of mine.

I assume this spammer is somehow disgruntled with Strangeloop, has a personal vendetta against my friend, wants to give Aussies a bad name, or is simply unhinged. Probably a combination of these.

Anyway, I keep deleting the spam. He keeps adding more. While it is a small thing in the scheme of things, there’s not a lot I can do directly other than provide as much information as possible (including IP addresses) to the Australian authorities (his IP addresses are consistently Australian).

So I figured I’d just let the world at large know that this person exists, so that if you run across these comments, or if you are another blogger targeted by this criminal, you know what is going on.

And if you are another blogger who’s been targeted, please contact me, and I’ll help you get in touch with the people investigating the issue so you can help provide tracking information. I know they’ve already narrowed the search dramatically, and I’m sure continued trace information will help identify the person so they can take care of him.

And if you are the spammer, and I’m sure he’ll read this at some point, all I can say is: please get a life.

Thursday, January 15, 2009 9:32:26 PM (Central Standard Time, UTC-06:00)
 Monday, January 12, 2009

The Microsoft Patterns and Practices group has a new Application Architecture Guide available.

Architecture, in my view, is primarily about restricting options.

Or to put it another way, it is about making a set of high level, often difficult, choices up front. The result of those choices is to restrict the options available for the design and construction of a system, because the choices place a set of constraints and restrictions around what is allowed.

When it comes to working in a platform like Microsoft .NET, architecture is critical. This is because the platform provides many ways to design and implement nearly anything you’d like to do. There are around 9 ways to talk to a database – from Microsoft, not counting the myriad 3rd party options. The number of ways to build web apps continues to grow, etc. The point I’m making is that if you just throw the entire .NET framework at a dev group you’ll get a largely random result that may or may not actually meet the short, medium and long-term needs of your business.

Developing an architecture first allows you to rationally evaluate the various options, discard those that don’t fit the business and application requirements, and allow use of only those that do meet the needs.

An interesting side-effect of this process is that your developers may disagree. They may only see short-term issues, or purely technical concerns, and may not understand some of the medium/long term issues or broader business concerns. And that’s OK. You can either say “buck up and do what you are told”, or you can try to educate them on the business issues (recognizing that not all devs are particularly business-savvy). But in the end, you do need some level of buy-in from the devs or they’ll work against the architecture, often to the detriment of the overall system.

Another interesting side-effect of this process is that an ill-informed or disconnected architect might create an architecture that is really quite impractical. In other words, the devs are right to be up in arms. This can also lead to disaster. I’ve heard it said that architects shouldn’t code, or can’t code. If your application architect can’t code, they are in trouble, and your application probably is too. On the other hand, if they don’t know every nuance of the C# compiler, that’s probably good too! A good architect can’t afford to be that deep into any given tool, because they need more breadth than a hard-core coder can achieve.

Architects live in the intersection between business and technology.

As such they need to be able to code, and to have productive meetings with business stakeholders – often both in the same day. Worse, they need to have some understanding of all the technology options available from their platform – and the Microsoft platform is massive and complex.

Which brings me back to the Application Architecture Guide. This guide won’t solve all the challenges I’m discussing. But it is an invaluable tool in any .NET application architect’s arsenal. If you are, or would like to be, an architect or application designer, you really must read this book!

Monday, January 12, 2009 11:29:13 AM (Central Standard Time, UTC-06:00)
 Tuesday, January 6, 2009

VS Live San Francisco is coming up soon – February 23-27.


As one of the co-chairs for this conference, I’m really pleased with the speaker and topic line-up. In my view, this is one of the best sets of content and speakers VS Live has ever put forward.

Even better, this VS Live includes the MSDN Developer Conference (MDC) content. So you can get a recap of the Microsoft PDC, with all its forward-looking content, and then enjoy the core of VS Live with its pragmatic and independent information about how you can be productive using Microsoft’s technologies today, and into the future.

Perhaps best of all are the keynotes. Day one starts with a keynote containing secret content. Seriously – we can’t talk about it, and Microsoft isn’t saying – but it is going to be really cool. The kind of thing you’ll want to hear in person so you can say “I was there when...”. And Day two contains some of the best WPF/XAML apps on the planet. We’re talking about apps that not only show off the technology, but show how the technology can be used in ways that will literally change the world – no exaggeration! Truly inspirational stuff, on both a personal and professional level!

I know travel budgets are tight, and the economy is rough. All I can say is that you should seriously consider VS Live SF if you have the option. I think you’ll thank me later.

Tuesday, January 6, 2009 11:43:42 AM (Central Standard Time, UTC-06:00)
 Saturday, January 3, 2009

I am not a link blogger, but I can’t help but post this link that shows so clearly why the Single Responsibility Principle is important.

I got this link from another blog I just discovered, The Morning Brew. I’m not sure why I only found out about the Brew now, because it appears to be the perfect resource, for me at least.

I mostly quit reading blogs about 6 months ago – I discovered that reading blogs had largely supplanted doing real work, and that could only lead to really bad results!!

Interestingly, I also mostly quit listening to any new music about 6 months ago. Browsing through the Zune.net catalog for interesting music had largely supplanted being entertained by music (there’s a lot of crap music out there, and I got tired of listening to it).

Is there a parallel here? I think so.

In October (or so), Zune.net introduced a Pandora-like feature where the Zune service creates a “virtual radio station” based on the music you listen to most. This is exactly what I want! I want a DJ (virtual or otherwise) who filters out the crap and provides me with good (and often new) music. The whole Channels idea in Zune.net kept me from canceling my subscription, and the more recent addition of 10 track purchases per month for free clinched the deal.

And I want the same thing for blogs. I don’t want to filter through tons of blog posts about good restaurants, pictures of kids or whatever else. (I’m sure those are interesting to some people, but I read blogs for technical content, and only read friends’ blogs for non-technical content.) And this is where The Morning Brew comes in. It is like a “DJ” for .NET blogs. Perfect!!

I should point out that on Twitter I also subscribe to Silverlight News – for the same reason: it is a pre-filtered set of quality links. Though I think I may start following it via RSS instead of Twitter, because I prefer the RSS reading ability in Outlook to Twitter for this particular type of information.

In any case, I hope everyone reading this post has had a wonderful last couple weeks.

For the many of you who had a religious holiday in here, I hope it was fulfilling and you felt the full meaning of the holiday, and that it brought you closer to your god/goddess/gods/ultimate truth. If you had a secular holiday in here, I hope it was fulfilling and meaningful, preferably filled with family and fun and a warm sense of community.

Happy New Year, and welcome to 2009!

Saturday, January 3, 2009 10:26:25 AM (Central Standard Time, UTC-06:00)
 Wednesday, December 17, 2008

I have released version 3.6 of CSLA .NET for Windows and CSLA .NET for Silverlight.

This is a major update to CSLA .NET, because it supports Silverlight, but also because there are substantial enhancements to the Windows/.NET version.

To my knowledge, this is also the first release of a major development framework targeting the Silverlight platform.

Here are some highlights:

  • Share more than 90% of your business object code between Windows and Silverlight
  • Powerful new UI controls for WPF, Silverlight and Windows Forms
  • Asynchronous data portal, to enable object persistence on a background thread (required in Silverlight, optional in Windows) – see the sketch after this list
  • Asynchronous validation rules
  • Enhanced indexing in LINQ to CSLA
  • Numerous performance enhancements
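
To give a feel for the asynchronous data portal, here is a minimal sketch of a fetch in the 3.6 style, assuming a hypothetical CustomerEdit business class; your exact overloads and handler types may differ:

using System;
using Csla;

public static class CustomerLoader
{
  public static void Load(Action<CustomerEdit> whenDone)
  {
    // The fetch runs on a background thread and the callback fires when
    // the object (or an error) comes back.
    DataPortal.BeginFetch<CustomerEdit>((sender, e) =>
    {
      if (e.Error != null)
        throw e.Error;      // report the failure in real code
      whenDone(e.Object);   // hand the fetched object to the UI
    });
  }
}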

This version of CSLA .NET for Windows is covered in my new Expert C# 2008 Business Objects book (Expert VB 2008 Business Objects is planned for February 2009).

At this time there is no book covering CSLA .NET for Silverlight. However, most business object code (other than data access) is the same for Windows and Silverlight, so you will find the Expert C# 2008 Business Objects book very valuable for Silverlight as well. I am also in the process of creating a series of training videos around CSLA .NET for Silverlight; watch for those in early 2009.

Version 3.6 marks the first time major development was done by a team of people. My employer, Magenic, has been a patron of CSLA .NET since the year 2000, but with this version they provided me with a development team, and that is what enabled such a major version to be created in a relatively short period of time. You can see a full list of contributors, but I specifically want to thank Sergey Barskiy, Justin Chase and Nermin Dibek because they are the primary contributors to 3.6. I seriously couldn't have asked for a better team!

This is also the first version where the framework code is only in C#. The framework can be used from any .NET language, including VB, C# and others, but the actual framework code is in C#.

There is a community effort underway to bring the VB codebase in sync with version 3.6, but at this time that code will not build. In any case, the official version of the framework is C#, and that is the version I recommend using for any production work.

In some ways version 3.6 is one of the largest releases of the CSLA .NET framework ever. If you are a Windows/.NET developer this is a natural progression from previous versions of the framework, providing better features and capabilities. If you are a Silverlight developer the value of CSLA .NET is even greater, because it provides an incredible array of features and productivity you won't find anywhere else today.

As always, if you are a new or existing CSLA .NET user, please join in the discussion on the CSLA .NET online forum.

Wednesday, December 17, 2008 4:13:21 PM (Central Standard Time, UTC-06:00)
 Friday, December 12, 2008

My previous blog post (Some thoughts on Windows Azure) has generated some good comments. I was going to answer with a comment, but I wrote enough that I thought I’d just do a follow-up post.

Jamie says: “Sounds akin to Architecture: 'convention over configuration'. I'm wholly in favour of that.”

Yes, this is very much a convention over configuration story, just at an arguably larger scope than we’ve seen thus far. Or at least showing a glimpse of that larger scope. Things like Rails are interesting, but I don't think they are as broad as the potential we're seeing here.

It is my view that the primary job of an architect is to eliminate choices. To create artificial restrictions within which applications are created. Basically to pre-make most of the hard choices so each application designer/developer doesn’t need to.

What I’m saying with Azure is that we’re seeing this effect at a platform level. A platform that has already eliminated most choices, and that has created restrictions on how applications are created – thus ensuring a high level of consistency, and potential portability.

Jason is confused by my previous post, in that I say Azure has a "lock-in problem", but that the restricted architecture is a good thing.

I understand your confusion, as I probably could have been more clear. I do think Windows Azure has a lock-in problem - it doesn't scale down into my data center, or onto my desktop. But the concept of a restricted runtime is a good one, and I suspect that concept may outlive this first run at Azure. A restricted architecture (and runtime to support it) doesn't have to cause lock-in at the hosting level. Perhaps at the vendor level - but those of us who live in the Microsoft space put that concern behind us many years ago.

Roger suggests that most organizations may not have the technical savvy to host the Azure platform in-house.

That may be a valid point – the current Azure implementation might be too complex for most organizations to administer. Microsoft didn't build it as a server product, so they undoubtedly made implementation choices that create complexity for hosting. This doesn’t mean the idea of a restricted runtime is bad. Nor does it mean that someone (possibly Microsoft) couldn't create such a restricted runtime that could be deployed within an organization, or in the cloud. Consider that there is a version of Azure that runs on a developer's workstation already - so it isn't hard to imagine a version of Azure that I could run in my data center.

Remember that we’re talking about a restricted runtime, with a restricted architecture and API. Basically a controlled subset of .NET. We’re already seeing this work – in the form of Silverlight. Silverlight is largely compatible with .NET, even though they are totally separate implementations. And Moonlight demonstrates that the abstraction can carry to yet another implementation.

Silverlight has demonstrated that most business applications only use 5 of the 197 megabytes in the .NET 3.5 framework download to build the client. Just how much is really required to build the server parts? A different 5 megabytes? 10? Maybe 20 tops?

If someone had a defined runtime for server code, like Silverlight is for the client, I think it becomes equally possible to have numerous physical implementations of the same runtime. One for my laptop, one for my enterprise servers and one for Microsoft to host in the cloud. Now I can write my app in this .NET subset, and I can not only scale out, but I can scale up or down too.

That’s where I suspect this will all end up, and spurring this type of thinking is (to me) the real value of Azure in the short term.

Finally, Justin rightly suggests that we can use our own abstraction layer to be portable to/from Azure even today.

That's absolutely true. What I'm saying is that I think Azure could light the way to a platform that already does that abstraction.
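
As a rough illustration of that kind of home-grown abstraction layer, here is a minimal sketch; the interface and class names are hypothetical, and an Azure-backed implementation would simply sit behind the same interface:

using System.Collections.Generic;

// Application code talks only to this interface, so storage can be backed
// by Azure, SQL Server or a local store without touching the app.
public interface IBlobStore
{
  void Put(string key, byte[] data);
  byte[] Get(string key);
}

// A local implementation for the desktop or an in-house server.
public class InMemoryBlobStore : IBlobStore
{
  private readonly Dictionary<string, byte[]> _data =
    new Dictionary<string, byte[]>();

  public void Put(string key, byte[] data)
  {
    _data[key] = data;
  }

  public byte[] Get(string key)
  {
    byte[] result;
    return _data.TryGetValue(key, out result) ? result : null;
  }
}

// An AzureBlobStore : IBlobStore would wrap the cloud storage API behind
// the same interface, keeping the application code portable.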

Many, many years ago I worked on a project to port some software from a Prime to a VAX. It was only possible because the original developers had (and I am not exaggerating) abstracted every point of interaction with the OS. Everything that wasn't part of the FORTRAN-77 spec was in an abstraction layer. I shudder to think of the expense of doing that today - of abstracting everything outside the C# language spec - basically all of .NET - so you could be portable.

So what we need, I think, is this server equivalent to Silverlight. Azure is not that - not today - but I think it may start us down that path, and that'd be cool!

Friday, December 12, 2008 11:33:03 AM (Central Standard Time, UTC-06:00)
 Thursday, December 11, 2008

At PDC, Microsoft announced Windows Azure - their Windows platform for cloud computing. There's a lot we don't know about Azure, including the proper way to pronounce the word. But that doesn't stop me from being both impressed and skeptical about what I do know.

In the rest of this post, as I talk about Azure, I'm talking about the "real Azure". One part of Azure is the idea that Microsoft would host my virtual machine in their cloud - but to me that's not overly interesting, because that's already a commodity market (my local ISP does this, Amazon does this - who doesn't do this??). What I do think is interesting is the more abstract Azure platform, where my code runs in a "role" and has access to a limited set of pre-defined resources. This is the Azure that forces the use of a scale-out architecture.

I was impressed by just how much Microsoft was able to show and demo at the PDC. For a fledgling technology that is only partially defined, it was quite amazing to see end-to-end applications built on stage, from the UI to business logic to data storage. That was unexpected and fun to watch.

I am skeptical as to whether anyone will actually care. I think whether people care will primarily be determined by the price point Microsoft sets. But I also think it will be determined by how Microsoft addresses the "lock-in question".

I expect the pricing to end up in one of three basic scenarios:

  1. Azure is priced for the Fortune 500 (or 100) - in which case the vast majority of us won't care about it
  2. Azure is priced for the small to mid-sized company space (SMB) - in which case quite a lot of us might be interested
  3. Azure is priced to be accessible for hosting your blog, or my blog, or www.lhotka.net or other small/personal web sites - in which case the vast majority of us may care a lot about it

These aren't entirely mutually exclusive. I think 2 and 3 could both happen, but 1 stands by itself. Big enterprises have such different needs from individuals or SMBs, and do business in such different ways, that I suspect we'll see Microsoft either pursue 1 (which I think would be sad) or 2 and maybe 3.

But there's also the lock-in question. If I build my application for Azure, Microsoft has made it very clear that I will not be able to run my app on my own servers. If I need to downsize, or scale back, I really can't. Once you go to Azure, you are there permanently. I suspect this will be a major sticking point for many organizations. I've seen quotes from Microsoft people suggesting that we should all factor our applications into "the part we host" and "the part they host". But even assuming we're all willing to do that work, and introduce that complexity, this still means that part of my app can never run anywhere but on Microsoft's servers.

I suspect this lock-in issue will be the biggest single roadblock to adoption for most organizations (assuming reasonable pricing - which I think is a given).

But I must say that even if Microsoft doesn't back down on this, and even if that does block the success of "Windows Azure" as we see it now, that's probably OK.

Why?

Because I think the biggest benefit to Azure is one important concept: an abstract runtime.

If you or I write a .NET web app today, what are the odds that we can just throw it on a random Windows Server 200x box and have it work? Pretty darn small. The same is true for apps written for Unix, Linux or any other platform.

The reason apps can't just "run anywhere" is because they are built for a very broad and ill-defined platform, with no consistently defined architecture. Sure you might have an architecture. And I might have one too. But they are probably not the same. The resulting applications probably use different parts of the platform, in different ways, with different dependencies, different configuration requirements and so forth. The end result is that a high level of expertise is required to deploy any one of our apps.

This is one reason the "host a virtual machine" model is becoming so popular. I can build my app however I choose. I can get it running on a VM. Then I can deploy the whole darn VM, preconfigured, to some host. This is one solution to the problem, but it is not very elegant, and it is certainly not very efficient.

I think Azure (whether it succeeds or fails) illustrates a different solution to the problem. Azure defines a limited architecture for applications. It isn't a new architecture, and it isn't the simplest architecture. But it is an architecture that is known to scale. And Azure basically says "if you want to play, you play my way". None of this random use of platform features. There are few platform features, and they all fit within this narrow architecture that is known to work.

To me, this idea is the real future of the cloud. We need to quit pretending that every application needs a "unique architecture" and realize that there are just a very few architectures that are known to work. And if we pick a good one for most or all apps, we might suffer a little in the short term (to get onto that architecture), but we win in the long run because our apps really can run anywhere. At least anywhere that can host that architecture.

Now in reality, it is not just the architecture. It is a runtime and platform and compilers and tools that support that architecture. Which brings us back to Windows Azure as we know it. But even if Azure fails, I suspect we'll see the rise of similar "restricted runtimes". Runtimes that may leverage parts of .NET (or Java or whatever), but disallow the use of large regions of functionality, and disallow the use of native platform features (from Windows, Linux, etc). Runtimes that force a specific architecture, and thus ensure that the resulting apps are far more portable than the typical app is today.

Thursday, December 11, 2008 6:31:31 PM (Central Standard Time, UTC-06:00)