Rockford Lhotka

 Tuesday, July 21, 2009

I have released CSLA .NET version 3.7, with support for Silverlight 3.

This is really version 3.6.3, just tweaked slightly to build on Silverlight 3. It turns out that the Visual Studio upgrade wizard didn’t properly update the WCF service reference from Silverlight 2 to Silverlight 3, so I had to re-reference the data portal service. I also took this opportunity to fix a couple of minor bugs, so check the change logs.

  1. If you are a CSLA .NET for Windows user, there’s just one minor change to CslaDataProvider, otherwise 3.7 really is 3.6.3.
  2. If you are a CSLA .NET for Silverlight user, there are a couple minor bug fixes in CslaDataProvider and Navigator, and 3.7 builds with Silverlight 3. Otherwise 3.7 is 3.6.3.

In other words, moving from 3.6.3 to 3.7 should be painless.

Downloads are here:

At this point the 3.6.x branch is frozen, and all future work will occur in 3.7.x until I start working on 4.0 later this year.

Tuesday, July 21, 2009 9:45:08 AM (Central Standard Time, UTC-06:00)
 Sunday, July 19, 2009

Every year Magenic gives its employees a cool tech gift around the holidays. This year it was a Kindle 2, and so I’ve been using mine for several months now and have some thoughts.

On the upside, the Kindle allows me to bring a number of books with me everywhere I go. This is very nice, given how much I travel for work, and the fact that I do most of my reading while sitting in airplanes.

Also, I like the form factor of the device, including the size, shape and weight. And the screen is easy on the eyes. I really think I read faster on the Kindle than with a paper book because of the clarity, consistency and the nice way the device moves forward from page to page.

But there are some things I dislike too.

I like to share books. My wife and I have a respectable library in our home, and we’re constantly loaning books to friends. And borrowing books in return – it is a great way to interact socially and share common interests.

The Kindle entirely destroys the concept of book sharing. With a real book I spend $8-$25 to get something I can read and share with friends. With the Kindle I pay the same price, but only I can read the result. All I can do is tell my friends it was good, or not.

As a content creator, I suppose I should be cheering on the idea of books that can’t be shared, but I’m afraid I’m a reader first, and author second, and this is a really serious drawback for me. To the point that, since getting the Kindle, I’ve purchased a couple paper books because I know I’ll want to share them. Obviously I didn’t waste the money to buy them on the Kindle too, as that seems rather silly to me, so there I was back to lugging around paper books on the airplane.

The other major problem I have with the Kindle is the same one I have with buying music online.

While Amazon (and music vendors) portray the transaction as a “purchase”, it is really a “lease”.

I’ve lost several CDs worth of music over the years, as a music vendor went out of business, their licenses expired, and the music I “bought” was rendered unplayable. I’ve long since decided that I’ll never buy DRM “protected” music. Such a “purchase” is a hoax – a total scam. Personally I think it should be illegal to portray it as a purchase transaction (false advertising or whatever), but I guess we live in a laissez-faire enough world that it is up to each of us to get ripped off a few times before we rebel against the dishonest corporate entities.

I hadn’t really thought about the Kindle being in the same category until I read this recent article. It turns out that Amazon can yank your license to read a book if they desire. And of course it is true that if Amazon folds, or gets bored with the Kindle idea, that all the books I “purchased” will disappear.

With music I’ve been paying a monthly fee to the Zune service to lease access to all the music they have. And that makes me reasonably happy, because it is an honest, up-front transaction that is what it says it is. I get access to amazing amounts of music as long as I keep paying my lease. If I stop paying, or if the Zune service goes away, the music disappears. This works for me overall, and I view it as a reasonable value.

Maybe Amazon should do this with the Kindle too. Be more honest and up-front about the nature of the transaction and the relationship. Charge a monthly fee for access to their entire online library of books, and have the books disappear off the device if/when I stop paying the monthly fee.

Of course this still doesn’t solve the problem where I can’t share books I “purchase” with my friends; I still don’t know of a good solution to that issue.

Sunday, July 19, 2009 5:35:26 PM (Central Standard Time, UTC-06:00)
 Friday, July 10, 2009

You’ve probably noticed that Microsoft released Silverlight 3 today. More information can be found at

I’ve put a very stable beta of CSLA .NET 3.7 online as well.

Version 3.7 is really the same code as 3.6.3, but it builds with the Silverlight 3 tools for Visual Studio 2008. So while I’m calling it a beta, it is really release-quality code for all intents and purposes.

If you are a Windows/.NET user of CSLA .NET, you’ll find that all future bug fixes, enhancements and so forth will be in version 3.7. Version 3.6.3 is the last 3.6 version.

If you are a Silverlight user of CSLA .NET, and continue to use the Silverlight 2 tools, 3.6.x is still for you, but I plan to address only the most critical bugs in that version – and I’m not currently aware of anything at that level. So 3.6.3 is really the end for Silverlight 2 as well.

In short, 3.7 is now the active development target for any real work, and you should plan to upgrade to it when possible.

Friday, July 10, 2009 11:46:31 AM (Central Standard Time, UTC-06:00)
 Wednesday, July 8, 2009

I’ve observed, in broad strokes, a pattern over the past 20 years or so.

At one time there was DOS (though I was a happy VAX/VMS developer at the time, and so didn’t suffer the pain of DOS).

Then there was Windows, which ran on DOS (sadly I did have to deal with this world).

Then we booted into Windows, which had a “DOS command prompt” window, so DOS ran on Windows (and the world got better).

Windows lasted a long time, but then came .NET, which ran on Windows.

I always expected that we’d see the time come when we’d boot into .NET, and Windows would run on .NET.

I no longer think that’s likely. Which is too bad, because that would have been cool.

But in the late 1990’s there was the very real possibility that the browser would become the OS. The dot-bomb crash eliminated that debate, robbing the browser-based companies of their funding and allowing Windows, Linux and Mac OS to continue to dominate.

In the past 2-3 years though, we’ve seen something different. Flash/Flex/Air, Silverlight and Google Gears have shown that the “browser as the OS” idea didn’t go away, it just went to sleep for a while.

Now I’ll suggest that there are really two camps here. There’s the Google camp that really wants the browser as the OS. And there’s the Adobe/Microsoft camp (can I put them together?) that realizes that the technology of the web is really a stack of hacks, and that it is high time to move on to the next big thing. My bias is clearly toward the next big thing, and has been since the web started being misused as an app platform in the 1990’s. They are proposing the mini-runtime-as-the-OS concept, using the browser as a simple launch-pad for the real runtime.

But either way, the concept of a “client OS” has been fading for some years now in my view. Or conversely, the concept of “all my apps run in ‘the browser’” has been ascending steadily and rapidly over the past 2-3 years.

And I’m OK with this, because we’re not talking about reducing all apps to stupid browser-based mainframe-with-color architectures like the web has tried to do for over a decade.

No! We’re talking about full-blown smart-client technologies like Silverlight, where the client application can be an edge application in a service-oriented system, or the client tier of an n-tier application. We’re talking about the resurgence of the incredible productivity and scalability offered by the n-tier technologies of 1997-98. We’re talking about getting our industry back on track, so we can look back at 1999-2009 as nothing more than a “lost decade”.

(can you tell I think the terminal-based browser model is really lame?)

Does this mean that Windows, Mac OS and Linux become less relevant? I think so. Does this mean that Silverlight is the most important technology Microsoft is currently building? I think so.

Consider that Silverlight, in a <5 meg runtime, provides everything you need to build most types of powerful smart-client business applications. And consider that these applications auto-deploy and auto-update in a way that the user never sees the deploy/update process. Totally smooth and transparent.

Consider that you can write your C# or VB code and run it on the client. Or on the server. Or both (see CSLA .NET for Silverlight if you want to know more). You don’t need to learn JavaScript. You don’t need a totally different language and runtime on the client from the server. Silverlight provides a decent level of consistency between client and server, so you can reuse code, and absolutely reuse skills, knowledge and tools.

Google is totally on the bandwagon of course. Many people have seen their strategy for months, if not years. And we’re now seeing it come through, as they start to make their browser-is-the-OS play in earnest.

What’s interesting to me is that either approach – the browser-as-OS (Google) or the mini-runtime-as-OS (Silverlight) – leaves the previous-generation OS concept (Windows/Mac/Linux) out of the loop.

Windows 7, perhaps the best version of Windows ever created, might be the last great hurrah for the legacy OS concept.

I mean all we need now is a Silverlight-based version of Office (to compete with Google Apps) and the underlying “OS” beneath Silverlight becomes rapidly irrelevant...

Wednesday, July 8, 2009 10:45:44 AM (Central Standard Time, UTC-06:00)

As anyone who’s used VB any time after 1994 knows, dynamic languages are really powerful for certain problems. Of course it used to be called “late binding”, but the point is that you can write code against a type you don’t know at development time (or compile time).

This concept is all the rage now, and it is fun (imo) to be able to nod sagely and say “Yes, I’ve been saying this for 14 years now.”

Not that I’d say that to Chris, but it is really nice that he is now an “it getter”*:

But the concept is really, really powerful. And many of the related concepts and capabilities that have come in more recent languages, including VB 9, 10 and C# 4, really enhance the capabilities of “late binding” and open up entire new scenarios.
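As a minimal sketch of the idea (the Invoice and Receipt types here are hypothetical, invented just for illustration), C# 4’s `dynamic` keyword lets you write a call against a type that isn’t resolved until run time:

```csharp
// A minimal sketch of C# 4 late binding; Invoice and Receipt are
// made-up types, not from any real library.
using System;

class Invoice { public string Describe() { return "an invoice"; } }
class Receipt { public string Describe() { return "a receipt"; } }

static class LateBindingDemo
{
    static void Main()
    {
        // "dynamic" tells the compiler to defer member resolution
        // until run time, against the object's actual type.
        dynamic doc = new Invoice();
        Console.WriteLine(doc.Describe());  // resolved at run time

        doc = new Receipt();                // a different runtime type,
        Console.WriteLine(doc.Describe());  // same late-bound call site
    }
}
```

The same call site works against any object that happens to have a `Describe` method – which is exactly what VB’s late binding has allowed all along.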

This is along the lines of the C# people who talk about the couple of features that are not in VB, and then dismiss XML literals as being of little or no value. That’s just ignorance talking. XML literals do for XML coding what dynamic languages do for using objects to abstract service/system boundaries.

I have been doing nearly all C# for over a year, but when I need to mess with XML I still create a VB project. Who’d choose to put themselves through the pain incurred in C# to do something that is so much simpler in VB?
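For contrast, here is roughly what building a small XML fragment looks like in C# with LINQ to XML (the element names are just illustrative). In VB 9 this entire expression can be written as inline XML markup instead of a tree of constructor calls:

```csharp
// Building XML in C# requires functional construction with
// System.Xml.Linq; a VB XML literal expresses this as markup.
using System;
using System.Xml.Linq;

static class XmlContrast
{
    static void Main()
    {
        var product = new XElement("Product",
            new XAttribute("Id", 1),
            new XElement("Name", "Widget"),
            new XElement("ListPrice", 9.99));
        Console.WriteLine(product);
    }
}
```

It works, but every element and attribute is a separate object construction – which is the pain I’m talking about.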

In the end, my point is simple: don’t be a language bigot. Every language out there, VB, Python, C# and more, brings something unique and beautiful to the world. We can only hope that our language of choice (whatever that is for you) has an architect with an open mind, who is willing to bring the best ideas into each language…

(and yes, I hope that means things like XML literals, WithEvents and the Implements clause work their way into C# – because those concepts are really powerful and would benefit the language greatly)

Regardless, it is nice to see C# gaining “late binding”, which should provide great benefit to a lot of people.


* thank you Stephen Colbert

Wednesday, July 8, 2009 9:50:15 AM (Central Standard Time, UTC-06:00)
 Tuesday, July 7, 2009

If you’ve read the news in the last day or two, you’ve probably run across an article talking about an “unprecedented step” taken by Microsoft, in that they are talking about a Windows vulnerability before they have a patch or fix available.

When I read the article, it mentioned that there is a workaround (not a fix – but a way to be safer), and that information could be found on Microsoft’s web site.

So off I went to Microsoft’s web site, where I found nothing on the topic, but I did find a link to the security home page.

So off I went to the security home page. Where I found nothing that was obviously on the topic. Yes, there’s a lot of information there, including some information on viruses, infection attacks and an apparent rise in fake attacks (so I started wondering if MSNBC had been faked out?).

At no point in here did I realize that one of the articles on the security home page actually was the article I was looking for! It turns out that this particular vulnerability is through an ActiveX video component, a fact not mentioned in the MSNBC article. So while I saw information about such a thing on the Microsoft site, I had no way to link it to the vague mainstream press article that started this whole adventure…

Fortunately I know people :)

The vulnerability is an ActiveX video component issue. And the workaround is documented here:

And now that I know I’m looking for information related to an ActiveX video component issue, it is clear that there are relevant bits of information on these sites too:

Microsoft Security Response Center blog:

Microsoft TechNet Security alerts:

I still think the communication here is flawed. The mainstream press screwed up by providing insufficient and vague information, making it virtually impossible to find the correct documentation from Microsoft on the issue. But perhaps Microsoft was vague with the press too – hard to say.

And I think Microsoft could have been much more clear on their sites, providing some conceptual “back link” to indicate which bits of information pertain to this particular issue.

There’s no doubt in my mind that my neighbors, for example, would never find the right information based on the mainstream articles in the press. So Microsoft’s “unprecedented step” of talking about this issue will, for most people, just cause fear, without providing any meaningful way to address that fear. And that’s just sad – lowering technology issues to the level typically reserved for political punditry.

Tuesday, July 7, 2009 9:06:16 AM (Central Standard Time, UTC-06:00)
 Wednesday, July 1, 2009

I’ve updated my prototype MCsla project to work on the “Oslo” May CTP. The update took some effort, because there are several subtle changes in the syntax of “Oslo” grammars and instance data. What complicated this a little is that I am using a custom DSL compiler, because the standard mgx.exe utility can’t handle my grammar.

Still, I spent less than 8 hours getting my grammar, schemas, compiler and runtime fixed up and working with the CTP (thanks to some help from the “Oslo” team).

At this point I chose to put the MCsla project into my public code repository. You can use the web view to see the various code elements if you are interested.

The prototype has limited scope – it supports only the CSLA .NET editable root stereotype, which means it can be used to create simple CRUD screens over single records of data. But even that is pretty cool I think, because it illustrates the end-to-end flow of the whole “Oslo” platform concept.

A business developer writes DSL code like this:

Object Product in Test
{
  Public ReadOnly int Id;
  Public string Name;
  Public double ListPrice;
} Identity Id;

(this is the simplest form – the DSL grammar also allows per-type and per-property authorization rules, along with per-property business and validation rules)

Then they run a batch file to compile this code and insert the resulting metadata into the “Oslo” repository.

The user runs the MCslaRuntime WPF application, which reads the metadata from the repository and dynamically creates a XAML UI, CSLA .NET business object and related data access object that talks to a SQL Server database.


The basic functionality you get automatically from CSLA .NET is all used by the runtime. This includes application of authorization, business and validation rules, automatic enable/disable for the Save/Cancel buttons based on the business object’s rules and so forth.

If the business developer “recompiles” their DSL code, the new metadata goes into the repository. The user can click a Refresh App button to reload the metadata, immediately enjoying the new or changed functionality provided by the business developer.

The point is that the business developer writes that tiny bit of DSL code instead of pages of XAML and C#. If you calculate the difference in terms of lines of code, the business developer writes perhaps 5% of the code they’d have written by hand. That 95% savings in effort is what makes me so interested in the overall “Oslo” platform story!

Wednesday, July 1, 2009 5:15:07 PM (Central Standard Time, UTC-06:00)
 Monday, June 29, 2009

I use for my email – though after today that may have to change…

Why? Because the email service is down, and has been for several hours. There was a brief moment earlier this afternoon when I thought they had it fixed, because a few emails squeaked through, but otherwise it is deader than a doornail. is apparently doing some sort of email system upgrade – fancier AJAX web UI, etc. And that’d be fine, but all I really care about is reliable POP/SMTP service, and the “upgrade” appears to have been a major step backward in that regard…

This affects my personal email, the email for the CSLA .NET forum and email for my online store.

So if you sent me email and expect/need a response, don’t get your hopes too high. Maybe they’ll get it fixed, but I’m beginning to suspect that they really messed themselves up. This may be the push I need to explore other email options – probably ones that are cheaper and better (since is not a great value in that regard – they are just convenient).

Monday, June 29, 2009 2:32:01 PM (Central Standard Time, UTC-06:00)
 Friday, June 26, 2009

I’ve had quite the experience over the past couple weeks.

Three weeks ago I was in Las Vegas speaking at VS Live. While there, I realized I’d forgotten to copy some key files to my laptop before leaving home, but Windows Home Server made that a non-issue, since it provides a secure web interface to my files. Awesome!

Then I got home and discovered that one of the two additional hard drives I added to my WHS machine was failing. This was unpleasant, but not cause for alarm since all my key files are set to duplicate.

(I only discovered the failure because WHS started crashing, and I looked in the Windows system event log to find the drive failure notifications. They’d been occurring for several days, but I don’t check my system event log daily, so I didn’t know. This is the one place where WHS really let me down – I still don’t know why Windows knew the drives were going to fail while WHS blindly ignored this clear intelligence…)

Unfortunately I couldn’t get WHS to dismount (remove) the failing hard drive. After 3-4 tries, it finally did remove the drive. This took 2.5 days, since each failure took 12-24 hours, as did the final success.

I should also note that I was under serious time pressure, because I was flying out to Norway for the NDC conference and only had about 3.5 days to solve the problem!

After the failed drive was removed, things were obviously not right on the WHS machine. Clearly the removal didn’t work right or something. Poking around a bit further, I found that the second additional hard drive was also failing. What are the odds of two drives failing at once? Small, and yet there I was.

I quickly bought and installed a brand new hard drive (Seagate this time, since the dual failures were Western Digitals) and tried to remove the second failing drive. The attempt was still running when I flew to Norway.

Fortunately Live Mesh allowed me to use remote desktop to get back into my network, and I kept trying to remove the drive (failure after failure) while in Norway.

When I returned from Norway I manually removed the drive, since clearly it wasn’t going to be removed through software. I can’t say this made matters worse, but it sure didn’t make them better either. WHS still wouldn’t remove the drive even though it was shown as “missing” – it had “locked files” and couldn’t be removed.

Thanks to some excellent help from the Microsoft WHS forum (thanks Ken!) I came to realize that my only option at this point was to repair the WHS OS – basically do a reinstall. I have the cute little HP appliance, and it comes with a server restore disk – pop it into my desktop machine, run the wizard and in very little time I had my server back, just like when I bought it originally.

OK, so now I have a functioning WHS again, but it is empty, blank – all my data is gone!

I’ve been here before (a couple times) with other servers though, so I have backups for my backups. All “critical” data is always in 3 places. So I just restored my server backup and got back my “critical” files – everything for my work, all the family photos and home videos of the kids, etc.

Here’s the catch though – I rapidly discovered that my “non-critical” data is actually pretty critical. Things like music, videos and miscellaneous files.

The music I was able to recover from a Zune device – though trying my own Zune first was a mistake. As soon as I connected it to my desktop machine it synced, discovered I’d “deleted” all my music, and cleared the device. Damn!

Fortunately my son also has a fully-synced Zune, and I connected his to my desktop machine as a guest. No automatic sync, and so I was able to highlight all music on his device and say “copy to my collection”. Just like that all our music was back on the server.

I still don’t have any videos or miscellaneous files. They are gone. Arguably this isn’t the end of the world, as technically I can get back anything that really matters – by re-downloading, or getting files from friends, etc. But that’s all a pain in the butt and a waste of time, so it is unfortunate.

(it might be that I can recover some of them from the two defunct hard drives – using various data recovery tools I may be able to connect them to my desktop machine and retrieve some of the files – but that’s also a big hassle and may not be worth the effort)

So what did I learn out of all this?

  1. WHS is awesome, and I still really love it
  2. WHS can’t handle two hard drives failing at once – if that happens you better have a backup for your server
  3. “Critical” files include things I hadn’t considered critical, like music and maybe videos – external hard drives to back up the server are relatively cheap – just get a 2 TB external drive and back up everything – that’s my new motto!

Oh, and I’m now using IDrive to get offsite backups for my truly critical files. I know, I didn’t need it in this case, but the whole experience got me thinking about floods, tornadoes, fire, etc. What if I did lose my family photos or home videos? The last 15 years of my life is digital, and nearly all record would be lost in such a case. Having automatic backups of that data, along with other important documents and files seems really wise.

So now my super-critical files are in at least 4 places (one offsite). My critical files (using my newly expanded definition) are in at least 3 places. And my non-critical files are in 2 places. I’m so redundant I’m starting to feel like NASA :)

Friday, June 26, 2009 3:37:24 PM (Central Standard Time, UTC-06:00)