Rockford Lhotka

 Friday, April 5, 2019

One thing I’ve observed in my career is something I call the “pit of success”. People (often including me when I was younger) write quick-and-dirty software because what we’re doing is a “simple project” or “for just a couple users” or a one-off or a stopgap or a host of other diminutive descriptions.

What so often happens is that the software works well - it is successful. And months later you get a panicked phone call in the middle of the night because your simple app for a couple users that was only a stop-gap until the “real solution came online” is now failing for some users in Singapore.

You ask how it has users in Singapore when the couple users you wrote it for were in Chicago. The answer: oh, people loved it so much we rolled it out globally, and now it is used by a few hundred users.

OF COURSE IT FAILS, because you wrote it as a one-off (no architecture or thoughtful implementation) for a couple users. You tell them that fixing the issues requires a complete rearchitecture and reimplementation, and that it’ll take a team of 4 people 9 months. They are shocked, because you wrote this thing initially in 3 weeks.

This is often a career limiting move, bad news all around.

Now if you’d put a little more thought into the original architecture and implementation - perhaps using some basic separation of concerns, a little DDD or real OOD (not just using an ORM) - the original system may well have scaled globally to a hundred users.

Even if you do have to enhance it to support the fact that you fell into the pit of success, at least the software is maintainable and can be enhanced without a complete rewrite.

This "pit of success" concept was one of the major drivers behind the design of CSLA .NET and the data portal. If you follow the architecture prescribed by CSLA you'll have clear separation of concerns:

  1. Interface
  2. Interface control
  3. Business logic
  4. Data access
  5. Data storage

And you'll be able to deploy your initial app as a 1- or 2-tier quick-and-easy thing for those couple users. Better yet, when that emergency call comes in the night, you can just:

  1. Stand up an app server
  2. Change configuration on all clients to use the app server

And now your quick app for a couple users is capable of global scaling for hundreds of users. No code changes. Just an app server and a configuration change.
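For a CSLA app this really can be just a configuration change, because the data portal proxy is selected via configuration. As a sketch (the appSettings key names below are from my memory of CSLA 4.x, and the server URL is purely hypothetical), the client configuration might look like:

```xml
<configuration>
  <appSettings>
    <!-- 1- or 2-tier deployment: run the data portal locally
         in the client process (this is also the default) -->
    <add key="CslaDataPortalProxy" value="Local" />

    <!-- 3-tier deployment: uncomment to route all data portal
         calls through the new app server instead -->
    <!--
    <add key="CslaDataPortalProxy"
         value="Csla.DataPortalClient.HttpProxy, Csla" />
    <add key="CslaDataPortalUrl"
         value="https://appserver.example.com/api/DataPortal" />
    -->
  </appSettings>
</configuration>
```

The business and data access code doesn't change at all; swapping the proxy moves where the data access layer physically runs.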

(and that just scratches the surface - with a little more work you could stand up a Kubernetes cluster instead of just an app server and support tens of thousands of users)

So instead of a career limiting move, you are a hero. You get a raise, extra vacation, and the undying adoration of your users 😃

Friday, April 5, 2019 10:38:52 AM (Central Standard Time, UTC-06:00)

How can a .NET developer remain relevant in the industry?

Over my career I've noticed that all technologies require developers to work to stay relevant. Sometimes that means updating your skills within the technology, sometimes it means shifting to a whole new technology (which is much harder).

Fortunately for .NET developers, the .NET platform is advancing rapidly, keeping up with new software models around containers, cloud-native, and cross-platform client development.

First it is important to recognize that the .NET Framework is not the same as .NET Core. The .NET Framework is now effectively in maintenance mode; all innovation is occurring, now and into the future, in the open source .NET Core. So step one to remaining relevant is to understand .NET Core (and the closely related .NET Standard).

Effectively, you should plan for .NET Framework 4.8 to become stable and essentially unchanging for decades to come, while all new features and capabilities are built into .NET Core.

Right now, you should be working to have as much of your code as possible target .NET Standard 2.0, because that makes your code compatible with .NET Framework, .NET Core, and mono. See Migrating from .NET to .NET Standard.
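In an SDK-style project, targeting .NET Standard 2.0 is a one-line setting. A minimal sketch (assuming the new csproj format; the project name is hypothetical):

```xml
<!-- MyBusinessLibrary.csproj -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- netstandard2.0 makes this library consumable from
         .NET Framework 4.6.1+, .NET Core 2.0+, and mono/Xamarin -->
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```

The point is that one compiled assembly then works across all those runtimes, so your business logic doesn't care which platform hosts it.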

Second, if you are a client-side developer (Windows Forms, WPF, Xamarin) you need to watch .NET Core 3, which is slated to support Windows Forms and WPF. This will require migration of existing apps, but is a way forward for Windows client developers. Xamarin is a cross-platform client technology, and in this space you should learn Xamarin.Forms because it lets you write a single app that can run on iOS, Android, Mac, Linux desktop, and Windows.

Third, if you are a client-side developer (web or smart client) you should be watching WebAssembly. Right now .NET has experimental support for WebAssembly via the open source mono runtime. There is also a wasm UI framework called Blazor, and another, XAML-based UI framework called the Uno Platform. This is an evolving space, but in my opinion WebAssembly has great promise and is something I’m watching closely.

Fourth, if you are a server-side developer it is important to understand the major industry trends around containers and container orchestration. Kubernetes, Microsoft Azure, Amazon AWS, and others all have support for containers, and containers are rapidly becoming the de facto deployment model for server-side code. Fortunately .NET Core and ASP.NET both have very good support for cloud-native development.

Finally, that’s all from a technology perspective. In a more general sense, regardless of technology, modern developers need good written and verbal communication skills, the ability to understand business and user requirements, at least a basic understanding of agile workflows, and a good understanding of devops.

I follow my own guidance, by the way. CSLA .NET 4.10 (the current version) targets .NET Standard 2.0, supports WebAssembly, and has a bunch of cool new features to help you leverage container-based cloud-native server environments.

Friday, April 5, 2019 10:19:15 AM (Central Standard Time, UTC-06:00)

Wednesday, April 5, 2017

It seems to me that blockchain today is where XML was at the beginning. A low level building block on which people are constructing massive hopes and dreams, mentally bypassing the massive amounts of work necessary to get from there to the goals. What I mean by this is perhaps best illustrated by a work environment I was in just prior to XML coming on the scene.

The business was in the bio-chemical agriculture sector, so they dealt with all the major chemical manufacturers/providers in the world. They'd been part of an industry working group composed of these manufacturers and various competitors for many years at that point. The purpose of the working group was to develop a standard way of describing the products, components, parts, and other aspects of the various "products" being manufactured, purchased, resold, and applied to farm fields.

You'll note that I used the word "product" twice, and put it in quotes. This is because, after all those years, the working group never did figure out a common definition for the word "product".

One more relevant detail: everyone had agreed to transfer data via COBOL-defined file structures. I suppose that dated back to when they traded reels of magnetic tape, but the practice carried forward to transferring files via ftp, and subsequently the web.

Along comes XML, offering (to some) a miracle solution. Of course XML only solved the part of the problem that these people had already solved, which was how to devise a common data transfer language. Was XML better than COBOL headers? Probably. Did it solve the actual problem of what the word "product" meant? Not at all.

I think blockchain is in the same position today. It is a distributed, append-only, reliable database. It doesn't define what goes in the database, just that whatever you put in there can't be altered or removed. So in that regard it is a lot like XML, which defined an encoding structure for data, but didn't define any semantics around that data.

The concept of XML then, and blockchain today, is enough to inspire people's imaginations in some amazing ways.

Given amazing amounts of work (mostly not technical work either, but at a business level) over many, many years XML became something useful via various XML languages (e.g. XAML). And a lot of the low-level technical benefits of XML have been superseded by JSON, but that's really kind of irrelevant, since all the hard work of devising standardized data definitions applies to JSON as well as XML.

I won't be at all surprised if the same general path isn't followed by blockchain. We're at the start of years and years of hard non-technical work to devise ways to use the concept of a distributed, append-only, reliable database. Along the way the underlying technology will become standardized and will merge into existing platforms like .NET, Java, AWS, Azure, etc. And I won't be surprised if some better technical solution is discovered (like JSON was) along the way, but that better technical solution probably won't really matter because the hard work is in figuring out the business-level models and data structures necessary to make use of this underlying database concept.

Wednesday, April 5, 2017 11:40:44 AM (Central Standard Time, UTC-06:00)

Tuesday, January 3, 2017

We're having a conversation on Magenic's internal forum where we're discussing the current JavaScript community reaction to all these frameworks. Some people in the industry are looking at the chaos of frameworks and libraries and deciding to just write everything in vanilla js - eschewing the use of external dependencies.

And I get that, I really do. However, I'm also an advocate of using frameworks - which shouldn't be surprising coming from the author of CSLA .NET.

Many years ago I spoke at a Java conference (they were trying to expand into the .NET space too).

At lunch I listened to a conversation between some other folks at the table; they were discussing the use of Spring (which was fairly new at the time).

Their conclusion was that although Spring did a ton of useful and powerful things, it was too big/complex and so they'd rather not use it and solve all those problems themselves (the problems solved by Spring).

I see the same thing all the time with CSLA .NET. People look at it and see something that is big and complex, and think "those problems can't be that hard to solve", so they end up rewriting (usually poorly) large parts of CSLA.

I say "usually poorly" because their job isn't to create a well-tested and reusable framework. Their job is to solve some business problem. So they solve some subset of each problem that Spring or CSLA solves in-depth, and then wonder why their resulting app is unreliable, or performs badly, or whatever.

As the author of a widely used OSS framework, my job is to create a framework that solves and abstracts away key problems that business developers would otherwise encounter. Because of this, I'm able to solve those problems in a broader and deeper way than a business developer, whose goal is to put as little effort as possible into solving the lower-level problem, because it is just a distraction from solving the actual business problem.

So yeah, I do understand that some of these frameworks, like Angular, Spring, CSLA .NET, etc. are complex, and they have their own learning curve. But they exist because they solve a bunch of lower level non-business related problems that you will otherwise have to solve yourself. And the time you spend solving those problems provides zero business value, and does ultimately add to the long-term maintenance cost of your resulting business software.

There's not a perfect answer here to be sure. But for my part, I like to think that the massive amounts of time and energy spent by framework authors to truly understand and solve those hard non-business problems is time well spent, allowing business developers to be more focused on solving the problems they are actually paid to address.

Tuesday, January 3, 2017 10:53:41 AM (Central Standard Time, UTC-06:00)

Wednesday, December 28, 2016

In a previous blog post I related a coding standards horror story from early in my career. A couple commenters asked for part 2 of the story, mostly to see how my boss, Mark, dealt with the chaos we found in the company who acquired us.

There are two fortunate things that relate to this story.

First, they bought our company because they wanted our software and because they wanted Mark. It is quite possible that nobody in the world understood the vertical industry and had the software dev chops that Mark provided, so he had a lot of personal value.

Second, before the acquisition I'd been tasked with writing tooling to enable globalization support for our software. Keep in mind that this was VT terminal style software, and all the human readable text shown anywhere on the screen came from text literals or strings generated by our code. The concept of a resx file like we have in Windows didn't (to our knowledge) exist, and certainly wasn't used in our code. Coming right out of university, the concept of lexical parsing and building compilers was fresh in my mind, so my solution was to write a relatively simplistic parser that found all the text in the code and externalized it into what today we'd call a resource file.

That project was probably one of the most fun things I've ever done. And one of the few times in my career where a university compiler design course directly applied to anything in the real world.

Because Mark was so well regarded by the new company, he ended up in charge of the entire new development team. As such, he had the authority to impose his coding standards on the whole group, including the team of chaos-embracing developers. Not that they were happy, but this was the late 1980s and jobs weren't plentiful, and my recollection is that they grumbled about it, and the fact that it was a "damn Yankee" imposing his will on the righteous people of The South. But they went along with the change.

However, that still left massive amounts of pre-existing code that was essentially unreadable and unmaintainable. To resolve that, Mark took my parser as a base and wrote tools (I don't remember if it was one tool, or one per coding "style") that automatically reformatted all that existing code into Mark's style. That was followed by a ton of manual work to fix edge cases and retest the code to make sure it worked. In total I think this process took around 2 months, certainly not longer.

I wasn't directly involved in any of that fix-up process, as I had been assigned to do the biggest project yet in my young career: building support for an entire new product line in a related vertical to the focus of our original software. Talk about a rush for someone just a year out of university!

Wednesday, December 28, 2016 2:05:51 PM (Central Standard Time, UTC-06:00)

Thursday, December 22, 2016

Early in my career, actually my first "real" job, I worked for a guy named Mark. Mark was an amazingly smart and driven programmer and I learned a lot from him.

But he and I would butt heads constantly over coding standards and coding style.

Our platform was the DEC VAX and our code was written using VAX Basic. For this story to make sense, you must realize that VAX Basic wasn't really BASIC the way you typically think about it. The DEC compiler team had started with Basic syntax and then merged in all the goodness of FORTRAN and Modula-2 and Pascal. For example, even back in the late 1980's the language had structured error/exception handling.

It is also important to understand that in the late 1980's there weren't code editors like Visual Studio. We used something called TPU (Text Processing Utility) that was more powerful than Notepad by far, but nothing compared to modern editors of today. It competed with Emacs and vi. As a result, it was up to the developer to style their code, not the editing tool.

Mark had defined a strict set of coding standards and a style guide, and he'd dial back into work from home at night to review our code (yes, via a 1200 baud modem at the time). It was not uncommon to come into work the next day and have a meeting in Mark's office where he'd go through the styling mistakes in my code so I could go fix them.

He defined things like every line would start with a tab, then 2 spaces. Or two tabs, then 2 spaces. Why the 2 spaces? I still have no idea, but that was the standard. He had a list of language keywords that were off limits. Roughly half the keywords were forbidden in fact. He defined where the Then would go after an If; on the next line down, indented.

I chafed at all of this. The 2 space thing was particularly stupid in my view, as was making all those fun keywords off limits. I can't say I cared about the Then statement one way or the other, except that it forced me to type yet another line with the stupid 2 space indent.

But what are you going to do? Mark was the boss, and this was my first job out of university in the late 1980's Reagan-era recession. In a job environment where thousands of experienced computer programmers were being laid off in Minnesota I wasn't going to risk my job.

Which isn't to say that I didn't argue - anyone who knows me knows that I can't keep my opinions to myself :)

The thing is, after a period of time it seems like any standard becomes second nature. I just internalized Mark's rules and coded happily away. In fact, what I found is that my "artistic expression" wasn't really crippled by the guidelines, I just had to learn how to be artistic within them.

Knowing a few artists, I'm now aware that putting boundaries around what you do is key to creating art. Artists pretty much always limit themselves (often artificially) so they have a context in which to work. Not that I think code is 100% art, but I do think that there's art in good code.

Several months after starting to work with Mark, and absorbing this rigorous standards-based approach to coding, our company was purchased by another company. Mark, myself, and a couple other employees survived this process. And we all moved from Minnesota to Birmingham, AL; which is another story entirely. Let me just say here that regional culture differences within the US are easily comparable to the difference between the US and many European countries.

The company that bought my employer was also a software company, with a dev team of comparable size to what we'd had. But they had no coding standard or guideline. They did use the DEC VAX, and they did use VAX Basic. But each developer did whatever they wanted without regard for anyone else.

One of the developers cut his teeth on FORTRAN. It turns out that VAX Basic could be made to look exactly like FORTRAN. Remember that DEC merged most of the good features of FORTRAN into VAX Basic after all.

Another developer clearly learned to program on an Apple ][ and his VAX Basic code had line numbers. I kid you not!! Just like Applesoft!

My point is that everyone in that dev shop had radically different coding styles and approaches. Some very ancient and creaky (even by 1989 standards), and some quite modern. It turns out they'd done this all intentionally. Partly because none of them wanted to learn anything particularly new. And mostly because it provided a sort of job security, because it was essentially impossible for any of them to maintain anyone else's code.

Contrast this to Mark's world, where it was impossible to tell who'd written any bit of code, because all code used the same style and structure.

This was the point where I had my first professional developer epiphany. Yes, it is truly painful to adapt to the idea of living within strict coding and style guidelines. But the alternative is so much worse!

Ever since that experience I insist on consistency of coding standards and styles within each project (or enterprise) where I work. And even if I think some choices (like 2 spaces after each tab) are really, really, really stupid, I'll use and vehemently support that choice.

Which, again, isn't to say that I won't argue a little up front about anything that seems really dumb, but once the decision is made, it is made.

I learned a lot from Mark. Some good, some bad. But this particular lesson is one that is central to my view of good software development.

Thursday, December 22, 2016 12:30:01 PM (Central Standard Time, UTC-06:00)

Friday, September 4, 2015

In a recent blog post @praeclarum discusses frustrations with NuGet 3.

His viewpoint is from someone who uses (used?) PCLs.

What is interesting is that I agree with him about NuGet 3, but my viewpoint is as someone who chose Shared Projects over PCLs.

So I rather suspect NuGet 3 solved some unknown set of problems that somebody must have had – but it wasn’t authors of libraries using and not using PCLs. Huh.

You might ask why I don’t use PCLs – and in particular in my CSLA .NET framework, which is extremely cross platform. And that’s actually part of the answer.

CSLA .NET came into being before .NET itself was released, and has been evolving ever since. Perhaps the biggest point of evolution occurred when I chose to support Silverlight 2.0 in around 2006 (or 2007?). That was the first point where the (then years old) codebase needed to stop depending on full .NET. Also at that point in time neither PCLs nor Shared Projects existed, but Linked Files existed (the precursor to Shared Projects).

As a result, much of the CSLA Silverlight code existed as links to files in the full CSLA .NET project. So there were two different projects – one for .NET and one for Silverlight. Shortly followed by a third project when .NET Server and .NET Client split.

Of course the .NET and Silverlight platforms weren’t the same, but by using linked files each shared file was compiled for each target platform, and we were able to use compiler directives to accommodate differences in each platform.
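As an illustration of that technique, a shared file compiled into two platform projects might look something like this (the class, member, and the SILVERLIGHT symbol are hypothetical examples for this post, not actual CSLA code):

```csharp
// Linked/shared file - compiled separately into the .NET project
// and the Silverlight project, each of which defines its own
// conditional compilation symbol in its project settings.
public static class StorageHelper
{
    public static string GetTempPath()
    {
#if SILVERLIGHT
        // Silverlight's sandbox exposes no temp directory,
        // so callers must fall back to isolated storage
        return string.Empty;
#else
        // Full .NET can just ask the operating system
        return System.IO.Path.GetTempPath();
#endif
    }
}
```

The same source file produces two different binaries, and the platform differences stay hidden behind one consistent public API.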

Any real-world project tends to end up with some cruft, and CSLA is no exception. By the time we’d followed this model to add Silverlight, Windows Phone, various versions of .NET (3.5, 4, 4.5), Xamarin (iOS and Android aren’t always identical either), WinRT, and mono we’d accidentally duplicated various files and inappropriately reused some compiler directives. I wish I could say the CSLA dev team was perfect at all times, but we take shortcuts, or miscommunicate, from time to time like anyone else.

Earlier this year I undertook a major initiative to shift all the linked files into a set of common Shared Projects using the official support provided in Visual Studio 2015. The end result is really nice, but this change brought me face to face with every compromise or mistake we’d made over the past several years. Fun!

The end result is that all the actual code for CSLA resides in a small set of Shared Projects, and those projects are shared into concrete compilable projects for each target platform supported by CSLA .NET. The release folder (compiler output target) looks like this:


Then a set of NuGet 2 packages is created representing the various scenarios where people would want to use CSLA to reuse their business logic – across different user experiences and/or different platforms. The list of NuGet packages includes:

  • Core (basically Csla.dll for your platform)
  • Web (Core + Web (basic ASP.NET and Web Forms helpers))
  • MVC (Core + Web + helpers for the MVC version you are using)
  • WPF (Core + WPF helpers)
  • WinForms (Core + Windows Forms helpers)
  • WinRT/UWP (Core + WinRT and UWP helpers – which are the same right now thankfully)
  • Android (Core + Android helpers)
  • iOS (Core + iOS helpers)
  • Entity Framework (Core + helpers for the EF version you are using)
  • Validation (Core + helpers to enable backward compatibility with the older rules engine from CSLA 3.8)
  • Templates (installs VS templates and snippets on the dev workstation)

To accomplish this we have the following solution structure for the core Csla.dll outputs:


Notice that there’s one Csla.Shared project – it contains all the code – and then a set of concrete compilable projects for the various platforms and .NET versions supported. Those projects contain no code at all, just settings for the compilers.

Would a PCL simplify this? Sort of, but not really. There are too many differences between some of these target platforms for the actual implementation code to exist in a single PCL. Remember, you can’t use compiler directives or have any platform-specific implementation code in a PCL – and that constraint is just too limiting to allow CSLA to do what it does. Perhaps this is in large part because a key feature of CSLA is that it shields your business code from underlying platform differences, so we can’t dumb down the code in CSLA and allow platform differences to penetrate our abstractions or we’d lose a primary benefit of CSLA itself.

Namely, you write your business logic one time and then reuse that exact code on every platform you want to target, thus radically reducing your maintenance burden over the years you’ll be running that code.

Now there is the concept of “bait-and-switch” with a PCL, where the PCL itself contains no implementation – only public interface definitions. And then you create a concrete compilable project with the actual implementation for each platform and .NET version you want to support. In other words, you do exactly what we are doing here, with the addition of one more project that only contains all the public class/interface/type information from Csla.Shared.

And I might do this “bait-and-switch” hack someday – but to make such a thing manageable I’d want a tool I can run as part of the build process that generates all that public type information from Csla.Shared, because otherwise I’d have to maintain it all by hand and that’s just asking for trouble. (anyone knowing of such a tool feel free to let me know!!)

We follow the same basic pattern for the various UI technologies – so there are a similar set of concrete projects and one shared project for XAML, Android, iOS, Web, MVC, etc.

The result is, in my view, extremely manageable thanks to the great support for Shared Projects in Visual Studio 2015. Yes, we’ve been following this approach “the hard way” since ~2007, so I feel very confident saying that VS2015 is a major step forward in terms of tooling around this model.

I also feel comfortable saying that I have no regrets that we didn’t go down the PCL route when PCLs were introduced. They’d have required we use the “bait-and-switch” hack, which is too much magic for my taste, and I think they’d have further increased the complexity of our solution.

Back to NuGet 3 – nothing I’m talking about here requires (or currently uses) NuGet 3. Part of the build process generates a set of NuGet 2 packages that I push into NuGet:


As a result, when you add a reference to ‘CSLA MVC5’ you get the correct ‘CSLA Core’, ‘CSLA Web’, and ‘CSLA MVC’ assemblies in your project for the version of .NET your projects are targeting. Exactly what you expect. Easy for the end developer, and I think easy for us as maintainers of an OSS library.

From what I can see, NuGet 3 just changes a bunch of stuff and adds confusion to fix some problems I’m not encountering. That’s never a good sign…

Friday, September 4, 2015 6:01:31 PM (Central Standard Time, UTC-06:00)

Saturday, July 26, 2014

I am sitting here listening to Stone Sour on Xbox Music (yes, I’m a hard rock/metal fan – always have been).

Stone Sour - what an excellent band. Not only because their music rocks, but because their stage performance is amazing. If you ever have the chance I suggest you see them in concert. Don’t bother to go out of your way to see Rob Zombie. His music might be good, but he is flakey and might skip out on a live show in the middle.

I say this because I was just at Rock Fest in Cadott, WI and that’s what happened. Stone Sour, despite Corey Taylor having a rough throat, put on an amazing performance – Corey gave everything he had left to put on a great show. Yes, it was sub-standard in terms of his voice, but it was perhaps the most amazing show I’ve ever seen.

Rob Zombie, having a rough throat, did just enough to get paid and then walked off the stage, unwilling to match Stone Sour’s ability to put everything into the performance. Rob Zombie said, in so many words, that he was willing to walk off stage rather than give a sub-standard performance, missing the opportunity to match Taylor’s dedication to his audience.

There’s a parallel to software development that crystallized for me with these two contrasting performances. It is something I think my friend Billy Hollis has been trying to tell me (and the rest of the industry) for the past few years.

When we’re younger we’re so ready to prove ourselves that we’ll do pretty much whatever it takes to get the job done – to build software that meets the user’s needs. It seems that many of us change as we get older, being more willing to abandon the user’s needs in our pursuit of “doing the right thing”. We focus on TDD and Agile and all sorts of other esoteric concepts at the expense of actually getting things done for the users who need our software to improve their lives.

Don’t get me wrong, I’m not saying we should write crappy software. But there’s a mantra at Microsoft that applies here: “shipping is a feature”. In other words, if you delay getting sh*t done because you’ll only release when your code is perfect, then you are missing the whole point of writing software. We aren’t creating art, we’re improving people’s lives – which can only be done by putting our software in the hands of actual users.

There’s a balance to be struck between “doing everything right” and “getting sh*t done”. For most of my career I’ve leaned toward the latter, but it is true that as I’ve gotten older I’ve put a higher value on the former.

Thinking about this, I think it is the same misguided ego that caused Rob Zombie to piss off a couple thousand fans by walking off stage. Exactly the opposite of Corey Taylor who got sh*t done even if it wasn’t perfect – and thus won the love of a couple thousand of the same fans.

I’ll go out of my way to see Stone Sour again, but if I see Rob Zombie it will be because he came to me – I’ll never bother to pay to watch him walk off stage again.

I suspect the same is true with the users of our software. If we actually release something – even if it isn’t perfect – our users will be happier than if we get halfway through a project and fail because we were so focused on making it perfect that it couldn’t be accomplished with reasonable effort.

In summary, things like Agile and TDD are great, as long as they don’t get in the way of getting sh*t done.

Saturday, July 26, 2014 1:03:38 AM (Central Standard Time, UTC-06:00)

Monday, October 7, 2013

The short answer to the question of whether the Microsoft .NET Framework (and its related tools and technologies) has a future is of course, don’t be silly.

The reality is that successful technologies take years, usually decades, perhaps longer, to fade away. Most people would be shocked at how much of the world runs on RPG, COBOL, FORTRAN, C, and C++ – all languages that became obsolete decades ago. Software written in these languages runs on mainframes and minicomputers (also obsolete decades ago) as well as more modern hardware in some cases. Of course in reality mainframes and minicomputers are still manufactured, so perhaps they aren’t technically “obsolete” except in our minds.

It is reasonable to assume that .NET (and Java) along with their primary platforms (Windows and Unix/Linux) will follow those older languages into the misty twilight of time. And that such a thing will take years, most likely decades, perhaps longer, to occur.

I think it is critical to understand that point, because if you’ve built and bet your career on .NET or Java it is good to know that nothing is really forcing you to give them up. Although your chosen technology is already losing (or has lost) its trendiness, and will eventually become extremely obscure, it is a pretty safe bet that you’ll always have work. Even better, odds are good that your skills will become sharply more valuable over time as knowledgeable .NET/Java resources become more rare.

Alternately you may choose some trendier alternative; the only seemingly viable candidate being JavaScript or its spawn such as CoffeeScript or TypeScript.

How will this fading of .NET/Java technology relevance occur?

To answer I’ll subdivide the software world into two parts: client devices and servers.

Client Devices

On client devices (PCs, laptops, ultrabooks, tablets, phones, etc.) I feel the need to further split into two parts: consumer apps and business apps. Yes, I know that people seem to think there’s no difference, but as I’ve said before, I think there’s an important economic distinction between the consumer and business apps.

Consumer Apps

Consumer apps are driven by a set of economic factors that make it well worth the investment to build native apps for every platform. In this environment Objective C, Java, and .NET (along with C++) all have a bright future.

Perhaps JavaScript will become a contender here, but that presupposes Apple, Google, and Microsoft work to make that possible by undermining their existing proprietary development tooling. There are some strong economic reasons why none of them would want every app on the planet to run equally on every vendor’s device, so this seems unlikely. That said, for reasons I can’t fathom, Microsoft is doing their best to make sure JavaScript really does work well on Windows 8, so perhaps Apple will follow suit and encourage their developers to abandon Objective C in favor of cross-platform JavaScript?

Google already loves the idea of JavaScript and would clearly prefer it if we all just wrote every app in JavaScript for Chrome on Android, iOS, and Windows. The only question in my mind is how they will work advertising into all of our Chrome apps in the future.

My interest doesn’t really lie in the consumer app space, as I think relatively few people are going to get rich building casual games, fart apps, metro transit mapping apps, and so forth. From a commercial perspective there is some money to be made building apps for corporations, such as banking apps, brochure-ware apps, travel apps, etc. But even that is a niche market compared to the business app space.

Business Apps

Business apps (apps for use by a business’s employees) are driven by an important economic factor called a natural monopoly. Businesses want software that is built and maintained as cheaply as possible. Rewriting the same app several times to get a “native experience” on numerous operating systems has never been viable, and I can’t see where IT budgets will be expanding to enable such waste in the near future. In other words, businesses are almost certain to continue to build business apps in a single language for a single client platform. For a couple decades this has been Windows, with only a small number of language/tool combinations considered viable (VB, PowerBuilder, .NET).

But today businesses are confronted with pressure to write apps that work on the iPad as well as Windows (and outside the US on Android). The only two options available are to write the app 3+ times or to find some cross-platform technology, such as JavaScript.

The natural monopoly concept creates some tension here.

A business might insist on supporting just one platform, probably Windows. A couple years ago I thought Microsoft’s Windows 8 strategy was to make it realistic for businesses to choose Windows and .NET as this single platform. Sadly they’ve created a side loading cost model that basically blocks WinRT business app deployment, making Windows far less interesting in terms of being the single platform. The only thing Windows has going for it is Microsoft’s legacy monopoly, which will carry them for years, but (barring business-friendly changes to WinRT licensing) is doomed to erode.

You can probably tell I think Microsoft has royally screwed themselves over with their current Windows 8 business app “strategy”. I’ve been one of the loudest and most consistent voices on this issue for the past couple years, but Microsoft appears oblivious to the problem and has shown no signs of even recognizing the problem much less looking at solutions. I’ve come to the conclusion that they expect .NET on the client to fade away, and for Windows to compete as just one of several platforms that can run JavaScript apps. In other words I’ve come to the conclusion that Microsoft is willingly giving up on any sort of technology lock-in or differentiation of the Windows client in terms of business app development. They want us to write cross-platform JavaScript apps, and they simply hope that businesses and end users will choose Windows for other reasons than because the apps only run on Windows.

Perhaps a business would settle on iOS or Android as the “one client platform”, but that poses serious challenges given that virtually all businesses have massive legacies of Windows apps. The only realistic way to switch clients to iOS or Android is to run all those Windows apps on Citrix servers (or equivalent), and to ensure that the client devices have keyboards and mice so users can actually interact with the legacy Windows apps for the next several years/decades. Android probably has a leg up here because most Android devices have USB ports for keyboards/mice, but really neither iOS nor Android have the peripheral or multi-monitor support necessary to truly replace legacy Windows (Win32/.NET).

This leaves us with the idea that businesses won’t choose one platform in the traditional sense, but rather will choose a more abstract runtime: namely JavaScript running in a browser DOM (real or simulated). Today this is pretty hard because of differences between browsers and between browsers on different platforms. JavaScript libraries such as jquery, angular, and many others seek to abstract away those differences, but there’s no doubt that building a JavaScript client app costs more today than building the same app in .NET or some other more mature/consistent technology.

At the same time, only JavaScript really offers any hope of building a client app codebase that can run on iOS, Android, and Windows tablets, ultrabooks, laptops, and PCs. So though it may be more expensive than just writing a .NET app for Windows, JavaScript might be cheaper than rewriting the app 3+ times for iOS, Android, and Windows. And there’s always hope that JavaScript (or its offspring like CoffeeScript or TypeScript) will rapidly mature enough to make this “platform” more cost-effective.

I look at JavaScript today much like Visual Basic 3 in the early 1990s (or C in the late 1980s). It is typeless and primitive compared to modern C#/VB or Java. To overcome this it relies on tons of external components (VB had its component model, JavaScript has myriad open source libraries). These third party components change rapidly and with little or no cross-coordination, meaning that you are lucky if you have a stable development target for a few weeks (as opposed to .NET or Java where you could have a stable target for months or years). As a result a lot of the development practices we’ve learned and mastered over the past 20 years are no longer relevant, and new practices must be devised, refined, and taught.
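As a rough illustration of that gap, here is a hedged sketch (the function name `extendOrder` is invented for this example, not from any real API) of the kind of compile-time safety TypeScript layers over plain JavaScript’s dynamic typing:

```typescript
// extendOrder is a hypothetical example function, not from any real API.
// In plain JavaScript it would accept any arguments and silently coerce or
// fail at runtime; TypeScript's annotations catch the mistake at compile time.
function extendOrder(quantity: number, unitPrice: number): number {
  return quantity * unitPrice;
}

// extendOrder("5", 10); // TypeScript: compile-time error
//                       // plain JavaScript: silent string-to-number coercion

const lineTotal: number = extendOrder(5, 10);
console.log(lineTotal); // 50
```

The point isn’t the trivial arithmetic; it’s that the compiler, rather than a user in production, finds the type mismatch, which is the maturity gap the VB 3 comparison points at.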

Also we must recognize that JavaScript apps never go into a pure maintenance mode. Browsers and underlying operating systems, along with the numerous open source libraries you must use, are constantly versioning and changing, so you can never stop updating your app codebase to accommodate this changing landscape. If you do stop, you’ll end up where so many businesses are today: trapped on IE6 and Windows XP because nothing they wrote for IE6 can run on any modern browser. We know that is a doomed strategy, so we therefore know that JavaScript apps will require continual rewrites to keep them functional over time.

What I’m getting at here is that businesses have an extremely ugly choice on the client:

  1. Rewrite and maintain every app 3+ times to be native on Windows, iOS, and Android
  2. Absorb the up-front and ongoing cost of building and maintaining apps in cross-platform JavaScript
  3. Select one platform (almost certainly Windows) on which to write all client apps, and require users to use that platform

I think I’ve listed those in order from most to least expensive, though numbers 1 and 2 could be reversed in some cases. I think in all cases it is far cheaper for businesses to do what Delta recently did and just issue Windows devices to their employees, thus allowing them to write, maintain, and support apps on a single, predictable platform.

The thing is that businesses are run by humans, and humans are often highly irrational. People are foolishly enamored of BYOD (bring your own device), which might feel good, but is ultimately expensive and highly problematic. And executives are often the drivers for alternate platforms because they like their cool new gadgets, oblivious to the reality that supporting their latest tech fad (iPad, Android, whatever) might cost the business many thousands (often easily hundreds of thousands) of dollars each year in software development, maintenance, and support costs.

Of course I work for a software development consulting company. Like all such companies we effectively charge by the hour. So from my perspective I’d really prefer if everyone did decide to write all their apps 3+ times, or write them in cross-platform JavaScript. That’s just more work for us, even if objectively it is pretty damn stupid from the perspective of our customers’ software budgets.

Server Software

Servers are a bit simpler than client devices.

The primary technologies used today on servers are .NET and Java. Though as I pointed out at the start of this post, you shouldn’t discount the amount of COBOL, RPG, FORTRAN, and other legacy languages/tools/platforms that make our world function.

Although JavaScript has a nascent presence on the server via tools like node.js, I don’t think any responsible business decision maker is looking at moving away from existing server platform tools in the foreseeable future.

In other words the current 60/40 split (or 50/50, depending on whose numbers you believe) between .NET and Java on the server isn’t likely to change any time soon.

Personally I am loath to give up the idea of a common technology platform between client and server – something provided by VB in the 1990s and .NET over the past 13 years. So if we really do end up writing all our client software in JavaScript I’ll be a strong advocate for things like node.js on the server.

In the mid-1990s it was pretty common to write “middle tier” software in C++ and “client tier” software in PowerBuilder or VB. Having observed such projects and the attendant complexity of having a middle tier dev team who theoretically coordinated with the client dev team, I can say that this isn’t a desirable model. I can’t support the idea of a middle tier in .NET and a client tier in JavaScript, because I can’t see how team dynamics and inter-personal communication capabilities have changed enough (or at all) over the past 15 years such that we should expect any better outcome now than we got back then.

So from a server software perspective I think .NET and Java have a perfectly fine future, because the server-side JavaScript concept is even less mature than client-side JavaScript.

At the same time, I really hope that (if we move to JavaScript on the client) JavaScript matures rapidly on both client and server, eliminating the need for .NET/Java on the server as well as the client.


In the early 1990s I was a VB expert. In fact, I was one of the world’s leading VB champions through the 1990s. So if we are going to select JavaScript as the “one technology to rule them all” I guess I’m OK with going back to something like that world.

I’m not totally OK with it, because I rather enjoy modern C#/VB and .NET. And yes, I could easily ride out the rest of my career on .NET; there’s no doubt in my mind. But I have never in my career been a legacy platform developer, and I can’t imagine working in a stagnant and increasingly irrelevant technology, so I doubt I’ll make that choice – stable though it might be.

Fwiw, I do still think Microsoft has a chance to make Windows 8, WinRT, and .NET a viable business app development target into the future. But their time is running out, and as I said earlier they seem oblivious to the danger (or are perhaps embracing the loss of Windows as the primary app dev target on the client). I would like to see Microsoft wake up and get a clue, resulting in WinRT and .NET being a viable future for business app dev.

Failing that however, we all need to start putting increasing pressure on vendors (commercial and open source) to mature JavaScript, its related libraries and tools, and its offspring such as TypeScript – on both client and server. The status of JavaScript today is too primitive to replace .NET/Java, and if we’re to go down this road a lot of money and effort needs to be expended rapidly to create a stable, predictable, and productive enterprise-level JavaScript app dev platform.

Monday, October 7, 2013 9:24:20 PM (Central Standard Time, UTC-06:00)
 Thursday, August 29, 2013

As you can tell from my post volume, I’ve had a few weeks of enforced downtime during which time a lot of thoughts have been percolating in my mind just waiting to escape :)

There’s an ongoing discussion inside Magenic as to whether there is any meaningful difference between consumer apps and business apps.

This is kind of a big deal, because we generally build business apps for enterprises; that’s our bread and butter as a custom app dev consulting company. Many of us (myself included) look at most of the apps on phones and tablets as being “toy apps”, at least compared to the high levels of complexity around data entry, business rules, and data management/manipulation that you find in enterprise business applications.

For example, I have yet to see an order entry screen with a few hundred data entry fields implemented on an iPhone or even iPad. Not to say that such a thing might not exist, but if such a thing does exist it is a rarity. But in the world of business app dev such screens exist in nearly every application, and typically the user has 1-2 other monitors displaying information relevant to the data entry screen.

Don’t get me wrong, I’m not saying mobile devices don’t have a role in enterprise app dev, because I think they do. Their role probably isn’t to replace the multiple 30” monitors with keyboard/mouse being used by the employees doing the work. But they surely can support a lot of peripheral tasks such as manager approvals, executive reviews, business intelligence alerts, etc. In fact they can almost certainly fill those roles better than a bigger computer that exists only in a fixed location.

But still, the technologies and tools used to build a consumer app and a business app for a mobile device are the same. So you can surely imagine (with a little suspension of disbelief) how a complex manufacturing scheduling app could be adapted to run on an iPad. The user might have to go through 20 screens to get to all the fields, but there’s no technical reason this couldn’t be done.

So then is there any meaningful difference between consumer and business apps?

I think yes. And I think the difference is economics, not technology.

(maybe I’ve spent too many years working in IT building business apps and being told I’m a cost center – but bear with me)

If I’m writing a consumer app, that app is directly or indirectly making me money. It generates revenue, and of course creating it has a cost. For every type of device (iOS, Android, Win8, etc.) there’s a cost to build software, and potential revenue based on reaching the users of those devices. There’s also direct incentive to make each device experience “feel native” because you are trying to delight the users of each device, thus increasing your revenue. As a result consumer apps tend to be native (or they suck, like the Delta app), but the creators of the apps accept the cost of development because that’s the means through which they achieve increased revenue.

If I’m writing a business app (like something to manage my inventory or schedule my manufacturing workload) the cost to build software for each type of device continues to exist, but there’s zero increased revenue (well, zero revenue period). There’s no interest in delighting users, we just need them to be productive, and if they can’t be productive that just increases cost. So it is all cost, cost, cost. As a result, if I can figure out a way to use a common codebase, even if the result doesn’t “feel native” on any platform, I still win because my employees can be productive and I’ve radically reduced my costs vs writing and maintaining the app multiple times.

Technically I’ll use the same tools and technologies and skills regardless of consumer or business. But economically there’s a massive difference between delighting end users to increase revenue (direct or indirect), and minimizing software development/maintenance costs as much as possible while ensuring employees are productive.

From a tactical perspective, as a business developer it is virtually impossible to envision writing native apps unless you can mandate the use of only one type of device. Presumably that’s no longer possible in the new world of BYOD, so you’ve got to look at which technologies and tools allow you to build a common code base. The list is fairly short:

  • JavaScript (straight up or via various wrapper tools like PhoneGap)
  • Microsoft .NET + Xamarin

(yes, I know C++ might also make the list, but JavaScript sets our industry back at least 10 years, and C++ would set us back more than 20 years, so really????)

I’m assuming we’ll be writing lots of server-side code, and some reasonably interactive client code to support iPad, Android tablets, and Windows 8 tablets/ultrabooks/laptops/desktops. You might also bring in phones for even more narrow user scenarios, so iPhone, Android phones, and Windows Phone too.

Microsoft .NET gets you the entire spectrum of Windows from phone to tablet to ultrabook/laptop/desktop, as well as the server. So that’s pretty nice, but leaves out iPad/iPhone and Android. Except that you can use the Xamarin tools to build iOS and Android apps with .NET as well! So in reality you can build reusable C# code that spans all the relevant platforms and devices.

As an aside, CSLA .NET can help you build reusable code across .NET and Xamarin on Android. Sadly some of Apple’s legal limitations for iOS block some key C# features used by CSLA so it doesn’t work on the iPad or iPhone :(

The other option is JavaScript or related wrapper/abstraction technologies like PhoneGap, TypeScript, etc. In this case you’ll need some host application on your device to run the JavaScript code, probably a browser (though Win8 can host directly). And you’ll want to standardize on a host that is as common as possible across all devices, which probably means Chrome on the clients, and node.js on the servers. Your client-side code still might need some tweaking to deal with runtime variations across device types, but as long as you can stick with a single JavaScript host like Chrome you are probably in reasonably good shape.

Remember, we’re talking business apps here – businesses have standardized on Windows for 20 years, so the idea that a business might mandate the use of Chrome across all devices isn’t remotely far-fetched imo.

Sadly, as much as I truly love .NET and view it as the best software development platform mankind has yet invented, I strongly suspect that we’ll all end up programming in JavaScript – or some decent abstraction of it like TypeScript. As a result, I’m increasingly convinced that platforms like .NET, Java, and Objective C will be relegated to writing “toy” consumer apps, and/or to pure server-side legacy code alongside the old COBOL, RPG, and FORTRAN code that still runs an amazing number of companies in the world.

Thursday, August 29, 2013 3:20:39 PM (Central Standard Time, UTC-06:00)
 Wednesday, April 10, 2013

I was recently confronted by an odd bit of reality, about which I thought I’d vent a little.

I am the creator and owner of a widely used open source project (CSLA .NET) that has (in one form or another) been around since 1996. If you want an example of an open source project with longevity, CSLA .NET ranks right up there.

And CSLA is “true” open source, in that it uses a liberal and non-viral license, and there’s no commercial option (a lot of OSS projects use a viral license as a “poison pill” to drive any real use of the project to a paid commercial license that isn’t free as in beer or speech).

Recently at a conference I and some colleagues were presenting on some development techniques and patterns, and the code makes use of several open source projects – including CSLA.

Oddly this was a source of blow-back from some in the audience, who thought the talk was “overly commercial”.

Apparently talking about other people’s free open source products is fine, but if you talk about your own then that’s commercial?

And for that matter, how can it be commercial if you are talking about software that is free as in beer and speech? If anything, FOSS is anti-commercial by its very definition…

It would seem, by this ‘logic’, that the primary experts on any given OSS project should not talk about their project, but should instead talk about tools they might use, but don’t actually create or build.

In other words, people attending conferences should never get the best or most direct insight into any OSS product, because presentations by the people who create that product would be somehow “too commercial” if they talk about the stuff they’ve built.

So much for learning about Linux from Linux developers, or jquery from jquery developers, etc.

Obviously that’s all pretty dumb, and similarly I suspect the majority of the people at this conference (or any conference) don’t feel this way and do want the highest quality information they can get.

I think the real issue here is that some reasonable number of people just don’t understand open source.

A lot of people (especially in the Microsoft dev space) have never knowingly used open source – living entirely within the realm of products provided by Microsoft and component vendors.

(whether these people have actually used open source is another matter – and odds are they have unwittingly used things like jquery, ASP.NET, Entity Framework, MVVM Light, Subversion, git, etc.)

I only bother blogging about this because over the past couple years it has become virtually impossible to create any modern app without the use of some open source products.

I can’t imagine anyone building a modern web site or page without some OSS products. Much less a web app that’ll be based almost entirely on OSS products.

Similarly, it is hard to imagine building a XAML app without the use of at least one OSS MVVM framework.

And the same is true with unit testing, mocking, etc.

In short, the best tools available today are probably open source, and if you aren’t using them then you are depriving yourself and your employer of the best options out there.

What this means imo, is that people who think talking about your own open source product is “too commercial” had better grow up and get a clue pretty fast, or they’ll be finding everything to be too commercial…

Wednesday, April 10, 2013 12:04:39 PM (Central Standard Time, UTC-06:00)
 Thursday, April 4, 2013

In 1991 I was a DEC VAX programmer. A very happy one, because OpenVMS was a wonderful operating system and I knew it inside and out. The only problem was that the “clients” were dumb VT terminals, and it was terribly easy to become resource constrained by having all the CPU, memory, and IO processing in a central location.

This was the time when Windows NT came along (this new OS created by the same person who created OpenVMS btw) and demonstrated that PCs were more than toys – that they could be considered real computers. (Yes, this is my editorial view.)

From that time forward my personal interest in software has been expressed entirely via distributed computing. Sure, I’ve built some server-only and some client-only software over the past 22 years, but that was just to pay the bills. What is interesting is building software systems where various parts of the system run on different computers, working together toward a common goal. That’s fun!!

This type of software system implies a smart client. I suppose a smart client isn’t strictly necessary, but it is surely ideal. Because most software interacts with humans at some point, a smart client is ideal because that’s how you provide the human with the best possible experience. Obviously you can use a VT terminal or a basic HTML browser experience to provide this human interface, but that’s not nearly as rich or powerful as a smart client experience.

For the past couple decades the default and dominant smart client experience has been expressed through Windows. With smart client software written primarily in VB, PowerBuilder, and .NET (C#/VB).

The thing is, I am entirely convinced that our industry is at a major inflection point. At least as big as the one in the mid-1990’s when we shifted from mainframe/minicomputer to PC, and when n-tier became viable, and when the infant web became mainstream.

Prior to the mid-1990’s computing was chaos. There were many types of mainframe and minicomputer, and none were compatible with each other. Even the various flavors of Unix weren’t really compatible with each other… Software developers tended to specialize in a platform, and it took non-trivial effort to retool to another platform – at least if you wanted to be really good.

Over the past couple decades we’ve been spoiled by having essentially one platform: Windows. Regardless of whether you use VB, PowerBuilder, .NET, or Java, almost everyone knows Windows and how to build software for this one nearly-universal platform.

Now we’ve chosen to return to chaos. Windows is still sort of dominant, but it shares a lot of mindshare with iOS, OS X, Android, and ChromeOS. We appear to be heading rapidly into an environment more like the late 1980’s. An environment composed of numerous incompatible platforms, all of which are viable, none of which are truly dominant.

There is one difference this time though: JavaScript.

Please don’t get me wrong, I’m not a rah-rah supporter of js. I am extremely skeptical of cross-platform technologies, having seen countless such technologies crash and burn over the past 25+ years.

But there’s an economic factor that comes to bear as well as technical factors. Unlike in the late 1980’s when computing was important but not critical to the world at large, today computing is critical to the world at large. Individuals and organizations can not function at the levels of productivity we’re used to without computing and automation.

In other words, in the late 1980’s we could afford to be highly inefficient. We could afford to have a fragmented developer space, split by myriad incompatible platforms.

Today the cost of such inefficiency is much higher. I suspect it is too high to be acceptable. Think about your organization. Can you afford to build every client app 3-6 times? Will your board of directors or company owner accept that kind of increase in the IT budget?

Or can you afford to pick just one platform (iOS, Windows, whatever) and tell all your employees and partners that you only work with that one type of device/OS?

(btw, that’s what we’ve done for the past 20 years – we just tell everyone it is Windows or nothing, and that’s been working well – so such a move isn’t out of the question)

Let’s assume your organization can’t afford a massive increase in its IT budget to hire the dev/support staff to build and maintain every app 3-6 times. And let’s assume we collectively decide to embrace every random device/OS that comes along (BYOD). What then?

I suggest there are two options:

  1. Return to a terminal-based model where the client device is as dumb as possible, probably using basic HTML – to which I say BORING! (and awful!)
  2. Find a common technology for building smart client apps that works reasonably well on most device/OS combinations

Personally I am entirely uninterested in option 1. As I said, I spent the early part of my career in the minicomputer-terminal world, so I’ve been there and done that. Boring.

The only technology that looks even remotely capable of supporting option 2 is JavaScript.

Is that a silver bullet? Is it painless? Clearly not. Even if you ignore all the bad bits of js and stick with JavaScript: The Good Parts it is still pretty messy.
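To make the “messy” claim concrete, here is a small sketch of the loose-equality coercion behavior that The Good Parts warns against. The variables are typed as `any` so TypeScript behaves exactly like plain JavaScript here:

```typescript
// A few of the "bad parts": loose equality (==) coerces its operands in
// surprising ways. Typing the values as `any` mimics plain JavaScript.
const zero: any = 0;
const emptyString: any = "";
const stringZero: any = "0";

console.log(zero == emptyString);        // true  (both coerce to the number 0)
console.log(zero == stringZero);         // true  ("0" coerces to the number 0)
console.log(emptyString == stringZero);  // false (string-to-string comparison)

// The "good parts" fix: strict equality, which never coerces.
console.log(zero === emptyString);       // false
```

Note that `==` isn’t even transitive: `0 == ""` and `0 == "0"` are both true, yet `"" == "0"` is false. Sticking with `===` is exactly the kind of discipline the book prescribes.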

However, VB 1, 2, and 3 were also pretty immature and messy. But a lot of us stuck with those products, pushing for (and getting) improvements that ultimately made VB6 the most popular development tool of its time – for apps small and big (enterprise).

So sure, js is (by modern standards) immature and messy. But if the broader business development community sees it as the only viable technology going forward that’ll bring a lot of attention and money into the js world. That attention and money will drive maturity, probably quite rapidly.

OK, so probably not maturity of js itself, because it is controlled by a standards body and so it changes at a glacial pace.

But maturity of tooling (look at the amazing stuff Microsoft is doing in VS for js tooling, as well as TypeScript), and libraries, and standards. That’ll be the avenue by which js becomes viable for smart client development.

If it isn’t clear btw, I am increasingly convinced that (like it or not) this is where we’re headed. We have a few more years of Windows as the dominant enterprise client OS, but given what Microsoft is doing with Windows 8 it is really hard to see how Windows will remain dominant in that space. Not that any one single device/OS will displace Windows – but rather that chaos will displace Windows.

As a result, barring someone inventing a better cross-platform technology (and I’d love to see that happen!!), js is about to get hit by a lot of professional business developers who’ll demand more maturity, stability, and productivity than exists today.

This is going to be a fun ride!

Thursday, April 4, 2013 8:44:57 AM (Central Standard Time, UTC-06:00)
 Monday, May 7, 2012

There are three fairly popular presentation layer design patterns that I collectively call the “M” patterns: MVC, MVP, and MVVM. This is because they all have an “M” standing for “Model”, plus some other constructs.

The thing with all of these “M” patterns is that for typical developers the patterns are useless without a framework. Using the patterns without a framework almost always leads to confusion, complication, high costs, frustration, and ultimately despair.

These are just patterns after all, not implementations. And they are big, complex patterns that include quite a few concepts that must work together correctly to enable success.

You can’t sew a fancy dress just because you have a pattern. You need appropriate tools, knowledge, and experience. The same is true with these complex “M” patterns.

And if you want to repeat the process of sewing a fancy dress over and over again (efficiently), you need specialized tooling for this purpose. In software terms this is a framework.

Trying to do something like MVVM without a framework is a huge amount of work. Tons of duplicate code, reinventing the wheel, and retraining people to think differently.

At least with a framework you avoid the duplicate code and hopefully don’t have to reinvent the wheel – allowing you to focus on retraining people. The retraining part is generally unavoidable, but a framework provides plumbing code and structure, making the process easier.

You might ask yourself why the MVC pattern only became popular in ASP.NET a few short years ago. The pattern has existed since (at least) the mid-1990’s, and yet few people used it, and even fewer used it successfully. This includes people on other platforms too, at least up to the point that those platforms included well-implemented MVC frameworks.

Strangely, MVC only started to become mainstream in the Microsoft world when ASP.NET MVC showed up. This is a comprehensive framework with tooling integrated into Visual Studio. As a result, typical developers can just build models, views, and controllers. Prior to that point they also had to build everything the MVC framework does – which is a lot of code. And not just a lot of code, but code that has absolutely nothing to do with business value, and only relates to implementation of the pattern itself.

We’re in the same situation today with MVVM in WPF, Silverlight, Windows Phone, and Windows Runtime (WinRT in Windows 8). If you want to do MVVM without a framework, you will have to build everything a framework would do – which is a lot of code that provides absolutely no direct business value.

Typical developers really do want to focus on building models, views, and viewmodels. They don’t want to have to build weak reference based event routers, navigation models, view abstractions, and all the other things a framework must do. In fact, most developers probably can’t build those things, because they aren’t platform/framework wonks. It takes a special kind of passion (or craziness) to learn the deep, highly specialized techniques and tricks necessary to build a framework like this.
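To make that concrete, here is a hypothetical sketch (not taken from any particular framework) of one such piece of plumbing: a message router that holds its subscribers through WeakReference, so viewmodels can communicate without keeping each other alive in memory. A real framework's version also handles threading, typed messages, and much more – which is exactly the point.

```csharp
using System;
using System.Collections.Generic;

// Sketch of a weak reference based event router - the kind of plumbing
// an MVVM framework provides so viewmodel authors don't have to.
public class WeakMessenger
{
    private readonly List<WeakReference> _subscribers = new List<WeakReference>();

    public void Subscribe(Action<string> handler)
    {
        // Hold the handler weakly so subscribing doesn't pin the
        // subscriber (typically a viewmodel) in memory.
        _subscribers.Add(new WeakReference(handler));
    }

    public void Publish(string message)
    {
        // Walk backwards so dead references can be pruned in place.
        for (int i = _subscribers.Count - 1; i >= 0; i--)
        {
            var handler = _subscribers[i].Target as Action<string>;
            if (handler == null)
                _subscribers.RemoveAt(i);   // subscriber was collected
            else
                handler(message);
        }
    }
}
```

Even this toy version has subtle lifetime issues a framework author must think through; multiply that by navigation, view location, and command routing and you get a sense of the specialized work involved.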

What I really wish would happen, is for Microsoft to build an MVVM framework comparable to ASP.NET MVC. Embed it into the .NET/XAML support for WinRT/Metro, and include tooling in VS so we can right-click and add views and viewmodels. Ideally this would be an open, iterative process like ASP.NET MVC has been – so after a few years the framework reflects the smartest thoughts from Microsoft and from the community at large.

In the meantime, Caliburn Micro appears to be the best MVVM framework out there – certainly the most widely used. Probably followed by various implementations using PRISM, and then MVVM Light, and some others.

Monday, May 7, 2012 2:01:55 PM (Central Standard Time, UTC-06:00)
 Thursday, April 26, 2012

I am sometimes asked for technical career advice. A common question these days is whether it is worth learning WPF, or Silverlight – .NET and XAML in general I suppose, or would it be better to learn HTML 5 and JavaScript, or perhaps even Objective C?

This is a challenging question to be sure. How good is your crystal ball?

XAML appears to be alive and well – WPF, Silverlight, and now WinRT (Windows 8 – and probably Windows Phone 8 and “Xbox 720” and more) all use XAML.

I look at the WinRT usage of XAML as being essentially “Silverlight 6” – it is far closer to Silverlight than WPF, but isn’t exactly like Silverlight either. Assuming success with Windows 8, WinRT will become the new primary client dev target for most smart client development (over the next few years).

The primary competitors are Objective C (if you believe iPads will take over the client space), and HTML 5/JavaScript (if you believe in fairy tales – that is, the concept of ‘one technology to rule them all’).

This is where the crystal ball comes into play.

Do you think Apple will displace Microsoft – iPads will replace the use of Windows – as the monopoly client OS?

Do you think the concept of ‘natural monopoly’ that has caused the Windows hegemony over the past 20 years is at an end – that some fundamental economic shift has occurred so companies are now willing to increase their IT budgets as a % of revenue to accommodate multiple client platforms (unlike the past 20 years)? In which case business app developers should expect to support at least iPad and Windows, if not Android, into the future?

Do you think that Windows 8 and WinRT will be strong enough to withstand the iPad onslaught, and that the natural monopoly economic effect remains in place, so Windows will remain the dominant client platform for business apps into the foreseeable future?

These are really the three options, resulting in:

  1. Objective C slowly overtakes .NET and we ultimately are Apple devs instead of Microsoft devs
  2. H5/js rules the world as the ‘one technology to rule them all’. Vendors like Microsoft and Apple become entirely irrelevant because we live in a purely open-source world where nobody makes money off platform technologies, so the only hardware/OS left is probably something like Android running Chrome, because it is a 100% commodity play at that level
  3. .NET and XAML remain entirely valid, and life generally continues like it is today, with a mix of .NET smart client work and primarily server-based web work with h5/js primarily used to boost the user experience, but not often used to write standalone smart client apps

My crystal ball leans toward option 3 – I don’t think economic realities change much or often, and I struggle to see where IT departments will come up with the increased budget (% of revenue) necessary to build apps for both iPads and Windows over the long term. It will be measurably cheaper (by many, many, many thousands of dollars) for companies to buy employees Win8 tablets rather than building and maintaining both iOS and Windows versions of every business app.

And I don’t believe in the ‘one technology to rule them all’ idea. That hasn’t happened in the entire history of computing, and it is hard to imagine everyone on the planet embracing one monoculture for software development. Especially when it would be counter to the interests of every platform vendor out there (Microsoft, Apple, Google, Oracle, and even IBM).

Still with me?

To summarize, I think learning XAML is time well spent. Today that’s WPF or Silverlight. There is absolutely no doubt that Silverlight is closer to WinRT than WPF, and people building SL apps today will have an easier time migrating them to WinRT later, whereas most WPF apps will be a pretty big rewrite.

But there’s nothing wrong with focusing yourself on h5/js. If you do so, I suggest doing it in a way that ignores or minimizes all server-side coding. If h5/js does take over the world, it will be used to create pure smart client apps, and if there’s a “web server” involved at all, it will exist purely as a deployment server for the client app. The ‘pure’ h5/js/jquery/etc. world isn’t linked to any vendor – not Microsoft, Apple, or anyone. To me this represents a pretty major career shift, because to truly embrace h5/js as a complete software development platform is so demanding (imo) it won’t leave time to retain .NET or other vendor-specific technology expertise.

For my part, I’m not yet ready to abandon Microsoft for h5/js, because I think Windows 8, WinRT, .NET, and XAML have a rather bright future. A year from now I think a lot of people will be happily using Windows 8 desktops, laptops, and tablets – and hopefully a lot of Windows Phones, and with luck we’ll be looking forward to some cool new Xbox. I live in (I think realistic) hope that my .NET/XAML skills will apply to all these platforms.

What does your crystal ball say?

Thursday, April 26, 2012 9:32:58 AM (Central Standard Time, UTC-06:00)
 Monday, October 3, 2011

People often ask “Why should I use MVVM? It seems like it complicates things.”

The most common reason hard-core developers put forward for MVVM is that “it enables unit testing of presentation code”.

Although that can be true, I don’t think that’s the primary reason people should invest in MVVM.

I think the primary reason is that it protects your presentation code against some level of change to the XAML. It makes your code more maintainable, and will help it last longer.

For example, build a WPF app with code-behind. Now try to move it to Silverlight without changing any code (only XAML). Pretty hard huh?

Or, build a Silverlight app with code-behind. Now have a user experience designer rework your XAML to look beautiful. Your app won’t build anymore? They changed numerous XAML types and there are now compiler errors? Oops…

Looking forward, try taking any WPF or Silverlight app that has code-behind and moving it to WinRT (Windows 8) without changing any code (only XAML – and the XAML will need to change). Turns out to be nearly impossible, doesn’t it?

And yet, I have lots of CSLA .NET application code that uses MVVM to keep the presentation code cleanly separated from the XAML. Examples where the exact same code runs behind WPF and Silverlight XAML. I’m not quite yet to the point of having CSLA working on WinRT, but I fully expect the exact same code to run on Windows 8, just with a third set of XAML.

To me, that is the power and value of MVVM. Your goal should be no code-behind, viewmodel code that doesn’t rely on specific XAML types, and an abstract set of viewmodel methods that can be bound to arbitrary UI events.
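As a rough illustration of that goal, here is a hypothetical viewmodel that depends only on System.ComponentModel – no Button, no RoutedEventArgs, no XAML types at all – so the same class can sit behind WPF, Silverlight, or WinRT XAML through data binding:

```csharp
using System.ComponentModel;

// A viewmodel with no references to XAML types. The UI binds to the
// properties, and arbitrary UI events can be routed to the Save method.
public class PersonViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _name;
    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;
            _name = value;
            OnPropertyChanged("Name");
            OnPropertyChanged("CanSave");
        }
    }

    // Abstract state the view can bind to - no compiler dependency on
    // whatever control the designer chooses to represent it.
    public bool CanSave
    {
        get { return !string.IsNullOrEmpty(_name); }
    }

    // An abstract "verb" that can be bound to any UI event.
    public void Save()
    {
        // persistence would happen here
    }

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

A designer can rework the XAML, or you can move to a different flavor of XAML, and this class still compiles and runs unchanged.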

Yes, MVVM is an investment. But it will almost certainly pay for itself over time, as you maintain your app, and move it from one flavor of XAML to another.

Does MVVM mean you can completely avoid changing code when the XAML changes? No. But it is a whole lot closer than any other technique I’ve seen in my 24+ year career.

Monday, October 3, 2011 9:59:08 AM (Central Standard Time, UTC-06:00)
 Friday, September 16, 2011

Microsoft revealed quite a lot of detail about "Windows 8" and its programming model at the //build/ conference in September 2011. The result is a lot of excitement, and a lot of fear and worry, on the part of Microsoft developers and customers.

From what I've seen so far, reading tweets and other online discussions, people's fear and worry are misplaced. Not necessarily unwarranted, but misplaced.

There's a lot of worry that ".NET has no future" or "Silverlight has no future". These worries are, in my view, misplaced.

First, it is important to understand that the new WinRT (Windows Runtime) model that supports Win8 Metro style apps is accessible from .NET. Yes, you can also use C++, but I can't imagine a whole lot of people care. And you can use JavaScript, which is pretty cool.

But the important thing to understand is that WinRT is fully accessible from .NET. The model is quite similar to Silverlight for Windows Phone. You write a program using C# or VB, and that program runs within the CLR, and has access to a set of base class libraries (BCL) just like a .NET, Silverlight, or WP7 app today. Your program also has access to a large new namespace containing all the WinRT types.

These WinRT types are the same ones used by C++ or JavaScript developers in WinRT. I think this is very cool, because it means that (for perhaps the first time ever) we'll be able to create truly first-class Windows applications in C#, without having to resort to C++ or p/invoke calls.

The BCL available to your Metro/WinRT app is restricted to things that are "safe" for a Metro app to use, and the BCL features don't duplicate what's provided by the WinRT objects. This means that some of your existing .NET code won't just compile in WinRT, because you might be using some .NET BCL features that are now found in WinRT, or that aren't deemed "safe" for a Metro app.

That is exactly like Silverlight and WP7 apps. The BCL features available in Silverlight or WP7 are also restricted to disallow things that aren't safe, or that make no sense in those environments.

In fact, from what I've seen so far, it looks like the WinRT BCL features are more comparable to Silverlight than anything else. So I strongly suspect that Silverlight apps will migrate to WinRT far more easily than any other type of app.

None of this gives me any real worry or concern. Yes, if you are a Windows Forms developer, and very possibly if you are a WPF developer, you'll have some real effort to migrate to WinRT, but it isn't like you have to learn everything new from scratch like we did moving from VB/COM to .NET. And if you are a Silverlight developer you'll probably have a pretty easy time, but there'll still be some real effort to migrate to WinRT.

If nothing else, we all need to go learn the WinRT API, which Microsoft said was around 1800 types.

So what should you worry about? In my view, the big thing about Win8 and Metro style apps is that these apps have a different lifetime and a different user experience model. The last time we underwent such a dramatic change in the way Windows apps worked was when we moved from Windows 3.1 (or Windows for Workgroups) to Windows 95.

To bring this home, let me share a story. When .NET was first coming out I was quite excited, and I was putting a lot of time into learning .NET. As a developer my world was turned upside down and I had to learn a whole new platform and tools and language - awesome!! :)

I was having a conversation with my mother, and she could tell I was having fun. She asked "so when will I see some of this new .NET on my computer?"

How do you answer that? Windows Forms, as different as it was from VB6, created apps that looked exactly the same. My mother saw exactly zero difference as a result of our massive move from VB/COM to .NET.

Kind of sad when you think about it. We learned a whole new programming platform so we could build apps that users couldn't distinguish from what we'd been doing before.

Windows 8 and Metro are the inverse. We don't really need to learn any new major platform or tools or languages. From a developer perspective this is exciting, but evolutionary. But from a user perspective everything is changing. When I next talk to my mother about how excited I am, I can tell her (actually I can show her thanks to the Samsung tablet - thank you Microsoft!) that she'll see new applications that are easier to learn, understand, and use.

This is wonderful!!

But from our perspective as developers, we are going to have to rethink and relearn how apps are designed at the user experience and user workflow level. And we are going to have to learn how to live within the new application lifecycle model where apps can suspend and then either resume or be silently terminated.

Instead of spending a lot of time angsting over whether the WinRT CLR or BCL is exactly like .NET/Silverlight/WP7, we should be angsting over the major impact of the application lifecycle and Metro style UX and Metro style navigation within each application.

OK, I don't honestly think we should have angst over that either. I think this is exciting, and challenging. If I wanted to live in a stable (stagnant?) world where I didn't need to think through such things, well, I think I'd be an accountant or something…

Yes, this will take some effort and some deep thinking. And it will absolutely impact how we build software over the next many years.

And this brings me to the question of timing. When should we care about Metro and WinRT? Here's a potential timeline, that I suspect is quite realistic based on watching Windows releases since 1990.

Win8 will probably RTM in time for hardware vendors to create, package, and deliver all sorts of machines for the 2012 holiday season. So probably somewhere between July and October 2012.

For consumer apps this means you might care about Win8 now, because you might want to make sure your cool app is in the Win8 online store for the 2012 holiday season.

For business apps the timing is quite different. Corporations roll out a new OS much later than consumers get it through retailers. As an example, Windows 7 has now been out for about two years, but most corporations still use Windows XP!!! I have no hard numbers, but I suspect Win7 is deployed in maybe 25% of corporations - after being available for two years.

That is pretty typical.

So for business apps, we can look at doing a reasonable amount of Win8 Metro development around 2015.

Yes, some of us will be lucky enough to work for "type A" companies that jump on new things as they come out, and we'll get to build Metro apps starting in late 2012.

Most of us work for "type B" companies, and they'll roll out a new OS after SP1 has been deployed by the "type A" companies - these are the companies that will deploy Win8 after it has been out for 2-4 years.

Some unfortunate souls work for "type C" companies, and they'll roll out Win8 when Win7 loses support (so around 2018?). I used to work for a "type C" company, and that's a hard place to find yourself as a developer. Yet those companies do exist even today.

What does this all mean? It means that for a typical corporate or business developer, we have around 4 years from today before we're building WinRT apps.

The logical question to ask then (and you really should ask this question), is what do we do for the next 4 years??? How do we build software between now and when we get to use Metro/WinRT?

Obviously the concern is that if you build an app starting today, how do you protect that investment so you don't have to completely rewrite the app in 4 years?

I don't yet know the solid answer. We just don't have enough deep information yet. That'll change though, because we now have access to early Win8 builds and early tooling.

What I suspect is that the best way to mitigate risk will be to build apps today using Silverlight and the Silverlight navigation model (because that's also the model used in WinRT).

The BCL features available to a Silverlight app are closer to WinRT than full .NET is today, so the odds of using BCL features that won't be available to a Metro app are reduced.

Also, thinking through the user experience and user workflow from a Silverlight navigation perspective will get your overall application experience closer to what you'd do in a Metro style app - at least when compared to any workflow you'd have in Windows Forms. Certainly you can use WPF and also create a Silverlight-style navigation model, and that'd also be good.

Clearly any app that uses multiple windows or modal dialogs (or really any dialogs) will not migrate to Metro without some major rework.

The one remaining concern is the new run/suspend/resume/terminate application model. Even Silverlight doesn't use that model today - except on WP7. I think some thought needs to go into application design today to enable support for suspend in the future. I don't have a great answer right at the moment, but I know that I'll be thinking about it, because this is important to easing migrations in the future.

It is true that whatever XAML you use today won't move to WinRT unchanged. Well, I can't say that with certainty, but the reality is that WinRT exposes several powerful UI controls we don't have today. And any Metro style app will need to use those WinRT controls to fit seamlessly into the Win8 world.

My guess is that some of the third-party component vendors are well on their way to replicating the WinRT controls for Silverlight and WPF today. I surely hope so anyway. And that's probably going to be the best way to minimize the XAML migration. If we have access to controls today that are very similar to the WinRT controls of the future, then we can more easily streamline the eventual migration.

In summary, Windows 8, WinRT, and Metro are a big deal. But not in the way most people seem to think. The .NET/C#/CLR/BCL story is evolutionary and just isn't that big a deal. It is the user experience and application lifecycle story that will require the most thought and effort as we build software over the next several years.

Personally I'm thrilled! These are good challenges, and I very much look forward to building .NET applications that deeply integrate with Windows 8. Applications that I can point to and proudly say "I built that".

Friday, September 16, 2011 8:27:24 PM (Central Standard Time, UTC-06:00)
 Wednesday, January 12, 2011

Anyone who expected HTML5 to be more standard or consistent than previous HTML standards hasn’t been paying attention to the web over the past 13+ years.

Google’s decision to drop H.264 may be surprising as a specific item, but the idea of supporting and not supporting various parts of HTML5 (or what vendors hope might be in HTML5 as it becomes a standard over the next several years) is something everyone should expect.

I have held out exactly zero hope for a consistent HTML5 implementation across the various browser vendors. Why? It is simple economics.

If IE, Chrome, FF and other browsers on Windows were completely standards compliant in all respects there would be no reason for all those browsers. The only browser that would exist is IE, because it ships with Windows.

The same is true on the Mac and any other platform. If the browsers implement exactly the same behaviors, they can’t differentiate, so nobody would use anything except the browser shipping with the OS.

But the browser vendors have their own agendas and goals, which have nothing to do with standards compliance or consistency. Every browser vendor is motivated by economics – whether that’s mindshare for other products or web properties, data mining, advertising, or other factors. These vendors want you to use their browser more than other browsers.

So in a “standards-based world” how do you convince people to use your product when it is (in theory) identical to every other product?

When products have a price you could try to compete on that price. That’s a losing game though, because when you undercut the average price you have less money to innovate or even keep up. So even in the 1990’s with Unix standards vendors didn’t play the pricing game – it is just a slow death.

But browsers are free, so even if you wanted the slow death of the price-war strategy you can’t play it in the browser world. So you are left with “embrace and extend” or “deviate from the standard” as your options. And every browser vendor must play this game or just give up and die.

This happened to Unix in the 1990’s too – the various vendors didn’t want to play on price, so they added features beyond any standard with the intent of doing two things:

  1. Lure users to their version of Unix because it is “better”
  2. Lock them into that version of Unix, because as soon as they use the vendor-specific features they are stuck on that version

The same is true with browsers and HTML5. All browser vendors are jockeying now (before the standard is set) to differentiate and shape the standard. But even once there is a “standard” they’ll continue to support the various features they’ve added that didn’t make it into the standard.

This is necessary, because it is the only way any given browser can lure users away from the other browsers, thereby meeting the vendors’ long term goals.

People often say they don’t want a homogenous computing world. That they want a lot of variation and variety in the development platforms used across the industry. Over the past 13+ years HTML has proven that variation is absolutely possible, and that it is very expensive and frustrating.

Working for consulting companies during all this time, I can say that HTML is a great thing. Consultants charge by the hour, and any scenario where the same app must be rebuilt, or at least tweaked, for every browser and every new version of every browser is just a way to generate more revenue.

Is HTML a drag on the world economy overall? Sure it is. Anything that automatically increases the cost of software development like HTML is an economic drag by definition. The inefficiency is built-in.

The only place (today) with more inefficiency is the mobile space, where every mobile platform has a unique development environment, tools, languages and technologies. I love the mobile space – any app you want to build must be created for 2-5 different platforms, all paid for at an hourly rate. (this is sarcasm btw – I really dislike this sort of blatant inefficiency and waste)

But I digress. Ultimately what I’m saying is that expecting HTML5 to provide more consistency than previous versions of HTML is unrealistic, and moves like Google just took are something we should all expect to happen on a continuing basis.

Wednesday, January 12, 2011 11:49:44 AM (Central Standard Time, UTC-06:00)
 Tuesday, January 11, 2011

One of the nicest things about TFS is the way it can be customized and extended to fit different types of process.

A colleague of mine, Mario Cardinal, works for a company that has built an Agile/SCRUM extension called Urban Turtle.

This really highlights how TFS can be used as the basis for a deep solution around a specific process, methodology or project philosophy. Cool stuff – and if you do SCRUM you should probably take a look at Urban Turtle to see if it can help you out.

Tuesday, January 11, 2011 12:41:56 PM (Central Standard Time, UTC-06:00)
 Monday, November 29, 2010

One week from today, on December 6, at the Microsoft office in Bloomington, MN you can attend a free two-track .NET developer training event: Code Mastery.

This event includes content from the Microsoft PDC 2010 event, plus custom content covering topics such as:

  • Windows Phone 7 (WP7) development
  • How to really use the MVVM design pattern in WPF
  • SQL Azure
  • Combining Scrum and TFS 2010
  • Best practices for development in .NET
  • and more!!

If that isn’t enough, there’s a raffle at the end of the day, with great prizes (including an MSDN Universal subscription), and our special guest Carl Franklin from .NET Rocks! will be in attendance to spice up the event.

Register now to reserve your seat!

Monday, November 29, 2010 10:21:14 PM (Central Standard Time, UTC-06:00)
 Monday, November 8, 2010

Listen to an interview where I talk about CSLA 4, UnitDriven and a lot of things related to Silverlight, WPF and Windows Phone (WP7) development.

Pluralcast 28: Talking Business and Objectification with Rocky Lhotka

This was recorded in October at the Patterns and Practices Symposium in Redmond.

Monday, November 8, 2010 10:56:59 AM (Central Standard Time, UTC-06:00)
 Wednesday, October 27, 2010

In the past week I’ve had a couple people mention that CSLA .NET is ‘heavyweight’. Both times it was in regard to the fact that CSLA 4 now works on WP7.

I think the train of thought is that CSLA 4 supports .NET and Silverlight, but that phones are so … small. How can this translate?

But I’ve had people suggest that CSLA is too heavyweight for small to medium app development on Windows too.

As you can probably guess, I don’t usually think about CSLA as being ‘heavyweight’, and I feel comfortable using it for small, medium and large apps, and even on the phone. However, the question bears some thought – hence this blog post.

I think there are perhaps three things to consider:

  1. Assembly size
  2. Runtime footprint
  3. Conceptual surface area

CSLA is a small framework in terms of assembly size – weighing in at around 300k (slightly more for .NET, less for SL and WP7). This is smaller than many UI component libraries or other frameworks in general, so I feel pretty good about the lightweight nature of the actual assemblies.

The runtime footprint is more meaningful though, especially if we’re talking about the phone. It is a little hard to analyze this, because it varies a lot depending on what parts of CSLA you use.

The most resource-intensive feature is the ability to undo changes to an object graph, because that triggers a snapshot of the object graph – obviously consuming memory. Fortunately this feature is entirely optional, and if you don’t use it, it consumes no resources. And on the phone it is not clear you’d even implement the type of Cancel button this feature is designed to support.
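For illustration, here is a highly simplified sketch of the snapshot idea (CSLA's actual n-level undo implementation is far more sophisticated, handling entire object graphs and nesting): state is only copied while an edit session is open, which is why the feature costs nothing if you never use it.

```csharp
using System.Collections.Generic;

// Simplified snapshot-based undo: BeginEdit copies state, CancelEdit
// restores it, ApplyEdit discards the snapshot. Memory is consumed
// only between BeginEdit and Cancel/ApplyEdit.
public class EditablePerson
{
    private readonly Stack<Dictionary<string, object>> _snapshots =
        new Stack<Dictionary<string, object>>();

    public string Name { get; set; }
    public string City { get; set; }

    public void BeginEdit()
    {
        _snapshots.Push(new Dictionary<string, object>
        {
            { "Name", Name },
            { "City", City }
        });
    }

    public void CancelEdit()
    {
        var state = _snapshots.Pop();
        Name = (string)state["Name"];
        City = (string)state["City"];
    }

    public void ApplyEdit()
    {
        _snapshots.Pop(); // keep current values, release the snapshot
    }
}
```

The stack makes the undo n-level: nested BeginEdit calls can each be cancelled or applied independently, which is what a Cancel button on a nested dialog needs.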

The other primary area of resource consumption is where business rules are associated with domain object types. This can get intense for applications with lots and lots of business rules, and objects with lots and lots of properties. However, the phone has serious UI limitations due to screen size, and it is pretty unrealistic to think that you are going to allow a user to edit an object with 100 properties via a single form on the phone…

Of course if you did decide to create a scrolling edit form so a user could interact with a big object like this, it doesn’t really matter if you use CSLA or not – you are going to have a lot of code to implement the business logic and hook it into your object so the logic runs as properties change, etc.

There’s this theory I have, that software has an analogy to the Conservation of Energy Principle (which says you can neither create nor destroy energy). You can neither create nor destroy the minimum logic necessary to solve a business problem. In other words, if your business problem requires lots of properties with lots of rules, you need those properties and rules – regardless of which technology or framework you are using.

The CSLA 4 business rule system is quite spare – lightweight – at least given the functionality it provides in terms of running rules as properties change and tracking the results of those rules for display to the user.
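To give a feel for the kind of bookkeeping involved, here is a hypothetical per-property rule tracker (much simplified, and not CSLA's actual API): rules are registered per property, run as values change, and broken rules are tracked so the UI can display them. The per-object cost is proportional to the number of properties and rules, which is the point above – if the business problem needs the rules, the bookkeeping exists whether a framework does it or you do.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical per-property rule tracker. A rule returns null when
// satisfied, or a description of the problem when broken.
public class RuleTracker
{
    private readonly Dictionary<string, List<Func<object, string>>> _rules =
        new Dictionary<string, List<Func<object, string>>>();
    private readonly Dictionary<string, string> _broken =
        new Dictionary<string, string>();

    public void AddRule(string propertyName, Func<object, string> rule)
    {
        if (!_rules.ContainsKey(propertyName))
            _rules[propertyName] = new List<Func<object, string>>();
        _rules[propertyName].Add(rule);
    }

    // Called from a property setter with the new value.
    public void CheckRules(string propertyName, object value)
    {
        _broken.Remove(propertyName);
        if (!_rules.ContainsKey(propertyName)) return;
        foreach (var rule in _rules[propertyName])
        {
            var result = rule(value);
            if (result != null)
            {
                _broken[propertyName] = result;  // track for UI display
                break;
            }
        }
    }

    public bool IsValid { get { return _broken.Count == 0; } }
    public IEnumerable<string> BrokenRules { get { return _broken.Values; } }
}
```

Note the broken-rule list is the only per-instance state that grows with rule violations; satisfied rules cost almost nothing to track.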

The conceptual surface area topic is quite meaningful to me – for any framework or tool or pattern. Developers have a lot to keep track of – all the knowledge about their business, their personal lives, their relationships with co-workers, their development platform, the operating system they use, their network topography, how to interact with their IT department, multiple programming languages, multiple UI technologies, etc. Everything I just listed, and more, comes with a lot of concepts – conceptual surface area.

Go pick up a new technology. How do you learn to use it? You start by learning the concepts of the technology, and (hopefully) relating those concepts to things you already know. Either by comparison or contrast or analogy. Technologies with few concepts (or few new concepts) are easy to pick up – which is why it is easy to switch between C# and VB – they are virtually identical in most respects. But it is harder to switch from Windows Forms to Web Forms, because there are deep and important conceptual differences at the technology, architecture and platform levels.

I think large conceptual surface areas are counterproductive. Which is why, while I love patterns in general, I think good frameworks use complex patterns behind the scenes, and avoid (as much as possible) the requirement that every developer internalize every pattern. Patterns are a major avenue for conceptual surface area bloat.

CSLA has a fairly large conceptual surface area. Larger than I’d like, but as small as I’ve been able to maintain. CSLA 4 is, I think, the best so far, in that it pretty much requires a specific syntax for class and property implementations – and you have to learn that – but it abstracts the vast majority of what’s going on behind that syntax, which reduces the surface area compared to older versions of the framework.
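For reference, the specific syntax being described looks roughly like this (simplified from memory – see the CSLA documentation for the exact API and current signatures):

```csharp
using Csla;

// Sketch of a CSLA 4 business class. RegisterProperty declares the
// managed property, and GetProperty/SetProperty hook in business rules,
// authorization, and change notification behind that one line of syntax.
public class Customer : BusinessBase<Customer>
{
  public static readonly PropertyInfo<string> NameProperty =
    RegisterProperty<string>(c => c.Name);
  public string Name
  {
    get { return GetProperty(NameProperty); }
    set { SetProperty(NameProperty, value); }
  }
}
```

You have to learn this pattern once, but everything that happens behind SetProperty is abstracted away from the business developer.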

Still, when people ask me what’s going to be the hardest part of getting up to speed with CSLA, my answer is that there are two things:

  1. Domain-driven, behavior-focused object design
  2. Learning the concepts and coding practices to use the framework itself

The first point is what it is. That has less to do with CSLA than with the challenges of learning good OOD/OOP in general. Generally speaking, most devs don’t do OO design, and those that try tend to create object models that are data-focused, not behavior-focused. I think that is partly the fault of tooling, and partly a matter of education. So it becomes an area of serious ramp-up before you can really leverage CSLA.

The second point is an area where CSLA could be considered ‘heavyweight’ – in its basic usage it is pretty easy (I think), but as you dive deeper and deeper, it turns out there are a lot of concepts that support advanced scenarios. You can use CSLA to create simple apps, and it can be helpful; but it also supports extremely sophisticated enterprise app scenarios and they obviously have a lot more complexity.

I can easily see where someone tasked with building a small to mid-size app, and who’s not already familiar with CSLA, would find CSLA very counter-productive in the short-term. I think they’d find it valuable in the long-term because it would simplify their maintenance burden, but that can be hard to appreciate during initial development.

On the other hand, for someone familiar with CSLA it is a lot harder to build even simple apps without the framework, because you end up re-solving problems CSLA already solved. Every time I go to build an app without CSLA it is so frustrating, because I end up re-implementing all this stuff that I know is already there for the taking if I could only use CSLA…

.NET is great – but it is a general-purpose framework – so while it gives you all the tools to do almost anything, it is up to you (or another framework) to fill in the gaps between the bits and pieces in an elegant way so you can get down to the business of writing business code. So while CSLA has a non-trivial conceptual surface area, that surface area exists because it solves these problems – problems that exist and must be solved one way or another.

In summary, from an assembly size and runtime size perspective, I don’t think CSLA is heavyweight – which is why it works nicely on WP7. But from a conceptual surface area perspective, I think there’s an argument to be made that CSLA is a very comprehensive framework, and has the heavyweight depth that comes along with that designation.

Wednesday, October 27, 2010 9:58:30 AM (Central Standard Time, UTC-06:00)
 Friday, October 15, 2010

Over the past 14 years of building and maintaining the CSLA framework I’ve tried several source control technologies and strategies. My current technology is Subversion and I have a branching and tagging strategy with which I’ve been very happy. So I thought I’d share how this works.

My CSLA .NET development team is distributed, with contributors to CSLA in various regions of the US and as far away as Norway. And my user base is truly global, with people using the code (and presumably looking at the repository) from countries in every corner of the planet.

I use Subversion primarily because it is “free as in beer”. In many ways I’d rather use TFS, but since I expose my repository to the world via svn clients and the web (and since CSLA is a free, open-source framework) TFS is completely out of reach financially.

The thing is though, if I used TFS I’d almost certainly use it the same way I’m using svn today. The lessons I’ve learned over the years about maintaining numerous releases, building major new versions and reducing administrative and developer complexity should generally apply to any source control technology.

Top level folders

My top level folder structure organizes each project.


No branching or tagging at this level. I tried that years ago, and it was a mess, because the branches and tags often don’t operate at the project level (because projects often have multiple components), so there’s a mismatch – the top level folders contain projects, but the branches/tags often contain components – and that’s confusing.

I know I could have a different repository for each project. And for more narrowly focused authorization access to the projects that would be nice. But in my case I want to expose all these projects externally via svn and web clients, so having the projects within a single repository makes my administration burden quite manageable. And I think it makes the repository easier to consume as an end-user, because there’s this one URI that provides access to the entire history of CSLA .NET.

You can see some “legacy projects” in this structure. Things like csla1-x or samples are really just places where I’ve dumped very old code, or code left over from older ways I used svn. I don’t want to lose that history, but at the same time it wasn’t realistic to try to fit that older code into the way I currently organize the code.

Normally you’d look at core (the current code) or other comparatively live projects like n2, vb or ce.

Project folder structure

Within each project folder I have trunk, branches and tags.


This keeps each project isolated and reduces confusion within the repository.

Within the trunk I have the folders of my project. In the current trunk I’ve organized the CSLA 4 content into some high level areas.


At one point in the past I had the Samples as a separate project, rather than a folder within a project. That was a problem though, because the samples need to version along with the actual project source. They are dependent on the source. The same is true of the support files and setup project (and hopefully soon the NuPack project).

Basically what I’m doing is keeping all related content together within a project to avoid version mismatch issues (such as having samples versioning independently of the framework they demonstrate).


Tagging
I use tagging for releases. When I release a version of the framework, I create a tag that includes everything from trunk at the point of release.


The way svn prefers to think about tags is as a read-only snapshot of a point in time. That is perfect for a release, since it really is a snapshot in time. This is incredibly valuable because it means the code for any given release can always be easily discovered.

I wasn’t always so disciplined about this, and the results were bad. People would report issues with a version of the code and I wasn’t always able to see or even find that version. Nor was it possible to do change comparisons between versions. Having an easily discoverable and accessible snapshot of each version is invaluable.
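The mechanics are simple. A sketch of creating a release tag – the repository URL and version number here are hypothetical examples:

```shell
# Tag a release by copying trunk to a new folder under tags/.
# In svn a copy is cheap (it records a pointer, not a duplicate of the files),
# so tagging every release costs almost nothing.
svn copy https://svn.example.com/repos/core/trunk \
         https://svn.example.com/repos/core/tags/V4-0-0 \
         -m "Tag release 4.0.0"
```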


Branching
I have used branching for different reasons:

  • Long-term maintenance of a point release (like 3.8)
  • Fixing bugs in an old release (where old is more than 1 release prior to current)
  • Creating major new versions of the code
  • Supporting experimental or research efforts that may or may not ever merge into the trunk
  • Creating alternate versions of the framework (like the N2 or VB versions)


Not all those reasons turn out to be good ones… Specifically, I no longer use branching to create alternate versions of the framework. Efforts like N2 or the VB version of the framework are now top level projects. While I might use “branching” to create those top level projects, I don’t maintain that code here in the branches folder of the core. The reason is that those projects take on a life of their own, and need their own branching and tagging – because they version and release independently.

However, I find that the long term maintenance of a point release is a perfect use for a branch. So you can see the branch for V3-8-x for example, which was created when I started working on CSLA 4 in the trunk.

In fact that’s an important point. In the past, when I started a major new version, I’d start that work in a branch and then merge back into trunk. While that technically works, the merge process is really painful. So I now keep trunk as being the future-looking code – including major new versions. In other words, whenever I start working on CSLA 4.5 (or whatever) that work will occur in the trunk, and before I start that work I’ll create a V4-0-x branch where I’ll be able to keep working on the then-current version.

I do cross-merging while working on major releases, especially early in the process when the code is still very similar. For example, while I was building CSLA 4 people would report issues with 3.8. So I’d fix the issue in 3.8 and then merge that fix from the 3.8 branch into trunk. That was easy, because trunk started out exactly the same. Over time however, that becomes increasingly difficult because the trunk diverges more from any given branch, and eventually such cross-merging becomes (in my experience) too difficult and it is easier to manually do the bug fix in both places (if appropriate).

Today almost all work is done in trunk, because the primary focus is on CSLA 4. At the same time, there are occasional bug fixes or changes made to 3.8, which of course occur in the V3-8-x branch. And when I do a release of 3.8.x that still results in a new tag for that release – with that tag created off the branch.

In some extremely rare cases, I might end up doing a bug fix for an older release (like 3.6 or something). While this is really unlikely, it can happen, and in such a case I’d create a branch off the tag (the snapshot of the release) to make that fix. And when that update is released, I’d create a tag off that branch for the release.
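Sketched as commands – the repository URL, branch names and revision number are hypothetical examples:

```shell
# When trunk moves on to the next major version, branch the current code
# for ongoing maintenance:
svn copy https://svn.example.com/repos/core/trunk \
         https://svn.example.com/repos/core/branches/V3-8-x \
         -m "Maintenance branch for 3.8.x"

# Fix a bug in the branch, then (from a trunk working copy) merge that one
# change into trunk while the code bases are still similar:
svn merge -c 1234 https://svn.example.com/repos/core/branches/V3-8-x

# For a fix to a much older release, branch off its tag (the snapshot):
svn copy https://svn.example.com/repos/core/tags/V3-6-0 \
         https://svn.example.com/repos/core/branches/V3-6-x \
         -m "Branch from 3.6 tag for a bug fix"
```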

In summary, for branching I always end up with a branch to maintain the current code every time the trunk becomes focused on a major future release. I sometimes create a branch to fix a bug in an old release. And I sometimes create a branch to do some experimental or research work – to see “what if I did something crazy” without affecting any mainstream efforts.

I no longer use branching to create alternate versions of the framework, or to create major new versions. The drawbacks to using branching in those cases, specifically in terms of complexity and/or merging issues, are just too high.

This particular approach to folder structures, tagging and branching may not apply to all scenarios, but it has absolutely made maintenance of CSLA .NET easier and better over the past several years.

Friday, October 15, 2010 10:39:57 AM (Central Standard Time, UTC-06:00)
 Wednesday, September 1, 2010

The Microsoft Patterns and Practices group has a new guidance book out:

MSDN Library: Parallel Programming with Microsoft .NET

A printed version will be coming from O’Reilly

Parallel Programming with Microsoft® .NET

And there are code samples on CodePlex

Parallel Patterns CodePlex

Good stuff from the P&P group!

Wednesday, September 1, 2010 4:22:41 PM (Central Standard Time, UTC-06:00)
 Monday, July 19, 2010

I was just at a family reunion and heard a joke about the comedian’s retirement home. A guy walks into the common room and hears one old guy shout out “19”, and everyone laughs. Across the room another guy shouts out “54” and everyone laughs even harder. The guy turns to his guide and asks “What is going on?”. The guide replies “These guys know all the same jokes, and at their age it takes too long to tell them, so they just assigned them all numbers.” The guy smiles, and shouts out “92”, which results in just a few grudging chuckles. “What’d I do wrong?” he asks the guide. The guide replies “Some people can tell a joke, some people can’t.”

That made me think about patterns (yes, I know, I’m a geek).

I like design patterns. Who wouldn’t? They are a formalized description of a solution to a specific problem. If you have that problem, then having a formally described solution seems like a dream come true.

Perhaps more importantly patterns are a language short-cut. If everyone in a conversation understands a pattern, the pattern (and often its problem) can be discussed merely by using the pattern name, which saves an immense amount of time as opposed to describing the actual problem and solution in detail.

Of course the “formalized description” is prose. Human language. And therefore it is ambiguous and open to interpretation. The descriptions must be human-readable, because any pattern worth ink and paper transcends any specific platform or programming language. Describing a “pattern” in Java or C# is silly – because that makes it far too likely that it isn’t really a broad pattern, but is simply a practice that happens to work in a given language or on a given platform.

But this ambiguity leads to trouble. Not unlike the comedian’s retirement home, patterns are a short-cut language for some really complex concepts, and often even more complex implementations. While everyone might have a basic comprehension of “inversion of control”, I can guarantee you that saying “IoC” doesn’t evoke the same concept, implementation or emotional response in everyone who hears it.

Pattern zealots often forget (or overlook) the fact that patterns have consequences – good and bad. Every pattern has negative consequences as well as positive ones. Some people get attached to a pattern because it helped them at some point, and they just assume that pattern will always have a beneficial result. But that’s simply not true. Sometimes the negative consequences of a pattern outweigh the positive – it is all very dependent on the specific problem domain and environment.

Soft things like staffing levels, skill sets, attitudes and time frames all enter into the real world environment. Add the reality that any given problem almost certainly has several patterns that provide solutions – for different variations of the problem – and it becomes clear that no one pattern is always “good”.

It should come as no surprise then, that patterns are often misused – in several different ways.

My pet peeve is when a pattern is applied because somebody likes the pattern, not because the application actually has the problem the pattern would solve. I often see people using IoC, for example, because it is trendy, not because they actually need the flexibility provided by the pattern. They use a container to create instances of objects that they will never swap out for other implementations. What a waste – they’ve accepted all the negative consequences of the pattern for absolutely no benefit, since they don’t have the problem the pattern would solve. Is this the fault of IoC? Of course not; IoC is a powerful pattern.
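To make that trade-off concrete, here is a minimal hand-rolled container sketch – the Container, ILogger and ConsoleLogger types are invented for illustration, not taken from any real IoC framework:

```csharp
using System;
using System.Collections.Generic;

public interface ILogger { void Log(string message); }

public class ConsoleLogger : ILogger
{
  public void Log(string message) { Console.WriteLine(message); }
}

// A bare-bones container: maps a service type to a factory delegate.
public class Container
{
  private readonly Dictionary<Type, Func<object>> _map =
    new Dictionary<Type, Func<object>>();

  public void Register<TService>(Func<TService> factory)
  {
    _map[typeof(TService)] = () => factory();
  }

  public TService Resolve<TService>()
  {
    return (TService)_map[typeof(TService)]();
  }
}

public static class Program
{
  public static void Main()
  {
    var container = new Container();
    container.Register<ILogger>(() => new ConsoleLogger());

    // The payoff of IoC is being able to change the Register line above to
    // supply a different ILogger implementation. If that line will never
    // change, this indirection is pure cost:
    ILogger logger = container.Resolve<ILogger>();
    logger.Log("resolved through the container");
  }
}
```

The entire value lives in the ability to swap the registration; consuming code never names ConsoleLogger. If no swap will ever happen, calling new directly would be simpler and clearer.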

It is the fault of what I call the “Pattern Of The Year” (POTY) syndrome. When a pattern becomes really popular and trendy, it becomes the POTY. And everyone wants to go to the POTY. If you need the POTY, you should go. But if you don’t need the POTY, it is really a little silly (if not creepy) for you to go to the POTY…

In short: only use a pattern if you have the problem it solves, and the positive consequences outweigh the negative consequences.

Perhaps the most common misuse of patterns is failure to actually understand the pattern or its implementation. To stick with IoC as an example, it is pretty common for a development team to completely misunderstand the pattern or the framework that implements the pattern. Sure, some architect or lead developer “got it” (or so we hope) which is why the team is using the pattern – but you can find apps where numerous competing containers are created, each initialized differently.

I always thought Apple BASIC spaghetti code was the worst thing possible – but misuse of certain design patterns quickly creates a mess that is an order of magnitude worse than anything people wrote back in the early 80’s…

In short: if you use a pattern, make sure your entire team understands the pattern and your implementation of the pattern.

As I mentioned earlier, most problems can be solved by more than one pattern. Any truly interesting problem almost certainly has multiple solutions, each with different good/bad consequences and various subtle differences in outcome. It is not uncommon for the best solution to be a combination of a few more basic patterns.

As an example, the CSLA data portal is a combination of around six basic design patterns that work together in concert to solve the problem space the data portal targets. I’m not saying the data portal is a design pattern, but it is a solution for a problem that came into being by combining several complementary patterns.

A few years after I created the data portal, various other design patterns were formalized that describe other solutions to this same problem space. Some are similar, some are not. If you look into each solution, it is clear that each one is actually a different combination of some lower level design patterns, working together to solve the problem.

The thing is, every pattern you bring into your solution (or every pattern brought in by a higher level pattern) comes with its own consequences. You need to be careful to minimize the negative consequences of all those patterns so the overall balance is toward the positive.

In short: don’t be afraid to combine simple or basic design patterns together to solve a bigger problem, but be aware of the negative consequences of every pattern you bring into play.

Having introduced this concept of “low level” vs “high level” patterns, I’m going to follow that a bit further. Most of the patterns in the original GoF book are what I’d call low level patterns. They stand alone and have little or no dependency on each other. Each one solves a very narrow and clear problem and has very clear good/bad consequences.

Of course that was 15 years ago, and since then people have applied the pattern concept to more complex and bigger problem spaces. The resulting solutions (patterns) very often build on other patterns. In other words we’re raising the level of abstraction by building on previous abstractions. And that’s a fine thing.

But it is really important to understand that ultimately patterns are implemented, and the implementations of patterns are often far messier than the abstract thought models provided by the patterns themselves. Even that is OK, but there’s a meta-consequence that flows out of this: complexity.

As you start to use higher level patterns, and their implementations, you can easily become locked into not only the implementation of the pattern you wanted, but also the implementations of the lower level patterns on which the implementation is built.

Again I’ll use IoC to illustrate my point. If you want IoC you’ll almost certainly use a pre-existing implementation. And once you pick that framework, you are stuck with it. You won’t want to use more than one IoC framework, because then you’d have multiple containers, each configured differently and each competing for the attention of every developer. The result is a massive increase in complexity, which means a reduction in maintainability and a corresponding increase in cost.

Now suppose you pick some higher level pattern, perhaps a portal or gateway, that is implemented using IoC. If you want the implementation of the gateway pattern you must also accept a dependency on their IoC framework choice.

People often ask me whether (or when will) CSLA .NET will incorporate Enterprise Library, log4net, Unity, Castle/Windsor, <insert your framework here>. I try very, very, very hard to avoid any such dependencies, because as soon as I pick any one of these, I make life really hard for everyone out there who didn’t choose that other framework.

CSLA 3.8 has a dependency on a simple data structure framework, and even that was a continual nightmare. I can hardly express how happy I am that I was able to get rid of that dependency for CSLA 4. Not that the data structure framework was bad – it does a great job – but the complexity introduced by the dependency was just nasty.

In short: be aware of the complexity introduced as high level patterns force you to accept dependencies on lower level patterns and implementations.

The final topic I’d like to cover flows from a conversation I had with Ward Cunningham a few years ago. We were talking about patterns and the “pattern movement”, and how it has become a little warped over time as people actively look for ways to apply patterns, rather than the patterns being used because they are the natural answer to a problem.

It is kind of like a carpenter who spends a lot of money buying a really nice new power tool, and then tries to use that power tool for every part of the construction process – even if that means being less efficient or increasing the complexity of the job – just to use the tool.

Obviously I’d never want to hire such a carpenter to work on my house!!

Yet I’ve seen developers and architects get so fascinated by specific patterns, frameworks or technologies that they do exactly that: increase the complexity of simple problem domains specifically so they can use their new toy.

In this conversation Ward suggested that there are different levels of understanding or mastery of patterns. At the most basic level are people just learning what patterns are, followed by people who “get” a pattern and actively seek opportunities to use that pattern. But at higher levels of mastery are people who just do their job and (often without a conscious thought) apply patterns as necessary.

Carpenters don’t think twice about when and how to construct a staircase or put together a 2x6” wall frame. These are common design patterns, but they are natural solutions to common problems.

In short: strive for “pattern mastery” where you are not fixated on the pattern, but instead are just solving problems with natural solutions, such that the pattern “disappears” into the fabric of the overall solution.

The pattern movement has been going on for at least 15 years in our industry. And over that time I think it has been far more beneficial than destructive.

But that doesn’t mean (especially as a consultant) that you don’t walk into many organizations and see horrible misuse of design patterns – the results being higher complexity, lower maintainability and higher cost of development and maintenance.

I think it is important that we continually strive to make patterns be a common abstract language for complex problems and solutions. And I think it is important that we continually educate everyone on development teams about the patterns and implementations we bring into our applications.

But most importantly, I think we need to always make conscious choices, not choices based on trends or fads or because somebody on the team is in love with pattern X or framework Y or technology Z.

  1. Use a pattern because you have the problem it solves
  2. Only use a pattern if the good consequences outweigh the bad (and remember that every pattern has negative consequences)
  3. Use patterns and implementations only if the entire team understands them
  4. Use the simplest pattern that solves your problem
  5. Don’t be afraid to combine several simple patterns to solve a complex problem
  6. Be aware of (and consciously accept) the consequences of any low level patterns that come with most high level pattern implementations
  7. Strive for “pattern mastery”, where you are solving problems with natural solutions, not looking for ways to apply any specific pattern

Eventually maybe we’ll all be in a software development retirement home and we can shout things like “Memento” and “Channel adaptor” and everyone will chuckle with fond memories of how those patterns made our lives easier as we built the software on which the world runs.

Monday, July 19, 2010 11:03:04 AM (Central Standard Time, UTC-06:00)
 Friday, May 7, 2010

There’s a lot of understandable buzz about the iPad, including this well-written post that I think conveys something important.

As a Microsoft-oriented technical evangelist I’m now almost constantly asked what I think of the iPad. I’ve spent perhaps a total of 20 minutes with one, so I’m not an expert, but I can confidently say that I found it very pleasant to use, and I think it is a great v1 product.

(as an aside, many people also call it a “Kindle killer”, which I absolutely think it is NOT. It is too heavy, the screen is shiny and backlit and the battery doesn’t last 1-2 weeks – it was almost inconceivable to me that anything could replace real books (but the Kindle did), and the iPad certainly doesn’t compete with real paper or the Kindle)

I think the iPad reveals something very, very important. As does the iPhone, the Android phones and the upcoming Windows Phone 7: most users don’t need a “computer”.

Not a computer in the traditional sense, the way we as software designer/developers think about it.

Given the following:

  • “Instant” on
  • Primarily touch-based UI for the OS
  • Apps (and an OS) designed for touch through and through (and no non-touch apps)
  • Light weight
  • Good battery life
  • Good networking (including home LAN, corporate domain, network printing, etc)
  • Portable peripherals, and standard connectors (USB, Firewire, ESATA, etc)
  • Docking station capability

I submit that your typical user doesn’t need a traditional computer. Sure, there are the “knowledge workers” in accounting, who push computers harder than developers do, but they aren’t a typical user either.

From what I can see, a typical user spends a lot of time

  • reading and composing email
  • using specialized line of business apps, mostly doing data entry and data viewing/analysis
  • browsing the web
  • playing lightweight casual games (solitaire, Flash-based games, etc)
  • using consumer apps like birthday card makers
  • organizing and viewing pictures and home videos
  • creating simple art projects with drawing apps, etc

None of these things require anything like the new i7 quad core (w/ hyperthreading – so 8 way) laptop Magenic is rolling out to all its consultants. Most users just don’t need that kind of horsepower, and would gladly trade it to get better battery life and more intuitive apps.

Which (finally) brings me to the real point of this post: today’s apps suck (just ask David Platt).

David talks a lot about why software sucks. But I want to focus on one narrow area: usability, especially in a world where touch is the primary model, and keyboard/mouse is secondary.

I have a Windows 7 tablet, which I like quite a lot. But it is far, far, far less usable than the iPad for most things. Why? It really isn’t because of Windows, which can be configured to be pretty touch-friendly. It is because of the apps.

Outlook, for example, is absolutely horrible. Trying to click on a message in the inbox, or worse, trying to click on something in the ribbon – that’s crazy. I’m a big guy, and I have big fingers. I typically touch the wrong thing more often than the right thing…

Web browsers are also horrible. Their toolbars are too small as well. But web pages are as much to blame – all those text links crammed together in tables and lists – it is nearly impossible to touch the correct link to navigate from page to page. Sure, I can zoom in and out, but that’s just a pain.

The web page thing is one area where the iPad is just as bad as anything else. It isn’t the fault of the devices (Windows or iPad), it is the fault of the web page designers. And it really isn’t their fault either, because their primary audience is keyboard/mouse computer users…

And that’s the challenge we all face. If the traditional computing form factor is at its end, and I suspect it is, then we’re in for an interesting ride over the next 5-7 years. I don’t think there’s been as big a user interaction transition since we moved from green-screen terminals to the Windows GUI keyboard/mouse world.

Moving to a world that is primarily touch is going to affect every app we build in pretty fundamental ways. When click targets need to be 2-4 times bigger than they are today, our beautiful high-resolution screens start to seem terribly cramped. And these battery-conserving end user devices don’t have that high of resolution to start with, so that makes space really cramped.

And that means interaction metaphors must change, and UI layouts need to be more dynamic. That’s the only way to really leverage this limited space and retain usability.

For my part, I think Microsoft is in a great place in this regard. Several years ago they introduced WPF and XAML, which are awesome tools for addressing these UI requirements. More recently they streamlined those concepts by creating Silverlight – lighter weight and more easily deployed, but with the same UI power.

I’d be absolutely shocked if we don’t see some sort of Silverlight-based tablet/slate/pad/whatever device in the relatively near future. And I’d be shocked if we don’t see the iPad rapidly evolve based on user feedback.

I really think we’re entering a period of major transition in terms of what it means to be a “computer user”, and this transition will have a deep impact on how we design and develop software for these computing appliances/devices.

And it all starts with recognizing that the type of UI we’ve been building since the early 1990’s is about to become just as obsolete as the green-screen terminal UIs from the 1980’s.

It took about 5 years for most organizations to transition from green-screen to GUI. I don’t think the iPad alone is enough to start the next transition, but I think it is the pebble that will start the avalanche. Once there are other devices (most notably some Silverlight-based device – in my mind at least), then the real change will start, because line of business apps will shift, as will consumer apps.

I’m looking forward to the next few years – I think it is going to be a wild ride!

Friday, May 7, 2010 10:16:26 AM (Central Standard Time, UTC-06:00)
 Monday, May 3, 2010

At the Chicago codecamp this past weekend I was sitting next to a younger guy (probably mid-20’s) who was angsting about what to learn and not learn.

“I’ll never learn C#, it is Microsoft only”

“Should I learn Python? I don’t have a use for it, but people talk about it a lot”

And much, much more.

Eventually, for better or worse (and I suspect worse), I suggested that you can’t go wrong. That every programming language has something to teach, as does every platform and every architectural model.

For my part, a few years ago I listed many of the languages I’d learned up to that point – and I don’t regret learning any of them. OK, I also learned RPG and COBOL, and they weren’t fun – but even those languages taught me something about how a language shouldn’t work :)  But I must say that PostScript was a fun language – programming printers is kind of an interesting thing to do for a while.

Platform-wise, I’ve used Apple II, C64, AmigaOS, OpenVMS, PC DOS, Novell, Windows 3.x, Windows NT, and modern Windows, as well as Unix, Linux and a tiny bit of some JCL-oriented mainframe via batch processing.

Every platform has something to teach, just like every programming language.

I also suggested that people who’ve only ever done web development, or only ever done smart-client or terminal-based development – well – they’re crippled. They lack perspective, and it almost always limits their career. If you’ve never written a real distributed n-tier app, it is really hard to understand the potential of leveraging all those resources. If you’ve never written a server-only app (like with a VT100 terminal) it is really hard to understand the simplicity of a completely self-contained app model. If you’ve never written a web app it is really hard to understand the intricacies of managing state in a world where state is so fluid.

Sadly (in my view at least), the angsting 20-something wasn’t interested in hearing that there’s value in learning all these things. Perhaps he was secretly enjoying being paralyzed by the fear that he’d “waste” some time learning something that wasn’t immediately useful. Who knows…

It is true that things aren’t always immediately useful. Most of the classes I took in college weren’t immediately useful. But even something as non-useful as Minnesota history turns out to be useful later in life – it is a lot more fun to travel the state knowing all the interesting things that happened in various places and times. (I pick that topic because when I took that class I thought it was a total waste of time – I needed credits and it seemed easy enough – but now I’m glad I took it!)

(And some classes were just fun. I took a “Foundations of Math” class – my favorite class ever. We learned how to prove that 1+1=2 – why is that actually how it works? And why x+y=y+x, but x-z<>z-x. Useful? No. Never in my career have I directly used any of that. But fun? Oh yeah! :) )

A colleague of mine, some years ago, was on an ASP classic gig. This was in 2004 or 2005. He was very frustrated to be working on obsolete technology, right as .NET 2.0 was about to come out. His fear was that he’d be left behind as ASP.NET moved forward, and he had 3-4 more months of working on 1997 technologies. My point to him was that even if the technology wasn’t cutting edge, there were things to be learned about the business, the management style and the process being used by the client.

A career isn’t about a given technology, or even a technology wave. It is a long-term 20-30 year endeavor that requires accumulation of knowledge around technology, architecture, software design, best practices, business concepts, management concepts, software process, business process and more.

People who fixate on technology simply can’t go as far in their career as people who also pay attention to the business, management and inter-personal aspects of the world. It is the combination of broad sweeping technical knowledge and business knowledge that really makes a person valuable over the long haul.

“Wasting” 6-8 months on some old technology, or learning a technology that never becomes mainstream (I learned a language called Envelope of all things – and it was in the cool-but-useless category) is not that big a deal. That’s maybe 2% of your career. And if, during that time, you didn’t also learn something interesting about business, management and process – as well as deepening your appreciation for some aspects of technology, then I suggest you just weren’t paying attention…

All that said, spending several years working on obsolete technology really can harm your career – assuming you want to remain a technologist. I’m not saying you should blissfully remain somewhere like that.

But I am suggesting that, as a general rule, spending a few weeks or even months learning and using anything is useful. Nothing is a waste.

Monday, May 3, 2010 8:41:33 AM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Monday, April 19, 2010

Here’s a link to an article I wrote on using code-gen vs building a more dynamic runtime:

Update: There doesn't appear to be a code download on the Microsoft site. Click here to download the code.

Sunday, April 18, 2010 11:30:24 PM (Central Standard Time, UTC-06:00)
 Friday, April 10, 2009

I’m working on a number of things at the moment. Most notably the CSLA .NET for Silverlight video series that will be available in the very near future (it turns out that creating a video series is a lot of work!). And of course I’m prepping for numerous upcoming speaking engagements, and I’d love to see you at them – please come up and say hi if you get a chance.

But I’m also working on some content around Microsoft Oslo, MGrammar and related concepts. To do this, I’m creating a prototype “MCsla” language (DSL) that allows the creation of CSLA .NET business objects (and more) with very concise syntax. I’ll probably be blogging about this a bit over the next couple weeks, but the goal is to end up with some in-depth content that really walks through everything in detail.

My goal is for the prototype to handle CSLA editable root objects, covering business rules, validation rules, and per-type and per-property authorization rules. Here’s a conceptual example:

Object foo {
    // per-type authz
    Allow Create ("Admin");
    Allow Edit ("Admin", "Clerk");

    // property definitions

    Public ReadOnly Integer Id;

    Public Integer Value {
       // per-property authz
       Allow Write ("Clerk");
       // business/validation rules
       Rule MinValue (1);
       Rule MaxValue (50);
    }
}

But what I’ve realized as I’ve gotten further into this is that I made a tactical error.

A lot of people, including myself, have been viewing MGrammar as a way to create a DSL that is really a front-end to a code generator. My friend Justin Chase has done a lot of very cool work in this space, and he’s not alone.

But it turns out that if you want to really leverage Microsoft Oslo and not just MGrammar, then this is not about creating a code generator. And what I’m finding is that starting with the DSL is a terrible mistake!

In fact, the idea behind Microsoft Oslo (MOslo) is that the DSL is just a way to get metadata into the Oslo repository. And you can use other techniques to get metadata into the repository as well, including the graphical “Quadrant” tool.

But my next question, and I’m guessing yours too, is this: if all we do is put metadata into the repository, what good is that?

This is where a runtime comes into play. A runtime is a program that reads the metadata from the repository and basically executes the metadata.

I always had a mental picture of MOslo “projecting” the metadata into the runtime. But that’s not accurate. It is the runtime that “pulls” metadata from the repository and then uses it.

And that’s OK. The point is that the runtime is this application that interprets/compiles/uses/consumes the metadata to end up with something that actually does some work.
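The metadata-drives-the-runtime idea can be sketched in a few lines. This is purely illustrative – in Python rather than .NET, with invented metadata shapes (the real Oslo repository was a SQL Server database, and the real runtime builds WPF UI and CSLA .NET objects). The point it shows: a “repository” is just structured metadata, and the runtime pulls that metadata and enforces it when the application actually runs.

```python
# Illustrative sketch only: a dict stands in for the Oslo repository, and the
# "runtime" applies the metadata at run time. All names/shapes are invented.
repository = {
    "foo": {
        "properties": {
            "Value": {
                "allow_write": ["Clerk"],          # per-property authz
                "rules": [("min", 1), ("max", 50)] # business/validation rules
            }
        }
    }
}

class Runtime:
    """Pulls metadata for one object type and enforces it against live data."""
    def __init__(self, repo, type_name, user_roles):
        self.meta = repo[type_name]      # the runtime *pulls* its metadata
        self.roles = set(user_roles)
        self.data = {}

    def set_prop(self, name, value):
        prop = self.meta["properties"][name]
        # authorization, driven entirely by metadata
        if not self.roles & set(prop["allow_write"]):
            raise PermissionError(f"not authorized to write {name}")
        # validation, also driven by metadata
        for rule, arg in prop["rules"]:
            if rule == "min" and value < arg:
                raise ValueError(f"{name} below minimum {arg}")
            if rule == "max" and value > arg:
                raise ValueError(f"{name} above maximum {arg}")
        self.data[name] = value

obj = Runtime(repository, "foo", ["Clerk"])
obj.set_prop("Value", 10)   # allowed: Clerk may write, and 1 <= 10 <= 50
```

Notice that the metadata shape only makes sense because the runtime knows how to interpret it – which is exactly why the runtime, not the DSL, should come first.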


What I’m learning through this process, is that the DSL exists to service the metadata in the repository. But it is the runtime that defines the metadata. The runtime consumes the metadata, so the metadata exists to serve the needs of the runtime.

In other words, I should have started with the runtime first so I knew what metadata was required, so I could design a DSL that was capable of expressing and capturing that metadata.

The example MCsla code you see above is OK, but it turns out that it is missing some important bits of information that my runtime needs to function. So while the DSL expresses the important CSLA .NET concepts, it doesn’t express everything necessary to generate a useful runtime result…

So at the moment I’ve stopped working on the DSL, and am focusing on creating a working runtime. One that can execute the metadata in a way that the user gets a dynamically created UI (in WPF) that is bound to a dynamically created CSLA .NET business object, that leverages a dynamically created data access layer (using the CSLA .NET 3.6 ObjectFactory concept). I’m not dynamically creating the database, because I think that’s unrealistic in any real-world scenario. We all have pre-existing databases after all.

I’m close to getting the runtime working – here’s a screenshot of a simple object, showing how the dynamic WPF UI runs against an object with business rules, with buttons bound to the CslaDataProvider through commanding, and all the other nice CSLA features.


Not that my UI is a work of art, but still :)

I have a lot more to do, but it is now clear that starting with the runtime makes a lot more sense than starting with the DSL.

Friday, April 10, 2009 9:30:07 AM (Central Standard Time, UTC-06:00)
 Friday, January 23, 2009

For some years now there’s been this background hum about “domain specific languages” or DSLs. Whether portrayed as graphical cartoon-to-code, or specialized textual constructs, DSLs are programming languages designed to abstract the concepts around a specific problem domain.

A couple weeks ago I delivered the “Lap around Oslo” talk for the MDC in Minneapolis. That was fun, because I got to demonstrate MSchema (a DSL for creating SQL tables and inserting data), and show how it can be used as a building-block to create a programming language that looks like this:

“Moving Pictures” by “Rush” is awesome!

It is hard to believe that this sentence is constructed using a programming language, but that’s the power of a DSL. That’s just fun!
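To make the idea concrete: a DSL like this is just a grammar that maps natural-looking sentences onto structured data. The actual demo used MSchema/MGrammar, which compile a grammar into repository metadata; the sketch below (Python, regex-based, all names invented) only illustrates the underlying concept, not the real tooling.

```python
import re

# A "grammar" for sentences of the form: "<album>" by "<artist>" is <rating>!
SENTENCE = re.compile(
    r'^"(?P<album>[^"]+)" by "(?P<artist>[^"]+)" is (?P<rating>\w+)!$'
)

def parse(line):
    """Parse one DSL sentence into a structured record."""
    m = SENTENCE.match(line)
    if not m:
        raise SyntaxError(f"not a valid sentence: {line!r}")
    return m.groupdict()

record = parse('"Moving Pictures" by "Rush" is awesome!')
# record == {"album": "Moving Pictures", "artist": "Rush", "rating": "awesome"}
```

A real grammar tool does far more (composable rules, error recovery, tooling support), but the essence is the same: domain-shaped syntax in, structured data out.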

I also got to demonstrate MService, a DSL for creating WCF services implemented using Windows Workflow (WF). The entire thing is textual, even the workflow definition. There’s no XML, no C#-class-as-a-service, nothing awkward at all. The entire thing is constructed using terms you’d use when speaking aloud about building a service.

A couple years ago I had a discussion with some guys in Microsoft’s connected systems division. My contention in that conversation was that a language truly centered on services/SOA would have a first-class construct for a service, and that C# and VB are poor languages for this because we have to use a class and fake the service construct through inheritance or interfaces.

This MService DSL is exactly the kind of thing I was talking about, and in this regard DSLs are very cool!

They are fun. They are cool. So why might DSLs be a bad idea?

If you’ve been in the industry long enough, you may have encountered companies who built their enterprise systems on in-house languages. Often a variation on Basic, C or some other common language. Some hot-shot developer decided none of the existing languages at the time could quite fit the bill. Or some adventurous business person didn’t want the vendor lock-in that came with using VendorX’s compiler. So these companies built their own languages (usually interpreted, sometimes compiled). And they built entire enterprise systems on this one-off language.

As a consultant through the 1990’s, I encountered a number of these companies. You might think this was rare, but it was surprisingly common. They were all in a bad spot, having a lot of software built on a language and tools known by absolutely no one outside that company. To hire a programmer, they had to con someone into learning a set of totally dead-end skills. And if a programmer left, the company not only lost domain knowledge, but also a very large percentage of the global population of programmers who knew their technology.

How does this relate to DSLs?

Imagine you work at a bio-medical manufacturing company. Suppose some hot-shot developer falls in love with Microsoft’s M language and creates this really awesome programming language for bio-medical manufacturing software development. A language that abstracts concepts, and allows developers to write a line of this DSL instead of a page of C#. Suppose the company loves this DSL, and it spreads through all the enterprise systems.

Then fast-forward maybe 5 years. Now this company has major enterprise systems written in a language known only by the people who work there. To hire a programmer, they need to con someone into learning a set of totally dead-end skills. And if a programmer leaves, the company has not only lost domain knowledge, but also one of the few people in the world who know this DSL.

To me, this is the dark side of the DSL movement.

It is one thing for a vendor like Microsoft to use M to create a limited set of globally standard DSLs for things like creating services, configuring servers or other broad tasks. It is a whole different issue for individual companies to invent their own one-off languages.

Sure, DSLs can provide amazing levels of abstraction, and thus productivity. But that doesn’t come for free. I suspect this will become a major issue over the next decade, as tools like Oslo and related DSL concepts work their way into the mainstream.

Friday, January 23, 2009 5:30:57 PM (Central Standard Time, UTC-06:00)