Rockford Lhotka

 Friday, March 11, 2005

I was going to stay out of this, really I was. But it appears to be spinning out of control, with the press jumping in and spouting inaccurate conclusions left and right…

 

I am a Visual Basic MVP, and I do not favor the idea of merging the VB6 IDE into Visual Studio 2007 (or whatever comes after 2005). I’m afraid I see that as a ridiculous idea, for many of the reasons Paul Vick lists.

 

I’ve been in this industry for a reasonable amount of time – nearly 20 years. Heck, my first job was porting code from the PDP-11 to the VAX. But there were still companies running that PDP-11 software ten years after the hardware was no longer manufactured.

 

The lesson? Companies run old, dead technology.

 

Why?

 

Because it works. Companies are nothing if not pragmatic. And that’s OK. But none of those companies expected DEC to support the long-dead PDP. They built their own support network through user groups and a sub-industry of recyclers who sold refurbished parts for years and years after nothing new was made.

 

VB6 today is the PDP-11 of 20 years ago. It is done, and support is ending (though technically Microsoft has support options through 2008, I guess).

 

And you know what? Companies will continue to run VB6.

 

Why?

 

Because it works. Microsoft would prefer that you upgrade, but you don’t have to. And that’s OK. But like the people running PDP-11s, you can’t expect Microsoft to support a dead tool. Especially when they’ve provided a far superior alternative in the form of VB 2005.

 

If you want to keep running VB6 that’s cool. But like anyone using dead technology, you have to accept the costs of handling your own support. Of getting “refurbished” parts (in this case developers and components).

 

I’ll bet you that none of those old companies are still running a PDP-11 today. Why? Because eventually the cost of running a dead technology outweighs the cost of moving forward. The business eventually decides that moving forward is the cost-effective answer, and they do it.

 

This will be true for VB6 as well. It is simply a cost/benefit decision at a business level, nothing more. For some time to come, it will be cost-effective to maintain VB6 code, even though the technology is dead. Eventually – perhaps even 10 years from now – the cost differential will tip and it will be more effective to move to a more modern and supported technology.

 

Other than a few odd ducks, I very much doubt that most developers would choose to continue to use VB6. Certainly they wouldn’t make that choice after using VB.NET or VB 2005 for a few days. I’ve seen countless people make the migration, and none of them are pining for the “good old days of VB6” after using VB.NET. Mostly because VB.NET is just so damn much fun!

 

And if you press the VB6-focused MVPs, by and large you’ll find that they are staying in VB6 for business reasons, not technical ones. Their customer bases are pragmatic and conservative. Their customer bases are still at the point where the cost of running a dead technology is lower than the cost of switching to a modern technology. And that’s OK. That’s business.

 

What irritates me is when people let emotion into the discussion. This shouldn’t be a dogmatic discussion.

 

It is business, pure and simple! Deal with it.

Friday, March 11, 2005 2:54:39 PM (Central Standard Time, UTC-06:00)

OK, I am happy!

 

For years now it has been clear that the patent system is woefully inadequate when it comes to software and software-related patents. Any system that grants a patent for clicking once on a button to buy a product is insane!

 

Microsoft is far from the worst player in the patent system. IBM files for more patents than you can shake a stick at, and there are many other examples of egregious or opportunistic patents in the software and Internet space by many companies.

 

However, Microsoft has recently been under fire over patent application # ‘959 (“IsNot”). This is a patent they applied for which has not yet been reviewed or granted, so it is still pending. The patent is basically for a VB version of the C-style != operator and thus is obviously pretty silly (sorry Paul and Amanda, but it is).

 

Microsoft is well within their rights to file the patent, and I give it good odds of being accepted (given the track record of the patent office). Given the current state of the patent system, Microsoft has to apply for patents like this – as do all companies. It is pure self-defense. It is really not the fault of the companies that the patent system is messed up…

 

The result of this patent application has been substantial discussion by Microsoft and various Microsoft “insiders”. My contention all along has been that the patent system needs reform and that Microsoft is a perfect candidate to lead the charge.

 

And wouldn’t you know it, here they go! Microsoft is actively lobbying for patent reform and has a cohesive list of ways they think the system should change to fix current issues.

 

I am sure this will lead to some healthy debate (and probably some unhealthy debate by various fools out there) about exactly what should change and how. But the fact that Microsoft is helping to spearhead the debate, pushing it to the forefront so it can be discussed, is very exciting.

 

Assuming Microsoft follows through, continues to sincerely push for reform, and participates in the ensuing debate in good faith, I say kudos to Microsoft!

Friday, March 11, 2005 10:29:50 AM (Central Standard Time, UTC-06:00)

I got this email based on a previous post on browser vs rich clients:

 

I read your comments about smart clients and Google Maps. What do you think about XAML + Avalon? Is it MS’s recipe to kill the HTML browser? I don’t see how it is a strategic advantage for MS; it is a replicable technology. I bet open source groups will come up with a Mozilla plug-in that is interoperable with those standards.

 

This question stems from a common misconception, which is that XAML is similar to HTML and is sent to the client where it is parsed and run.

 

In reality, XAML never leaves the server. It is compiled on the server into a .NET assembly, and that assembly is downloaded and run on the client. While this is replicable, it isn't the trivial task some people think it is.

 

Either you would need to replicate the .NET runtime and DirectX on the client so you could run the compiled assembly, or you'd need to build a XAML compiler on the server that generates a downloadable executable code module for your platform.

 

For instance, one could envision a XAML compiler that created some sort of compiled code module that could run on Linux, using OpenGL or some other 3D graphics library on that platform. But this is a non-trivial task, since XAML is designed with the assumption that the target platform has the .NET runtime and DirectX, so the compiler would need to map all the .NET and DirectX functionality into equivalents on some other target platform.

 

More realistically, you might envision something like Mono being extended with Avalon support so it could run the pre-compiled .NET assemblies on non-Windows platforms. Even this is very non-trivial, since they’d have to not only replicate Avalon, but also mediate the differences between DirectX and OpenGL or some other graphics library.

 

There are a couple of examples of similar technologies available today, most notably OpenLaszlo and Xamlon. They illustrate the basic concept involved, which is to compile the XML into a binary, which is then transferred to the client and executed.

Friday, March 11, 2005 10:12:11 AM (Central Standard Time, UTC-06:00)
 Wednesday, March 9, 2005

I have this long-standing theory and thought I’d share it. The theory goes like this:

 

First there was DOS.

Then there was Windows which ran on DOS.

Then DOS was emulated in Windows.

Then there was .NET which ran on Windows.

Then Windows was emulated in .NET.

 

Of course that last bit hasn't happened yet, but I expect it is just a matter of time before Microsoft's OS becomes .NET and Win32 becomes an emulated artifact. I also expect that branding-wise Windows will remain the foremost brand. I think this is why the .NET brand is already being deprecated. .NET as a brand must fade so it can be reborn like the phoenix in the future - as Windows.

Wednesday, March 9, 2005 8:57:24 AM (Central Standard Time, UTC-06:00)
 Monday, March 7, 2005
Pat Helland has decided to leave Microsoft for Amazon. Pat is an incredibly bright man and is one of the foremost thinkers in the SOA space. I hope he is able to continue to contribute to the space in his new role. In any case - all the best Pat!!
Monday, March 7, 2005 1:45:10 PM (Central Standard Time, UTC-06:00)

Here's a very astute article that highlights the reason Linux is virtually doomed to the same fate as Unix.

 

Years ago, before Microsoft was a serious company and while I was still working on VAXen, it was clear that Unix was in trouble. We all called it "Unixification" - the reality that each Unix vendor had to differentiate, that to differentiate they had to vary from the "standard", and that as they varied from the standard they created tiny little niche products that couldn't compete with each other, much less with things like the VAX or AS400.

 

This is the core flaw with any standards-based product concept. If a product is standards-based then all products are the same, so the only way to compete is on price, service and maybe quality (though if the standard is any good then quality should be automatic). Any business person will tell you that this is a commodity market and very few people want to play in the commodity space because it is hard work for low margin.

 

How do you break out of a commodity market? By de-commoditizing your product. By making it unique and compelling. Hence Unixification. The making of Unix variants that are unique and compelling.

 

It should come as no surprise to anyone that the efforts to make money off Linux would give rise to the Unixification of Linux. Dare we say "Linuxification"? Linux vendors don't want to be stuck in a commodity market any more than their Unix predecessors did. The only way out is to de-commoditize Linux - to provide non-standard value-add in substantial ways.

 

Judge Jackson hit the nail on the head years ago in the Microsoft monopoly trial. He pointed out quite clearly that there is a natural monopoly in the operating system space. In short, there will be a monopoly in the OS - the only question is whether it is Windows, the Mac or a Linux variant.

 

It doesn’t surprise me at all that people are starting to recognize that the same economic forces that caused Unixification are now causing Linuxification. Ultimately this, coupled with Judge Jackson’s observations about the natural monopoly in this space, is likely to ensure that Microsoft retains the dominant desktop OS for the foreseeable future.
Monday, March 7, 2005 1:30:23 PM (Central Standard Time, UTC-06:00)

This past weekend I spoke at two events in Milwaukee, WI.

 

On Saturday morning I spoke at Deeper in .NET, a local event organized by the Wisconsin .NET user group. They lined up sponsors, got a venue and had around 400 people show up for the free all-day event. The speaker line-up was pretty compelling, including Scott Guthrie and Rob Howard. I was impressed by the organization of the event and I think it was a great day!

 

In particular I’m very happy to see events like this and the Heartland Developers Conference (in Iowa) be so successful. Typically the big Microsoft conferences are on the coasts – San Francisco, Orlando and so forth. While those are nice destinations, the fact is that we here in the middle of the country want the same content, the same speakers. Events like this one on Saturday bring the speakers and content to the audience, and that is an awesome thing!

 

Personally though, I think those of us in Iowa, Wisconsin and Minnesota should organize the events in May, June or September. Organizing these events when the weather is cold just reinforces people’s misguided perception about this part of the US. During the months of May, June and September the weather (at least in Minnesota) is as nice as you’ll find anywhere. Probably nicer. And Minnesota is beautiful in those months, all green trees and blue water.

 

On Friday and Sunday I spoke at the No Fluff, Just Stuff Software Symposium (NFJS). This conference is different from any other I’ve done, and I think is somewhat unique in general.

 

What makes NFJS unique in general is that they cap attendance at 200 people, they do many shows around the country (so one is probably near you) and they do the shows over a weekend. Basically they are trying to keep it small, intimate and affordable. People must like it, since most shows apparently sell out. Basically they are doing (on a commercial level) what Deeper in .NET and Heartland Developers Conference have been doing via user groups.

 

NFJS has historically been a Java show, but they recognize that most organizations today use a mix of Java and .NET. They also recognize that the two platforms are more similar than they are different. Because of this, the NFJS organizers are building a .NET component to the conference.

 

I must say I went with some trepidation, since not all Java people are tolerant toward anything Microsoft (I’m being polite here). But I had a blast. Sure, there were a couple of misguided sneers here and there, but by and large the people attending the show are serious technologists who want to do the best job they can with the best knowledge and tools they can find. And that kind of person is really fun to talk with regardless of their preferred development platform!

 

What I found most interesting about the NFJS attendees is their focus. In a Microsoft conference the focus is all on Microsoft. What is Microsoft going to do, what are they doing and why’d they do what they did? The NFJS show isn’t vendor-focused though, so the attendees were more interested in industry trends rather than vendor actions. People wanted to know where the industry would be in five years, what is going to replace Java now that it is old and boring (Ruby and Groovy seem like popular options), what is the real story about integration between BEA, IBM and yes, even .NET.

 

There were discussions about whether software development is art or engineering or both (thanks to a keynote by Dave Thomas). But I can’t remember a conference where there was serious debate about whether software is trying to model the real world or is a mathematical proof of a process. The geek-o-meter at this show is very high! :)

 

I’m doing at least a couple more NFJS shows in Boston and Minneapolis (here’s my speaking schedule), and I’m really looking forward to them!

 

For user groups interested in organizing day-long events like Deeper in .NET, I strongly recommend it. You can work with your local Microsoft DE, and local consulting and technology companies to get sponsors to cover the cost. Get involved with INETA, as they can help you get speakers (they covered the cost of getting me to Milwaukee for instance). Make it happen – these events are great!!

Monday, March 7, 2005 9:05:57 AM (Central Standard Time, UTC-06:00)
 Wednesday, March 2, 2005
 
In this case he gave it up because he's never worked with Microsoft tools and wasn't about to broaden his horizons and see if they were any good. He even draws a conclusion about what it is like to use Microsoft's tools without ever having used them. Presumably a conclusion based on extensive propaganda, since by his own admission it isn't based on actual experience or facts.
 
Yes, he's a fool, but not because he gave up steady work or good pay. He's a fool in the same way someone who only uses one programming language is a fool. This is the same type of thinking that led to separate black and white drinking fountains in Alabama...
 
I spent many years on VAX computers, but always had my trusty Amiga nearby. Today I work on Windows with Microsoft tools, but I also run a Linux box, and have worked on Unix boxes off and on my entire career. I also spent tiny bits of time using a PDP-11, a mainframe, an AS400 and a Prime. Heck, I even have Eclipse and several versions of the JDK all set up on my machine.
 
If all you know is one platform and tool set then your perspective is waaay skewed. Seriously, live a little!
 
Is it really that scary to think about spending a few months working with Microsoft tools? To see what they are like if nothing else? Eclipse is busily emulating Visual Studio - wouldn't it be nice to know what they are emulating and why?
 
This isn't about the steady work or paycheck, this is about professional growth. This is about being all you can be by understanding all you can understand.
 
Ultimately, Moe Taxes cut off his nose to spite his face. He has my pity.
Wednesday, March 2, 2005 4:18:52 PM (Central Standard Time, UTC-06:00)
 Friday, February 25, 2005
I take it back. We don't need Windows Forms, smart clients or Flash. Javascript and XML are obviously enough to rule the world. For evidence, see http://maps.google.com.
 
(now we just need tools to let us write stuff like this as easily as we program Windows Forms and the game's up)
Friday, February 25, 2005 4:49:34 PM (Central Standard Time, UTC-06:00)

In this article, Bill Vaughn voices his view that DataAdapter.Fill should typically be a developer's first choice for getting data rather than using a DataReader directly.

 

In my VB.NET and C# Business Objects books I primarily use the DataReader to populate objects in the DataPortal_Fetch methods. You might infer then, that Bill and I disagree.

 

While it is true that Bill and I often get into some really fun debates, I don't think we disagree here.

 

Bill's article seems to be focused on scenarios where UI developers or non-OO business developers use a DataReader to get data. In such cases I agree that the DataAdapter/DataTable approach is typically far preferable. Certainly there will be times when a DataReader makes more sense there, but usually the DataAdapter is the way to go.

 

In the OO scenarios like CSLA .NET the discussion gets a bit more complex. In my books I discussed why the DataReader is a good option - primarily because it avoids loading the data into memory just to copy that data into our object variables. For object persistence the DataReader is the fastest option.
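
To make the trade-off concrete, here is a rough sketch of both approaches (this is not CSLA or Bill's code; the connection string, table, and field names are invented for illustration):

using System.Data;
using System.Data.SqlClient;

// Option 1: DataReader - stream each row directly into the business object's fields.
using (SqlConnection cn = new SqlConnection(connectionString))
{
    cn.Open();
    SqlCommand cm = new SqlCommand("SELECT Id, Name FROM Customers WHERE Id = @id", cn);
    cm.Parameters.Add(new SqlParameter("@id", customerId));
    using (SqlDataReader dr = cm.ExecuteReader())
    {
        if (dr.Read())
        {
            _id = dr.GetInt32(0);
            _name = dr.GetString(1);
        }
    }
}

// Option 2: DataAdapter - fill a DataTable, then copy the values out of it.
DataTable dt = new DataTable();
using (SqlConnection cn = new SqlConnection(connectionString))
{
    SqlDataAdapter da = new SqlDataAdapter("SELECT Id, Name FROM Customers WHERE Id = @id", cn);
    da.SelectCommand.Parameters.Add(new SqlParameter("@id", customerId));
    da.Fill(dt);   // Fill opens and closes the connection itself
}
_id = (int)dt.Rows[0]["Id"];
_name = (string)dt.Rows[0]["Name"];

The first option reads the row once; the second loads it into a DataTable and then copies it again, which is the extra work I was avoiding in the books.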

 

Does that mean it is the best option in some absolute sense?

 

Not necessarily. Most applications aren't performance-bound. In other words, if we lost a few milliseconds of performance it is likely that our users would never notice. For most of us, we could trade a little performance to gain maintainability and be better off.

 

As a book author I am often stuck between two difficult choices. If I show the more maintainable approach the performance Nazis will jump on me, while if I show the more performant option the maintainability Nazis shout loudly.

 

So in the existing books I opted for performance at the cost of maintainability. Which is nice if performance is your primary requirement, and I don’t regret the choice I made.

 

(It is worth noting that subsequent to publication of the books, CSLA .NET has been enhanced, including enhancing the SafeDataReader to accept column names rather than ordinal positions. This is far superior for maintenance, with a very modest cost to performance.)
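
The difference is easy to see on a plain DataReader (SafeDataReader just wraps one); the column name here is hypothetical:

// Ordinal-based access: fastest, but silently wrong if the SELECT list is reordered
_name = dr.GetString(1);

// Name-based access: pays a small cost for GetOrdinal, but is far easier to maintain
_name = dr.GetString(dr.GetOrdinal("Name"));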

 

Given some of the enhancements to the strongly typed DataAdapter concept (the TableAdapter) that we’ll see in .NET 2.0, it is very likely that I’ll switch and start using TableAdapter objects in the DataPortal_xyz methods.

 

While there’s a performance hit, the code savings look to be very substantial. Besides, it is my opportunity to make the maintenance-focused people happy for a while and let the performance nuts send me nasty emails. Of course nothing I do will prevent the continued use of the DataReader for those who really need the performance.

Friday, February 25, 2005 2:28:57 PM (Central Standard Time, UTC-06:00)
 Thursday, February 24, 2005
My newest article is online at TheServerSide.net.
Thursday, February 24, 2005 12:06:43 PM (Central Standard Time, UTC-06:00)
 Wednesday, February 23, 2005
Just a super-quick entry that I'll hopefully follow up on later. Ted wrote a bunch of stuff on contracts related to Indigo, etc.
 
I think this is all wrong-headed. The idea of being loosely coupled and the idea of being bound by a contract are in direct conflict with each other. If your service only accepts messages conforming to a strict contract (XSD, C#, VB or whatever) then it is impossible to be loosely coupled. Clients that can't conform to the contract can't play, and the service can never ever change the contract so it becomes locked in time.
 
Contract-based thinking was all the rage with COM, and look where it got us. Cool things like DoWork(), DoWork2(), DoWorkEx(), DoWorkEx2() and more.
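
For anyone who missed the COM era, the result looked something like this (a caricature sketched in C# interface terms, with invented names). Because a published contract can never change, every enhancement piles up as yet another method:

public interface IWorkService
{
    void DoWork(string input);                                    // the original contract
    void DoWork2(string input, int retryCount);                   // needed one more parameter
    void DoWorkEx(string input, int retryCount, bool verbose);    // the "extended" version
    void DoWorkEx2(string input, int retryCount, bool verbose, string locale);  // ...and so on
}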
 
Is this really the future of services? You gotta be kidding!
Wednesday, February 23, 2005 3:00:15 PM (Central Standard Time, UTC-06:00)
Rich put together this valuable list of tools - thank you!!
Wednesday, February 23, 2005 2:51:21 PM (Central Standard Time, UTC-06:00)

Sahil has started a discussion on a change to the BinaryFormatter behavior in .NET 2.0.

 

Serialization is just the process of taking complex data and converting it into a single byte stream. Motivation is a separate issue :)

 

There are certainly different reasons to "serialize" an object.

 

One is what I do in CSLA .NET, which is to truly clone an object graph (not just an object, but a whole graph), either in memory or across the network. This is a valid reason, and it has a set of constraints that a serializer must meet to be useful. This is what the BinaryFormatter is all about today.
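
As a rough sketch of that first kind of serialization (the general technique, not the exact CSLA .NET code), a deep clone via the BinaryFormatter looks something like this:

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

public static class ObjectCopier
{
    // Serialize the entire object graph to an in-memory byte stream,
    // then deserialize it to get back a completely independent copy.
    public static object Clone(object graph)
    {
        using (MemoryStream buffer = new MemoryStream())
        {
            BinaryFormatter formatter = new BinaryFormatter();
            formatter.Serialize(buffer, graph);
            buffer.Position = 0;
            return formatter.Deserialize(buffer);
        }
    }
}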

 

Another is to convert some or all of an object’s data into a simple external representation - to externalize the object’s state. This is what the XmlSerializer is all about today. The purpose isn’t to replicate a .NET type, it is to convert that type into a simple data representation.
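
By contrast, here is a quick sketch of externalizing state with the XmlSerializer (the Customer type and its members are made up for the example):

using System.IO;
using System.Xml.Serialization;

public class Customer
{
    // XmlSerializer only looks at public fields and public read/write properties
    public int Id;
    public string Name;
}

// elsewhere...
Customer c = new Customer();
c.Id = 42;
c.Name = "Test";

XmlSerializer serializer = new XmlSerializer(typeof(Customer));
using (StringWriter writer = new StringWriter())
{
    serializer.Serialize(writer, c);
    // writer now holds roughly: <Customer><Id>42</Id><Name>Test</Name></Customer>
}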

 

Note that in both cases the formatter/serializer attempts to serialize the entire object graph, not just a single object. This is because the “state” of an object is really the state of the object graph. That implies that all references from your object to any other objects are followed, because they collectively constitute the object graph. If you want to “prune” the graph, you mark references such that the formatter/serializer doesn’t follow them.

 

In the case of the BinaryFormatter, it does this by working with each object’s fields, and it follows references to other objects. An event inside an object is just another field (though the actual backing field is typically hidden). This backing field is just a delegate reference, which is just a type of object reference – and thus that object reference is followed like any other.

 

In the case of the XmlSerializer, only the public fields and read/write properties are examined. But still, if one of them is a reference the serializer attempts to follow that reference. The event/delegate issue doesn’t exist here only because the backing field for events isn’t a public field. Make it a public field and you’ll have issues.

 

In .NET 1.x the BinaryFormatter attempts to follow all references unless the field is marked as [NonSerialized]. In C# the field: target provides (what I consider to be) a hack for applying this attribute to the event’s backing field. C# also has a better approach, which is to use a block structure to declare the event, so you get to manually declare the backing field and can apply the attribute yourself.
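
In C# the two options look roughly like this (the class and event names are hypothetical, not from CSLA .NET):

using System;

[Serializable]
public class ExampleObject
{
    // Option 1: the "field:" target hack - applies [NonSerialized] to the
    // compiler-generated backing delegate field of a field-like event.
    [field: NonSerialized]
    public event EventHandler NameChanged;

    // Option 2: a block-structure declaration with explicit add/remove accessors,
    // where you declare the backing field yourself and attribute it directly.
    [NonSerialized]
    private EventHandler _addressChanged;

    public event EventHandler AddressChanged
    {
        add { _addressChanged = (EventHandler)Delegate.Combine(_addressChanged, value); }
        remove { _addressChanged = (EventHandler)Delegate.Remove(_addressChanged, value); }
    }
}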

 

The trick is that the default behavior is for the backing field to be serializable, and things like Windows Forms or other nonserializable objects might subscribe to the object’s events. When the BinaryFormatter follows the references to those objects it throws an exception (as it should).

 

I made a big stink about this when I wrote my VB Business Objects book, though, because I discovered this issue and there was no obvious solution. That is because VB.NET had neither the field: target hack nor the block structure declaration for events. I was forced to implement a C# base class to safely declare my events.

 

I spent a lot of time talking to both the VB and CLR (later Indigo) teams about this issue.

 

The VB team provided a solution by supporting the block structure for declaring events, thus allowing us to have control over how the backing field is serialized. This is a very nice solution as I see it.

 

I am not familiar with the change Sahil is seeing in .NET 2.0. But that is probably because I’ve been using the block structure event declarations as shown in my C# and VB 2005 links above, so I’m manually ensuring that only serializable event handlers are being traced by the BinaryFormatter.

 

But I should point out that the C# code in the example above is for .NET 1.1 and works today just as it does in .NET 2.0.

Wednesday, February 23, 2005 12:06:27 PM (Central Standard Time, UTC-06:00)
 Monday, February 21, 2005

A reader commented on my previous post about patternshare.org:

 

The content on patternshare.org is weak. I don't think the content will ever be as strong as the books that the Authors are trying to hawk.

 

It seems like every good design choice these days is being labeled a "Pattern". Quite a shame.

 

Don't misunderstand... I'm a huge pattern evangelist.

 

We must keep in mind that patternshare.org can not replicate the patterns from the books. That would violate copyright law. The goal of patternshare.org is to be an index, to make it easier for you to figure out which books to buy and/or where in those books to find interesting patterns. That is an admirable goal and one that I think patternshare.org can accomplish.

 

In 50 years (or whenever it is that copyrights run out) we can put all the book content online and then we won’t have “weak” content. But up to that point I’m afraid we’re kind of stuck with the reality that the patterns are in books, and the books are copyrighted and that’s that…

 

Regarding the comment that "everything is being labeled a pattern" I agree. It is the current over-hyped trend in the architecture/design space - competing only with SOA.

 

But I think that the current pragmatic value of patterns is to provide an abstract language we humans can use to discuss our software designs. A language better than we have without “patterns”. Regardless of whether some of these "patterns" are really patterns or not, the fact is that the collective effort of all these books and articles is providing us with that common language - and thus is allowing us to communicate at a higher level than was possible even 3-5 years ago.

Monday, February 21, 2005 6:03:57 PM (Central Standard Time, UTC-06:00)