Rockford Lhotka

 Wednesday, March 22, 2006

I have sent off the final code for the book to Apress, which officially means that I have RTM code at this point. You can go here to download the release code for VB or C#.

As a bonus, I also put a test release of a WCF data portal channel up for download - you can get it from this page.

Update: Today (March 22) I received my author's copies of the Expert C# 2005 Business Objects book - so it is most certainly in print. What a joy it is to actually see the results of the past several months of work!!

Wednesday, March 22, 2006 12:38:56 PM (Central Standard Time, UTC-06:00)
 Sunday, March 19, 2006

A number of people have asked why the C# edition of my new Business Objects book is coming out before the VB edition. And yes, that is the case. The C# book should be out around the end of March, and the VB book about 2-3 weeks later.

As I've said before, I wrote the books "concurrently". Which really means I wrote one edition, knowing full well that I'd be going back through later to swap out all the code bits and change a few of the figures. It really didn't matter to me which one I did first, because I was going to have to go through every chapter to do this swapping process either way.

I did write the actual framework and sample app in VB first. The initial port to C# was handled by a fellow Magenicon, Brant Estes. Believe it or not, he had the initial compilable code done in about three days! I am thinking he didn't do a whole lot of sleeping (because he did the port by hand, not with a tool - which is awesome, because it means the code style and quality are far higher).

I could give you a fluffy (if somewhat true) answer: that the reason I did the C# book first was to ensure that the C# code was fully tested and brought entirely into line with the VB code. And there really is some truth to that. By doing the C# edition first, I was able to go through every line and recheck, tweak and enhance that code. Several large chunks of functionality were actually added or altered in C# first (during the writing process) and I back-ported them into the VB version of the code.

But the real reason is what a few people have speculated: dollars. For better or worse, the fact is that the .NET 1.0 C# book is outselling the VB book by quite a lot. Nearly 2:1 actually. Due to this, my publisher really wanted to get the C# edition out first. I initially pushed back, but I personally have no rational reason to do one before the other. I do love VB very much, but that's an irrational and emotional argument that simply doesn't hold a lot of sway...

(That said, if there were more than a couple of weeks' difference in the release dates, I would have insisted on VB first, and I would have won that argument. But you must pick your battles, and it made no sense to me to have a fight with my publisher over which book came out 3 weeks before the other...)

For what it's worth, porting the book is far, far easier than porting (and maintaining) the code. I've said it before, and I'm sure I'll say it again: maintaining the same code in two languages is just a pain. Converting the book was a relatively rote process of copy-paste, copy-paste, copy-paste. But converting the code means retesting, double-checking and almost certainly still missing a thing or two...

Anyway, I thought I'd blog this just to make the order of release totally clear. For better or worse, it was simply a business decision on the part of my publisher.

Sunday, March 19, 2006 9:36:08 PM (Central Standard Time, UTC-06:00)
 Monday, March 13, 2006

Business people hold the common belief that computer people are all the same, just cogs in a machine. Don’t like the color, shape, odor or whatever of one cog? Just replace it with another cog. Domestic cogs are too expensive? You can find low-priced cogs overseas.

 

Those of us actually in the computer field know that this is quite incorrect. One “cog” is not at all like another. In fact, the skills of good cogs are orders of magnitude better than the skills of average cogs. Similarly, the skills of bad cogs are orders of magnitude worse than the average.

 

Unfortunately, the array of skills falls on a truncated bell curve, and I think that’s a primary contributor to the way business people perceive us cogs. That, and another issue I’ll get to later in this entry.

 

But the bell curve issue is a real and serious one. And one that isn’t likely to go away. The fact is that the vast majority of computer professionals are competent. Not stellar, not terrible; just competent. And, for technical ability, one competent professional really is much like another.

 

There are also “professionals” who are a standard deviation or two down the skill slope. We’ve all interacted with some of these people. But the fact is that they simply don’t last long; most of them don't, anyway. That’s why the bell curve is truncated: this low end washes out of the profession, and pretending it is still there would distort the curve and make it useless.

 

The reality is that the vast majority of practicing computer professionals are in the middle of the skill range, but they artificially appear to be near the bottom of it. At least that's how it looks to business people, who aren't inside the industry and never see the bottom third (or so) of the skill range the way we do.

 

This vast majority of very competent and generally hard-working professionals tend to be Mort (in Microsoft’s persona scheme). They are as focused on the business as on the technology. Sure, they are technologists, but their knowledge is focused on solving the issues of the business in the most efficient manner possible.

 

What’s important to realize, though, is that the technology skills of Mort aren’t a differentiator. Even subjectively, the tech skills of most computer professionals really are pretty interchangeable (within a platform at least, such as Java or .NET). But the business domain understanding and knowledge are most certainly less interchangeable. Yet in the end, it is the intersection between the technology and business knowledge that truly differentiates one computer professional from another. It is in this nebulous area of overlap that the cogs become specialized and substantially more valuable than just any other random cog.

 

If business people understood the true value of this intersection of skills and knowledge they’d understand why they can’t just outsource or offshore all the computer work to save money. While the technical skills might be comparable, that area of business-technology overlap can only be gained by in-house staff (or perhaps by consultants who focus on a vertical area of business).

 

Years ago I managed a small group of programmers at a bio-medical manufacturing company. Our small team (of 5) built and maintained software that remains a pipe dream for almost every client I’ve encountered in my consulting work. But our small team had this intersection in spades. We all understood the business, the people and the politics, and we understood our entire technology infrastructure and standards. Coupled with being competent professionals, this little team accomplished what I can say with certainty (having been in consulting for 12 years now) would take a team of consultants two to three times the size at least twice as long to deliver.

 

But it is also important to remember that the bell curve and the non-linear nature of skill differentiation combine to create some interesting effects.

 

One standard deviation up the skill curve and you have far fewer professionals. Yet they are likely an order of magnitude better than average. Two standard deviations up the curve and there are precious few people, but they are really good with technology.

 

This is the part business people just don’t get. Hire someone one standard deviation up the curve, get them the business domain knowledge and they’ll out-produce a number of average developers. In theory, it would be even better if you got someone two standard deviations up the curve.

 

But what really happens is this: moving up the skill curve requires increased focus and specialization on the technology. There’s simply less time/room for learning the business domain.

 

The group one standard deviation up still has enough time to grasp the business domain effectively, and that group really is much better than average. We’ve all worked with people like this: people who really grok both the business and the technology, and who come up with answers to problems in a fraction of the time it takes many of their co-workers. This is the Elvis persona in Microsoft-speak.

 

Of course the business people often don’t recognize this value, so these people don’t get paid according to their production, and they often leave to become consultants. A simple case of the business people shooting themselves in the foot, turning a would-be gold mine into an adequate resource.

 

Why “adequate”? Well, Elvis as a consultant loses the business domain expertise, and even their increased technical skills can’t entirely compensate. The fact is, Elvis as a consultant is not much (if any) better than Mort as a full-time employee (FTE). That near-mystical business/technology intersection enjoyed by a typical Mort offsets most or all of the benefit Elvis gets from their technology understanding.

 

This is why Elvis-as-an-FTE is a real catch, and why business people should get a clue and work to keep these people!

 

Then there are the people who are two or more standard deviations up the curve. They are so busy keeping up with technology that they don’t have time for business domain expertise. Hell, they often don’t have time for families, stable inter-personal relationships or anything else. Let’s face it: these are the uber-geeks. In Microsoft’s persona system these are the Einstein persona.

 

An Einstein may work in a regular company, but honestly that’s pretty rare. The lack of business domain focus, coupled with the narrow set of technologies used by any one company, tends to make Einstein a very poor fit for a single business. Of course very large businesses have the variety to overcome this, giving an Einstein enough different technologies to avoid boredom. Oddly, the same is true for some very small companies, where dabbling in random technologies seems to be the norm rather than the exception.

 

So most Einstein types are in consulting, working for software companies (like Microsoft or Oracle) or they are in very large or very small business environments.

 

And everywhere but in a software company, they are trouble. The Einstein types directly contribute to making all computer professionals look like cogs. They can (and do) create software that can only be understood and maintained by people at their level. But they are easily bored, and generally don’t do maintenance, so they move on, leaving the maintenance to Mort.

 

But of course Mort can’t maintain this stuff, so it slowly (or rapidly) degrades and becomes a nightmare. You’ve probably seen this: a system that was obviously once elegant, but which has been hacked and warped over time into a hideous monster. The remnants of an Einstein. Perhaps a better name for this persona would be Frankenstein…

 

Just check out this “overview of languages” post, but make special note of what happened when the Einstein types moved on and the Mort/Elvis types tried to maintain their masterwork…

 

Now we know what happened here. But to business people it is likewise “quite obvious” what happened: They hired some bad cogs to start with (the Einstein types), who created a monster and left. Then they hired some more bad cogs later (the Mort/Elvis types) who were simply unable to fix the problem. See, all cogs are the same; so let’s just offshore the whole thing, because some of those foreign cogs are really cheap!

 

In summary, what needs to be done is to figure out a way to get business people to understand that not all cogs are the same. That there are wide variations in technical skill, and wide variations in business skill. Perhaps most of all, they need to understand the incredible value in that mystical intersection where business and technical skill and knowledge overlap to create truly productive computer professionals.

 

The alternative is scary. When there’s thought of having Microsoft unionize, you know we’re in trouble. Unionization certainly has its good and bad points, but it is hard to deny that people inside a union embrace the idea of being interchangeable cogs and use it to their advantage. To me, at least, that’s a terrifying future…

Monday, March 13, 2006 4:14:18 PM (Central Standard Time, UTC-06:00)

Somebody woke up on the wrong side of the bed; fortunately it wasn't me this time :)

This is a nice little rant about the limitations of OO-as-a-religion. I think you could substitute any of the following for OO and the core of the rant would remain quite valid:

  • procedural programming/design
  • modular programming/design
  • SOA/SO/service-orientation
  • Java
  • .NET

Every concept computer science has developed over the past many decades came into being to solve a problem. And for the most part each concept did address some or all of that problem. Procedures provided reuse. Modules provided encapsulation. Client/server provided scalability. And so on...

The thing is, very few of these new concepts actually obsolete any previous concepts. For instance, OO doesn't eliminate the value of procedural reuse. In fact, using the two in concert is typically the best of both worlds.
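
A trivial C# sketch (hypothetical names, not from any real library) of what "using the two in concert" can look like - an encapsulated object happily reusing a plain procedural routine rather than reinventing it:

    // A plain procedural routine - reusable from anywhere.
    public static class TaxRules
    {
        public static decimal ApplySalesTax(decimal amount, decimal rate)
        {
            return amount + (amount * rate);
        }
    }

    // An object that encapsulates its data but delegates the shared
    // calculation to the procedural helper.
    public class Invoice
    {
        private decimal _subtotal;

        public Invoice(decimal subtotal)
        {
            _subtotal = subtotal;
        }

        public decimal Total
        {
            get { return TaxRules.ApplySalesTax(_subtotal, 0.065m); }
        }
    }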

Similarly, SO is a case where a couple of ideas ran into each other while riding bicycles. "You got messaging in my client/server!" "No! You got client/server in my messaging!" "Hey! This tastes pretty good!" SO is good stuff, but it doesn't replace client/server, messaging, OO, procedural design or any of the previous concepts. It merely provides an interesting lens through which we can view these pre-existing concepts, and perhaps some new ways to apply them.

Anyway, I enjoyed the rant, even though I remain a staunch believer in the power of OO.

Monday, March 13, 2006 11:44:39 AM (Central Standard Time, UTC-06:00)
 Thursday, March 9, 2006

Ron Jacobs and I had a fun conversation about the state of object-oriented design and the use of business objects in software development. You can listen to it here: http://channel9.msdn.com/Showpost.aspx?postid=169693

Thursday, March 9, 2006 5:56:37 PM (Central Standard Time, UTC-06:00)
 Saturday, March 4, 2006

Here's another status update on the books.

I am done with the C# book, and it is still on schedule to be available at the end of March.

I am in the final revision stages with the VB book, so it is still on schedule to be available mid-April.

Since posting the beta code, people have pointed out a couple bugs (which I've fixed), one bug which can't be fixed without breaking the book (so it will be fixed later - and it isn't a show-stopper anyway) and a few feature requests (which will obviously come later).

I expect to be able to put the final version 2.0 code for both books online around March 13.

I'm also working with Magenic to have them host a new online forum. The plan is to use Community Server 2.0, hosted on a server at Magenic and most likely available through forums.lhotka.net (though that's not for sure at this point).

So the next few weeks look to be exciting (anticipated):

  • March 13: CSLA .NET 2.0 code available
  • End of March: Expert C# 2005 Business Objects available
  • End of March: New CSLA .NET forums available
  • Mid-April: Expert VB 2005 Business Objects available

Saturday, March 4, 2006 11:51:10 AM (Central Standard Time, UTC-06:00)
 Wednesday, March 1, 2006

I was recently asked whether I thought it was a good idea to avoid using the Validation events provided by Microsoft (in the UI), in favor of putting the validation logic into a set of objects. I think the answer to the question (to some degree) depends on whether you are doing Windows or Web development.

 

With Windows development, Windows Forms provides all the plumbing you need to put all your validation in the business objects and still have a very rich, expressive UI - with virtually no code in the UI at all. This is particularly true in .NET 2.0, but is quite workable in .NET 1.1 as well.
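
To make that concrete, here is a minimal sketch (not the CSLA .NET implementation; the Customer class and its rule are hypothetical) of the kind of plumbing Windows Forms 2.0 provides: the business object exposes its rules through IDataErrorInfo, and a bound ErrorProvider on the form surfaces any broken rules without hand-written UI code.

    using System.ComponentModel;

    // The business object owns the validation rule...
    public class Customer : IDataErrorInfo
    {
        private string _name = string.Empty;

        public string Name
        {
            get { return _name; }
            set { _name = value; }
        }

        // ...and exposes broken rules through IDataErrorInfo, which
        // Windows Forms data binding calls automatically for any
        // bound control.
        public string Error
        {
            get { return this["Name"]; }
        }

        public string this[string columnName]
        {
            get
            {
                if (columnName == "Name" && _name.Length == 0)
                    return "Name is required";
                return string.Empty;
            }
        }
    }

    // In the form, the only work is configuration (no validation code):
    //
    //     customerBindingSource.DataSource = customer;
    //     errorProvider1.DataSource = customerBindingSource;
    //
    // The error icon appears next to the bound control whenever the
    // rule is broken, and disappears when it is satisfied.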

 

With Web development life isn't as nice. Of course there's no way to run real code in the browser, so you are stuck with JavaScript. While the real authority for any and all business logic must reside on the web server (because browsers are easily compromised), many web applications duplicate validation into the browser to give the user a better experience. While this is expensive and unfortunate, it is life in the world of the Web.

 

(Of course what really happens with a lot of Web apps is that the validation is only put into the browser - which is horrible, because it is too easy to bypass. That is simply a flawed approach to development...)

 

At least with ASP.NET there are the validation controls, which simplify the process of creating and maintaining the duplicate validation logic in the browser. You are still left to manually keep the logic in sync, but at least it doesn't require hand-coding JavaScript in most cases.
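
For example, the duplicated "Name is required" check can be expressed with a RequiredFieldValidator. The sketch below is hedged: the page, control ID and rule are hypothetical, and in practice the validator would normally be declared in the .aspx markup rather than created in code.

    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public class CustomerEditPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Re-state the rule with a validation control: it checks
            // the value on the server and also emits the client-side
            // JavaScript, so no hand-coded script is needed.
            RequiredFieldValidator nameValidator = new RequiredFieldValidator();
            nameValidator.ControlToValidate = "NameTextBox";  // hypothetical control ID
            nameValidator.ErrorMessage = "Name is required";
            nameValidator.EnableClientScript = true;

            this.Form.Controls.Add(nameValidator);
        }
    }

The rule still exists in two places (the business object and the page), which is exactly the synchronization burden described above; the validator just removes the JavaScript-authoring part of that burden.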

 

 

Obviously Windows Forms is an older and more mature technology (or at least flows from an older family of technologies), so it is no surprise that it allows you to do more things with less effort. But in most cases the effort to create Web Forms interfaces isn't bad either.

 

In any case, I do focus greatly on keeping code out of the UI. There's nothing more expensive than a line of code in the UI - because you _know_ it has a half-life of about 1-2 years. Everyone is rewriting their ASP.NET 1.0 UI code to ASP.NET 2.0. Everyone is tweaking their Windows Forms 1.0 code for 2.0. And all of it is junk when WinFX comes out, since WPF is intended to replace both Windows and Web UI development in most cases. Thus code in the UI is expensive, because you'll need to rewrite it in less than 2 years in most cases.

 

Code in a business object, on the other hand, is far less expensive because most business processes don't change nearly as fast as the UI technologies provided by our vendors... As long as your business objects conform to the basic platform interfaces for data binding, they tend to flow forward from one UI technology to the next. For instance, WPF uses the same interfaces as Windows Forms, so reusing the same objects from Windows Forms behind WPF turns out to be pretty painless. You just redesign the UI and away you go.
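
As a simplified illustration of what "conform to the basic platform interfaces for data binding" means (the Project class below is hypothetical, not the ProjectTracker code), an object that raises INotifyPropertyChanged can sit behind a Windows Forms 2.0 grid or a WPF window without modification:

    using System.ComponentModel;

    public class Project : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        private string _name = string.Empty;

        public string Name
        {
            get { return _name; }
            set
            {
                _name = value;
                OnPropertyChanged("Name");
            }
        }

        // Both Windows Forms binding and WPF binding listen for this
        // event to keep the UI in sync with the object.
        protected void OnPropertyChanged(string propertyName)
        {
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }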

 

A co-worker at Magenic has already taken my ProjectTracker20 sample app and created a WPF interface for it – based on the same set of CSLA .NET 2.0 business objects as the Windows Forms and Web Forms interfaces. Very cool!

 

So ultimately, I strongly believe that validation (and all other business logic) should be done in a set of business objects. That’s much of the focus in my upcoming Expert VB 2005 and C# 2005 Business Objects books. While you might opt to duplicate some of the validation in the UI for a rich user experience, that’s merely an unfortunate side-effect of the immature (and stagnant) state of the HTML world.

Wednesday, March 1, 2006 3:44:33 PM (Central Standard Time, UTC-06:00)

This recent MSDN article talks about SPOIL: Stored Procedure Object Interface Layer.

This is an interesting, and generally good, idea as I see it. Unfortunately this team, like most of Microsoft, apparently just doesn't understand the concept of data hiding in OO. SPOIL allows you to use your object's properties as data elements for a stored procedure call, which is great as long as you only have public read/write properties. But data hiding requires that you have some private fields that simply aren't exposed as public read/write properties. If SPOIL supported using fields as data elements for a stored procedure call it would be totally awesome!

The same is true for LINQ. It works against public read/write properties, which means it is totally useless if you want to use it to load "real" objects that employ basic concepts like encapsulation and data hiding. Oh sure, you can use LINQ (well, dlinq really) to load a DTO (data transfer object - an object with only public read/write properties and no business logic) and then copy the data from the DTO into your real object. Or you could try to use the DTO as the "data container" inside your real object rather than using private fields. But frankly those options introduce complexity that should be simply unnecessary...
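
To see why that feels like unnecessary complexity, here is a hypothetical sketch of the DTO work-around: the query tool populates a container made of nothing but public read/write properties, and the real object then copies that data into its private fields just to preserve encapsulation.

    // The DTO: only public read/write properties, which is all a
    // query tool of this kind can work against.
    public class CustomerDto
    {
        private int _id;
        private string _name;

        public int Id
        {
            get { return _id; }
            set { _id = value; }
        }

        public string Name
        {
            get { return _name; }
            set { _name = value; }
        }
    }

    // The "real" business object keeps its fields private and exposes
    // only what its business rules allow.
    public class Customer
    {
        private int _id;
        private string _name;

        public string Name
        {
            get { return _name; }
        }

        // The extra, redundant step: copy the DTO's data into the
        // encapsulated object.
        internal static Customer FromDto(CustomerDto dto)
        {
            Customer c = new Customer();
            c._id = dto.Id;
            c._name = dto.Name;
            return c;
        }
    }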

While it is true that loading private fields requires reflection, Microsoft could solve this. They do own the CLR after all... It is surely within their power to provide a truly good solution to the problem, one that supports data mapping and also allows for key OO concepts like encapsulation and data hiding.
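
For reference, this is roughly what "loading private fields requires reflection" looks like today. It works, but it is slow and bypasses compile-time checking, which is why a CLR-level answer from Microsoft would be so welcome. The helper and the field name are made up for illustration.

    using System;
    using System.Reflection;

    public static class FieldLoader
    {
        // Set a private instance field by name - currently the only way
        // for generic mapping code to get data into a well-encapsulated
        // object without public read/write properties.
        public static void SetPrivateField(object target, string fieldName, object value)
        {
            FieldInfo field = target.GetType().GetField(
                fieldName, BindingFlags.Instance | BindingFlags.NonPublic);

            if (field == null)
                throw new ArgumentException("No such field: " + fieldName);

            field.SetValue(target, value);
        }
    }

    // Usage (hypothetical field name):
    //     FieldLoader.SetPrivateField(customer, "_name", reader["Name"]);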

Wednesday, March 1, 2006 10:00:01 AM (Central Standard Time, UTC-06:00)
 Sunday, February 12, 2006

Every now and then someone asks what tools I use to write books and articles.

 

I write in Word. Typically the publisher provides a specific template with styles that are to be used, so when they get to the last stage and need to do layout they can easily change the styles to get the proper appearance on the printed page.

 

Graphics, figures and screen-shots are, I find, the most frustrating part of the process in many ways. I use Snagit for capturing screen images into TIFF files, which isn’t too bad. But for creating other graphics I use a combination of Visio, PowerPoint, Corel Draw, MS Paint, screen shots and Snagit – along with Acrobat and (more recently) PDF Converter to generate PDF docs containing the figures. Not being graphically oriented, I find the whole process arcane and frustrating – especially as I’ve often had to redo figures a couple times because they are “blurry” or something – typically due to various resolution issues.

 

As an aside, this is what scares me about WPF (Avalon), since all of us programmers are going to be forced to learn all this arcane graphics stuff just to be competent at even basic application development. Personally I think that this could derail WPF adoption overall – at least until a large set of stock, good-looking, controls come into being from either Microsoft or third parties.

 

Microsoft seems to have this deluded idea that business sponsors are going to pay for graphic designers to build the UI – which I think is highly unlikely, given that they typically won’t even pay for decent testing… Who’d pay to make it pretty when they won’t even pay to make sure it actually works?!?

 

But back to the tools.

 

All the writing is done in Word. The final stage of reviewing however, occurs in PDF form. The publisher does the final layout, resulting in a PDF which will ultimately be sent to the printer. But I have to go through each chapter in PDF form to catch any final issues (typos, graphics issues, etc). I annotate the PDF files and send them back, so the layout people can make the changes and recreate the PDF.

 

I also use Microsoft Project. Not for the writing itself, but to schedule the process. Before committing to a writing schedule I create a project of my life. I put fixed-date tasks for all my travel, client commitments, vacations and anything else that I know will consume time during the writing process. Then I put in floating tasks for every chapter and have Project level my life (so to speak). This gives me at least a reasonable projection of how many calendar days it will take to do each chapter.

 

That’s pretty much it :)

Sunday, February 12, 2006 9:09:42 PM (Central Standard Time, UTC-06:00)
 Tuesday, February 7, 2006

A few people have asked how the book is coming along, so here’s an update.

 

I touch each chapter a minimum of four times: to write it (AU), to revise it based on tech review comments (TR), to revise it based on copyedit questions (CE) and to do a final review after layout/typesetting (TS).

 

I am writing both the VB and C# books “concurrently”. What this really means is that I’m writing the book with one language, then skimming back through to change all language keywords, code blocks and diagrams to match the other language.

 

To practice what I preach (which is that you should be competent, if not fluent, in at least two languages) I am doing the book in C# and then converting to VB. It takes around 8 hours per chapter to do that conversion, 12 if there are a lot of diagrams to convert (code is easy – the damn graphics are the hard part…).

 

So, here’s the status as of this evening:

 

Chapter         C#         VB

Cover           AU done    -
Front matter    AU done    -
1               TS done    TR done
2               TS done    AU
3               TS done    AU
4               TS         -
5               CE done    -
6               CE done    -
7               CE done    -
8               CE done    -
9               CE done    -
10              CE done    -
11              TR done    -
12              TR done    -

 

People have also asked how much I expect the CSLA .NET 2.0 public beta code to change between now and the book’s release at the end of March. Chapters 2-5 cover the framework, and as you can see those chapters are in the final editing stages. As such, I certainly don’t anticipate much change.

 

While I’ve made every effort to keep the VB and C# code in sync, there may be minor tweaks to the code as I roll through the VB chapters 2-5. But I’ve used both in projects and at conferences like VS Live last week, and both pass my unit tests, so those changes should be cosmetic, not functional.

 

In other words, the beta is pretty darn close to the final code that’ll be provided for download with the book.

Tuesday, February 7, 2006 9:05:15 PM (Central Standard Time, UTC-06:00)