Rockford Lhotka's Blog


 Saturday, January 29, 2005

I just upgraded this blog to the newest version of dasBlog. I upgraded my personal blog a few days ago and it has been trouble-free, so I thought it safe to upgrade this one. The only serious change you may see is when posting comments, as this new version requires that you type in a code from a graphic to help defeat posting bots. Kind of a pain, but worth it I suppose.

Of course I'm doing this just before leaving town for two weeks - first to Chicago and then to San Francisco for VS Live. Always a good time to upgrade ;)

The Chicago trip is primarily focused on working with a Magenic client, but I'll be having some evening dinner meetings with various groups as well, including my first real foray into this fun new trend: a Nerd Dinner.

Saturday, January 29, 2005 10:58:59 PM (Central Standard Time, UTC-06:00)
 Monday, January 24, 2005

Sahil has some interesting thoughts on the web service/DataSet question as well.

 

He spends some time discussing whether “business objects” should be sent via a web service. His definition of “business object” doesn’t match mine, and is closer to Fowler’s data transfer object (DTO) I think.

 

It is important to remember that web services only move boring data. No semantic meaning is included. At best (assuming avoidance of xsd:any) you get some limited syntactic meaning along with the data.

 

When we talk about moving anything via a web service, we’re really just talking about data.

 

When talking about moving a “business object”, most people think of something that can be serialized by web services – meaning by the XmlSerializer. Due to the limitations of the XmlSerializer this means that the objects will have all their fields exposed as public fields or read-write properties.

 

What this means, in short, is that the “business objects” cannot follow good OO design principles. Basically, they are not “business objects”; rather, they are a way of defining the message schema for the web service. They are, at best, data transfer objects.
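
To make the distinction concrete, here is a rough sketch (the type names are made up): the kind of all-public class the XmlSerializer can handle, next to a class that actually encapsulates its state and rules:

    using System;

    // A data transfer object: everything public and read-write so the
    // XmlSerializer can move it. No behavior, no encapsulation.
    public class CustomerDto
    {
      public int Id;
      public string Name;
    }

    // A business object: private state plus enforced rules - exactly the
    // kind of encapsulation the XmlSerializer cannot round-trip.
    [Serializable]
    public class Customer
    {
      private int _id;
      private string _name;

      public Customer(int id)
      {
        _id = id;
      }

      public int Id
      {
        get { return _id; }
      }

      public string Name
      {
        get { return _name; }
        set
        {
          if (value == null || value.Length == 0)
            throw new ArgumentException("Name is required");
          _name = value;
        }
      }
    }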

 

In my business objects books and framework I talk about moving actual business objects across the wire using remoting. Of course the reality here is that only the data moves – but the code must exist on both ends. The effective result is that the object is cloned across the network, and retains both its data and the semantic meaning (the business logic in the object).

 

You can do this with web services too, but not in a “web service friendly” way. Cloning an object implies that you get all the data in the object. And to do this while still allowing for encapsulation means that the serialization must get private, friend/internal and protected fields as well as public ones. This is accomplished via the BinaryFormatter. The BinaryFormatter generates and consumes streams, which can be thought of as byte arrays. Thus, you end up creating a web service that moves byte arrays around. Totally practical, but the data is not human-readable XML – it is Base64 encoded binary data. I discuss how to do this in CSLA .NET on my web site.
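
Here is a minimal sketch of the byte-array approach in an asmx service, assuming a stand-in Customer class (the real thing would load its data from somewhere):

    using System;
    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;
    using System.Web.Services;

    [Serializable]
    public class Customer
    {
      // Private state travels with the object because the
      // BinaryFormatter serializes all fields, not just public ones.
      private string _name = "test customer";
      public string Name { get { return _name; } }
    }

    public class CustomerService : WebService
    {
      [WebMethod]
      public byte[] GetCustomer(int id)
      {
        Customer cust = new Customer(); // real code would load by id
        MemoryStream buffer = new MemoryStream();
        BinaryFormatter formatter = new BinaryFormatter();
        formatter.Serialize(buffer, cust);
        // The SOAP message carries this as Base64-encoded binary data.
        return buffer.ToArray();
      }
    }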

 

Now we are talking about moving business objects. Real live, OO designed business objects.

 

Of course this approach is purely for n-tier scenarios. It is totally antithetical to any service-oriented model!

 

For an SO model you need to have clearly defined schemas for your web service messages, and those should be independent from your internal implementation (business objects, DataSets or whatever). I discuss this in Chapter 10 of my business objects books.

Monday, January 24, 2005 10:10:21 AM (Central Standard Time, UTC-06:00)

OK, so Shawn has some good points about the use of a DataSet for the purpose of establishing a formal contract for your web service messages. This is in response to my previous entry about not using DataSets across web services.

 

The really big distinction between using web services for SO vs n-tier remains.

 

If you are doing SO, you need to clearly define your message schema, and that schema must be independent from your internal data structures or representations. This independence is critical for encapsulation and is a core part of the SO philosophy. Your service interface must be independent from your service implementation.

 

What technology you use to define your schema is really up to you. Shawn’s entry points out that you can use strongly typed DataSet objects to generate the schema – something I suppose I knew subconsciously but didn’t really connect until he brought it up. Thank you Shawn!

 

Any tool that generates an XSD will work to define your schema, and from there you can define proxy classes (or apparently use DataSet objects as proxies). Alternately you can create your “schema” in VB or C# code and generate the XSD from those classes using the xsd.exe tool from the .NET SDK (though you have less control over the XSD this way).
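
As a rough example (the file, type and namespace names are made up), the round trip between schema and code looks something like this with the SDK tool:

    // Generate classes from an existing schema:
    //   xsd.exe CustomerMessage.xsd /classes /language:CS
    //
    // Or generate a schema from classes you wrote by hand:
    //   xsd.exe MyMessages.dll /type:CustomerMessage
    //
    // Either way the message class exists only to define the wire format:
    [System.Xml.Serialization.XmlRoot("CustomerMessage")]
    public class CustomerMessage
    {
      public int Id;
      public string Name;
    }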

 

But my core point remains, that this is all about defining your interface, and should not be confused with defining your implementation. Using a DataSet for a proxy is (according to Shawn) a great thing, and I’ll roll with that.

 

But to use that same DataSet all the way down into your business and data code is dangerous and fragile. That directly breaks encapsulation and makes you subject to horrific versioning issues.

 

And versioning is the big bugaboo hiding in web services. Web services have no decent versioning story. At the service API level the story is the same as it was for COM and DCOM – which was really bad and I believe ultimately helped drive the success of Java and .NET.

 

At the schema level the versioning story is better. It is possible to have your service accept different variations on your XML schema, and you can adapt to the various versions. But this implies that your service (or a pre-filter/adapter) isn’t tightly coupled to some specific implementation. I fear that DataSets offer a poor answer in this case.

 

And in any case, if you want to create maintainable code you must be able to alter your internal implementation and data representation independently from your external contract. The XSD that all your clients rely on can't change easily, so you must be able to change your internal structures more rapidly to accommodate changing requirements over time.

 

Again, I’m talking about SO here. If you are using web services for n-tier then the rules are all different.

 

In the n-tier world, your tiers are tightly coupled to start with. They are merely physical deployments of layers, and layers trust each other implicitly because they are just a logical organization of the code inside your single application. While there are still some real-world versioning issues involved, the fact is that they aren’t remotely comparable to the issues faced in the SO world.

Monday, January 24, 2005 9:13:49 AM (Central Standard Time, UTC-06:00)
 Sunday, January 23, 2005

Every now and then the question comes up about whether to pass DataSet or DataTable objects through a web service.

 

I agree with Ted Neward that the short answer is NO!!

 

However, nothing is ever black and white…

 

For the remainder of this discussion remember that a DataSet is just a collection of DataTable objects. There’s no real difference between a DataTable and DataSet in the context of this discussion, so I’m just going to use the term DataSet to mean 1..n DataTables.

 

There are two “types” of DataSet – default and strongly typed.

 

Default DataSet objects can be converted to relatively generic XML. They don't do this by default of course. So you must choose to either pass a DataSet in a form that is pretty much only useful to .NET code, or to force it into more generic XML that is useful to anyone.
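
As a rough illustration (the web methods and data are made up), the choice looks something like this: return the DataSet itself (which gets serialized with a .NET-specific schema and diffgram), or return plain XML you produce yourself:

    using System.Data;
    using System.Web.Services;
    using System.Xml;

    public class CustomerListService : WebService
    {
      [WebMethod]
      public DataSet GetCustomersAsDataSet()
      {
        // Serialized with an embedded schema and diffgram markup;
        // really only convenient for other .NET clients.
        return FetchCustomers();
      }

      [WebMethod]
      public XmlNode GetCustomersAsXml()
      {
        // GetXml() returns just the data as XML, without the embedded
        // schema, so any platform can consume it.
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(FetchCustomers().GetXml());
        return doc.DocumentElement;
      }

      private DataSet FetchCustomers()
      {
        // Stand-in for real data access.
        DataSet ds = new DataSet("Customers");
        DataTable dt = ds.Tables.Add("Customer");
        dt.Columns.Add("Id", typeof(int));
        dt.Columns.Add("Name", typeof(string));
        dt.Rows.Add(new object[] { 1, "Acme" });
        return ds;
      }
    }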

 

To make this decision you need to ask yourself why you are using web services to start with. They are designed, after all, for the purpose of interop/integration. If you are using them for that intended purpose then you want the generic XML solution.

 

On the other hand, if you are misusing web services for something other than interop/integration then you are already on thin ice and can do any darn thing you want. Seriously, you are on your own anyway, so go for broke.

 

Strongly typed DataSet objects are a different animal. To use them, both ends of the connection need the .NET assembly that contains the strongly typed code for the object. Obviously interop/integration isn’t your goal here, so you are implicitly misusing web services for something else already, so again you are on your own and might as well go for broke.

 

Personally my recommendation is to avoid passing DataSet objects of any sort via web services. Create an explicit schema for your web service messages, then generate proxy classes in VB, C# or whatever language based on that schema. Then use the proxy objects in your web service code.

 

Your web service (asmx) code should be considered the same as any other UI code. It should translate between the “user” presentation (XML based on your schema/proxy classes) and your internal implementation (DataSet, business objects or whatever).
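
A bare-bones sketch of that translation, with a made-up CustomerData message class and a stand-in internal Customer object:

    using System.Web.Services;

    // The external message, generated from (or written to match) the schema.
    public class CustomerData
    {
      public int Id;
      public string Name;
    }

    // Stand-in for the internal implementation (business object, DataSet, etc.).
    public class Customer
    {
      private int _id;
      private string _name;
      public int Id { get { return _id; } }
      public string Name { get { return _name; } }
      public static Customer Fetch(int id)
      {
        Customer c = new Customer();
        c._id = id;
        c._name = "Acme";
        return c;
      }
    }

    public class CustomerFacadeService : WebService
    {
      [WebMethod]
      public CustomerData GetCustomer(int id)
      {
        // Translate from the internal representation to the external
        // message, just as a form would translate it into controls.
        Customer cust = Customer.Fetch(id);
        CustomerData result = new CustomerData();
        result.Id = cust.Id;
        result.Name = cust.Name;
        return result;
      }
    }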

 

I discuss this in Chapter 10 of my Business Objects books, but the concepts apply directly to data-centric programming just as they do to OO programming.

Sunday, January 23, 2005 9:06:20 PM (Central Standard Time, UTC-06:00)
 Wednesday, January 19, 2005

When I first got involved with Microsoft it was around 1990 or ’91. The world of the time was dominated by IBM, with DEC a close second. If you wanted to network PCs you used Novell or Banyan. All PC programs were DOS programs. “Windows” was just one of many graphical libraries being used by software to handle drawing on the screen. And if you wanted to do real work you used a mainframe or minicomputer.

 

At the time, Microsoft was an underdog.

 

Personally I wouldn’t have anything to do with them. Novell was the obvious PC networking choice, and PCs in general were so incredibly limited compared to my beautiful VAX computers that I couldn’t understand why anyone would bother to use them. If you were going to use a PC-like computer, the Amiga was the way to go. It was far closer to a real computer than the PC!

 

Of course the open source community of the time hated the VAX, Novell and anything commercial. At the time the open source world was tied into Unix, even though it wasn't open source either. But they hated us VAX people who could use the “search” command to search rather than using “grep”. (If you don't believe me, use Google or the Wayback Machine to look at the newsgroup flame wars of the time.)

 

In any case, the manufacturing company I worked for kept bringing in more and more PCs to do more and more things. And we needed a way to make the PC software accessible to our end users. On the VAX I had a very nice bit of menu software that did personalization, authorization and so forth. We investigated the lame-ass equivalents that existed for DOS, but the PC world had no concept of centralized management and so we adopted none of them.

 

Finally, out of a busy field of competitors, Microsoft’s Windows product showed up. It was pretty lame too, but at least we were able to use it as a meaningful launch platform for software. This allowed our end users to actually find and use the software we had installed on our Novell server. It also forced us to hire another helpdesk/PC support person… Windows was not cheap, but it was obvious that the world was moving that direction and so we did too.

 

I should point out that none of the Windows competitors were cheap either. All of them used the highly cantankerous PC as a platform and anything we chose would have required that we hire at least one more PC support person… Compared to our VAXen (with their 99.995% uptime) the PC was just plain terrible. This was true of Windows, our Novell server – all of it.

 

Personally, I tried to keep at arm's length from the whole debacle. My boss, however, felt that my career would be enhanced if I got involved, so I ended up as the Novell administrator. That included making Windows work with Novell, and eventually with the VAX (because we installed software on the VAX to make it look like a Novell server to the PCs).

 

The only part of this that was fun was the VAX-as-Novell-server bit. Using that technology we were able to use gawk (open source awk, a text processing tool) to transform our massive manufacturing reports into tab-delimited “spreadsheet” files that could be used in Lotus 1-2-3, Quattro Pro or that lame excuse for a spreadsheet, Excel.

 

Even with Windows, Microsoft was still the underdog. Heck, this was the time that OS/2 was supposed to take over the world. Unfortunately IBM forgot that people (end users) don't give a rip about operating systems. They only care about the programs they use. OS/2 didn't have any software (compared to DOS, and thus Windows – which was still just a graphical shell on DOS). This was the problem with the Amiga too. A superior piece of work in almost every way, but there was no software for it. Not on the level of the stuff for DOS (Windows) or the Mac.

 

Then in the early 1990’s Microsoft came out with Windows NT. The core of NT is the same as that of OpenVMS (the VAX operating system). That was enough for me to start looking seriously at the PC. I even bought one so I could be part of the Windows NT beta program. It sat next to my Amiga and limped along as best it could.

 

It is worth noting that the VAX operating system was originally named VMS. DEC changed the name to OpenVMS in response to the open source/unix community’s unceasing assaults and criticisms. Today’s attacks on Microsoft are just the latest version of a decades-old pattern of abusive behavior on the part of the open source community. Before DEC it was IBM and before that there were no computers to speak of.

 

I know that the open source “community” can’t be represented by any one person or one group any more than Christians can be represented by Evangelicals or Catholics. But after listening to the same tired rants from what must be clones of the same people for 20 years it does seem like there’s a cohesive voice of the open source community… At the very least there’s a consistent dogma.

 

I started to find out about Windows programming at this time. I had never programmed for Windows itself (or DOS), but I figured that since NT was, at its core, like OpenVMS, I could program it in comparable fashion. Not so. The most trivial program took pages of code. There was more code involved in interacting with the OS than in doing actual business work. So we just kept working on the VAX, where we could get real work done.

 

Then came Visual Basic 1.0. It had a whole host of competitors, and wasn’t necessarily the best of the bunch. But it provided the most direct access to Windows programming of all the options. It allowed us to write Windows programs where most of our code was actual business code. Where we didn’t need pages of code just to draw a window or other silliness.

 

With the advent of Windows NT 4, Microsoft finally provided an alternative to Novell. It was realistic to use NT 4 as a production server, and so we switched from Novell to NT somewhere in there. That was a powerful change, because now our server was programmable.

 

Yes, I know it was technically possible to program a Novell server. But you have to be kidding. That was nothing like a productive programming experience – especially for someone coming from a VAX! You need to understand that I had never had to deal with memory segmentation issues. The VAX, the Amiga and Windows NT all provide a flat memory model. To this day I fail to understand why anyone voluntarily chose to program on a PC in DOS or Windows…

 

Around this time it was clear that DEC was dying, and Microsoft was no longer an underdog. Windows NT and Visual Basic propelled Microsoft into a space where they were accepted as a primary option. For small and mid-size business they were often the primary option in fact.

 

It also helped that through the early 1990’s the “spreadsheet wars” between Lotus, Quattro Pro and Excel eventually settled out. As we all know, the victor was Excel. But that was not a foregone conclusion. During that whole time, Microsoft was the underdog. Quattro Pro was easiest to use and Lotus was the most powerful (and had the backing of existing users). But Lotus never got easy to use, and Quattro never got powerful. Excel got easier to use and more powerful until it was easy enough and powerful enough to displace the other two.

 

There are lessons to be learned here. Lotus could have made themselves easier to use and they would have won hands down. Quattro had the harder job, but they could have made themselves more powerful and maybe could have won. But the odd man out, the underdog Microsoft, was the one that managed to strike the balance that won the war.

 

I imagine there are similar stories around Word and WordPerfect, but I never witnessed that conflict.

 

By the mid-1990’s the dust had settled and Microsoft was a major player in corporate and home computing, and I left the manufacturing company for consulting. Why? So I could focus all my energies on programming PCs and not be distracted by the dying VAX.

 

Quite a turnaround for someone who wouldn’t touch a PC with a 10 foot pole :-)

Wednesday, January 19, 2005 10:29:50 AM (Central Standard Time, UTC-06:00)
 Thursday, January 13, 2005

A while ago TheServerSide.net published an article with Jeff Richter’s thoughts on the future of .NET assembly versioning.

 

If you read the article you’ll find that the Longhorn OS will seriously change the way that .NET itself is versioned. In fact, it turns out that to a serious degree the whole idea of installing “side-by-side” versions of .NET itself will go away when Longhorn shows up.

 

Oh sure, they have plans for a complex scheme by which assemblies can be categorized into different dependency levels. Some levels can be versioned more easily, while others can only be versioned with the OS itself.

 

What this really means is that .NET is losing its independence from the OS. In the end, we’ll only get new versions of .NET when we get new versions of the OS – and we all know how often that happens…

 

I’d say that this was inevitable, but frankly it was not. Java hasn’t fallen into this trap, and .NET doesn’t need to either. Not that it will be easy to avoid, but the end result of the current train of thought portrayed by Richter is devastating.

 

Fortunately there’s the mono project. As .NET becomes more brittle and stagnant due to its binding with Longhorn, we might find that mono on Windows becomes a very compelling story. mono will be able to innovate, change and adapt much faster than the .NET that inspired it. Better still, mono will remain unbound from the underlying OS (like .NET was originally) and thus will be able to run side-by-side in cases where .NET becomes crippled.

 

Hopefully Microsoft will realize what they are doing to themselves before all this comes to pass. Otherwise, I foresee a bright future for mono on Windows.

Thursday, January 13, 2005 12:59:26 PM (Central Standard Time, UTC-06:00)
 Monday, January 10, 2005

On the PeerCast website there is an FAQ in which they recommend turning off the Windows XP Firewall:

I got everything right with my broadcast, but no matter what I do, my stream can't get through. Should I get lost?
You are probably behind a firewall, if it is a personal firewall installed on your local PC, try turning it off. (Windows XP Pro for example..)

No wonder open-source is “more secure”, when they are actively running around telling people to disable the primary safety mechanism provided on Windows.

(to be fair, their blanket statement would apply to Linux firewalls too, but their example is Windows, which leads one to believe they prefer having lots of unprotected Windows machines on the Internet or something...)

Sabotage or just stupid advice?

If open-source people are as smart as they claim, they should be telling people to open only those ports required for the software, not to turn off all defenses and let anything through!!

Monday, January 10, 2005 11:50:27 AM (Central Standard Time, UTC-06:00)

Google has released a 20 year timeline of usenet newsgroup history, highlighting notable events along the way.

I find it very interesting to see the history of the “world's largest BBS“.

Personally I got involved in usenet in 1989 or 1990. I was working for a bio-medical manufacturing company, and managed to convince a local defense contractor to allow us to tap a usenet feed off them. The feed came through our 1200 baud modem, with us dialing into their system to transfer the data. Later they also allowed us to route email through their system over our much faster 2400 baud modems. Ahh, those were the days! :-)

Why'd they do this you ask? Because we were both DEC VAX shops, and there was a hacker-mentality brotherhood amongst VAX admins. (That's hacker in the good sense, not the dark sense.) I don't think much of that mentality still exists in our industry, and that is a sad loss. I suppose it is the price of “progress”.

Prior to getting the usenet feed, I was a Citadel user. That was a Commodore 64 BBS that worked in a manner similar to usenet. It might have run on the Amiga too, but I don't recall for certain. Anyway, Citadel was a store-and-forward relay system like usenet (at the time), and so I was able to interact with people from all across North America via a local phone call. Someone in the area obviously made long-distance calls as a gateway, and my thanks go to that unknown benefactor!

To bring it back to the usenet timeline though, I think this timeline is interesting by itself. But it also should help us remember that the Web is just one thread in the much richer tapestry that is the Internet proper.

Monday, January 10, 2005 9:45:59 AM (Central Standard Time, UTC-06:00)
 Friday, January 07, 2005
  1. Less choice leads to better results.
  2. Higher level languages and frameworks restrict choice.
  3. Thus, higher level languages and frameworks should lead to better results.

 

The “less choice” statement flows from this article entitled Choice Cuts. Ignore the politics and focus on the research beneath the statement. That’s what is valuable in this context.

 

It is obvious that higher level languages and frameworks restrict choice, so I’m not going to cite a bunch of references for that. While many of these languages and frameworks provide a way to drop out to a lower level, in and of themselves they restrict choice. Just look at Visual Basic versions 1-6. Very productive, but often restrictive.

 

Whether the conclusion that higher level languages and frameworks will lead to better results is true or not is unknown.

 

Certainly there are examples where better results have been gained. This train of thought is lurking somewhere behind the software factories movement, and does seem quite compelling. Certainly many people using CSLA .NET with code generators have found radically increased productivity – and the combination of the two does restrict choice.

 

Yet there are countless examples (especially from the era of CASE tools) where the results were totally counter-productive. Where restriction of choice forced intricate workarounds that decreased productivity and made things worse.

 

Charles Simonyi contributed to an article on The Edge, essentially noting that there still is a software crisis and that low-level languages aren't solving the problem. He argues, in fact, that low-level languages like C#, VB and Java are a dead end. That we must move to higher-level language concepts in order to adequately represent the real world through software.

 

This is the view of the software factories people and the domain-specific language movement. And it is rather hard to argue the point. There are many days when I feel like I'm writing the same code I wrote 15 years ago – except now it's in a GUI instead of on a VT100 terminal. But the business code just hasn't changed all that terribly much…

 

To look at it a different way, a software architect’s job is to restrict choice. Our job in this role is to look at the wide array of choices and narrow them down to a specific set that our organization can use. Why? Because having each developer or project lead do all that filtering would seriously cut into productivity.

 

Companies employ architects specifically to limit choice. To set standards and policies that narrow the technology focus to a limited set of options. The rest of the organization then lives within those artificial boundaries. There are many reasons for this, including licensing costs, training and so forth. I doubt that many people have considered the direct (presumably positive) impact of the reduction of choice however.

 

I’ve been working on an idea I’ve dubbed an Entity Description Language (EDL). My original motivation was to reduce the amount of plumbing code we write. Look at a typical Java, C# or VB class and you’ll find that less than 4% of the actual lines in the class perform tangible business functions. Most of the lines are just language syntax or pure plumbing activities like opening a database connection.
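
A contrived example of what I mean (the table and connection string are made up): nearly every line below is plumbing, and only one line does business work:

    using System.Data.SqlClient;

    public class OrderTotals
    {
      public static decimal GetOrderTotal(string connectionString, int orderId)
      {
        SqlConnection cn = new SqlConnection(connectionString);
        cn.Open();
        try
        {
          SqlCommand cm = new SqlCommand(
            "SELECT Quantity, UnitPrice FROM OrderLines WHERE OrderId = @id", cn);
          cm.Parameters.Add("@id", orderId);
          SqlDataReader dr = cm.ExecuteReader();
          decimal total = 0;
          while (dr.Read())
          {
            // The one line of actual business logic in the method.
            total += dr.GetInt32(0) * dr.GetDecimal(1);
          }
          dr.Close();
          return total;
        }
        finally
        {
          cn.Close();
        }
      }
    }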

 

In working on EDL though, I’ve discovered that it is virtually impossible – and perhaps undesirable – to replicate the capabilities of C# or VB in their entirety. There are reasons why these languages use such a verbose syntax. They require verbosity in order to provide flexibility, or choice. Virtually unlimited choice.

 

If we now consider that limited choice is better, then perhaps the fact that something like EDL restricts choice is actually a good thing…

Friday, January 07, 2005 10:23:13 PM (Central Standard Time, UTC-06:00)
 Thursday, January 06, 2005

OK, I figured it out (I think)

 

    // Handlers whose targets can't be serialized (typically UI forms)
    // live in a field marked [NonSerialized], so they are dropped when
    // the object is serialized. Handlers with serializable targets are
    // kept in a normal field and travel with the object.
    [NonSerialized]
    private EventHandler _nonSerializableHandlers;
    private EventHandler _serializableHandlers;

    /// <summary>
    /// Declares a serialization-safe IsDirtyChanged event.
    /// </summary>
    public event EventHandler IsDirtyChanged
    {
      add
      {
        // The null check is a guard added here: static handlers have no
        // target object, so they are treated as serializable.
        if (value.Target == null || value.Target.GetType().IsSerializable)
          _serializableHandlers =
            (EventHandler)Delegate.Combine(_serializableHandlers, value);
        else
          _nonSerializableHandlers =
            (EventHandler)Delegate.Combine(_nonSerializableHandlers, value);
      }
      remove
      {
        if (value.Target == null || value.Target.GetType().IsSerializable)
          _serializableHandlers =
            (EventHandler)Delegate.Remove(_serializableHandlers, value);
        else
          _nonSerializableHandlers =
            (EventHandler)Delegate.Remove(_nonSerializableHandlers, value);
      }
    }

    /// <summary>
    /// Call this method to raise the IsDirtyChanged event.
    /// </summary>
    protected virtual void OnIsDirtyChanged()
    {
      // Raise the event by invoking both backing delegates directly.
      if (_nonSerializableHandlers != null)
        _nonSerializableHandlers(this, EventArgs.Empty);
      if (_serializableHandlers != null)
        _serializableHandlers(this, EventArgs.Empty);
    }

 

I temporarily forgot that C# makes you invoke the delegate directly anyway, so having a separate clause in the manual event declaration isn’t required. I still think it makes the code easier to read, but functionality is king in the end.

Thursday, January 06, 2005 8:58:17 PM (Central Standard Time, UTC-06:00)

Several months ago I posted an entry about the new VB syntax for manual declaration of events – specifically showing how it can be used to solve today’s issue with attempted serialization of nonserializable event listeners.

 

Subsequent to that post, I posted a follow-up with a better version of the code, thanks to input from a reader.

 

At the moment I am working through some eventing issues in both VB and C#, and I’ve found what appears to be a troubling limitation in C#.

 

In the new VB manual declaration scheme we have the ability to manually raise the event using the backing field. This allows me to have two backing fields - one serialized and one not as shown in my follow-up post. This is really nice, because it means we can retain serializable delegate references, while dropping nonserializable references.

 

Unfortunately, it doesn’t appear that the C# syntax allows us to control how the backing field is invoked. It appears that only one backing field is possible, and it is invoked automatically such that we don't have control over the invocation.

 

If we can’t control the invocation, then we can’t invoke both the serialized and nonserialized set of delegates. This will force the C# code to treat all the delegates as nonserialized, even those that could be serialized.

 

Of course I’m researching this for CSLA .NET 2.0. Wouldn’t it be a joke if this time the C# framework had to have some VB code (since last time it was the other way around)?

 

Anyone have any insight into a solution on the C# side?

Thursday, January 06, 2005 8:42:22 PM (Central Standard Time, UTC-06:00)

Want to hear about SOA and/or Indigo? Eric Rudder, Don Box, Doug Purdy and Rich Turner are all speaking at VS Live (as is yours truly).

Even if you, like me, are rather skeptical about SOA, the fact is that Indigo is coming.

Don't let the SOA hype fool you. Indigo will impact you if you use .NET.

Personally I look at Indigo much more as a replacement for remoting and DCOM, along with integrating the WSE stuff into Web services. Because of this, Indigo is a very important thing to me - and to anyone building client/server or n-tier distributed systems in .NET.

Indigo alters the way objects are serialized, the way data is marshalled across networks and more. It is pretty extensive, and is going to be harder to abstract away than either asmx or remoting has been. This means we, as consumers of the technology, will need to understand more of it than we have needed to with existing technologies.

Since VS Live has a whole day on Indigo, this is a chance to get a good look at what's coming and assess what it is going to do to you.

And of course while you are at VS Live, you can attend my distributed object-oriented workshop :-)

Thursday, January 06, 2005 4:48:49 PM (Central Standard Time, UTC-06:00)

Dan Miser blogs that there's a new version of Trillian out there. Off to download we go :-) - and I suggest you buy it! It is well worth the money!

Thursday, January 06, 2005 3:07:56 PM (Central Standard Time, UTC-06:00)

You can now get details about, and download the beta of, Microsoft's new AntiSpyware software at this location.

Thursday, January 06, 2005 10:57:09 AM (Central Standard Time, UTC-06:00)
 Wednesday, January 05, 2005

I was going to post this on my personal blog, but it occurred to me that it is technical enough in nature to fit here. 

EPIC is a flash video (which has been out for a while now) that portrays the future of media. It is scary, but interesting. Moreover it is thought provoking.

Certainly the web/blog/wiki thing has driven down the value of professional writing. Magazines (and vendors such as Microsoft) are paying less and less for content. Why should they pay when tons of people will produce content for free? Why should advertisers go with established publishers when some blogs outstrip them in readership?

Book sales are down. Even factoring in the dot-bust and the Bush-era recession of the past few years, book sales are down from where they should be. Who needs to buy a book when you can get so much online for free through a quick Google search?

It is easy to look at Epic from a technology perspective, but it is the larger social perspective that I find interesting and troubling. And the social effects are real, and actively being debated.

Take the controversy over wikipedia.org for example - the encyclopedia business is under siege by who? Us. Anyone with a fact can share it with the world, without going through a formal company or process. Without any opportunity for anyone to make money on it. On one hand this is good, but on the other it is bad. Who’s going to fund archeological research? Who’s going to do the hard parts of finding facts? For free?

In the technology space, many of us blog things – including me. In many cases these are things that might have been paid articles prior to blogging, but now we gleefully put them on the web for free. That's fun for a while, and is good for notoriety, but in the long run it isn't sustainable.

Several people (friends or acquaintances of mine in both the Microsoft and Java spaces) recently have indicated that they are done writing - books, articles - they are done. This is troublesome. Is Atlas shrugging? Will the content of the future consist merely of the myriad voices of mundane souls?

Epic portrays at least one alternative, where it is at least possible for an author to get paid for their craft. Whether that is a realistic model doesn't matter as much as the fact that some model must be found.

Because we're not talking about just technical authors. We're talking about fiction. We're talking about music, and eventually movies. How will content creators get paid to do their work when random people do it for free? Will true artists bother? Do we care? Perhaps the people doing it for free are as good or better?

Perhaps they are just good enough, which is even scarier. That, after all, is the primary sin people ascribe to Microsoft. That they aren't the best, but rather are just good enough – leaving us stuck living in a “good enough“ world rather than a really kick ass world.

I don't necessarily buy that viewpoint on Microsoft. Having used various flavors of Linux I don’t see that as the “kick-ass world” I’m personally looking for anyway. But it is easy to look at reality TV and see where everything could sink to that level.

Epic raises serious questions that only time will answer...

Wednesday, January 05, 2005 5:27:48 PM (Central Standard Time, UTC-06:00)
 Monday, January 03, 2005

In response to my previous entry, Randy H notes that Microsoft has a different approach to marketing:

 

MS has some incredibly talented marketers. The Technical Product Manager role is essentially a marketer that helps to determine what features go into products and how things should work. To me, that kind of marketing has a lot of value. I wouldn't dismiss the role of marketing in our greatest technology companies. Wasn't .NET a whole lot of marketing as well?

 

While it is true that Microsoft has a unique approach to marketing, they really aren't much different from anyone else. While .NET was as much marketing as anything else (since the ".NET" got slapped on _everything_ for a while), the reason it has been successful is its technical merits.

 

Notice that the ".NET" label is fading already - Visual Studio 2005, Visual Basic 2005, etc. No .NET left in the product names at all. My guess is that ".NET" the term will fade away into the same marketing hole that swallowed up Remote OLE Automation, MTS and soon SOA.

 

I have always found it amazing when Microsoft is said to have this "great marketing machine". In many ways they are the worst marketers out there. Certainly far, far worse than Apple or IBM for instance.

 

Apple has the trendy thing going, and has for a very long time. Microsoft has never been trendy or fashionable or cool or hip. But Apple sure is hip, and it shows in their iPod sales. For some reason though, having powerful marketing in the "cool space" doesn't translate to widespread use.

 

IBM has those really kick-ass commercials that juxtapose business situations with strange solutions. And prior to that they had the cool commercials showing non-tech scenarios that were just metaphors for IT issues. Very cool and very smart stuff. Very effective too, as IBM’s global consulting arm has become large and influential due to that kind of marketing.

 

Microsoft has never had anything remotely similar to “real” marketing like that. Microsoft’s marketing has always been more subtle and focused on technologists. In reality, Microsoft’s marketing has always been more grass-roots, much like the open-source world.

 

And there’s some humor for you. The open-source world has apparently decided that it too needs marketing. Even if you make no money off your work, you certainly want the fame/notoriety – and to get fame you need people to use your stuff rather than your competitors’ stuff (regardless of whether they are commercial or OSS).

 

At the same time, Microsoft really wants to move into the enterprise space, and so they have been trying to figure out how to do “actual” marketing along the lines of IBM. And they want to sell consumer items like the Media PC, so they’ve been struggling to figure out how to be hip like Apple. Hopefully as they do this, they’ll manage to continue the MSDN and TechNet-style marketing to the technical community. We’ve been the bread-and-butter for them over the past 12 or so years after all.

Monday, January 03, 2005 8:52:53 AM (Central Standard Time, UTC-06:00)
 Sunday, January 02, 2005

Don Box put out his predictions for 2005, including #3 which states that the term SOA will be replaced by some new hype-word invented by the marketing wonks of various software and industry analysis companies.

As I said months ago, the S in SOA is a dollar sign. SOA is much more hype than reality, which means it is created and defined by marketing people far more than it is by actual computer scientists or engineers.

As is often the case with over-hyped concepts, the terms related to those concepts rapidly become meaningless. What is SOA? Depends on which vendor you ask. What is a service? Depends on which vendor you ask. It is all a meaningless jumble here at the start of 2005.

Of course the concept is too valuable to marketeers[1] for them to give up just because a term has been lost. So I am sure Don is right and marketing groups at Microsoft, IBM, Gartner and many other organizations are busily working to decide on the next term or terms to use for a while.

So for better or worse SOA itself isn’t dead, but I agree that the term is living on borrowed time.

 

[1] Yes, I know the term is marketers, but that extra “e” in there makes it sound so much more, well, Disney somehow :-)

Sunday, January 02, 2005 5:08:12 PM (Central Standard Time, UTC-06:00)