Rockford Lhotka

 Tuesday, August 31, 2004

The service packs are:

Microsoft .NET Framework 1.0 Service Pack 3 (SP3)

Microsoft .NET Framework 1.1 Service Pack 1 (SP1)

Microsoft .NET Framework 1.1 Service Pack 1 (SP1) for Windows Server 2003

These service packs increase overall security, and support changes made to Windows in Windows XP SP2.

Tuesday, August 31, 2004 12:33:37 PM (Central Standard Time, UTC-06:00)  #    Disclaimer

Years ago it was Carl and Gary's that was the central hub for the VB community. The place we all started browsing and then jumped off to other locations. There really hasn't been an equivalent hub (or portal) for a very long time.

But I think there's hope - check out Robert Green's blog entry on the new VB community site updates.

Robert has been working (along with Duncan) on this for quite a while now, soliciting input from a lot of people in the VB community - including authors, speakers and others. The site has been slowly evolving, and now is really starting to show some great promise as a central hub for the VB community.


Tuesday, August 31, 2004 12:29:04 PM (Central Standard Time, UTC-06:00)
 Thursday, August 26, 2004

OK, now I feel better. Perhaps I jumped the gun with my previous post.


Rich Turner gave an awesome presentation – totally on the mark from start to end.


He was very, very clear that the prescriptive guidance is to use asmx (web services) to cross service boundaries and to use Enterprise Services (COM+), MSMQ, Remoting or asmx inside a service boundary.


Note that inside a service might be multiple tiers. Multiple physical tiers. You might cross network boundaries (though that should be minimized), but that’s OK. This is all inside your service, within your control. Since it is inside your control, you should choose the appropriate technology based on all criteria (such as performance, transactional support, security, infrastructure support, deployment and so forth).


This is the best and most clear guidance I’ve heard from Microsoft yet. Very nice!


Thursday, August 26, 2004 10:32:31 PM (Central Standard Time, UTC-06:00)

I’d laugh except that it makes me want to cry.


I’m at a Microsoft training event, being briefed on various technologies by people on the product teams – including content on Indigo.


The unit manager gave an overview, and someone asked about the recommended architecture guidance around today’s Remoting technology. He reiterated that the recommendation is to only use it within a process. This, after he’d just finished pointing out that there are scenarios today that are only solved by remoting.


Say what?


Then several other Indigo team members covered various features of Indigo, how they map to today’s technology, and how we may get from today to Indigo. Numerous times it came up that Indigo incorporates much of the Remoting model (because it is good), and that most code using Remoting today will transparently migrate to Indigo when it gets here.


So what now?


First, the prescriptive guidance is nuts. They are saying conflicting things and just feeding confusion. Remoting is sometimes the only answer, but don't use it?


I'm sorry, I have to build real apps between now and whenever Indigo shows up. If Remoting is the answer, then it is the answer. End of story.


Second, it turns out that you are fine with Remoting as long as you don’t create custom sinks or formatters. Stick to the standard ones and your code will move to Indigo just as easily as any asmx code you write today (which is to say, with minimal code changes).


And of course you should avoid the TCP channel and custom hosts – use IIS, the HttpChannel and the BinaryFormatter and life is good.


Finally, (as I’ve discussed before), Remoting is for communication between tiers of your own application. If you are communicating across trust boundaries (between separate applications) then you should use web services – or better yet use WSE 2.0.


Conversely, if you are using web services or WSE 2.0, then you have inserted a trust boundary and you shouldn’t be pretending that you are communicating between tiers – you are now communicating between separate applications.

Thursday, August 26, 2004 9:29:00 PM (Central Standard Time, UTC-06:00)
 Friday, August 20, 2004

Though not strictly a computer thing, I have a classic tale of customer service. And hey, customer service is an issue for everyone, all the time. As computer professionals we provide a service to our customers, whether internal or external. And providing good customer service means we get better raises and/or get to keep our job.


This tale of customer service is actually two interlinked stories. They illustrate that customer service is all about how you treat the customer. It is all about interpersonal relationship and interaction.


Many years ago I changed the oil in my car myself. My wife and I couldn’t afford to pay to have it changed, and it wasn’t hard to do on cars built in the 70’s and early 80’s. But over time it became harder and harder to dispose of the used oil, and my income improved as my career moved along.


So eventually we started going to Valvoline Rapid Oil Change. They were quick, efficient and inexpensive. More expensive than me doing it, but by this point in life it was worth the extra money. After using their services for quite some time, they offered to change the PCV valve in our car. It was a $6 part. Not overly hard to change, and their price wasn’t much higher than the price of the part itself. In other words, it seemed worth it, so we said sure, go ahead.


In those years Minnesota required annual emissions testing for all cars in the Twin Cities area. Our test was scheduled for perhaps a month after the PCV valve had been replaced. And the car failed. Of course the state test doesn’t say why the car failed, just that it failed.


So off we went to the Chevy dealer. $60 later it turns out that the emissions failure was due to a faulty PCV valve.


Now I knew darned well that the valve was new. After all, Rapid Oil Change had just changed it less than a month before. So I went over to the oil change place, with their receipt for the PCV valve, the state emissions failure form and the dealer receipt showing the work done at the dealer.


I asked to talk to the manager. He didn’t look happy, and as I explained the story, he looked less happy. Before I could even explain where I hoped this conversation would go, he cut me off and told me that there was no way he could know if their PCV valve was faulty or if the problem had some other root cause.


In other words, he shut me down. No apology, much less offering to defray the $60 his shop had cost me.


In all the years since then we’ve never used Valvoline Rapid Oil Change. Any time I get a chance, I tell people to avoid the chain. Just by my wife and me not using them, they’ve certainly lost more than the $60. And I like to think that I’ve cost them other business as well, thus hopefully providing some just punishment for employing such a crappy manager and for installing faulty parts.


Contrast this to an experience I had just this past week. Last weekend we bought a small pop-up camper trailer so it will be easier to go camping as a family.


The trailer has a round 6-pin wiring plug for the lights. It has the 6-pin plug so it can get power directly from the car alternator/battery and thus it can charge the deep cycle battery in the camper while we drive down the road. Unfortunately my van has a flat 4-pin connector, which works great for most small trailers (like my boat trailer), but doesn’t match up to the 6-pin.


I figured I could rig up an adapter that would at least get the lights working. My thought was that I just didn’t need to recharge the battery from the van, and that it would be cheap and easy to get the lights working.


It turns out that modern cars are finicky and have complex wiring… They aren’t nearly as easy or fun to work with as my 1976 Datsun F-10, or my 1987 Cavalier… So I managed to blow out the fuse for my tail and brake lights on the van, and I still didn’t have the trailer working right.


I called around and ended up going to Burnsville Trailer Hitch, a specialist in these sorts of things. $110 later I had a fully wired 6-pin connector on my van, along with a 6 to 4 converter for my boat trailer. I thought that was money well-spent, since I’d explored running power from the battery back to the plug myself and it would have taken me far longer than the 45 minutes it took them.


But my story isn’t done. Here’s the catch. When I got home, I tried the plug and it didn’t work. The signal and brake lights on the trailer just didn’t work.


I called back to the store, and they offered to look at the problem if I brought the van back. I wasn’t totally thrilled, since we’re talking about a 15 mile drive each way, but there was nothing to be done. Off I went.


About 10 minutes after I got there, the guy comes in and says that there’s this converter unit that safely combines the signal and brake light wires from the van (which has separate bulbs for signal and brake) into a single pair of wires for left and right on the trailer (which has one bulb on each side for signal and brake). Turns out this converter was blown.


Now I knew about the converter, having wired one into my previous car. With him pointing out that it was blown, I strongly suspected that it was my doing, from my earlier attempt to wire the trailer.


Over my protests, he said they’d replace it for free. Seriously, I protested a bit, saying that it could easily have been my fault. His answer? It could have easily been their fault since they might have blown it when they did the wiring for the 6-pin connector.


So here we have a store with awesome customer service. I was and am impressed, and will recommend people go there whenever it is appropriate.


But here’s the real kicker. The oil change place pissed off a regular customer, losing not only me and my wife, but everyone we can convince to avoid the place. The trailer hitch place has no reason to expect I’ll ever return. After all, how often do you need a trailer hitch? Yet they were professional and went above and beyond the call to provide great service.


As I said to start, I’m not sure this has anything to do directly with computers, but I’ll bet that if you treat your customers the way Burnsville Trailer Hitch treats theirs, you’ll have happy customers, get better raises and have a more secure job overall!


Friday, August 20, 2004 6:29:23 PM (Central Standard Time, UTC-06:00)
 Thursday, August 12, 2004

The MSDN VB community site has featured my blog - which I think is pretty cool.

For those who don't know, the VB team at Microsoft has started assembling a community portal site for VB developers.

They already have some cool content, including relevant blog listings, links to sites with useful code, tips and tricks and so forth. And they plan to do more in the near future, including some more interactive content so we, as the community, can help create and manage some of the site's content.

I think this is an excellent step on the part of the VB team to help support the huge VB community, and I appreciate it. Thanks guys!

Thursday, August 12, 2004 2:43:37 PM (Central Standard Time, UTC-06:00)
 Wednesday, August 11, 2004

There is this broad-reaching debate that has been going on for months about remoting, Web services, Enterprise Services, DCOM and so forth. In short, it is a debate about the best technology to use when implementing client/server communication in .NET.


I’ve weighed in on this debate a few times with blog entries about Web services, trust boundaries and related concepts. I’ve also had discussions about these topics with various people such as Clemens Vasters, Juval Lowy, Ingo Rammer, Don Box and Michele Leroux Bustamante. (all the “experts” tend to speak at many of the same events, and get into these discussions on a regular basis)


It is very easy to provide a sound bite like “if you aren’t doing Enterprise Services you are creating toys” or something to that effect. But that is a serious oversimplification of an important issue.


Because of this, I thought I’d give a try at summarizing my thoughts on the topic, since it comes up with Magenic’s clients quite often as well.


Before we get into the article itself, I want to bring up a quote that I find instructive:


“The complexity is always in the interfaces” – Craig Andrie


Years ago I worked with Craig and this was almost like a mantra with him. And he was right. Within a small bit of code like a procedure, nothing is ever hard. But when that small bit of code needs to use or be used by other code, we have an interface. All of a sudden things become more complex. And when groups of code (objects or components) use or are used by other groups of code things are even more complex. And when we look at SOA we’re talking about entire applications using or being used by other applications. Just think what this does to the complexity!


I think a lot of the problem with the debate comes because of a lack of clear terminology. So here are the definitions I’ll use in the rest of this article:





Layer: A logical grouping of similar functionality within an application. Often layers are separate .NET assemblies, though this is not a requirement.


Tier: A physical grouping of functionality within an application. There is a cross-process or cross-network boundary between tiers, providing physical isolation and separation between them.


Application: A complete unit of software providing functionality within a problem domain. Applications are composed of layers, and may be separated into tiers.


Service: A specific type of application interface that specifically allows other applications to access some or all of the functionality of the application exposing the service. Often this interface is in the form of XML. Often this XML interface follows the Web services specifications.


I realize that these definitions may or may not match those used by others. The fact is that all of these terms are so overloaded that intelligent conversation is impossible without some type of definition or clarification. If you dislike these definitions, please feel free to mentally substitute your own favorite overloaded terms here and throughout the remainder of this article. :-)


First, note that there are really only three entities here: applications, tiers and layers.


Second, note that services are just a type of interface that an application may expose. If an application only exposes a service interface, I suppose we could call the application itself a service, but I suggest that this only returns us to overloading terms for no benefit.


A corollary to the above points is that services don’t provide functionality. Applications do. Services merely provide an access point for an application’s functionality.


Finally, note that services are exposed for use by other applications, not other tiers or layers within a specific application. In other words, services don’t create tiers, they create external interfaces to an application. Conversely, tiers don’t create external interfaces, they are used exclusively within the context of an application.


In Chapter 1 of my .NET Business Objects books I spend a fair amount of time discussing the difference between physical and logical n-tier architecture. By using the layer and tier terminology perhaps I can summarize here more easily.


An application should always be architected as a set of layers. Typically these layers will include:


  • Presentation
  • Business logic
  • Data access
  • Data management


The idea behind this layering concept is two-fold.


First, we are grouping similar application functionality together to provide for easier development, maintenance, reuse and readability.


Second, we are grouping application functionality such that external services (such as transactional support, or UI rendering) can be provided to specific parts of our code. Again, this makes development and maintenance easier, since (for example) our business logic code isn’t contaminated by the complexity of transactional processing during data access operations. Reducing the amount of external technology used within each layer reduces the surface area of the API that a developer in that layer needs to learn.


In many cases each layer will be a separate assembly, or even a separate technology. For instance, the data access layer may be in its own DLL. The data management layer may be the JET database engine.


Tiers represent a physical deployment scenario for parts of an application. A tier is isolated from other tiers by a process or network boundary. Keeping in mind that cross-process and cross-network communication is expensive, we must always pay special attention to any communication between tiers to make sure that it is efficient given these constraints. I find it useful to view tier boundaries as barriers. Communication through the barriers is expensive.


Specifically, communication between tiers must be (relatively) infrequent, and coarse-grained. In other words, send few requests between tiers, and make sure each request does a relatively large amount of work on the other side of the process/network barrier.

Layers and Tiers

It is important to understand the relationship between layers and tiers.


Layers are deployed onto tiers. A layer does not span tiers. In other words, there is never a case where part of a layer runs on one tier and part of the layer runs on another tier. If you think you have such a case, then you have two layers – one running on each tier.


Because layers are discrete units, we know that we can never have more tiers than layers. In other words, if we have n layers, then we have n or fewer tiers.


Note that thus far we have not specified that communication between layers must be efficient. Only communication between tiers is inherently expensive. Communication between layers could be very frequent and fine-grained.


However, notice also that tier boundaries are also layer boundaries. This means that some inter-layer communication does need to be designed to be infrequent and coarse-grained.


For all practical purposes we can only insert tiers between layers that have been designed for efficient communication. This means that it is not true that n layers can automatically be deployed on n tiers. In fact, the number of potential tiers is entirely dependent on the design of inter-layer communication.


This means we have to provide terminology for inter-layer communication:





Fine-grained: Communication between layers involves the use of properties, methods, events, delegates, data binding and so forth. In other words, there’s a lot of communication, and each call between layers only does a little work.


Coarse-grained: Communication between layers involves the use of a very few methods. Each method is designed to do a relatively large amount of work.
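The contrast can be sketched in code. This is a minimal Python analogy (the customer classes and layer shapes here are hypothetical, not from any of my actual designs): the fine-grained style makes many tiny calls, while the coarse-grained style pushes all the data across in one call.

```python
# Fine-grained interface: the consuming layer makes many small calls,
# each doing very little work. Cheap in-process, ruinous if a
# process/network boundary sits between the layers.
class CustomerFine:
    def __init__(self):
        self._name = ""
        self._city = ""

    def set_name(self, value):
        self._name = value

    def set_city(self, value):
        self._city = value

    def describe(self):
        return f"{self._name} ({self._city})"

# Coarse-grained interface: one call carries all the data and does all
# the work on the far side of the boundary.
def save_customer(data):
    customer = CustomerFine()
    customer.set_name(data["name"])
    customer.set_city(data["city"])
    return customer.describe()

# Fine-grained use: three round trips if a tier boundary sits here.
c = CustomerFine()
c.set_name("Acme")
c.set_city("Minneapolis")

# Coarse-grained use: a single round trip does the same work.
result = save_customer({"name": "Acme", "city": "Minneapolis"})
```

The point of the sketch is that only the coarse-grained call site is a candidate for a tier boundary; the fine-grained property chatter has to stay in-process.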


If we have n layers, we have n-1 layer interfaces. Of those interfaces, some number m will be coarse-grained. This means that we can have at most m+1 tiers.


In most applications, the layer interface between the presentation and business logic layers is fine-grained. Microsoft has provided us with powerful data binding capabilities that are very hard to give up. This means that m is virtually never n-1, but rather starts at n-2.


In most modern applications, we use SQL Server or Oracle for data management. The result is that the layer interface between data access and data management is typically coarse-grained (using stored procedures).


I recommend making the layer interface between the business logic and data access also be coarse-grained. This provides for flexibility in placement of these layers into different tiers so we can achieve different levels of performance, scalability, fault-tolerance and security as required.


In a web environment, the presentation is really just the browser, and the actual UI code runs on the web server. Note that this is, by definition, two tiers – and thus two or more layers. The interaction between web presentation and web UI is coarse-grained, so this works.


In the end, this means we have some clearly defined potential tier boundaries that map directly to the coarse-grained layer interfaces in our design. These include:


  • Presentation <-> UI (web only)
  • Business logic <-> Data access
  • Data access <-> Data management


Thus, for most web apps m is 3 and for most Windows apps m is 2. So we’re talking about n layers being spread (potentially) across 3 or 4 physical tiers.

Protocols and Hosts

Now that we have an idea how layers and tiers are related, let’s consider this from another angle. Remember that layers are not only logical groupings of domain functionality, but are also grouped by technological dependency. This means that, when possible, all code requiring database transactions will be in the same layer (and thus the same tier). Likewise, all code consuming data binding will be in the same layer, and so forth.


The net result of this is that a layer must be deployed somewhere that the technological dependencies of that layer can be satisfied. Conversely, it means that layers that have few dependencies have few hard restrictions on deployment.


Given that tiers are physical constructs (as opposed to the logical nature of layers), we can bind technological capabilities to tiers. What we’re doing in this case is defining a host for tiers, which in turn contain layers. In the final analysis, we’re defining host environments in which layers of our application can run.


We also know that we have communication between tiers, which is really communication between layers. Communication occurs over specific protocols that provide appropriate functionality to meet our communication requirements. The requirements between different layers of our application may vary based on functionality, performance, scalability, security and so forth. For the purposes of this article, the word protocol is a high-level concept, encompassing technologies like DCOM, Remoting, etc.


It is important to note that the concept of a host and a protocol are different but interrelated. They are interrelated because some of our technological host options put restrictions on the protocols available.


In .NET there are three categorical types of host: Enterprise Services, IIS or custom. All three hosts can accommodate ServicedComponents, and the IIS and custom hosts can accommodate Services Without Components (SWC).


The relationships look like this (each host, the protocols it supports, and the code it can accommodate):

  • Enterprise Services (Server Application): DCOM; hosts ServicedComponent objects and simple .NET assemblies
  • IIS: Web services or Remoting; hosts ServicedComponent objects, Services Without Components and simple .NET assemblies
  • Custom: Web services (w/ WSE), Remoting or DCOM; hosts ServicedComponent objects, Services Without Components and simple .NET assemblies



The important thing to note here is that we can easily host ServicedComponent objects or Services Without Components in an IIS host, using Web services or Remoting as the communication protocol.


All three hosts can host simple .NET assemblies. For the IIS and custom hosts this is a native capability. However, Enterprise Services can host normal .NET assemblies by having a ServicedComponent dynamically load .NET assemblies and invoke types in those assemblies. Using this technique it is possible to create a scenario where Enterprise Services acts as a generic host for .NET assemblies. I do this in my .NET Business Objects books, for instance.


What we’re left with is a choice of three hosts. If we choose Enterprise Services as the host then we’ve implicitly chosen DCOM as our protocol. If we choose IIS as a host we can use Web services or Remoting, and also choose to use or not use the features of Enterprise Services. If we choose a custom host we can choose Web services, Remoting or DCOM as a protocol, and again we can choose to use or not use Enterprise Services features.


Whether you need to use specific Enterprise Services features is a whole topic unto itself. I have written some articles on the topic, the most broad-reaching of which is this one.


However, there are some things to consider beyond specific features (like distributed transactions, pooled objects, etc.). Specifically, we need to consider broader host issues like stability, scalability and manageability.


Of the three hosts, Enterprise Services (COM+) is the oldest and most mature. It stands to reason that it is probably the most stable and reliable.


The next oldest host is IIS, which we know is highly scalable and manageable, since it is used to run a great many web sites, some of which are very high volume.


Finally there’s the custom host option. I generally recommend against this except in very specific situations, because writing and testing your own host is hard. Additionally, it is unlikely that you can match the reliability, stability and other attributes of Enterprise Services or IIS.


So do we choose Enterprise Services or IIS as a host? To some degree this depends on the protocol. Remember that Enterprise Services dictates DCOM as the protocol, which may or may not work for you.


Our three primary protocols are DCOM, Web services and Remoting.


DCOM is the oldest, and offers some very nice security features. It is tightly integrated with Windows and with Enterprise Services and provides very good performance. By using Application Center Server you can implement server farms and get good scalability.


On the other hand, DCOM doesn’t go through firewalls or other complex networking environments well at all. Additionally, DCOM requires COM registration of the server components onto your client machines. Between the networking complexity and the deployment nightmares, DCOM is often very unattractive.


However, as with all technologies it is very important to weigh the pros of performance, security and integration against the cons of complexity and deployment.


Web services is the most hyped of the technologies, and the one getting the most attention by key product teams within Microsoft. If you cut through the hype, it is still an attractive technology due to the ongoing work to enhance the technology with new features and capabilities.


The upside to Web services is that it is an open standard, and so is particularly attractive for application integration. However, that openness has very little meaning between layers or tiers of a single application. So we need to examine Web services using other criteria.


Web services is not a high-performance or low-bandwidth technology.


Web services use the XmlSerializer to convert objects to/from XML, and that serializer is extremely limited in its capabilities. To pass complex .NET types through Web services you’ll need to manually use the BinaryFormatter and Base64 encode the byte stream. While achievable, it is a bit of a hack to do this.
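The shape of that hack can be sketched with Python stand-ins (this is an analogy only: pickle plays the role of the BinaryFormatter, and the LineItem type is hypothetical). Binary-serialize the object graph, then Base64-encode it so the bytes can ride inside a text/XML payload.

```python
import base64
import pickle

class LineItem:
    """Stand-in for a complex type the XML serializer can't handle."""
    def __init__(self, sku, quantity):
        self.sku = sku
        self.quantity = quantity

def to_wire(obj):
    # Binary-serialize the full object graph, then Base64-encode it so
    # the result is plain text that can be embedded in an XML message.
    return base64.b64encode(pickle.dumps(obj)).decode("ascii")

def from_wire(text):
    # Reverse the two steps on the receiving side to get the object
    # graph back intact.
    return pickle.loads(base64.b64decode(text.encode("ascii")))

wire_text = to_wire(LineItem("A-100", 3))
item = from_wire(wire_text)
```

The whole object graph survives the trip, which is exactly what the XML serialization path can’t guarantee, and exactly why this workaround qualifies as a hack rather than a design.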


However, by using WSE we can get good security and reliability features. Also Web services are strategic due to the focus on them by many vendors, most notably Microsoft.


Again, we need to evaluate the performance and feature limitations of Web services against the security, reliability and strategic direction of the technology. Keep in mind, too, that hacks exist to overcome the worst of the feature limitations, allowing Web services to offer functionality similar to DCOM or Remoting.


Finally we have Remoting. Remoting is a core .NET technology, and is very comparable to RMI in the Java space.


Remoting makes it very easy for us to pass complex .NET types across the network, either by reference (like DCOM) or by value. As such, it is the optimal choice if you want to easily interact with objects across the network in .NET.


On the other hand, Microsoft recommends against using Remoting across the network. Primarily this is because Remoting has no equivalent to WSE and so it is difficult to secure the communications channel. Additionally, because Microsoft’s focus is on Web services, Remoting is not getting a whole lot of new features going forward. Thus, it is not a long-term strategic technology.


Again, we need to evaluate this technology by weighing its superior feature set for today against its lack of long-term strategic value. Personally I consider the long-term risk manageable, assuming you are employing intelligent application designs that shield you from potential protocol changes.


This last point is important in any case. Consider that DCOM is also not strategic, so using it must be done with care. Also consider that Web services will undergo major changes when Indigo comes out. Again, shielding your code from specific implementations is of critical importance.


In the end, if you do your job well, you’ll shield yourself from any of the three underlying protocols so you can more easily move to Indigo or something else in the future as needed. Thus, the long-term strategic detriment of DCOM and Remoting is minimized, as is the strategic strength of Web services.
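One way to do that shielding, sketched in Python (the Channel abstraction and all the names here are illustrative assumptions, not an API from any real library): application code talks to a transport-neutral interface, and each concrete channel wraps one wire protocol.

```python
class Channel:
    """Abstract transport. Concrete channels would wrap DCOM, Remoting,
    Web services, or whatever replaces them, behind one interface."""
    def call(self, method, *args):
        raise NotImplementedError

class InProcessChannel(Channel):
    """Trivial channel for a server object in the same process; a
    remoting- or web-service-backed channel would expose the same
    call() signature."""
    def __init__(self, server):
        self._server = server

    def call(self, method, *args):
        # Dispatch by name, the way a real channel would marshal a
        # method call across its protocol.
        return getattr(self._server, method)(*args)

class OrderService:
    """A hypothetical server-side object."""
    def get_status(self, order_id):
        return {"order": order_id, "status": "shipped"}

def check_order(channel, order_id):
    # Application code is written against Channel only; swapping the
    # underlying protocol never touches this function.
    return channel.call("get_status", order_id)

status = check_order(InProcessChannel(OrderService()), 42)
```

Swapping DCOM for Remoting, or Remoting for Indigo, then means writing one new Channel subclass rather than revisiting every call site.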


So in the end what do you do? Choose intelligently.


For the vast majority of applications out there, I recommend against using physical tiers to start with. Use layers – gain maintainability and reuse. But don’t use tiers. Tiers are complex, expensive and slow. Just say no.


But if you must use physical tiers, then for the vast majority of low to medium volume applications I tend to recommend using Remoting in an IIS host (with the Http channel and BinaryFormatter), potentially using Enterprise Services features like distributed transactions if needed.


For high volume applications you are probably best off using DCOM with an Enterprise Services host – even if you use no Enterprise Services features. Why? Because this combination is more than twice as old as Web services or Remoting, and its strengths, limitations and foibles are well understood.


Note that I am not recommending the use of Web services for cross-tier communication. Maybe I’ll change my view on this when Indigo comes out – assuming Indigo provides the features of Remoting with the performance of DCOM. But today it provides neither the features nor performance that make it compelling to me.

Wednesday, August 11, 2004 5:45:39 PM (Central Standard Time, UTC-06:00)
 Wednesday, July 14, 2004

A customer asked me for a list of things VB can do that C# can't. "Can't" isn't meaningful of course, since C# can technically do anything, just like VB can technically do anything. Neither language can really do anything that the other can't, because both are bound to .NET itself.

But here's a list of things VB does easier or more directly than C#. And yes, I'm fully aware that there's a comparable list of things C# does easier than VB - but that wasn't the question I was asked :-)   I'm also fully aware that this is a partial list.

For the C# product team (if any of you read this), this could also act as my wish list for C#. If C# addressed even the top few issues here I think it would radically improve the language.

Also note that this is for .NET 1.x - things change in .NET 2.0 when VB gets edit-and-continue and the My functionality, and C# gets iterators and anonymous delegates.

Finally, on to the list:

  1. One key VB feature is that it eliminates an entire class of runtime error you get in case-sensitive languages, where a method parameter and a property in the same class have the same name except for case. These problems can only be found through runtime testing, not by the compiler. This is a stupid thing that is solved in VB by avoiding the archaic concept of case sensitivity.
  2. Handle multiple events in single method (superior separation of interface and implementation).
  3. WithEvents is a huge difference in general, since it dramatically simplifies (or even enables) several code generation scenarios.
  4. In VB you can actually tell the difference between inheriting from a base class and implementing an interface. In C# the syntax for both is identical, even though the semantic meaning is very different.
  5. Implement multiple interface items in a single method (superior separation of interface and implementation).
  6. Also, independent naming/scoping of methods that implement an interface method - C# interface implementation is comparable to the sucky way VB6 did it... (superior separation of interface and implementation).
  7. Multiple indexed properties (C# only allows a single indexed property).
  8. Optional parameters (important for Office integration, and general code cleanliness).
  9. Late binding (C# requires manual use of reflection).
  10. There are several COM interop features in VB that require much more work in C#. VB has the ComClass attribute and the CreateObject method for instance.
  11. The Cxxx() methods (such as CDate, CInt, CStr, etc) offer some serious benefits over their C# equivalents. Sometimes it's performance, but more often it's increased functionality that takes several lines of C# to achieve.
  12. The VB RTL also includes a bunch of complex financial functions for dealing with interest, etc. In C# you either write them by hand or buy a third-party library (because self-respecting C# devs won't use the VB RTL even if they have to pay for an alternative).
  13. The InputBox method is a simple way to get a string from the user without having to build a custom form.
  14. Sound a Beep in less than a page of code.
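
A few of these differences are easier to see in code than in prose. Here's a rough VB .NET sketch (all class, member and event names are hypothetical, invented just for illustration) showing items 2, 3, 5, 6 and 8 from the list above:

```vb
Public Interface IPrinter
    Sub Print()
End Interface

Public Class OrderForm
    Implements IPrinter

    ' Items 2 and 3: WithEvents plus Handles lets one method
    ' declaratively handle events from multiple sources.
    Private WithEvents mOkButton As System.Windows.Forms.Button
    Private WithEvents mCancelButton As System.Windows.Forms.Button

    Private Sub Buttons_Click(ByVal sender As Object, _
        ByVal e As System.EventArgs) _
        Handles mOkButton.Click, mCancelButton.Click
        ' one handler for both buttons
    End Sub

    ' Items 5 and 6: the implementing method has its own name
    ' and scope, independent of the interface method's name.
    Private Sub PrintOrder() Implements IPrinter.Print
        ' implementation goes here
    End Sub

    ' Item 8: optional parameters with default values.
    Public Sub Load(Optional ByVal id As Integer = 0)
        ' load a new or existing object depending on id
    End Sub
End Class
```

None of this is impossible in C# - but the event hookup requires manual `+=` wiring, the interface method must share the interface's name (or use C#'s all-or-nothing explicit implementation syntax), and the optional parameter becomes a pair of overloads.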

And please, no flames. I know C# has a comparable list, and I know I've missed some VB items as well. The point isn't one-upmanship, the point is being able to intelligently and dispassionately evaluate the areas where a given language provides benefit.

If C# adopted some of these ideas, that would be cool. If VB adopted some of C#'s better ideas that would be cool. If they remain separate, but relatively equal that's probably cool too.

Personally, I want to see some of the more advanced SEH features from VAX Basic incorporated into both VB and C#. The DEC guys really had it nailed back in the late 80's!


Wednesday, July 14, 2004 7:47:07 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Wednesday, July 7, 2004

I just got my author copies of the book, so it has been printed and is in distribution! This should mean that, any day now, bookstores will get copies and Amazon should start shipping them.

Click here for more information on both the VB .NET and C# editions of the book, as well as links to the code, the online discussion forum and other related information.


Books | News
Wednesday, July 7, 2004 11:57:40 AM (Central Standard Time, UTC-06:00)  #    Disclaimer

I rarely read magazines or articles - I just skim them to see if there's anything I actually do want to invest the time to read. This is useful, because it means I can get the gist of a lot of content and only dive into the bits that I find actually interesting. It also means that I can (and do) read a lot of Java related stuff as well as .NET related material.

If you've followed my blog or my books, you know that I have a keen interest in distributed object-oriented systems - which naturally has spilled over into this whole SOA/service fad that's going on at the moment. So I do tend to actually read stuff dealing with these topic areas in both the .NET and Java space.

Interestingly enough, there's no difference between the two spaces. Both the .NET and Java communities are entirely confused and both have lots of vendors trying to convince us that their definitions are correct, so we'll buy the “right” products to solve our theoretical problems. What a mess.

Ted Neward has been engaged in some interesting discussions about the message-based or object-based nature of services. Now Daniel F. Savarese has joined the fray as well. And all this in the Java space, where they claim to be more mature and advanced than us lowly .NET guys... I guess not, since their discussions are identical to those happening in the .NET space.

I think the key point here is that the distributed OO and SOA issues and confusion totally transcend any technical differences between .NET and Java. What I find unfortunate is that most of the discussions on these topics are occurring within artificial technology silos. Too few discussions occur across the .NET/Java boundary.

When it comes to SOA and distributed object-oriented systems (which are two different things!), every single concept, issue, problem and solution that applies to .NET applies equally to Java and vice versa. Sure, the vendor implementations vary, but that's not where the problem (or the solution) is found. The problems and solutions are more abstract and are common across platforms.

What's needed is an intelligent, honest and productive discussion of these issues that is inclusive of people from all the key technology backgrounds. I imagine that will happen about the same time that we have any sort of intelligent, honest or productive discussion between Republicans and Democrats in the US government...


Wednesday, July 7, 2004 11:52:56 AM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Thursday, June 24, 2004

I got this question via email. I get variations on this question a lot, so I thought I’d blog my answer.


Hope you don't mind imposing on you for a second. I actually spoke to you very briefly after the one of the sessions and you seemed to concur with me that for my scenario - which is a typical 3-tier scenario, all residing on separate machines, both internal and external clients - hosting my business components on IIS using HTTP/binary was a sensible direction. I've recently had a conversation with someone suggesting that Enterprise Services was a far better platform to pursue. His main point in saying this is the increased productivity - leveraging all the services offered there (transactions, security, etc.). And not only that, but that ES is the best migration path for Indigo, which I am very interested in. This is contrary to what I have read in the past, which has always been that ES involves interop, meaning slower (which this person also disputes, by the way), and Don Box's explicit recommendation that Web Services were the best migration path. I just thought I'd ask your indulgence for a moment to get your impressions. Using DCOM is a little scary, we've had issues with it in the past with load-balancing etc. Just wondering if you think this is a crazy suggestion or not, and if not, do you know of any good best-practices examples or any good resources.



Here are some thoughts from a previous blog post.


The reason people are recommending against remoting is because the way you extend remoting (creating remoting sinks, custom formatters, etc) will change with Indigo in a couple years. If you aren't writing that low level type of code then remoting isn't a problem.


Indigo subsumes the functionality of both web services and remoting. Using either technology will get you to Indigo when it arrives. Again, assuming you aren't writing low-level plug-ins like custom sinks.


Enterprise Services (ES) provides a declarative, attribute-based programming model. And this is good. Microsoft continues to extend and enhance the attribute-based models, which is good. People should adopt them where appropriate.


That isn't to say, however, that all code should run in ES. That's extrapolating the concept beyond its breaking point. Just because ES is declarative, doesn’t make ES the be-all and end-all for all programming everywhere.


It is true that ES by itself causes a huge performance hit - in theory. In reality, that perf hit is lost in the noise of other things within a typical app (like network communication or the use of XML in any way). However, specific services of ES may have larger perf hits. Distributed transactions, for instance, have a very large perf hit - which is essentially independent of any interop issues lurking in ES. That perf hit is just due to the high cost of 2-phase transactions. Here's some performance info from MSDN supporting these statements.


The short answer is to use the right technology for the right thing.


  1. If you need to interact between tiers that are inside your application but across the network, then use remoting. Just avoid creating custom sinks or formatters (which most people don't do, so this is typically a non-issue).
  2. If you need to communicate between applications (even .NET apps) then use web services. Note this is not between tiers, but between applications – as in SOA.
  3. If you need ES, then use it. This article may help you decide if you need any ES features in your app. The thing is, if you do use ES, your client still needs to talk to the server-side objects. In most cases remoting is the simplest and fastest technology for this purpose. This article shows how to pull ES and remoting together.
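
As a rough sketch of option 1, here's what hosting a server-side object via remoting can look like in VB .NET (the `MyApp.OrderServer` type, port number and URI are hypothetical, invented for illustration):

```vb
' Server side: expose a business object over remoting using a
' binary-over-TCP channel - appropriate inside an application,
' across tiers you control.
Imports System.Runtime.Remoting
Imports System.Runtime.Remoting.Channels
Imports System.Runtime.Remoting.Channels.Tcp

Module ServerHost
    Sub Main()
        ChannelServices.RegisterChannel(New TcpChannel(8080))
        RemotingConfiguration.RegisterWellKnownServiceType( _
            GetType(MyApp.OrderServer), "OrderServer.rem", _
            WellKnownObjectMode.SingleCall)
        Console.WriteLine("Server running - press Enter to exit")
        Console.ReadLine()
    End Sub
End Module
```

```vb
' Client side: get a proxy to the server-side object and call it
' as though it were local.
Dim svc As MyApp.OrderServer = _
    CType(Activator.GetObject(GetType(MyApp.OrderServer), _
        "tcp://appserver:8080/OrderServer.rem"), MyApp.OrderServer)
```

Note that this uses only the stock channels and formatters - no custom sinks - so it stays within the safe zone described above. For IIS hosting you'd use an HTTP channel with the binary formatter instead, which gets you IIS security and management for free.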

Note that none of the above options use DCOM. The only case I can see for DCOM is where you want the channel-level security features it provides. However, WSE is providing those now for web services, so even there, I'm not sure DCOM is worth all the headaches. That leaves only the scenario where you need both stateful server-side objects and channel-level security, since DCOM is the only technology that really provides both features.


Thursday, June 24, 2004 4:38:44 PM (Central Standard Time, UTC-06:00)  #    Disclaimer