Rockford Lhotka's Blog


 Friday, February 25, 2005
I take it back. We don't need Windows Forms, smart clients or Flash. JavaScript and XML are obviously enough to rule the world. For evidence, see http://maps.google.com.
 
(now we just need tools to let us write stuff like this as easily as we program Windows Forms and the game's up)
Friday, February 25, 2005 4:49:34 PM (Central Standard Time, UTC-06:00)

In this article, Bill Vaughn voices his view that DataAdapter.Fill should typically be a developer's first choice for getting data rather than using a DataReader directly.

 

In my VB.NET and C# Business Objects books I primarily use the DataReader to populate objects in the DataPortal_Fetch methods. You might infer, then, that Bill and I disagree.

 

While it is true that Bill and I often get into some really fun debates, I don't think we disagree here.

 

Bill's article seems to be focused on scenarios where UI developers or non-OO business developers use a DataReader to get data. In such cases I agree that the DataAdapter/DataTable approach is typically far preferable. Certainly there will be times when a DataReader makes more sense there, but usually the DataAdapter is the way to go.

 

In OO scenarios like CSLA .NET the discussion gets a bit more complex. In my books I discussed why the DataReader is a good option - primarily because it avoids loading the data into memory just to copy it into our object variables. For object persistence the DataReader is the fastest option.
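To make this concrete, here is roughly the style of DataPortal_Fetch code I'm talking about - a sketch only, where the Criteria class, GetConnectionString helper, table and column names are all invented for illustration:

// using System.Data.SqlClient; - sketch only, illustrative names
protected override void DataPortal_Fetch(object criteria)
{
  Criteria crit = (Criteria)criteria;
  using (SqlConnection cn = new SqlConnection(GetConnectionString()))
  {
    cn.Open();
    SqlCommand cm = new SqlCommand(
      "SELECT id, name FROM Customers WHERE id = @id", cn);
    cm.Parameters.Add("@id", crit.Id);
    using (SqlDataReader dr = cm.ExecuteReader())
    {
      if (dr.Read())
      {
        // Copy column values straight into the object's fields -
        // no DataSet or DataTable sitting in memory in between.
        _id = dr.GetInt32(0);
        _name = dr.GetString(1);
      }
    }
  }
}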

 

Does that mean it is the best option in some absolute sense?

 

Not necessarily. Most applications aren't performance-bound. In other words, if we lost a few milliseconds of performance it is likely that our users would never notice. For most of us, we could trade a little performance to gain maintainability and be better off.

 

As a book author I am often stuck between two difficult choices. If I show the more maintainable approach the performance Nazis will jump on me, while if I show the more performant option the maintainability Nazis shout loudly.

 

So in the existing books I opted for performance at the cost of maintainability. That's fine if performance is your primary requirement, and I don't regret the choice I made.

 

(It is worth noting that subsequent to publication of the books, CSLA .NET has been enhanced, including enhancing the SafeDataReader to accept column names rather than ordinal positions. This is far superior for maintenance, with a very modest cost to performance.)
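Conceptually the name-based access works like this (a sketch of the idea, not the actual CSLA .NET code):

// Sketch of name-based access in a SafeDataReader-style wrapper.
public string GetString(string name)
{
  // GetOrdinal does the name-to-ordinal lookup (the modest
  // performance cost); the null check is the "safe" part.
  int i = _dataReader.GetOrdinal(name);
  return _dataReader.IsDBNull(i) ? string.Empty : _dataReader.GetString(i);
}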

 

Given some of the enhancements to the strongly typed DataAdapter concept (the TableAdapter) that we’ll see in .NET 2.0, it is very likely that I’ll switch and start using TableAdapter objects in the DataPortal_xyz methods.
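Assuming hypothetical designer-generated names (the real ones come from the dataset designer), the fetch code could shrink to something like:

// CustomersTableAdapter, CustomersDataTable, CustomersRow and
// GetDataById are hypothetical designer-generated members,
// shown for illustration only.
CustomersTableAdapter ta = new CustomersTableAdapter();
CustomersDataTable table = ta.GetDataById(crit.Id);
foreach (CustomersRow row in table)
{
  _id = row.Id;      // strongly typed column access
  _name = row.Name;  // no ordinals, no casts
}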

 

While there’s a performance hit, the code savings look to be very substantial. Besides, it is my opportunity to make the maintenance-focused people happy for a while and let the performance nuts send me nasty emails. Of course nothing I do will prevent the continued use of the DataReader for those who really need the performance.

Friday, February 25, 2005 2:28:57 PM (Central Standard Time, UTC-06:00)
 Thursday, February 24, 2005
My newest article is online at TheServerSide.net.
Thursday, February 24, 2005 12:06:43 PM (Central Standard Time, UTC-06:00)
 Wednesday, February 23, 2005
Just a super-quick entry that I'll hopefully follow up on later. Ted wrote a bunch of stuff on contracts related to Indigo, etc.
 
I think this is all wrong-headed. The idea of being loosely coupled and the idea of being bound by a contract are in direct conflict with each other. If your service only accepts messages conforming to a strict contract (XSD, C#, VB or whatever) then it is impossible to be loosely coupled. Clients that can't conform to the contract can't play, and the service can never ever change the contract so it becomes locked in time.
 
Contract-based thinking was all the rage with COM, and look where it got us. Cool things like DoWork(), DoWork2(), DoWorkEx(), DoWorkEx2() and more.
 
Is this really the future of services? You gotta be kidding!
Wednesday, February 23, 2005 3:00:15 PM (Central Standard Time, UTC-06:00)
Rich put together this valuable list of tools - thank you!!
Wednesday, February 23, 2005 2:51:21 PM (Central Standard Time, UTC-06:00)

Sahil has started a discussion on a change to the BinaryFormatter behavior in .NET 2.0.

 

Serialization is just the process of taking complex data and converting it into a single byte stream. Motivation is a separate issue :)

 

There are certainly different reasons to "serialize" an object.

 

One is what I do in CSLA .NET, which is to truly clone an object graph (not just an object, but a whole graph), either in memory or across the network. This is a valid reason, and has a set of constraints that a serializer must meet to be useful. This is what the BinaryFormatter is all about today.

 

Another is to convert some or all of an object’s data into plain data - to externalize the object’s state. This is what the XmlSerializer is all about today. The purpose isn’t to replicate a .NET type, it is to convert that type into a simple data representation.

 

Note that in both cases the formatter/serializer attempts to serialize the entire object graph, not just a single object. This is because the “state” of an object is really the state of the object graph. That implies that all references from your object to any other objects are followed, because they collectively constitute the object graph. If you want to “prune” the graph, you mark references such that the formatter/serializer doesn’t follow them.

 

In the case of the BinaryFormatter, it does this by working with each object’s fields, and it follows references to other objects. An event inside an object is just another field (though the actual backing field is typically hidden). This backing field is just a delegate reference, which is just a type of object reference – and thus that object reference is followed like any other.

 

In the case of the XmlSerializer, only the public fields and read/write properties are examined. But still, if one of them is a reference the serializer attempts to follow that reference. The event/delegate issue doesn’t exist here only because the backing field for events isn’t a public field. Make it a public field and you’ll have issues.
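A quick sketch of that idea (CustomerData is an invented type):

// using System.Xml.Serialization; - sketch only
public class CustomerData
{
  public int Id;           // serialized: public field
  public string Name;      // serialized: public field
  private string _secret;  // ignored: not public, so never examined
}

XmlSerializer s = new XmlSerializer(typeof(CustomerData));
s.Serialize(Console.Out, new CustomerData());  // emits simple XML data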

 

In .NET 1.x the BinaryFormatter attempts to follow all references unless the field is marked as [NonSerialized]. In C# the field: attribute target provides (what I consider to be) a hack to apply this attribute to the hidden backing field. C# also offers a better approach, which is to use a block structure to declare the event, so you get to manually declare the backing field and can apply the attribute yourself.
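In C# the two options look something like this (a class would use one or the other, not both):

// Option 1: the field: target "hack" - the attribute lands on the
// compiler-generated backing field.
[field: NonSerialized]
public event EventHandler Changed;

// Option 2: the block structure - declare the backing field yourself
// and mark it, then write the add/remove blocks by hand.
[NonSerialized]
private EventHandler _nameChanged;

public event EventHandler NameChanged
{
  add { _nameChanged = (EventHandler)Delegate.Combine(_nameChanged, value); }
  remove { _nameChanged = (EventHandler)Delegate.Remove(_nameChanged, value); }
}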

 

The trick is that the default behavior is for the backing field to be serializable, and things like Windows Forms or other nonserializable objects might subscribe to the object’s events. When the BinaryFormatter follows the references to those objects it throws an exception (as it should).

 

I made a big stink about this when I wrote my VB Business Objects book, because I discovered this issue and there was no obvious solution. VB.NET has neither the field: target hack nor the block structure declaration for events, so I was forced to implement a C# base class to safely declare my events.

 

I spent a lot of time talking to both the VB and CLR (later Indigo) teams about this issue.

 

The VB team provided a solution by supporting the block structure for declaring events, thus allowing us to have control over how the backing field is serialized. This is a very nice solution as I see it.

 

I am not familiar with the change Sahil is seeing in .NET 2.0. But that is probably because I’ve been using the block structure event declarations as shown in my C# and VB 2005 links above, so I’m manually ensuring that only serializable event handlers are traversed by the BinaryFormatter.

 

But I should point out that the C# code in the example above is for .NET 1.1 and works today just as it does in .NET 2.0.

Wednesday, February 23, 2005 12:06:27 PM (Central Standard Time, UTC-06:00)
 Monday, February 21, 2005

A reader commented on my previous post about patternshare.org:

 

The content on patternshare.org is weak. I don't think the content will ever be as strong as the books that the authors are trying to hock.

 

It seems like every good design choice these days is being labeled a "Pattern". Quite a shame.

 

Don't misunderstand... I'm a huge pattern evangelist.

 

We must keep in mind that patternshare.org cannot replicate the patterns from the books. That would violate copyright law. The goal of patternshare.org is to be an index, to make it easier for you to figure out which books to buy and/or where in those books to find interesting patterns. That is an admirable goal and one that I think patternshare.org can accomplish.

 

In 50 years (or whenever it is that copyrights run out) we can put all the book content online and then we won’t have “weak” content. But up to that point I’m afraid we’re kind of stuck with the reality that the patterns are in books, and the books are copyrighted and that’s that…

 

Regarding the comment that "everything is being labeled a pattern" I agree. It is the current over-hyped trend in the architecture/design space - competing only with SOA.

 

But I think that the current pragmatic value of patterns is to provide an abstract language we humans can use to discuss our software designs. A language better than we have without “patterns”. Regardless of whether some of these "patterns" are really patterns or not, the fact is that the collective effort of all these books and articles is providing us with that common language - and thus is allowing us to communicate at a higher level than was possible even 3-5 years ago.

Monday, February 21, 2005 6:03:57 PM (Central Standard Time, UTC-06:00)
awk
A few days ago I posted a list of programming languages in which I've been competent over the years. A few other people chimed in with their own lists, which I found very interesting.
 
Thomas Williams pointed out that XSLT was important in his history, which got me thinking about an oversight in my own list.
 
I neglected to mention awk (more specifically gawk on the VAX). awk is a unix text processing language, and gawk is the GNU Project version created for many platforms. I first learned about awk when taking a graduate level data structures class at the University of Minnesota where we used Unix boxes of some flavor or other. It was so useful that I found the VAX gawk implementation and put that on our VAX at work (this was around 1990 or so).
 
People rave about things like perl or XSLT, but I gotta say that for pure text processing it is hard to beat awk. If XSLT had the power of awk it would have swept the web development world in ways we can't even imagine. I know that XSLT is widely used in the web world, but if you've used XSLT and haven't used awk you just don't know how crippled XSLT really is.
 
The thing is that XSLT has the same mindset as awk. An awk program is divided up into blocks, and each block is triggered based on a regular expression evaluation. In the case of awk, each input line is evaluated against the regular expression for every block. Each block whose regular expression matches the input line is executed. There is no linear or event-driven or OO concept involved. It is the same as XSLT in this regard.
 
Where awk is amazing is that it is a complete language. It has variables, arrays, looping structures, conditionals and so forth. The input text is automatically parsed into easy-to-manipulate chunks based on your parsing choices. This means that inside one of these blocks, you can do virtually anything you desire. So within a block, triggered due to a regular expression match, you can use a complete programming language that is entirely geared toward text manipulation to act on a pre-parsed line of input.
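For those who haven't seen awk, here is a rough C# approximation of the pattern/action model - awk expresses the same thing in a fraction of the code, and the patterns here are invented for illustration:

// Rough C# approximation of awk's pattern/action model: every input
// line is tested against every pattern, and each matching block runs.
using System;
using System.Text.RegularExpressions;

class AwkSketch
{
  static void Main()
  {
    string line;
    while ((line = Console.In.ReadLine()) != null)
    {
      string[] fields = line.Split(' ');  // awk pre-parses each line into $1..$n

      if (Regex.IsMatch(line, "^ERROR"))      // like: /^ERROR/ { ... }
        Console.WriteLine("error: " + fields[0]);

      if (Regex.IsMatch(line, @"\d+$"))       // like: /[0-9]+$/ { ... }
        Console.WriteLine("ends with a number: " + line);
    }
  }
}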
 
To this day I wonder why there isn't either a variant of XSLT that can do what awk does, or a variant of awk that parses XML documents like XSLT. Perhaps the world just isn't ready for that kind of power?
Monday, February 21, 2005 9:29:49 AM (Central Standard Time, UTC-06:00)
 Friday, February 18, 2005

In a recent online discussion the question came up “If ‘the middle tier’ has no knowledge about the actual data it's transporting, then what value is it adding?”

 

The answer: database connection pooling.

 

Pure and simple, in most rich client scenarios the only reason for a "middle tier" is to pool database connections. And the benefit can be tremendous.

 

Consider 200 concurrent clients all connecting to your database using conventional coding. They'll have at least 200 connections open, probably more. This is especially true since each client does its own "connection pooling", so typically a client will never close its connection once established for the day.

 

Then consider 200 clients going through a middle tier "app server" that does nothing but ferry the data between the clients and database - but which is the code that opens the connections. Now those 200 clients might use just 3-4 connections to the database rather than 200, because the connections are all pooled on the server.
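The app server's data access code would look something like this (the connection string field, query and names are invented for illustration):

// Because every client request funnels through this code with the
// same connection string, ADO.NET connection pooling lets a handful
// of real database connections serve all 200 clients.
public DataSet GetCustomers(string lastName)
{
  using (SqlConnection cn = new SqlConnection(_connectionString))
  {
    SqlDataAdapter da = new SqlDataAdapter(
      "SELECT id, name FROM Customers WHERE lastName = @lastName", cn);
    da.SelectCommand.Parameters.Add("@lastName", lastName);
    DataSet ds = new DataSet();
    da.Fill(ds);  // Fill opens the connection and closes it (returning
                  // it to the pool) as soon as the query completes
    return ds;
  }
}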

 

Was there a performance hit? Absolutely. Was there a scalability gain? Absolutely. Is it more expensive and harder to build/maintain? Absolutely.

 

This middle tier stuff is not a panacea. In fact its cost is typically higher than the benefit, because most applications don't actually have enough concurrent users to make it worth the complexity. But people are enamored of the idea of "n-tier", thinking it requires an actual physical tier...

 

I blame it on Windows DNA and those stupid graphic representations. They totally muddied the waters in people's understanding of the difference between n-layer (logical separation) and n-tier (physical separation).

 

People DO want n-layer, because that provides reuse, maintainability and overall lower costs. Logical separation of UI, business logic, data access and data storage is almost always of tremendous benefit.

 

People MIGHT want n-tier if they need the scalability or security it can offer, and if those benefits outweigh the high cost of building a physically distributed system. But the cost/benefit isn’t there as often as people think, so a lot of people build physical n-tier systems for no good reason. They waste time and money for no real gain. This is sad, and is something we should all fight against.

 

I make my living writing books and articles and speaking about building distributed systems. And my primary message is just say NO!

 

You should be forced into implementing physical tiers kicking and screaming. There should be substantial justification for using tiers, and those justifications should be questioned at every step along the way.

 

At the same time, you should encourage the use of logical layers at all times. There should be substantial justification for not using layers, and any argument against layering of software should be viewed with extreme skepticism.

 

Layering is almost always good; tiers are usually bad.

Friday, February 18, 2005 12:11:26 PM (Central Standard Time, UTC-06:00)
I was just IMing with a friend. He's working with a client that has an interesting IT staff. The person he's working with for instance, recently shut down a main server for maintenance - in the middle of the day, without warning the active users.
 
This is why we need fault-tolerant, stateless server clusters. Not to stop downtime from accidents or hardware failure, but rather to overcome the limitations of IT staffing.
Friday, February 18, 2005 9:11:51 AM (Central Standard Time, UTC-06:00)
 Tuesday, February 15, 2005

So there’s some news around Internet Explorer. Yeah, that browser that everyone uses, but which hasn’t changed for years.

 

First and most important, there was a vulnerability – a nasty one – in IE that got fixed in the most recent round of patches. If you haven’t installed them you better do it quick. This vulnerability is very easy to exploit! To see if you are vulnerable you can go here.

 

Second, a fellow RD put me onto this IE-based browser called Avant Browser. It adds a ton of Firefox-like features to IE, including tabbed browsing, integrated searching and more. And it is freeware – no ads, no spyware, no catch that we can find. I’ve been using it as my primary browser for a couple days now and no longer yearn for Firefox at all.

 

Finally, Microsoft has decided that they really need to do something about or with IE, so they are coming out with IE 7.0 sometime in the future. Here’s the Microsoft press announcement, and here’s already an article on the topic.

 

While I do think that Microsoft needs to do an IE upgrade, this is a double-edged sword for them – and for those of us who prefer rich clients.

 

Back around 2000, before the dot-bomb, there was an emerging debate about whether the browser should continue to be a glorified terminal or should become a programming platform. The discussion was rendered largely moot by the dot-bomb and the Bush-era recession, but Firefox and a new IE will likely rekindle the debate.

 

I personally don’t see the “browser as a programming platform” being a good thing. Browsers were designed for document viewing. They’ve already been hacked nearly to death to enable the kinds of web apps we have today. Just think how deeply they’ll need to be hacked to enable real programming capabilities comparable to Windows or KDE. Such a backwards way of getting a programming platform is very unlikely to result in anything good.

 

That said, suppose we were to start from scratch and design a real programming platform that supported rich GUI interactions and client-side logic, and included meaningful state management and access to client-side devices like printers, scanners, etc. Well, then we’d have Windows, or at least something quite close to it.

 

Sure, it would offer a way to break from the past. It would mean all that legacy code could go away. But it would also mean that all our existing software would be stuck. The odds of such an idea going anywhere are comparable to BeOS taking over the planet.

 

So the browser will never become a new platform. At best it will become the ultimate in chewing gum and baling twine platforms. What a nightmare!

 

The only way out I can see is a browser that directly embeds .NET or the JDK, and provides programmers in those virtual machines access to a decent document object model akin to what Microsoft is creating in Avalon or XAML. But there too, we’re just recreating Avalon itself inside a browser rather than in Windows itself. Why would we want to be restricted to some arbitrary browser window when we can have the whole OS experience?

 

So in the end I see little hope for the browser-as-a-platform concept – but I am sure there’ll be people who do see it as a good thing and who see Firefox and an IE upgrade as a way to rid themselves of traditional rich clients… Such is life.

Tuesday, February 15, 2005 1:09:55 PM (Central Standard Time, UTC-06:00)
 Monday, February 14, 2005

There’s a thread on the CSLA .NET discussion forum about possible differences between the VB and C# versions of CSLA .NET. I started to answer the thread, then got on a roll, so it became a blog entry :-)

 

I strive to keep the two versions of CSLA .NET in sync within a reasonable time window. Everything after version 1.0 (the version in my books) is essentially my donation to the community. What I get out of it is not wealth, but rather a lot of very interesting and useful feedback from the vibrant CSLA .NET community. I'm able to try out some of the most interesting (to me) ideas by releasing new and updated versions of the code. It is a learning opportunity.

 

The fact that I have to do every mod twice is a serious pain and does reduce the fun, but I think it is worth the pain because it makes the end result more useful for everyone.

 

I do most of my first-run coding in VB, because I prefer it. Simple personal preference. I've done some first-run coding in C# too, I just don't find it as enjoyable. Some people have the reverse experience and that's cool too. That doesn't bother me one way or the other. I fully understand feeling an affinity toward a specific language. It took me years to get over Pascal. Ahh VAX Pascal, I still harbor such fond memories.

 

But what I am more concerned about in terms of CSLA .NET is VS 2005. In .NET 2.0 we start to see some feature divergence between VB and C#. Most notably the My namespace in VB. Fortunately by playing in the middle-tier, CSLA is less subject to the differences than some code will be. However, there'll still be some differences that will make my dual life harder.

 

The biggest one that will impact me is My.Resources, which makes the use of resources somewhat simpler than in C#. This isn't a huge thing, but it does mean there'll be extra code differences to reconcile between the two versions in CSLA .NET 2.0.

 

There's also My.Settings, though I don't know if that will impact me quite as much. I anticipate dropping the DB() function from BusinessBase in 2.0, since most people (rightly) avoid putting db connection strings in their config files.

 

The two primary C# features (yield and anonymous delegates) don't appear to have a home in CSLA, so I don't expect any differences from them. Not that they aren’t seriously cool features, but they just don’t have a place in CSLA .NET itself.

 

The new strongly typed TableAdapter classes are very cool. They are useful in both languages. And I hope to use strongly typed TableAdapter objects to simplify the code in the DataPortal_xyz methods.

 

There are some features that are more accessible to VB than C# in the new strongly typed DataTable (due to C#'s lack of WithEvents functionality - a major oversight imo). However, I don't expect to use any of those features in CSLA to start with, so there's no impact there.

 

When I write the book I'll create Windows and Web UI chapters. Those are what I dread most, because that's where the differences due to My become much more serious. There are numerous examples of UI development where My will be a serious code-saver - thus causing direct differences between the VB and C# code. Not that I can't do the same stuff in C#, just that it will take more and different code, which increases the effort on my part as an author.

 

Fortunately most of the book is about the framework and creating business objects, and the language divergence will have relatively minimal impact in those areas.

 

It is hard to speculate on what comes after VS 2005, but personally I expect more divergence, not less. Earlier in the thread someone noted that things like the Mac, Linux and Java still exist even though you can technically do everything they do with Windows and .NET.

 

The fact is that they all serve a purpose, as does .NET to them. People deep in C# often think differently than those deep in VB. People in Java think differently than those in .NET. This means they have different perspectives, different priorities, on the same problems and issues. This is only good. This means there are competing ideas that we can all evaluate and use to the best of our abilities, regardless of the language or platform we choose to use.

 

Loving distributed computing as I do, I am constantly taking ideas from the C++ and Java worlds. I closely watch the SOA world, even though I think it is misguided in many ways, because there are interesting ideas and perspectives there that can apply to distributed object-oriented systems as well.

 

I’ve said it before and I’ll say it again, if you only know one programming language family (such as the C family or the Basic family) then you really, really need to get out more. Your horizons and thus your career are simply too limited and you can’t be considered credible in most of these discussions.

 

That’s an interesting meme. Which programming languages have you been competent in during your career? I’ll start (in rough order of usage):

 

1. Apple BASIC
2. VAX Pascal
3. Turbo Pascal
4. DCL
5. FORTRAN 90
6. VAX Basic
7. ARexx
8. Modula-II
9. Visual Basic (1-6)
10. Visual Basic .NET
11. C#

 

While I did write a VT terminal emulator in C once, I don’t think I was ever really competent in C, so I’m not counting that. My memories of that experience are not inspirational in the slightest… I’ve also dabbled in various Unix shell languages and batch files, but was never competent in them.

 

Converting the list to language families is harder, because things like ARexx aren’t obvious, but here’s my attempt:

 

1. Pascal (the Pascals and Modula-II)
2. Basic (various)
3. FORTRAN
4. Scripting (ARexx, DCL)
5. C (C# and C, if you are generous)

 

So, having wandered from the topic of CSLA .NET parity between VB and C# we arrive at what could be a cool meme. Go ahead, comment or blog – what languages and language families have you been competent in during your career?

Monday, February 14, 2005 6:50:22 PM (Central Standard Time, UTC-06:00)
 Tuesday, February 08, 2005

I was looking for info on a design pattern, and on a whim I thought I'd see if patternshare.org was online yet. And it is!

This site promises to be an awesome resource for all of us, since it provides a centralized index/resource for patterns of many types. The fact that it is online is wonderful news!

Tuesday, February 08, 2005 7:35:39 PM (Central Standard Time, UTC-06:00)

I just finished watching Eric Rudder’s keynote on Indigo at VS Live in San Francisco. As with all keynotes, it had glitz and glamour and gave a high-level view of what Microsoft is thinking.

 

(for those who don’t know, Eric is the Microsoft VP in charge of developer-related stuff including Indigo)

 

Among the various things discussed was the migration roadmap from today’s communication technologies to Indigo. I thought it was instructive.

 

From asmx web services the proposed changes are minor. Just a couple lines of code change and away you go. Very nice.

 

From WSE to Indigo is harder, since you end up removing lots of WSE code and replacing it with an attribute or two. The end result is nice because your code is much shorter, but it is more work to migrate.

 

From Enterprise Services (COM+, ServicedComponent) the changes are minor – just a couple lines of changed code. But the semantic differences are substantial because you can now mark methods as transactional rather than the whole class. Very nice!

 

From System.Messaging (MSMQ) to Indigo the changes are comparable in scope to the WSE change. You remove lots of code and replace it with an attribute or two. Again the results are very nice because you save lots of code, but the migration involves some work.

 

From .NET Remoting to Indigo the changes are comparable to the asmx migration. Only a couple lines of code need to change and away you go. This does assume you listened to advice from people like Ingo Rammer, Richard Turner and myself and avoided creating custom sinks, custom formatters or custom channels. If you ignored all this good advice then you’ll get what you deserve I guess :-)

 

As Eric pointed out however, Indigo is designed for the loosely coupled web service/SOA mindset, not necessarily for the more tightly coupled n-tier client/server mindset. He suggested that many users of Remoting may not migrate to Indigo – directly implying that Remoting may remain the better n-tier client/server technology.

 

I doubt he is right. Regardless of what Indigo is designed for, it is clear to me that it offers substantial benefits to the n-tier client/server world. These benefits include security, reliable messaging, simplified 2-phase transactions and so forth. The fact that Indigo can be used for n-tier client/server even if it is a bit awkward or not really its target usage won’t stop people. And from today’s keynote I must say that it looks totally realistic to (mis)use Indigo for n-tier client/server work.

Tuesday, February 08, 2005 12:35:52 PM (Central Standard Time, UTC-06:00)
 Thursday, February 03, 2005

One last post on SOA from my coffee-buzzed, Chicago-traffic-addled mind.

 

Can a Service have Tiers?

 

Certainly a Service can have layers. Any good software will have layers. In the case of a Service these layers will likely be:

 

1. Interface
2. Business
3. Data access
4. Data management

 

This only makes sense. You’ll organize your message-parsing and XML handling code into the interface layer, which will invoke the business layer to do actual work. The business layer may invoke the Data access layer to get/save data into the Data management (database) layer.

 

But layers are logical constructs. They are just a way of organizing code so it is maintainable, readable and reusable. Layers say nothing about how the code is deployed – that is the realm of tiers.

 

So the question remains, can a Service be divided into tiers?

 

I’ll argue yes.

 

You deploy layers onto different tiers in an effort to get a good trade-off between performance, scalability, fault-tolerance and security. More tiers mean worse performance, but may result in better scalability or security.

 

If I create a service, I may very well need to deploy it such that I can provide high levels of scalability or security. To do this, I may need to deploy some of my service’s layers onto different tiers.

 

This is no different – absolutely no different – than what we do with web applications. This shouldn’t be a surprise, since a web service is nothing more than a web application that spits out XML instead of HTML. It seems pretty obvious that the rules are the same.

 

And there are cases where a web application needs to have tiers to scale or to be secure. It follows then that the same is true for web services.

 

Thus, services can be deployed into multiple tiers.

 

Yet the SOA purists would argue that any tier boundary should really be a service boundary. And this is where things get nuts. Because a service boundary implies lack of trust, while a layer boundary implies complete trust. Tiers are merely deployments of layers, so tiers imply complete trust too.

 

(By trust here I am not talking about security – I’m talking about data trust. A service must treat any caller – even another service – as an untrusted entity. It must assume that any inbound data breaks rules. If a service does extend trust then it instantly becomes unmaintainable in the long run and you just gave up the primary benefit of SOA.)

 

So if a tier is really a service boundary, then we’re saying that we have multiple services, one calling the next. But services pretty much always have those four layers I mentioned earlier, so now each “was-tier-now-is-service” will have those layers.

 

Obviously this is a lot more code to write, and a lot of overhead, since the lower-level service (that would have been a tier) can’t trust the higher level one and must replicate much of its validation and possibly other business logic.

 

To me, at this point, it is patently obvious that the idea of entirely discarding tiers in favor of services is absurd. Rather, a far better view is to suggest that services can have tiers – private, trusting communications between layers, even across the network between the web server hosting the service interface and the application server hosting the data access code.

 

And of course this ties right back into my previous post for today on remoting. Because it is quite realistic to expect that you’ll use DCOM/ES/COM+ or remoting to do the communication between the web server and application server for this private communication.

 

While DCOM might appear very attractive (and is in many cases), it is often not ideal if there’s a firewall between the web server and application server. While it is technically possible to get DCOM to go through a firewall, I gotta say that this one issue is a major driver for people to move to remoting or web services.

 

And while web services might be very attractive (and are in many cases), they are not ideal if you want to use distributed OO concepts in the implementation of your service.

 

And there we are back at remoting once again as being a perfectly viable option.

 

Of course there’s a whole other discussion we could have about whether there’s any value to using any OO design concepts when implementing a service – but that can be a topic for another time.

Thursday, February 03, 2005 11:32:50 PM (Central Standard Time, UTC-06:00)

I am afraid that I'm rapidly becoming more convinced than even Ted that SOA == web services == RPC with angle brackets.

The more people I talk to, the more I realize that virtually no one is actually talking about service-oriented analysis, architecture or design. They are using SOA as a synonym for web services, and they are using web services as a replacement for DCOM, RMI, remoting or whatever RPC protocol they used before.

I think the battle is lost, if battle there was. The idea of a loosely-coupled, message-based architecture where autonomous entities interact with each other over policy-based connections is a really cool idea, but it doesn’t resonate with typical development teams.

The typical development team is building line-of-business systems and just needs a high performance, reliable and feature-rich RPC protocol. Sometimes web services fits that bill, and even if it doesn’t, it is the current fad, so it tends to win by default.

People are running around creating web services that do not follow a message-based design. What would a message-based design look like you ask? Like this:

result = procedure(request)

Where ‘procedure’ is the method/procedure name, ‘request’ is the idempotent message containing the request from the caller and ‘result’ is the idempotent message containing the result of the procedure.

Then if you want to be a real purist, you’d make this asynchronous, so the design would actually be:

procedure(request)

And any result message would be returned as a service call from ‘procedure’. But that really goes out of bounds for almost everyone, because then you are truly doing distributed parallel processing and that’s just plain hard to grok.

So in our pragmatic universe, we’re talking about the

result = procedure(request)

form and that’s enough. But that isn’t what most people are doing. Most people are creating services as though they were components. Creating methods/procedures that accept parameters rather than messages. Stuff like this:

customerList = GetCustomerData(firstName As String, lastName As String)

Where ‘customerList’ is a DataSet containing the results of any matches.

There’s not a message, idempotent or not, to be found here. This is components-on-the-web. This is COM-on-the-web or CORBA-on-the-web. This is not SOA, this is just RPC redux.
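In C# terms the two shapes look something like this (the request/result types are invented for illustration):

// Message-based: one request message in, one result message out.
[WebMethod]
public CustomerResult GetCustomers(CustomerRequest request)
{
  CustomerResult result = new CustomerResult();
  // ...populate the result message based on the request message...
  return result;
}

// Component-style RPC: individual parameters in, DataSet out - which
// is what most people actually build.
[WebMethod]
public DataSet GetCustomerData(string firstName, string lastName)
{
  DataSet customerList = new DataSet();
  // ...query by the individual parameters...
  return customerList;
}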

And that’s OK. I have no problem with that necessarily. But since this is the norm, I am pretty much ready to concede that the “Battle of SOA” is lost. SOA has already become just another acronym in the long list of RPC acronyms we’ve left behind over the decades.

Too bad really, because I found the distributed parallel, loosely coupled, message-based concepts to be extremely interesting and challenging. Hard, and impractical for normal business development, but really ranking high on the geek-cool chart.

Thursday, February 03, 2005 10:59:31 PM (Central Standard Time, UTC-06:00)

I’ve hashed and rehashed this topic numerous times. In particular, read this and this. But the debate rages on, paralyzing otherwise perfectly normal development teams in a frenzy of analysis paralysis over something that is basically not all that critical in a good architecture.

 

I mean really. If you do a decent job of architecting, it really doesn’t matter a whole hell of a lot which RPC protocol you use, because you can always switch away to another one when required.

 

And while web services are a pretty obvious choice for SOA, the reality is that if you are debating between remoting, web services and DCOM/ES/COM+ then you are looking for an RPC protocol - end of story.

 

If you were looking to do service-oriented stuff you’d reject remoting and DCOM/ES/COM+ out of hand because they are closed, proprietary, limited technologies that just plain don’t fit the SO mindset.

 

In an effort to summarize to the finest point possible, I’ll try once again to clarify my view on this topic. In particular, I am going to focus on why you might use remoting and how to use it if you decide to. Specifically I am focusing on the scenario where remoting works better than either of its competitors.

 

First, here is the scenario that remoting handles and that web services and ES/COM+/DCOM don’t (without hacking them):

 

1. You want to pass rich .NET types between AppDomains, processes or machines (we’re talking Hashtables, true business objects, etc.)
2. You are communicating between layers of an application that are deployed on different tiers (we’re not talking services here, we’re talking layers of a single application - client/server, n-tier, etc.)
3. You want no-touch deployment on the client

 

If you meet the above criteria then remoting is the best option going today. If you only care about 1 and 2 then ES/COM+/DCOM is fine – all you lose is no-touch deployment (well, and a few hours/days in configuring DCOM, but that’s just life :-) ).

 

If you don’t care about 1 then web services is fine. In this case you are willing to live within the confines of the XmlSerializer and should have no pretense of being object-oriented or anything silly like that. Welcome to the world of data-centric programming. Perfectly acceptable, but not my personal cup of tea.  To be fair, it is possible to hack web services to handle number 1, and it isn't hard. So if you feel that you must avoid remoting but need 1, then you aren't totally out of luck.

 

But in general, assuming you want to do 1, 2 and 3 then you should use remoting. If so, how should you use remoting?

 

1. Host in IIS
2. Use the BinaryFormatter
3. Don’t create custom sinks or formatters; just use what Microsoft gave you
4. Feel free to use SSL if you need a secure line
5. Wrap your use of the RPC protocol in abstraction objects (like my DataPortal or Fowler’s Gateway pattern)

 

Hosting in IIS gives you a well-tested and robust process model in which your code can run. If it is good enough for www.microsoft.com and www.msn.com it sure should be good enough for you.

 

Using the BinaryFormatter gives you optimum performance and avoids the to-be-deprecated SoapFormatter.
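On the client, that setup amounts to just a few lines (the URL and the IDataPortalServer interface here are illustrative):

// using System.Collections;
// using System.Runtime.Remoting.Channels;
// using System.Runtime.Remoting.Channels.Http;
IDictionary props = new Hashtable();
props["name"] = "HttpBinary";
// An HTTP channel to the IIS host, with the binary formatter
// replacing the default SOAP formatter on the client side.
BinaryClientFormatterSinkProvider formatter = new BinaryClientFormatterSinkProvider();
ChannelServices.RegisterChannel(new HttpChannel(props, formatter, null));

IDataPortalServer portal = (IDataPortalServer)Activator.GetObject(
  typeof(IDataPortalServer), "http://appserver/MyApp/DataPortal.rem");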

 

By not creating custom sinks or formatters you are helping ensure that you’ll have a relatively smooth upgrade path to Indigo. Indigo, after all, wraps in the core functionality of remoting. They aren’t guaranteeing that internal stuff like custom sinks will upgrade smoothly, but they have been very clear that Indigo will support distributed OO scenarios like remoting does today. And that is what we’re talking about here.

 

If you need a secure link, use SSL. Sure WSE 2.0 gives you an alternative in the web service space, but there’s no guarantee it will be compatible with WSE 3.0, much less Indigo. SSL is pretty darn stable comparatively speaking, and we’ve already covered the fact that web services doesn’t do distributed OO without some hacking.

 

Finally, regardless of whether you use remoting, web services or ES/COM+/DCOM make sure – absolutely sure – that you wrap your RPC code in an abstraction layer. This is simple good architecture. Defensive design against a fluid area of technology.

 

I can’t stress this enough. If your business or UI code is calling web services, remoting or DCOM directly you are vulnerable and I would consider your design to be flawed.
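In miniature, the abstraction idea is simply this (names are illustrative, not the actual CSLA DataPortal):

// UI and business code depend only on this interface; the transport
// behind it can be swapped without touching them.
public interface IPortal
{
  object Fetch(object criteria);
}

public class RemotingPortal : IPortal
{
  public object Fetch(object criteria)
  {
    // ...call the app server via remoting here...
    return null;  // placeholder in this sketch
  }
}

// Later a WebServicePortal or IndigoPortal implements the same
// interface, and no business or UI code has to change.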

 

This is why this whole debate is relatively silly. If you are looking for an n-tier RPC solution, just pick one. Wrap it in an abstraction layer and be happy. Then when Indigo comes out you can easily switch. Then when Indigo++ comes out you can easily upgrade. Then when Indigo++# comes out you are still happy.

Thursday, February 03, 2005 10:11:20 PM (Central Standard Time, UTC-06:00)