Rockford Lhotka's Blog


 Friday, November 30, 2018

For some time now people have been very excited about "microservices".

I want to start by saying that I think this is a horrible name for what we're trying to accomplish, because the term attempts to put the concept of size onto services.

Calling a service is expensive. You are creating a message, serializing that message, sending it over a network via http or a queue (or both), where it is deserialized, and finally the service can do some work. Any response goes through all that same overhead to get back to the caller. That overhead is measured in milliseconds - usually 10's of milliseconds.

So if your "micro"service does work that takes less than 10-20 milliseconds to complete, then you've paid this massive performance cost to get to/from the service over the network, only to have the actual work take less time than the overhead.
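The arithmetic is easy to sketch. As a hedged illustration (the 15 ms overhead figure is an assumption drawn from the "10's of milliseconds" range above), a quick Python calculation shows how overhead dominates when the work is small:

```python
# Rough illustration: what fraction of a service call's total latency
# is network/serialization overhead vs useful work?
# The 15 ms overhead figure is an assumption, within the
# "10's of milliseconds" range mentioned above.

def overhead_fraction(work_ms: float, overhead_ms: float = 15.0) -> float:
    """Return the share of total call time spent on overhead."""
    return overhead_ms / (work_ms + overhead_ms)

# A 5 ms "micro" operation: 75% of the caller's wait is pure overhead.
print(f"{overhead_fraction(5):.0%}")    # 75%
# A 200 ms batch of real work: overhead shrinks to ~7%.
print(f"{overhead_fraction(200):.0%}")  # 7%
```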

That's like paying your auto mechanic for a full hour, just so they can do a 5 minute fix to your car. Nobody would do that without being forced (and sometimes we are - due to specialized skills or tools/parts availability).

This analogy translates to the services world too. Calling a service is fine if the service:

  1. does enough work to be worth the overhead of calling the service (the work took longer than the network overhead)
  2. needs to run in a secure location
  3. is required to run in a geo-bounded location (think GDPR)
  4. must run on specialized hardware that doesn't exist elsewhere
  5. contains code that is so valuable that, if reverse engineered, would destroy your organization (variation on security)

I've tried to put those in order from most to least likely, but everyone's scenario is a bit different.

The point I'm really getting at is that in the majority of cases it is important to make sure a service does enough work to make it worth the non-trivial overhead of getting to/from the service over http and/or a queue.

In those other less common scenarios there's a business decision to be made. If the service is very small and fast, you will be paying massive overhead for very little work to be done. Presumably the business is saying that those other concerns outweigh the fact that your app will be far slower than it could be.

(hint: as a developer you should make this clear to the business stakeholders up front - head off the question you know is coming later: "why is this app so slow???" - you want to be able to pull out the email thread where you explained that it will be slow because the business decision maker chose to put some tiny-but-critical bit of functionality in a service)

Friday, November 30, 2018 11:21:25 AM (Central Standard Time, UTC-06:00)
 Thursday, August 30, 2018

Software deployment has been a major problem for decades. On the client and the server.

On the client, the inability to deploy apps to devices without breaking other apps (or sometimes the client operating system (OS)) has pushed most business software development to relying entirely on the client's browser as a runtime. Or in some cases you may leverage the deployment models of per-platform "stores" from Apple, Google, or Microsoft.

On the server, all sorts of solutions have been attempted, including complex and costly server-side management/deployment software. Over the past many years the industry has mostly gravitated toward the use of virtual machines (VMs) to ease some of the pain, but the costly server-side management software remains critical.

At some point containers may revolutionize client deployment, but right now they are in the process of revolutionizing server deployment, and that's where I'll focus in the remainder of this post.

Fairly recently the concept of containers, most widely associated with Docker, has gained rapid acceptance.


Containers offer numerous benefits over older IT models such as virtual machines. Containers integrate smoothly into DevOps, streamlining and stabilizing the move from source code to deployable assets. Containers also standardize the deployment and runtime model for applications and services in production (and test/staging). Containers are an enabling technology for microservice architecture and DevOps.

Virtual Machines to Containers

Containers are somewhat like virtual machines, except they are much lighter weight and thus offer major benefits. A VM virtualizes the hardware, allowing installation of the OS on "fake" hardware, and your software is installed and run on that OS. A container virtualizes the OS, allowing you to install and run your software on this "fake" OS.

In other words, containers virtualize at a higher level than VMs. This means that where a VM takes many seconds to literally boot up the OS, a container doesn't boot up at all; the OS is already there. It just loads and starts our application code, which takes fractions of a second.

Where a VM has a virtual hard drive that contains the entire OS, plus your application code, plus everything else the OS might possibly need, a container has an image file that contains your application code and any dependencies required by that app. As a result, the image files for a container are much smaller than a VM hard drive.

Container image files are stored in a repository so they can be easily managed and then downloaded to physical servers for execution. This is possible because they are so much smaller than a virtual hard drive, and the result is a much more flexible and powerful deployment model.

Containers vs PaaS/FaaS

Platform as a Service and Functions as a Service have become very popular ways to build and deploy software, especially in public clouds such as Microsoft Azure. Sometimes FaaS is also referred to as "serverless" computing, because your code only uses resources while running, and otherwise doesn't consume server resources; hence being "serverless".

The thing to keep in mind is that PaaS and FaaS are both really examples of container-based computing. Your cloud vendor creates a container that includes an OS and various other platform-level dependencies such as the .NET Framework, nodejs, Python, the JDK, etc. You install your code into that pre-built environment and it runs. This is true whether you are using PaaS to host a web site, or FaaS to host a function written in C#, JavaScript, or Java.

I always think of this as a spectrum. On one end are virtual machines, on the other is PaaS/FaaS, and in the middle are Docker containers.

VMs give you total control at the cost of you needing to manage everything. You are forced to manage machines at all levels, from OS updates and patches, to installation and management of platform dependencies like .NET and the JDK. Worse, there's no guarantee of consistency between instances of your VMs because each one is managed separately.

PaaS/FaaS give you essentially zero control. The vendor manages everything - you are forced to live within their runtime (container) model, upgrade when they say upgrade, and only use versions of the platform they currently support. You can't get ahead or fall behind the vendor.

Containers such as Docker give you some abstraction and some control. You get to pick a consistent base image and add in the dependencies your code requires. So there's consistency and maintainability that's far superior to a VM, but not as restrictive as PaaS/FaaS.

Another key aspect to keep in mind is that PaaS/FaaS models are vendor specific. Containers are universally supported by all major cloud vendors, meaning that the code you host in your containers is entirely separated from anything specific to a given cloud vendor.

Containers and DevOps

DevOps has become the dominant way organizations think about the development, security, QA, deployment, and runtime monitoring of apps. When it comes to deployment, containers allow the image file to be the output of the build process.

With a VM model, the build process produces assets that must be then deployed into a VM. But with containers, the build process produces the actual image that will be loaded at runtime. No need to deploy the app or its dependencies, because they are already in the image itself.

This allows the DevOps pipeline to directly output a file, and that file is the unit of deployment!

No longer are IT professionals needed to deploy apps and dependencies onto the OS. Or even to configure the OS, because the app, dependencies, and configuration are all part of the DevOps process. In fact, all those definitions are source code, and so are subject to change tracking where you can see the history of all changes.

Servers and Orchestration

I'm not saying IT professionals aren't needed anymore. At the end of the day containers do run on actual servers, and those servers have their own OS plus the software to manage container execution. There are also some complexities around networking at the host OS and container levels. And there's the need to support load distribution, geographic distribution, failover, fault tolerance, and all the other things IT pros need to provide in any data center scenario.

With containers the industry is settling on a technology called Kubernetes (K8S) as the primary way to host and manage containers on servers.

Installing and configuring K8S is not trivial. You may choose to do your own K8S deployment in your data center, but increasingly organizations are choosing to rely on managed K8S services. Google, Microsoft, and Amazon all have managed Kubernetes offerings in their public clouds. If you can't use a public cloud, then you might consider using on-premises clouds such as Azure Stack or OpenStack, where you can also gain access to K8S without the need for manual installation and configuration.

Regardless of whether you use a managed public or private K8S cloud solution, or set up your own, the result of having K8S is that you have the tools to manage running container instances across multiple physical servers, and possibly geographic data centers.

Managed public and private clouds provide not only K8S, but also the hardware and managed host operating systems, meaning that your IT professionals can focus purely on managing network traffic, security, and other critical aspects. If you host your own K8S then your IT pro staff also own the management of hardware and the host OS on each server.

In any case, containers and K8S radically reduce the workload for IT pros in terms of managing the myriad VMs needed to host modern microservice-based apps, because those VMs are replaced by container images, managed via source code and the DevOps process.

Containers and Microservices

Microservice architecture is primarily about creating and running individual services that work together to provide rich functionality as an overall system.

A primary attribute (in my view the primary attribute) of services is that they are loosely coupled, sharing no dependencies between services. Each service should be deployed separately as well, allowing for independent versioning of each service without needing to deploy any other services in the system.

Because containers are a self-contained unit of deployment, they are a great match for a service-based architecture. If we consider that each service is a stand-alone, atomic application that must be independently deployed, then it is easy to see how each service belongs in its own container image.

This approach means that each service, along with its dependencies, becomes a deployable unit that can be orchestrated via K8S.

Services that change rapidly can be deployed frequently. Services that change rarely can be deployed only when necessary. So you can easily envision services that deploy hourly, daily, or weekly, while other services will deploy once and remain stable and unchanged for months or years.


Clearly I am very positive about the potential of containers to benefit software development and deployment. I think this technology provides a nice compromise between virtual machines and PaaS, while providing a vendor-neutral model for hosting apps and services.

Thursday, August 30, 2018 12:47:24 PM (Central Standard Time, UTC-06:00)
 Tuesday, February 9, 2016

I’ve been thinking about some of the current trends and hyped terms/concepts lately. On one hand I’m convinced we’re coming out of the inflection point chaos that has consumed our industry over the past several years, and on the other hand any time things start to stabilize the hype-masters come out of the woodwork because there’s money to be made. This isn’t new – see my SOA, dollar signs and trust boundaries post from 2004…

In this post I’m going to briefly talk about microservices, containers, and devops.

What’s interesting about that 2004 post is that it is one of quite a number of posts I did on service-oriented concepts, and most of my focus at the time was on “pure service-oriented” thinking – which never took off – and which is now known by the newly-trendy term “microservices”.

In other words, this “new” microservices stuff is just the type of SOA that we should have been doing for the past decade. Oops.

As always, when an old idea comes back around under a new name, we owe it to ourselves to ask whether there’s anything different this time that might make the idea more successful than it was last time?

For example, it took us decades to get asynchronous programming to become mainstream. We tried over and over again, and it never caught on, until recently, when language/pattern support arrived via things like the async/await keywords in C# and the concept of promises in JavaScript (soon also to gain async/await). So what made async programming accessible to the general developer population was a change in languages and tooling.
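Python followed the same path with its own async/await keywords. A minimal sketch (hypothetical function names) of how the language support lets concurrent code read almost like synchronous code:

```python
import asyncio

async def fetch_value(delay: float, value: int) -> int:
    """Stand-in for an I/O-bound call such as a network request."""
    await asyncio.sleep(delay)
    return value

async def main() -> int:
    # Two "requests" run concurrently, yet the code reads top-to-bottom
    # like ordinary synchronous code -- the tooling change that finally
    # made async programming accessible to most developers.
    a, b = await asyncio.gather(fetch_value(0.01, 1), fetch_value(0.01, 2))
    return a + b

print(asyncio.run(main()))  # 3
```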

Given that pure SOA (now microservices) failed to become mainstream in 2004, why do we think it will be successful in 2016? Has anything changed? Do we have new language features or tooling that will make microservice architecture, design, and implementation accessible to the general developer population?

I don’t see it. I don’t see where C# or Java have changed to accommodate service-oriented concepts in any new or novel way. Other languages might have done so – F#, Rust, and other niche languages have some neat ideas. But those ideas have yet to work their way into C# or Java, and I think it unrealistic to think that the mainstream business development world is going to shift to these other niche languages (cool though they might be).

I do think there are some interesting platform innovations going on, specifically around containers. Yet another trendy hype-laden concept – containers, Docker, etc. Here I’m less skeptical though, because I think containers are in the same vein as Azure PaaS concepts like Web Roles, Web Sites, etc. Pre-defined environments in which our code can run without all the complexity of dealing with some random IT person “optimizing a configuration” the night before we go live. That is never good, and has been the cause of too many high profile failures in my experience.

The idea that we can build apps that run in a pre-defined, known environment – where the environment is deployed with the app is really compelling. To some degree Azure started making this mainstream by providing us with pre-defined and pre-deployed environments that were uniform, consistent, and known. But containers are more flexible, while retaining the key requirement of being pre-defined and consistent.

Personally I expect this container model to transform the way most of us build and think about deploying server-side apps and services over the next few years. Especially as (from a Microsoft developer perspective) .NET Core matures on Linux so it runs in Docker, and as Microsoft comes out with Windows containers with comparable predictability to the Linux containers we have today.

Will containers be enough to propel success with microservice architectures all by themselves? I doubt it. I think we need another iteration of language innovation, with a focus on read-only data, message routing patterns, and transparent asynchronicity before microservices will become mainstream.

On the other hand, there is the trendy DevOps term, which I think is enabled in part by containers. The idea of devops is that we extend what mature/good development shops already do with continuous integration (gated checkins, automatic builds, automated unit test execution) further down the pipeline. So we also automate much of the UX and UAT testing (Magenic has a whole business unit focused on automated QA for example), and automate the creation of “production-ready” deployment assets. Ideally those assets are auto-deployed into a QA or staging environment as a result of each build – or at least they can be auto-deployed at the click of a button.

Now those final assets today, if they exist at all, might be in the form of an msi or a deploy-ready website pushed to a git repo. But combining devops with containers means those final assets can be a complete container, ready to be spun up on 1:n servers in QA, staging, or production.

To illustrate this let me share a story. I was recently evaluating Discourse, some very nice, modern forum software. Their only supported deployment mechanism is via Docker containers. They provide a container that you can just host in Docker. So to run a Discourse instance you just need a machine (often a VM – in my case in Azure) where you can install the Docker software. Then you tell Docker to run the Discourse container, and just like that you have a fully realized Discourse instance.

If that wasn’t cool enough (and it was cool!), a couple weeks into my evaluation Discourse came out with a new version. This was visible in my Discourse admin web page as an Update button. I clicked that button, causing my Discourse instance to tell the Docker host to download the updated container and to reload the instance using that new container. It was by far the most painless upgrade of any software I’ve ever experienced – short of upgrading apps on mobile phones or UWP apps on my Surface.

In other words, devops plus containers has the potential to bring the simplicity of mobile app deployment to server-side environments. Amazing!

Yeah, I’d say it was a lot of hype. Heck I did say it was a lot of hype. Then I experienced it first-hand, and it was quite amazing.

So in summary:

  • Microservices – mostly hype; I think another iteration of language/tool innovation is required first – give it 5+ years
  • Containers – some hype, lots of promise, just coming together now; this will be important over the next 1-3 years
  • DevOps – evolution of what good dev shops should already be doing with CI; important now and into the future
  • DevOps + Containers – if this comes together like I think it will, I am extremely excited about the future!
Tuesday, February 9, 2016 3:38:44 PM (Central Standard Time, UTC-06:00)
 Thursday, October 29, 2009

I’m not too sure about all this “manifesto” stuff these days. Certain right-wing politicians from days gone by are surely rolling in their graves. Then again, today’s highly communist/socialist oriented open source worldview would have gotten many of us into a lot of trouble just a few decades ago…

Regardless, there’s now the SOA Manifesto. And it seems to capture the spirit of SOA in a concise manner.

In fact, its opening statement, talking about putting business value above technology and so forth, seems to make sense for any software endeavor, not just ones that are service-oriented.

If you consider that SOA is much more of an enterprise architecture than it is an application architecture, this all makes a great deal of sense. The idea that applications should use service-oriented communication when they need to interact with each other is exactly the sweet spot for service-orientation.

In that context, saying things like external uniformity and internal diversity are good is clearly correct; because different applications may require different technologies or implementation choices, but they all need to work with some enterprise-standardized message-based service model.

I still suspect SOA may ultimately make a big difference in our industry. But I also think it will take decades to do so, just like object-orientation. People were working on object-orientation for well over 20 years before the ideas became mainstream. SOA is (if you are really generous) maybe a decade old, so it is probably time to work on some formalism so it is ready to become mainstream 10-15 years from now :)

Thursday, October 29, 2009 10:23:13 AM (Central Standard Time, UTC-06:00)
 Sunday, March 16, 2008

I recently participated in a webcast covering SOA, REST and various related topics. You can listen to it here

Sunday, March 16, 2008 6:32:37 PM (Central Standard Time, UTC-06:00)
 Thursday, January 24, 2008

A reader recently sent me an email asking why the PTWebService project in the CSLA .NET ProjectTracker reference app has support for using the data portal to talk to an application server. His understanding was that web services were end points, and that they should just talk to the database directly. Here's my answer:

A web service is just like any regular web application. The only meaningful difference is that it exposes XML instead of HTML.


Web applications often need to talk to application servers. While you are correct that it is ideal to build web apps in a 2-tier model, there are valid reasons (mostly around security) why organizations choose to build them using a 3-tier model.


The most common scenario (probably used by 40% of all organizations) is to put a second firewall between the web server and any internal servers. The web server is then never allowed to talk to the database directly (for security reasons), and instead is required to talk through that second firewall to an app server, which talks to the database.


The data portal in CSLA .NET helps address this issue by allowing a web application to talk to an app server using several possible technologies. Since different organizations allow different technologies to penetrate that second firewall, this flexibility is important.


Additionally, the data portal exists for other scenarios, like Windows Forms or WPF applications. It would be impractical for me to create different versions of CSLA for different types of application interface. In fact that would defeat the whole point of the framework – which is to provide a consistent and unified environment for building business layers that support all valid interface types (Windows, Web, WPF, Workflow, web service, WCF service, console, etc).

Thursday, January 24, 2008 8:01:56 AM (Central Standard Time, UTC-06:00)
 Monday, November 5, 2007

I was recently asked how to get the JSON serializer used for AJAX programming to serialize a CSLA .NET business object.

From an architectural perspective, it is better to view an AJAX (or Silverlight) client as a totally separate application. That code is running in an untrusted and terribly hackable location (the user’s browser – where they could have created a custom browser to do anything to your client-side code – my guess is that there’s a whole class of hacks coming that most people haven’t even thought about yet…)

So you can make a strong argument that objects from the business layer should never be directly serialized to/from an untrusted client (code in the browser). Instead, you should treat this as a service boundary – a semantic trust boundary between your application on the server, and the untrusted application running in the browser.

What that means in terms of coding, is that you don’t want to do some sort of field-level serialization of your objects from the server. That would be terrible in an untrusted setting. To put it another way - the BinaryFormatter, NetDataContractSerializer or any other field-level serializer shouldn't be used for this purpose.

If anything, you want to do property-level serialization (like the XmlSerializer or DataContractSerializer or JSON serializer). But really you don’t want to do this either, because you don’t want to couple your service interface to your internal implementation that tightly. This is, fundamentally, the same issue I discuss in Chapter 11 of Expert 2005 Business Objects as I talk about SOA.

Instead, what you really want to do (from an architectural purity perspective anyway) is define a service contract for use by your AJAX/Silverlight client. A service contract has two parts: operations and data. The data part is typically defined using formal data transfer objects (DTOs). You can then design your client app to work against this service contract.

In your server app’s UI layer (the aspx pages and webmethods) you translate between the external contract representation of data and your internal usage of that data in business objects. In other words, you map from the business objects in/out of the DTOs that flow to/from the client app. The JSON serializer (or other property-level serializers) can then be used to do the serialization of these DTOs - which is what those serializers are designed to do.

This is a long-winded way of saying that you really don’t want to do JSON serialization directly against your objects. You want to JSON serialize some DTOs, and copy data in/out of those DTOs from your business objects – much the way the SOA stuff works in Chapter 11.
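The shape of this pattern is the same on any platform. As a hedged sketch (in Python rather than .NET, with a hypothetical `Customer` business object and `CustomerDto`), the key point is that only the DTO is ever serialized for the untrusted client:

```python
import json
from dataclasses import dataclass, asdict

class Customer:
    """Hypothetical rich business object: behavior, rules, private state."""
    def __init__(self, cust_id: int, name: str):
        self._id = cust_id
        self._name = name
        self._credit_limit = 10_000  # internal detail, never sent to the client

    @property
    def id(self): return self._id
    @property
    def name(self): return self._name

@dataclass
class CustomerDto:
    """Part of the service contract: only what the client should see."""
    id: int
    name: str

def to_dto(c: Customer) -> CustomerDto:
    # The UI/service layer maps business objects in/out of contract DTOs...
    return CustomerDto(id=c.id, name=c.name)

# ...and only the DTO is JSON-serialized for the untrusted client.
payload = json.dumps(asdict(to_dto(Customer(42, "Acme"))))
print(payload)  # {"id": 42, "name": "Acme"}
```

The business object's internal state stays on the server; the contract can evolve independently of the object's implementation.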

Monday, November 5, 2007 11:53:58 PM (Central Standard Time, UTC-06:00)
 Friday, June 1, 2007

The WCF NetDataContractSerializer is an almost, but not quite perfect, replacement for the BinaryFormatter.

The NDCS is very important, because without it WCF could never be viewed as a logical upgrade path for either Remoting or Enterprise Services users. Both Remoting and Enterprise Services use the BinaryFormatter to serialize objects and data for movement across AppDomain, process or network boundaries.

Very clearly, since WCF is the upgrade path for these core technologies, it had to include a serialization technology that was functionally equivalent to the BinaryFormatter, and that is the NDCS. The NDCS is very cool, because it honors both the Serializable model and the DataContract model, and even allows you to mix them within a single object graph.

Unfortunately I have run into a serious issue, where the NDCS is not able to serialize the System.Security.SecurityException type, while the BinaryFormatter has no issue with it.

The issue shows up in CSLA in the data portal, because it is quite possible for the server to throw a SecurityException. You'd like to get that detail back on the client so you can tell the user why the server call failed, but you get a "connection unexpectedly closed" exception instead. The reason is that WCF itself blew up when trying to serialize the SecurityException to return it to the client. So rather than getting any meaningful result, the client gets this vague and nearly useless exception.

By the way, if you want to see the failure, just run this code:

    ' NDCS throws when serializing a SecurityException
    Dim buffer As New System.IO.MemoryStream
    Dim formatter As New System.Runtime.Serialization.NetDataContractSerializer
    Dim ex As New System.Security.SecurityException("a test")
    formatter.Serialize(buffer, ex)

And if you want to see it not fail run this code:

    ' BinaryFormatter serializes the same exception without error
    Dim buffer As New System.IO.MemoryStream
    Dim formatter As New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
    Dim ex As New System.Security.SecurityException("a test")
    formatter.Serialize(buffer, ex)

I've been doing a lot of work with the NDCS over the past several months. And this is the first time I've encountered a single case where NDCS didn't mirror the behavior of the BinaryFormatter - which is why I do think this is a WCF bug. Now just to get it acknowledged by someone at Microsoft so it can hopefully get fixed in the future...

The immediate issue I face is that I'm not entirely sure how to resolve this issue in the data portal. One (somewhat ugly) solution is to catch all exceptions (which I actually do anyway), and then scan the object graph that is about to be returned to the client to see if there's a SecurityException in the graph. If so perhaps I could manually invoke the BinaryFormatter and just return a byte array. The problem with that is in the case where the object graph is a mix of Serializable and DataContract objects - in which case the BinaryFormatter won't work because it doesn't understand DataContract...
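The "scan the graph" idea is simple to sketch. As a Python analogy only (the actual data portal is .NET, and `SecurityError` here is a stand-in for System.Security.SecurityException), detecting a problematic exception type before attempting serialization looks like:

```python
def contains_type(exc: BaseException, target: type) -> bool:
    """Walk an exception chain looking for a given exception type."""
    seen = set()
    while exc is not None and id(exc) not in seen:
        if isinstance(exc, target):
            return True
        seen.add(id(exc))
        # Follow the chain of wrapped/inner exceptions.
        exc = exc.__cause__ or exc.__context__
    return False

class SecurityError(Exception):
    """Stand-in for System.Security.SecurityException."""

try:
    try:
        raise SecurityError("access denied")
    except SecurityError as inner:
        raise RuntimeError("data portal call failed") from inner
except RuntimeError as outer:
    # Before returning the failure to the client, check whether the
    # unserializable type lurks anywhere in the chain, and fall back
    # to an alternate serialization path if so.
    print(contains_type(outer, SecurityError))  # True
```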

In the end I may just have to leave it be, and people will need to be aware that they can never throw a SecurityException from the server...

Friday, June 1, 2007 11:34:14 AM (Central Standard Time, UTC-06:00)
 Sunday, February 4, 2007

It seems that people are always looking for The Next Big Thing™, which isn’t surprising. What I do find surprising is that they always assume that TNBT must somehow destroy the Previous Big Thing™, which in the field of software development was really object-orientation (at least for most people).


Over the past few years Web Services and then their reincarnation as SOA or just SO were going to eliminate the use of OO. As more and more people have actually implemented and consumed services I think it has become abundantly clear that SOA is merely a repackaging of EAI from the 90’s and EDI from before that. Useful, but hardly of the sweeping impact of OO, at least not in the near term.


Though to be fair, OO took more than 20 years before it became remotely “mainstream”, and even today only around 30% of developers (based on my informal, but broad, polling over the past couple years) apply OOD in their daily work. So you could question whether OO is “mainstream” even today, when the vast majority of developers use procedural programming techniques.


Even though the majority of .NET and Java developers don’t actually use OO themselves, both the .NET and Java platforms are OO throughout. So there’s no doubt that OO has had an incredible impact on our industry, and that it has shaped the major platforms and development styles used by almost everyone.


So perhaps in 15-20 years SO really will have the sweeping impact that OO has had on software development. And I’m not sure that would be a bad thing, but it isn’t a near-term concern.


However, I was recently confronted by a (so-called) newcomer: workflow. Yes, now workflow (WF) is going to kill OO, or so I’ve been told. In fact, through indirect channels, my challenger suggested that the CSLA .NET framework is obsolete going forward because it is based on these “old-school” OO concepts.


Obviously I beg to differ :)


It is true that some of the concepts I employ in CSLA .NET are quite old, but I am not ready to throw OO into the “old-school” bucket just yet... The idea is somewhat ironic however, because WF is being put up as The Next Big Thing™. Depressingly, there's virtually no technology today that isn't a rehash of something from 20 years ago. That's more true of SOA and WF than OO. Remember that SOA is just a repackaging of message-based architecture, only with XML, and workflow is an extension of procedural design and the use of flowcharts. At least OO can claim to be younger than either of those two root technologies.


Personally, I very much doubt the use of objects as rich behavioral entities will go away any time soon. Sure, for non-interactive tasks procedural technologies like WF might be better than OO (though that's debatable). And certainly for connecting disparate systems you need to use loosely coupled architectures, of which SOA is one (but not the only one).


But the question remains: how do you create rich, interactive GUI applications in a productive and yet maintainable manner?


Putting your logic in a workflow reduces interactivity. Putting your logic behind a set of services reduces both interactivity and performance. Putting your logic in the UI is "VB3 programming". So what's left? For interactive apps you need a business layer that can run as close to the user as possible and using business objects is a particularly good way to do exactly that.


Neither WF nor WCF/SOA are, to me, competitors to CSLA. Rather, I see them as entirely complementary.


Perhaps ADO.NET EF and/or LINQ are competitors - that depends on what they look like as they stabilize over the next several months. Everything I’ve seen thus far leads me to believe these technologies are complementary as well. (And Scott Guthrie agrees – check out his recent interview on DNR.) Neither of them addresses the issues around support for data binding, or centralization of business logic in a formal business layer.


But when it comes to SOA I still think it is a “fad”. It is either MTS done over with XML - which is what 95% of the people out there are doing with "services", or it is EDI/EAI with XML - which is a good thing and a move forward, but which is too complex and has too much overhead for use in a typical application. I think that most likely in 5-10 years we'll have a new acronym for it – just adding to the EDI/EAI/SOA list.


Just like "outsourcing" became ASP which became SaaS. The idea doesn't die, it just gets renamed so more money can be made by selling the same stuff over again - often to people who really don't need/want it.


The point is, our industry is cyclical. And we're heading into a period where "procedural programming" is once again reaching a peak of usage. This time under the names SOA and workflow. But if you step back from the hype and look at it, what's proposed is that we create a bunch of standalone code blocks that exchange formatted data. A set of procedures that exchange parameters. Dress it up how you like, that's what is being proposed by both the SOA and WF crowds.


And there's nothing wrong with that. Procedural design is well understood.


And it could actually work this time around if we don't cheat. I still maintain that the reason procedural programming "failed" is that we cheated and used common blocks and global variables. Why? Because it was too inefficient and required too much code to formally package up all the data and send it as parameters.


If we can stomach the efficiency and coding costs this time around - packing the data into XML messages - then I see no reason why we can't make procedural programming work in many cases. And by naming it something trendy, like SOA and/or workflow, we can avoid the negative backlash that would almost certainly occur if people realized they were abandoning OO to return to procedural programming.
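The "cheat" versus the formal packaging can be shown in a few lines. This is a minimal sketch (Python for brevity, with invented names – not CSLA code), where a plain dict stands in for an XML message:

```python
# The "cheat": procedures communicating through hidden shared state.
inventory = {"widgets": 10}

def ship_widgets_global(count):
    # Coupled to a global -- cheap to write, fragile to reuse.
    inventory["widgets"] -= count
    return inventory["widgets"]

# The disciplined alternative: all data formally packaged into a
# message (a dict standing in for an XML payload) and passed as an
# explicit parameter, with the result returned the same way.
def ship_widgets_message(request):
    remaining = request["on_hand"] - request["count"]
    return {"on_hand": remaining}

response = ship_widgets_message({"on_hand": 10, "count": 3})
```

The second form costs more code and more copying – exactly the efficiency price discussed above – but the procedure can be moved, replaced or called remotely without dragging hidden state along.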


Having spent the first third of my career doing procedural programming, I wouldn't necessarily mind going back. In some ways it really was easier than using objects - though there are some very ugly traps that people will have to rediscover on this go-round. Most notably the trap that a procedure can't be reused without introducing fragility into the system, not due to syntactic coupling or API changes, but due to semantic coupling (about which I've blogged).


This is true for procedures and any other "reusable" code block like a workflow activity or a service. They all suffer the same limitation - and it is one that neither WSDL nor compilers can help solve...


Anyway, kind of went on a rant here - but I find it amusing that, in some circles at least, OO is viewed as the "legacy" technology in the face of services or workflow, which are even more legacy than OO ;)

Sunday, February 4, 2007 1:03:57 AM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Monday, October 9, 2006

For better or worse, SOA (service-oriented architecture) continues to be the current industry fad. As SOA continues along the “hype curve” (a term I’m borrowing from Gartner), more and more people are starting to realize that SOA isn’t a silver bullet, and that it doesn’t actually replace n-tier client/server or object-orientation.


What will most likely happen over the next couple of years is that SOA will fall into the “pit of disillusionment” (part of the hype curve, that I think of as the “pit of despair”), and many people will decide, as a result, that it is totally useless. This will happen, not in small part, because some organizations are investing way too much money into SOA now, when it is overly hyped – and they’ll feel betrayed when “reality” sets in.


After a period of disrepute, SOA may then rise to a “plateau of productivity”, where it will finally be used to solve the problems it is actually good at solving.


Some technologies don’t live through the “despair” part of the process. Sometimes the harsh light of reality is too bright, and the technology can’t hold up. Other times, a competing technology or concept hits the top of its hype curve, derailing a previous technology. Over the next very few years, we’ll see if SOA holds up to the despair or not.


This is a pattern Gartner has observed for virtually all technologies over many, many years. If you think about any technology introduced over the past 20 years or more, almost all of them have followed this pattern: over-hyped, then over-reacted to, and finally used as a real solution.


My colleague and mentor, David Chappell, recently blogged about some of the realities people are discovering as they actually move beyond the hype and try to apply SOA. It turns out, not surprisingly, that achieving real benefits in terms of reuse is much harder than the SOA evangelists would have anyone believe.


I think this is because SOA focuses on only one part of the problem: syntactic coupling. SOA, or at least service-oriented design and programming, is very much centered around rules for addressing and binding to services, and around clear definition of syntactic contracts for the API and message data sent to and from services.


And that’s all good! Minimizing coupling at the syntactic level is absolutely critical, and SOA has moved us forward in this space, picking up where EAI (enterprise application integration) left off in the 90’s.


Unfortunately, syntactic coupling is the easy part. Semantic coupling is the harder part of the problem, and SOA does little or nothing to address this challenging issue.


Semantic coupling refers to the behavioral dependencies between components or services. There’s actual meaning to the interaction between a consumer and a service.


Every service implements some tangible behavior. A consumer calls the service, thus becoming coupled to that service, at both a syntactic and semantic level. At the syntactic level, the consumer must use the address, binding and contract defined by the service – all of which are forms of coupling. But the consumer also expects some specific behavior from the service – which is a form of semantic coupling.


And this is where things get very complex. The broader the expected behavior, the tighter the coupling.


As an example, a service that does something trivial, like adding two numbers, is relatively easy to replace with an equivalent. Such a service can even be enhanced to support other numeric data types with virtually no chance of breaking existing consumers. So the semantic coupling between a consumer and such a service is relatively light.


Another example is credit card verification. Obviously the internal implementation of this behavior is much more complex, but the external expectations of behavior remain very limited. Like adding two numbers, verifying a credit card is a behavior that accepts very little data, and returns a very simple result (yes/no).


Contrast this with many other possible business services, such as shipping an order, or generating manufacturing documentation. In these (quite common) scenarios, the service performs, or is expected to perform, a relatively broad set of behaviors. The result is a whole group of effects and side-effects – all of which should be considered as black-box effects by any caller. But the more a service does, the less “black-box” it can be to its callers, and the tighter the coupling.


And this leaves us in a serious quandary. There’s a high cost to calling a service. There’s a lot of overhead to creating a message, serializing it into text (XML), routing it through some communications stack onto the wire, getting the electrons across the wire through some protocol (probably TCP) and all the attendant hardware involved, picking it up off the wire on the server, routing it through another communications stack, deserializing the text (XML) back into a meaningful message and finally interpreting the message. Only then can the service actually act on the message to do real work.


Worse, that’s only half the story, because most people are creating synchronous request/response services, and so that whole overhead cost must be paid again to get the result back to the caller!


Before going further, let me expand on this “overhead cost” concept to be more precise.


I worked for many years in manufacturing. In that industry there’s the concept of cost accounting – people make their living at tracking costs. They divide costs into overhead, setup and run (there are other models, but this one’s pretty standard).


To make this somewhat more clear, I’ll use the metaphor of baking cookies.


Overhead costs are all the salaried people, the buildings, equipment and so forth. Costs that are paid whether widgets are manufactured or not. When baking cookies, this is the cost of having a kitchen, a stove, electricity, natural gas, and of course the person doing the baking. In most homes these costs exist regardless of whether cookies are baked or not.


Setup costs are applied overhead. They are costs that are required to build a set of widgets, but they are only incurred when widgets are being manufactured. These costs include setting up machines, programming devices, getting organized, printing documents, etc. When baking cookies, this is the cost (in terms of time) of getting out the various ingredients, bowls, spoons and other implements. It is also the cost of cleaning up after the baking is done – all the washing, drying and putting-away-of-implements that follows. These costs are directly applied to the process, but are pretty much the same whether you bake one dozen or ten dozen cookies.


Run costs are those costs that are incurred on a per-widget basis to make a widget. This includes the hourly rate of the workers manning the assembly line, the materials that go into the widget and so forth. When baking cookies, this is the time spent by the baker, the cost of the flour, eggs and other ingredients consumed in the process. Ideally it would include the amount of electricity or natural gas used to run the stove as well. Obviously detailed run costs can be hard to determine in some cases!


When calculating the cost of your cookies, each of these three costs is added together. The run rate is easy, as it is per-cookie by definition. The setup rate is variable – the more cookies you make in a batch the lower the relative setup cost, and the fewer cookies the higher the relative setup cost. Overhead is typically aggregated – the annual overhead cost is known, and is divided by the number of cookies (and other things) made over a year’s time. Obviously there’s lots of wiggle room in this last number.
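As a sketch, the three costs combine per-cookie something like this (Python, with made-up numbers – the dollar figures are purely illustrative):

```python
def cost_per_cookie(run_cost, setup_cost, batch_size,
                    annual_overhead, annual_volume):
    # Run cost is per-cookie by definition; setup is spread across the
    # batch; overhead is aggregated over a whole year's production.
    return (run_cost
            + setup_cost / batch_size
            + annual_overhead / annual_volume)

# A bigger batch dilutes the setup cost, leaving the run rate dominant:
one_dozen = cost_per_cookie(0.25, 6.00, 12, 1200.00, 10000)   # ~0.87
ten_dozen = cost_per_cookie(0.25, 6.00, 120, 1200.00, 10000)  # ~0.42
```

Note how only the setup term changes between the two batches – which is the whole point of the setup-vs-run distinction.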


For my purposes, in discussing services, the overhead rate isn’t all that meaningful. In our industry this is the cost of the IT staff, the servers, the server room, electricity and cooling and so forth.


But the setup rate and run rate become very meaningful when talking about services.


Calling a service, as I noted earlier, incurs a lot of overhead. This overhead is relatively constant: you pay about the same whether you send 1 byte or 1024 bytes to or from the service.


The run rate is the actual work done by the service. Once the message is parsed and available to the service, then the service does real, valuable work. This is the run rate for the service.


In manufacturing it is always important to manage the overhead and setup costs – they are a “pure cost”. The run rate cost must also be managed, but it is directly applicable to a product, and so that cost can be factored into the price you charge. Perhaps more importantly, your competitors typically have a comparable run rate (materials and labor cost about the same), but the overhead can vary radically.


To switch industries just a bit, this is why Walmart does so well (and is so feared). They have managed their overhead and setup costs to such a degree that they actually do focus on reducing their run rate (in their case, the per-unit acquisition cost of items).


Coming back to services, we face the same issue. Typically we deal with this using intuition rather than thinking it through, but the core problem is very tangible.


Would you call a service to add two numbers? Of course not! The setup/overhead cost would outweigh the run cost to such a degree that this makes no sense at all.


Would you call a service to ship an order, with all the surrounding activities that implies? This makes much more sense. The setup/overhead cost becomes trivial when compared to the run cost for such a service.
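A back-of-the-napkin way to compare the two cases (Python, with illustrative numbers – the 20ms overhead figure is an assumption, not a measurement):

```python
def call_efficiency(overhead_ms, run_ms):
    # Fraction of the total elapsed time spent doing real work. The
    # messaging overhead is roughly constant per call, so only the
    # run time varies with what the service actually does.
    return run_ms / (overhead_ms + run_ms)

add_two_numbers = call_efficiency(overhead_ms=20, run_ms=0.001)
ship_an_order = call_efficiency(overhead_ms=20, run_ms=45000)
```

For the addition service nearly all the elapsed time is setup/overhead; for ship-an-order the overhead all but disappears into the run cost.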


And yet coupling has the opposite effect. Which of those services can be more loosely coupled? The addition service of course, because it performs a very narrow, discrete, composable behavior.


Do you even know what the ship-an-order service might do? Of course not, it is too big and vague. Will it trigger invoicing? Will it contact the customer? Will it print pick lists for inventory? Will it update the customer’s sales history?


I would hope it does all these things, but very few of us would be willing to blindly assume it does them. And so we are forced to treat ship-an-order as something other than a black box. At best it is gray, but probably downright clear. We’ll require that the service’s actual behaviors be documented. And then we’ll fill in the gaps for what it does not provide, or doesn’t provide in a way we like.


(Or, failing to get adequate documentation, we’ll experiment with the service, probing to find its effects and side-effects and limitations. And then we’ll fill in the gaps for the bits we don’t like. Sadly, this is the more common scenario…)


At this point we (the caller of the service) have become so coupled to the service, that any change to the service will almost certainly require a change to our code. And at this point we’ve lost the primary goal/benefit of SOA.


Why? How can this be, when we’re using all the blessed standards for SOA communication? Maybe we’re even using an Enterprise Service Bus, or Biztalk Server or whatever the latest cool technology might be. And yet this coupling occurs!


This is because I am describing semantic coupling. Yes, all the cool, whiz-bang SOA technologies help solve the syntactic coupling issues. But without a solution to the semantic, or behavioral, coupling it really doesn’t get us very far…


What’s even scarier, is that the vision of the future portrayed by the SOA evangelists is one where we build services (systems) that aggregate other services together to provide higher-level functionality. Like assembling simple blocks into more complex creations, that in turn can be assembled into more complex creations or used as-is.


Except that each level of aggregation creates a service that provides broader behaviors – and by extension tighter coupling to any callers (though the setup vs run costs become more and more favorable at the top level).


To bring this (rather long) post to a close, I want to return to the beginning. SOA is heading down the steep slope into the pit of disillusionment. You can head this off for yourself and your organization by realizing ahead of time, right now, that SOA only addresses syntactic issues. You must address the much harder semantic issues yourself.


And the tools exist. They have for a long time. Good procedural design, use of flow charts, data flow diagrams, control diagrams, state diagrams: these are all very valid tools that can help you manage the semantic coupling. Unfortunately the majority of people with expertise in these tools are nearing retirement (or have retired) – but the tools and techniques are there if you can find some old, dusty books on procedural design. Just remember to include the setup/overhead cost vs run cost in your decisions on whether to make each procedure into a "service".


SOA solves some serious and important issues, but it is overhyped. Fortunately the hype is fading, and so we can look forward (perhaps 18 or 36 months) to a time when we can, with any luck, start focusing on the “next big thing”. Maybe, just maybe, that “big thing” will be some new and interesting way of addressing semantic coupling.

Monday, October 9, 2006 5:07:08 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Monday, March 13, 2006

Somebody woke up on the wrong side of the bed, fortunately it wasn't me this time :)

This is a nice little rant about the limitations of OO-as-a-religion. I think you could substitute any of the following for OO and the core of the rant would remain quite valid:

  • procedural programming/design
  • modular programming/design
  • SOA/SO/service-orientation
  • Java
  • .NET

Every concept computer science has developed over the past many decades came into being to solve a problem. And for the most part each concept did address some or all of that problem. Procedures provided reuse. Modules provided encapsulation. Client/server provided scalability. And so on...

The thing is that very few of these new concepts actually obsolete any previous concepts. For instance, OO doesn't eliminate the value of procedural reuse. In fact, using the two in concert is typically the best of both worlds.

Similarly, SO is a case where a couple of ideas ran into each other while riding bicycles. "You got messaging in my client/server!" "No! You got client/server in my messaging!" "Hey! This tastes pretty good!" SO is good stuff, but doesn't replace client/server, messaging, OO, procedural design or any of the previous concepts. It merely provides an interesting lens through which we can view these pre-existing concepts, and perhaps some new ways to apply them.

Anyway, I enjoyed the rant, even though I remain a staunch believer in the power of OO.

Monday, March 13, 2006 11:44:39 AM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Friday, August 5, 2005

Ted Neward believes that “distributed objects” are, and always have been, a bad idea, and John Cavnar-Johnson tends to agree with him.


I also agree. "Distributed objects" are bad.


Shocked? You shouldn’t be. The term “distributed objects” is most commonly used to refer to one particular type of n-tier implementation: the thin client model.


I discussed this model in a previous post, and you’ll note that I didn’t paint it in an overly favorable light. That’s because the model is a very poor one.


The idea of building a true object-oriented model on a server, where the objects never leave that server is absurd. The Presentation layer still needs all the data so it can be shown to the user and so the user can interact with it in some manner. This means that the “objects” in the middle must convert themselves into raw data for use by the Presentation layer.


And of course the Presentation layer needs to do something with the data. The ideal is that the Presentation layer has no logic at all, that it is just a pass-through between the user and the business objects. But the reality is that the Presentation layer ends up with some logic as well – if only to give the user a half-way decent experience. So the Presentation layer often needs to convert the raw data into some useful data structures or objects.


The end result with “distributed objects” is that there’s typically duplicated business logic (at least validation) between the Presentation and Business layers. The Presentation layer is also unnecessarily complicated by the need to put the data into some useful structure.


And the Business layer is complicated as well. Think about it. Your typical OO model includes a set of objects designed using OOD sitting on top of an ORM (object-relational mapping) layer. I typically call this the Data Access layer. That Data Access layer then interacts with the real Data layer.


But in a “distributed object” model, there’s the need to convert the objects’ data back into raw data – often quasi-relational or hierarchical – so it can be transferred efficiently to the Presentation layer. This is really a whole new logical layer very akin to the ORM layer, except that it maps between the Presentation layer’s data structures and the objects rather than between the Data layer’s structures and the objects.
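Here's a tiny sketch of that extra mapping layer (Python, invented types – an illustration of the shape of the problem, not CSLA code):

```python
# A rich behavioral business object living on the server...
class Order:
    def __init__(self, order_id, items):
        self.order_id = order_id
        self.items = items          # list of (price, quantity) pairs

    def total(self):
        return sum(price * qty for price, qty in self.items)

# ...and the extra mapping layer "distributed objects" impose:
# flattening the object back into raw data for the Presentation
# layer -- a mirror image of the ORM mapping on the Data side.
def to_presentation_data(order):
    return {"order_id": order.order_id, "total": order.total()}

dto = to_presentation_data(Order(42, [(9.99, 2), (5.00, 1)]))
```

Every behavior the object encapsulates either has to be pre-computed into that raw data (as `total` is here) or duplicated in the Presentation layer – which is exactly the mess described above.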


What a mess!


Ted is absolutely right when he suggests that “distributed objects” should be discarded. If you are really stuck on having your business logic “centralized” on a server then service-orientation is a better approach. Using formalized message-based communication between the client application and your service-oriented (hence procedural, not object-oriented) server application is a better answer.


Note that the terminology changed radically! Now you are no longer building one application, but rather you are building at least two applications that happen to interact via messages. Your server doesn't pretend to be object-oriented, but rather is service-oriented - which is a code phrase for procedural programming. This is a totally different mindset from “distributed objects”, but it is far better.


Of course another model is to use mobile objects or mobile agents. This is the model promoted in my Business Objects books and enabled by CSLA .NET. In the mobile object model your Business layer exists on both the client machine (or web server) and application server. The objects physically move between the two machines – running on the client when user interaction is required and running on the application server to interact with the database.


The mobile object model allows you to continue to build a single application (rather than 2+ applications with SO), but overcomes the nasty limitations of the “distributed object” model.

Friday, August 5, 2005 9:12:29 AM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Tuesday, May 24, 2005

It is clear to me that Service Oriented Architecture has gone about as far as it can go in a vacuum. And Service Oriented Programming continues to evolve somewhat randomly as vendors and/or standards bodies see fit (WSE, Indigo, etc).


So there needs to be more rigorous thought applied to Service Oriented Design. If we assume the SOA concepts of autonomous entities communicating via contract-defined messages over negotiated channels, and if we assume we’ll have programming tools that provide contractual synchronous and asynchronous communication vectors, then we can move ahead and work on thought models for designing services and systems composed of services.


Note that there are two very different aspects here. One is how to create a service, the other is how to build systems where services interact. The inside view and the outside view. And the rules and concepts are very different for each.


In that vein, here's a good/interesting article on SO design from Rich Turner at Microsoft. He’s taking the inside view and discussing some high-level design concepts for how you might build an autonomous service that still requires other services to function. He’s proposing a couple thought models that might be useful in design efforts – exactly the kind of dialog that is required to move SOD forward.


Personally I think that a number of the object-oriented design concepts behind CRC cards (class, responsibility, collaboration) apply equally well to services. In such a scheme, Rich’s relationships (constellation or fractal) would be listed as collaborators from a service.


I’m convinced that at least the first side of a CRC card – listing the class (now service), its responsibility and its collaborators (other services) – is a good design tool for services. Some of the other sides (like the list of behaviors) aren’t so useful, since a service is an atomic procedure and thus by definition has exactly one behavior. Still, applying those parts of CRC that do work is very helpful.
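A sketch of such a card as a simple data structure (Python, with hypothetical service names):

```python
from dataclasses import dataclass, field

# The first side of a CRC card, adapted for services: the service
# name, its single responsibility, and the services it collaborates
# with. The usual list of behaviors is omitted, since a service is
# an atomic procedure with exactly one behavior.
@dataclass
class ServiceCard:
    service: str
    responsibility: str
    collaborators: list = field(default_factory=list)

ship_order = ServiceCard(
    service="ShipOrder",
    responsibility="Ship an order and trigger downstream bookkeeping",
    collaborators=["Invoicing", "Inventory", "CustomerHistory"],
)
```

In this scheme a constellation or fractal relationship simply shows up as entries in the collaborators list.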

Tuesday, May 24, 2005 9:02:10 AM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Sunday, May 8, 2005
I try not to do too much simple link-forwarding, but this is a very good, clear article on SOA.
Sunday, May 8, 2005 6:45:51 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Tuesday, May 3, 2005

Over the past few weeks I've given a number of presentations on service-oriented design and how it relates to both OO and n-tier models. One constant theme has been my recommendation that service methods use a request/response or just request method signature:


   response = f(request)




"request" and "response" are both messages, defined by a type or schema (with a little "s", not necessarily XSD).


The actual procedure, f, is then implemented as a Message Router, routing each call to an appropriate handler method depending on the specific type of the request message.


In concept this is an ideal model, as it helps address numerous issues around decoupling the client and service, and around versioning of a service over time. I discussed many of these issues in my article on SOA Covenants.
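A minimal sketch of the router itself (Python, with invented message types – a real implementation would of course route deserialized XML messages):

```python
# One public endpoint, many message types: each handler owns one
# request message, and f routes on the message's declared type.
def handle_ping(request):
    return {"type": "PongResponse"}

def handle_add(request):
    return {"type": "AddResponse", "sum": request["a"] + request["b"]}

HANDLERS = {
    "PingRequest": handle_ping,
    "AddRequest": handle_add,
}

def f(request):
    # response = f(request)
    handler = HANDLERS.get(request["type"])
    if handler is None:
        return {"type": "FaultResponse", "reason": "unknown message type"}
    return handler(request)

response = f({"type": "AddRequest", "a": 2, "b": 3})
```

New message types (including new versions of old messages) become new entries in the handler table, so the public endpoint never changes shape.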


Of course the devil is in the details, or in this case the implementation. Today's web service technologies don't make it particularly easy to attach multiple schemas to a given parameter - which we must do to the "request" parameter in order to allow our single endpoint to accept multiple request messages.


One answer is to manually create the XSD and/or WSDL. Personally I find that answer impractical. Angle-brackets weren't meant for human creation or consumption, and programmers shouldn't have to see (much less type) things like XSD or WSDL.


Fortunately there's another answer. In this blog entry, Microsoft technology expert Bill Wagner shows how to implement "method overloading" in asmx using C# to do the heavy lifting.


While the outcome isn't totally seamless or perfect, it is pretty darn good. In my mind Bill's answer is a decent compromise between getting the functionality we need, and avoiding the manual creation of cryptic WSDL artifacts.


The end result is that it is totally possible to construct flexible, maintainable and broadly useful web services using VB.NET or C#. Web services that follow a purely message-based approach, implement a message router on the server, and thus provide a good story for both versioning and multiple message formats.


Indigo, btw, offers a similar solution to what Bill talks about. In the Indigo situation however, it appears that we’ll have more fine-grained control over the generation of the public schema through the use of the DataContract idea, combined with inheritance in our VB or C# code.

Tuesday, May 3, 2005 10:15:53 AM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Monday, April 18, 2005

Indigo is Microsoft’s code name for the technology that will bring together the functionality in today’s .NET Remoting, Enterprise Services, Web services (including WSE) and MSMQ. Of course knowing what it is doesn’t necessarily tell us whether it is cool, compelling and exciting … or rather boring.


Ultimately beauty is in the eye of the beholder. Certainly the Indigo team feels a great deal of pride in their work and they paint this as a very big and compelling technology.


Many technology experts I’ve talked to outside of Microsoft are less convinced that it is worth getting all excited.


Personally, I must confess that I find Indigo to be a bit frustrating. While it should provide some absolutely critical benefits, in my view it is merely laying the groundwork for the potential of something actually exciting to follow a few years later.


Why do I say this?


Well, consider what Indigo is again. It is a technology that brings together a set of existing technologies. It provides a unified API model on top of a bunch of concepts and tools we already have. To put it another way, it lets us do what we can already do, but in a slightly more standardized manner.


If you are a WSE user, Indigo will save you tons of code. But that’s because WSE is experimental stuff and isn’t refined to the degree Remoting, Enterprise Services or Web services are. If you are using any of those technologies, Indigo won’t save you much (if any) code – it will just subtly alter the way you do the things you already do.


Looking at it this way, it doesn't sound all that compelling, does it?


But consider this. Today’s technologies are a mess. We have at least five different technologies for distributed communication (Remoting, ES, Web services, MSMQ and WSE). Each technology shines in different ways, so each is appropriate in different scenarios. This means that to be a competent .NET architect/designer you must know all five reasonably well. You need to know the strengths and weaknesses of each, and you must know how easy or hard they are to use and to potentially extend.


Worse, you can’t expect to easily switch between them. Several of these options are mutually exclusive.


But the final straw (in my mind) is this: the technology you pick locks you into a single architectural world-view. If you pick Web services or WSE you are accepting the SOA world view. Sure you can hack around that to do n-tier or client/server, but it is ugly and dangerous. Similarly, if you pick Enterprise Services you get a nice set of client/server functionality, but you lose a lot of flexibility. And so forth.


Since the architectural decisions are so directly and irrevocably tied to the technology, we can’t actually discuss architecture. We are limited to discussing our systems in terms of the technology itself, rather than the architectural concepts and goals we’re trying to achieve. And that is very sad.


By merging these technologies into a single API, Indigo may allow us to elevate the level of dialog. Rather than having inane debates between Web services and Remoting, we can have intelligent discussions about the pros and cons of n-tier vs SOA. We can apply rational thought as to how each distributed architecture concept applies to the various parts of our application.


We might even find that some parts of our application are n-tier, while others require SOA concepts. Due to the unified API, Indigo should allow us to actually do both where appropriate. Without irrational debates over protocol, since Indigo natively supports concepts for both n-tier and SOA.


Now this is compelling!


As compelling as it is to think that we can start having more intelligent and productive architectural discussions, that isn’t the whole of it. I am hopeful that Indigo represents the groundwork for greater things.


There are a lot of very hard problems to solve in distributed computing. Unfortunately our underlying communications protocols never seem to stay in place long enough for anyone to really address the more interesting problems. Instead, for many years now we’ve just watched as vendors reinvent the concept of remote procedure calls over and over again: RPC, IIOP, DCOM, RMI, Remoting, Web services, Indigo.


That is frustrating. It is frustrating because we never really move beyond RPC. While there’s no doubt that Indigo is much easier to use and more clear than any previous RPC scheme, it is also quite true that Indigo merely lets us do what we could already do.


What I’m hoping (perhaps foolishly) is that Indigo will be the end. That we’ll finally have an RPC technology that is stable and flexible enough that it won’t need to be replaced so rapidly. And being stable and flexible, it will allow the pursuit of solutions to the harder problems.


What are those problems? They are many, and they include semantic meaning of messages and data. They include distributed synchronization primitives and concepts. They include standardization and simplification of background processing – making it as easy and natural as synchronous processing is today. They include identity and security issues, management of long-running processes, simplification of compensating transactions and many other issues.


Maybe Indigo represents the platform on which solutions to these and other problems can finally be built. Perhaps in another 5 years we can look back and say that Indigo was the turning point that finally allowed us to really make distributed computing a first-class concept.

Monday, April 18, 2005 10:13:06 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Wednesday, February 23, 2005
Just a super-quick entry that I'll hopefully follow up on later. Ted wrote a bunch of stuff on contracts related to Indigo, etc.
I think this is all wrong-headed. The idea of being loosely coupled and the idea of being bound by a contract are in direct conflict with each other. If your service only accepts messages conforming to a strict contract (XSD, C#, VB or whatever) then it is impossible to be loosely coupled. Clients that can't conform to the contract can't play, and the service can never ever change the contract so it becomes locked in time.
Contract-based thinking was all the rage with COM, and look where it got us. Cool things like DoWork(), DoWork2(), DoWorkEx(), DoWorkEx2() and more.
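That COM-era pattern is easy to sketch. Below is a hypothetical illustration (names and logic invented for this example) of why a frozen contract forces every revision to become a brand-new method name instead of an evolution of the original:

```python
# Hypothetical sketch: once clients depend on a published contract, the
# original operation can never change shape, so every revision becomes a
# brand-new name -- the COM-era DoWork/DoWork2/DoWorkEx pattern.

class WorkService:
    def do_work(self, amount):
        # v1 contract: frozen forever, because deployed clients call it.
        return amount * 2

    def do_work2(self, amount, priority):
        # v2: we needed a priority parameter, but changing do_work's
        # signature would break existing callers, so we add a new method.
        return amount * 2 + priority

    def do_work_ex(self, amount, priority, deadline_ms):
        # v3: and so on, accreting names instead of evolving one contract.
        result = amount * 2 + priority
        return result if deadline_ms > 0 else None
```

Each new name is a permanent maintenance burden, because the old ones can never be retired while any client still calls them.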
Is this really the future of services? You gotta be kidding!
Wednesday, February 23, 2005 3:00:15 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Tuesday, February 8, 2005

I just finished watching Eric Rudder’s keynote on Indigo at VS Live in San Francisco. As with all keynotes, it had glitz and glamour and gave a high-level view of what Microsoft is thinking.


(for those who don’t know, Eric is the Microsoft VP in charge of developer-related stuff including Indigo)


Among the various things discussed was the migration roadmap from today’s communication technologies to Indigo. I thought it was instructive.


From asmx web services the proposed changes are minor. Just a couple lines of code change and away you go. Very nice.


From WSE to Indigo is harder, since you end up removing lots of WSE code and replacing it with an attribute or two. The end result is nice because your code is much shorter, but it is more work to migrate.


From Enterprise Services (COM+, ServicedComponent) the changes are minor – just a couple lines of changed code. But the semantic differences are substantial because you can now mark methods as transactional rather than the whole class. Very nice!


From System.Messaging (MSMQ) to Indigo the changes are comparable in scope to the WSE change. You remove lots of code and replace it with an attribute or two. Again the results are very nice because you save lots of code, but the migration involves some work.


From .NET Remoting to Indigo the changes are comparable to the asmx migration. Only a couple lines of code need to change and away you go. This does assume you listened to advice from people like Ingo Rammer, Richard Turner and myself and avoided creating custom sinks, custom formatters or custom channels. If you ignored all this good advice then you’ll get what you deserve I guess :-)


As Eric pointed out however, Indigo is designed for the loosely coupled web service/SOA mindset, not necessarily for the more tightly coupled n-tier client/server mindset. He suggested that many users of Remoting may not migrate to Indigo – directly implying that Remoting may remain the better n-tier client/server technology.


I doubt he is right. Regardless of what Indigo is designed for, it is clear to me that it offers substantial benefits to the n-tier client/server world. These benefits include security, reliable messaging, simplified 2-phase transactions and so forth. The fact that Indigo can be used for n-tier client/server even if it is a bit awkward or not really its target usage won’t stop people. And from today’s keynote I must say that it looks totally realistic to (mis)use Indigo for n-tier client/server work.

Tuesday, February 8, 2005 12:35:52 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Thursday, February 3, 2005

One last post on SOA from my coffee-buzzed, Chicago-traffic-addled mind.


Can a Service have Tiers?


Certainly a Service can have layers. Any good software will have layers. In the case of a Service these layers will likely be:


1. Interface
2. Business
3. Data access
4. Data management


This only makes sense. You’ll organize your message-parsing and XML handling code into the interface layer, which will invoke the business layer to do actual work. The business layer may invoke the Data access layer to get/save data into the Data management (database) layer.
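A minimal sketch of that flow, with all names hypothetical, might look like this — the interface layer parses the message, the business layer does the actual work, and the data access layer is the only code touching the data management (database) layer:

```python
# Minimal sketch of the four layers (all names hypothetical).

DATABASE = {"42": {"name": "Acme Corp"}}   # stands in for the database layer

def data_access_get_customer(customer_id):
    # Data access layer: the only code that touches the database.
    return DATABASE.get(customer_id)

def business_get_customer(customer_id):
    # Business layer: the actual rules and work live here.
    record = data_access_get_customer(customer_id)
    if record is None:
        raise KeyError("unknown customer")
    return {"id": customer_id, "name": record["name"].upper()}

def interface_handle(message):
    # Interface layer: message parsing and formatting only, no business logic.
    customer_id = message["customerId"]
    result = business_get_customer(customer_id)
    return {"status": "ok", "body": result}
```

The point is the direction of the calls: interface invokes business, business invokes data access, and nothing skips a layer.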


But layers are logical constructs. They are just a way of organizing code so it is maintainable, readable and reusable. Layers say nothing about how the code is deployed – that is the realm of tiers.


So the question remains, can a Service be divided into tiers?


I’ll argue yes.


You deploy layers onto different tiers in an effort to get a good trade-off between performance, scalability, fault-tolerance and security. More tiers mean worse performance, but may result in better scalability or security.


If I create a service, I may very well need to deploy it such that I can provide high levels of scalability or security. To do this, I may need to deploy some of my service’s layers onto different tiers.


This is no different – absolutely no different – than what we do with web applications. This shouldn’t be a surprise, since a web service is nothing more than a web application that spits out XML instead of HTML. It seems pretty obvious that the rules are the same.


And there are cases where a web application needs to have tiers to scale or to be secure. It follows then that the same is true for web services.


Thus, services can be deployed into multiple tiers.


Yet the SOA purists would argue that any tier boundary should really be a service boundary. And this is where things get nuts. Because a service boundary implies lack of trust, while a layer boundary implies complete trust. Tiers are merely deployments of layers, so tiers imply complete trust too.


(By trust here I am not talking about security – I’m talking about data trust. A service must treat any caller – even another service – as an untrusted entity. It must assume that any inbound data breaks rules. If a service does extend trust then it instantly becomes unmaintainable in the long run and you just gave up the primary benefit of SOA.)


So if a tier is really a service boundary, then we’re saying that we have multiple services, one calling the next. But services pretty much always have those four layers I mentioned earlier, so now each “was-tier-now-is-service” will have those layers.


Obviously this is a lot more code to write, and a lot of overhead, since the lower-level service (that would have been a tier) can’t trust the higher level one and must replicate much of its validation and possibly other business logic.
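The duplicated work is easy to see in a sketch (all names and rules here are invented for illustration). A layer boundary could trust its caller; a service boundary cannot, so the same validation gets replicated in every service along the chain:

```python
# Sketch of the duplicated work at a service boundary (hypothetical names).

def validate_order(order):
    # The rule set that every untrusting service must re-run.
    if order.get("quantity", 0) <= 0:
        raise ValueError("quantity must be positive")
    return order

def front_service(order):
    validate_order(order)            # service boundary: trust nothing
    return back_service(order)

def back_service(order):
    validate_order(order)            # was-a-tier, now-a-service: revalidate
    return {"accepted": True, "quantity": order["quantity"]}
```

With a trusting tier boundary, the second `validate_order` call (and all the code behind it in a real system) simply wouldn't exist.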


To me, at this point, it is patently obvious that the idea of entirely discarding tiers in favor of services is absurd. Rather, a far better view is to suggest that services can have tiers – private, trusting communications between layers, even across the network between the web server hosting the service interface and the application server hosting the data access code.


And of course this ties right back into my previous post for today on remoting. Because it is quite realistic to expect that you’ll use DCOM/ES/COM+ or remoting to do the communication between the web server and application server for this private communication.


While DCOM might appear very attractive (and is in many cases), it is often not ideal if there’s a firewall between the web server and application server. While it is technically possible to get DCOM to go through a firewall, I gotta say that this one issue is a major driver for people to move to remoting or web services.


And while web services might appear very attractive (and are in many cases), they are not ideal if you want to use distributed OO concepts in the implementation of your service.


And there we are back at remoting once again as being a perfectly viable option.


Of course there’s a whole other discussion we could have about whether there’s any value to using any OO design concepts when implementing a service – but that can be a topic for another time.

Thursday, February 3, 2005 11:32:50 PM (Central Standard Time, UTC-06:00)  #    Disclaimer

I am afraid that I'm rapidly becoming more convinced than even Ted that SOA == web services == RPC with angle brackets.

The more people I talk to, the more I realize that virtually no one is actually talking about service-oriented analysis, architecture or design. They are using SOA as a synonym for web services, and they are using web services as a replacement for DCOM, RMI, remoting or whatever RPC protocol they used before.

I think the battle is lost, if battle there was. The idea of a loosely-coupled, message-based architecture where autonomous entities interact with each other over policy-based connections is a really cool idea, but it doesn’t resonate with typical development teams.

The typical development team is building line-of-business systems and just needs a high-performance, reliable and feature-rich RPC protocol. Sometimes web services fits that bill, and even if it doesn’t it is the current fad, so it tends to win by default.

People are running around creating web services that do not follow a message-based design. What would a message-based design look like you ask? Like this:

result = procedure(request)

Where ‘procedure’ is the method/procedure name, ‘request’ is the idempotent message containing the request from the caller and ‘result’ is the idempotent message containing the result of the procedure.

Then if you want to be a real purist, you’d make this asynchronous, so the design would actually be:

procedure(request)

And any result message would be returned later as a separate service call from ‘procedure’ back to the caller. But that really goes out of bounds for almost everyone, because then you are truly doing distributed parallel processing and that’s just plain hard to grok.

So in our pragmatic universe, we’re talking about the

result = procedure(request)

form and that’s enough. But that isn’t what most people are doing. Most people are creating services as though they were components. Creating methods/procedures that accept parameters rather than messages. Stuff like this:

customerList = GetCustomerData(firstName As String, lastName As String)

Where ‘customerList’ is a DataSet containing the results of any matches.

There’s not a message, idempotent or not, to be found here. This is components-on-the-web. This is COM-on-the-web or CORBA-on-the-web. This is not SOA, this is just RPC redux.
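The contrast is worth seeing side by side. Here is a language-neutral sketch (names invented for this example) of the two shapes — a message-based operation takes a single request message and returns a single result message, while the component-style version takes bare parameters:

```python
# Contrast sketch (hypothetical names): message-based vs component-style.

CUSTOMERS = [{"first": "Ada", "last": "Lovelace"}]

def find_customers(request):
    # Message-based: result = procedure(request)
    matches = [c for c in CUSTOMERS
               if c["first"] == request["firstName"]
               and c["last"] == request["lastName"]]
    return {"matches": matches}

def get_customer_data(first_name, last_name):
    # Component-style: bare parameters, not a message -- RPC redux.
    return [c for c in CUSTOMERS
            if c["first"] == first_name and c["last"] == last_name]
```

Both return the same data; the difference is that the first has a named, versionable message contract on each side of the call, and the second just has a parameter list.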

And that’s OK. I have no problem with that necessarily. But since this is the norm, I am pretty much ready to concede that the “Battle of SOA” is lost. SOA has already become just another acronym in the long list of RPC acronyms we’ve left behind over the decades.

Too bad really, because I found the distributed parallel, loosely coupled, message-based concepts to be extremely interesting and challenging. Hard, and impractical for normal business development, but really ranking high on the geek-cool chart.

Thursday, February 3, 2005 10:59:31 PM (Central Standard Time, UTC-06:00)  #    Disclaimer

I’ve hashed and rehashed this topic numerous times. In particular, read this and this. But the debate rages on, paralyzing otherwise perfectly normal development teams in a frenzy of analysis paralysis over something that is basically not all that critical in a good architecture.


I mean really. If you do a decent job of architecting, it really doesn’t matter a whole hell of a lot which RPC protocol you use, because you can always switch away to another one when required.


And while web services are a pretty obvious choice for SOA, the reality is that if you are debating between remoting, web services and DCOM/ES/COM+ then you are looking for an RPC protocol – end of story.


If you were looking to do service-oriented stuff you’d reject remoting and DCOM/ES/COM+ out of hand because they are closed, proprietary, limited technologies that just plain don’t fit the SO mindset.


In an effort to summarize to the finest point possible, I’ll try once again to clarify my view on this topic. In particular, I am going to focus on why you might use remoting and how to use it if you decide to. Specifically I am focusing on the scenario where remoting works better than either of its competitors.


First, the scenario that remoting does that web services and ES/COM+/DCOM don’t do (without hacking them):


1. You want to pass rich .NET types between AppDomains, Processes or Machines (we’re talking Hashtables, true business objects, etc.)
2. You are communicating between layers of an application that are deployed in different tiers (we’re not talking services here, we’re talking layers of a single application: client-server, n-tier, etc.)
3. You want no-touch deployment on the client


If you meet the above criteria then remoting is the best option going today. If you only care about 1 and 2 then ES/COM+/DCOM is fine – all you lose is no-touch deployment (well, and a few hours/days in configuring DCOM, but that’s just life :-) ).


If you don’t care about 1 then web services is fine. In this case you are willing to live within the confines of the XmlSerializer and should have no pretense of being object-oriented or anything silly like that. Welcome to the world of data-centric programming. Perfectly acceptable, but not my personal cup of tea.  To be fair, it is possible to hack web services to handle number 1, and it isn't hard. So if you feel that you must avoid remoting but need 1, then you aren't totally out of luck.


But in general, assuming you want to do 1, 2 and 3 then you should use remoting. If so, how should you use remoting?


1. Host in IIS
2. Use the BinaryFormatter
3. Don’t create custom sinks or formatters, just use what Microsoft gave you
4. Feel free to use SSL if you need a secure line
5. Wrap your use of the RPC protocol in abstraction objects (like my DataPortal or Fowler’s Gateway pattern)


Hosting in IIS gives you a well-tested and robust process model in which your code can run. If it is good enough to host major production web sites, it sure should be good enough for you.


Using the BinaryFormatter gives you optimum performance and avoids the to-be-deprecated SoapFormatter.


By not creating custom sinks or formatters you are helping ensure that you’ll have a relatively smooth upgrade path to Indigo. Indigo, after all, wraps in the core functionality of remoting. They aren’t guaranteeing that internal stuff like custom sinks will upgrade smoothly, but they have been very clear that Indigo will support distributed OO scenarios like remoting does today. And that is what we’re talking about here.


If you need a secure link, use SSL. Sure WSE 2.0 gives you an alternative in the web service space, but there’s no guarantee it will be compatible with WSE 3.0, much less Indigo. SSL is pretty darn stable comparatively speaking, and we’ve already covered the fact that web services doesn’t do distributed OO without some hacking.


Finally, regardless of whether you use remoting, web services or ES/COM+/DCOM make sure – absolutely sure – that you wrap your RPC code in an abstraction layer. This is simple good architecture. Defensive design against a fluid area of technology.


I can’t stress this enough. If your business or UI code is calling web services, remoting or DCOM directly you are vulnerable and I would consider your design to be flawed.
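The abstraction idea is simple to sketch. This is a minimal, hypothetical illustration of the Gateway concept (all names invented here): business and UI code talk only to the gateway, and only the gateway knows which transport is in use, so swapping remoting for Indigo later touches one class:

```python
# Sketch of an RPC abstraction layer (a Gateway, in Fowler's terms).
# All names are hypothetical.

class InProcessTransport:
    def call(self, operation, payload):
        # Stand-in for the remoting/web services/DCOM plumbing.
        return {"operation": operation, "echo": payload}

class DataGateway:
    def __init__(self, transport):
        self._transport = transport   # injected: remoting today, Indigo tomorrow

    def fetch(self, criteria):
        return self._transport.call("fetch", criteria)

# Business code never mentions the transport:
gateway = DataGateway(InProcessTransport())
```

Replacing the transport means writing one new class that satisfies the same `call` seam; nothing above the gateway changes.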


This is why this whole debate is relatively silly. If you are looking for an n-tier RPC solution, just pick one. Wrap it in an abstraction layer and be happy. Then when Indigo comes out you can easily switch. Then when Indigo++ comes out you can easily upgrade. Then when Indigo++# comes out you are still happy.

Thursday, February 3, 2005 10:11:20 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Monday, January 24, 2005

Sahil has some interesting thoughts on the web service/DataSet question as well.


He spends some time discussing whether “business objects” should be sent via a web service. His definition of “business object” doesn’t match mine, and is closer to Fowler’s data transfer object (DTO) I think.


It is important to remember that web services only move boring data. No semantic meaning is included. At best (assuming avoidance of xsd:any) you get some limited syntactic meaning along with the data.


When we talk about moving anything via a web service, we’re really just talking about data.


When talking about moving a “business object”, most people think of something that can be serialized by web services – meaning by the XmlSerializer. Due to the limitations of the XmlSerializer this means that the objects will have all their fields exposed as public fields or read-write properties.


What this means in short, is that the “business objects” can not follow good OO design principles. Basically, they are not “business objects”, but rather they are a way of defining the message schema for the web service. They are, at best, data transfer objects.


In my business objects books and framework I talk about moving actual business objects across the wire using remoting. Of course the reality here is that only the data moves – but the code must exist on both ends. The effective result is that the object is cloned across the network, and retains both its data and the semantic meaning (the business logic in the object).


You can do this with web services too, but not in a “web service friendly” way. Cloning an object implies that you get all the data in the object. And to do this while still allowing for encapsulation means that the serialization must get private, friend/internal and protected fields as well as public ones. This is accomplished via the BinaryFormatter. The BinaryFormatter generates and consumes streams, which can be thought of as byte arrays. Thus, you end up creating a web service that moves byte arrays around. Totally practical, but the data is not human-readable XML – it is Base64 encoded binary data. I discuss how to do this in CSLA .NET on my web site.
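The .NET mechanics use the BinaryFormatter, but the technique is easy to see in a language-neutral analogy. Here is a sketch using Python's pickle in place of the BinaryFormatter (an analogy only, not the .NET API): serialize the whole object, private state included, Base64-encode it, and ship the opaque string through the service. Both ends must share the class definition — the code travels by deployment, only the data moves:

```python
import base64
import pickle

class Customer:
    def __init__(self, name):
        self._name = name              # "private" state still round-trips

    def greeting(self):
        return f"Hello, {self._name}"  # the semantic logic lives in the class

def to_wire(obj):
    # Analogy to BinaryFormatter + Base64: serialize everything, encode as text.
    return base64.b64encode(pickle.dumps(obj)).decode("ascii")

def from_wire(text):
    return pickle.loads(base64.b64decode(text))
```

The receiving end gets a true clone — data plus behavior — at the cost of the payload being opaque binary rather than human-readable XML.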


Now we are talking about moving business objects. Real live, OO designed business objects.


Of course this approach is purely for n-tier scenarios. It is totally antithetical to any service-oriented model!


For an SO model you need to have clearly defined schemas for your web service messages, and those should be independent from your internal implementation (business objects, DataSets or whatever). I discuss this in Chapter 10 of my business objects books.

Monday, January 24, 2005 10:10:21 AM (Central Standard Time, UTC-06:00)  #    Disclaimer

OK, so Shawn has some good points about the use of a DataSet for the purpose of establishing a formal contract for your web service messages. This is in response to my previous entry about not using DataSets across web services.


The really big distinction between using web services for SO vs n-tier remains.


If you are doing SO, you need to clearly define your message schema, and that schema must be independent from your internal data structures or representations. This independence is critical for encapsulation and is a core part of the SO philosophy. Your service interface must be independent from your service implementation.


What technology you use to define your schema is really up to you. Shawn’s entry points out that you can use strongly typed DataSet objects to generate the schema – something I suppose I knew subconsciously but didn’t really connect until he brought it up. Thank you Shawn!


Any tool that generates an XSD will work to define your schema, and from there you can define proxy classes (or apparently use DataSet objects as proxies). Alternately you can create your “schema” in VB or C# code and get the XSD from the code using the wsdl.exe tool from the .NET SDK (though you have less control over the XSD this way).


But my core point remains, that this is all about defining your interface, and should not be confused with defining your implementation. Using a DataSet for a proxy is (according to Shawn) a great thing, and I’ll roll with that.


But to use that same DataSet all the way down into your business and data code is dangerous and fragile. That directly breaks encapsulation and makes you subject to horrific versioning issues.


And versioning is the big bugaboo hiding in web services. Web services have no decent versioning story. At the service API level the story is the same as it was for COM and DCOM – which was really bad and I believe ultimately helped drive the success of Java and .NET.


At the schema level the versioning story is better. It is possible to have your service accept different variations on your XML schema, and you can adapt to the various versions. But this implies that your service (or a pre-filter/adapter) isn’t tightly coupled to some specific implementation. I fear that DataSets offer a poor answer in this case.


And in any case, if you want to create maintainable code you must be able to alter your internal implementation and data representation independently from your external contract. The XSD that all your clients rely on can’t change easily, and you must be able to change your internal structures more rapidly to accommodate changing requirements over time.


Again, I’m talking about SO here. If you are using web services for n-tier then the rules are all different.


In the n-tier world, your tiers are tightly coupled to start with. They are merely physical deployments of layers, and layers trust each other implicitly because they are just a logical organization of the code inside your single application. While there are still some real-world versioning issues involved, the fact is that they aren’t remotely comparable to the issues faced in the SO world.

Monday, January 24, 2005 9:13:49 AM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Sunday, January 23, 2005

Every now and then the question comes up about whether to pass DataSet or DataTable objects through a web service.


I agree with Ted Neward that the short answer is NO!!


However, nothing is ever black and white…


For the remainder of this discussion remember that a DataSet is just a collection of DataTable objects. There’s no real difference between a DataTable and DataSet in the context of this discussion, so I’m just going to use the term DataSet to mean 1..n DataTables.


There are two “types” of DataSet – default and strongly typed.


Default DataSet objects can be converted to relatively generic XML. They don’t do this by default of course. So you must choose to either pass a DataSet in a form that is pretty much only useful to .NET code, or to force it into more generic XML that is useful by anyone.


To make this decision you need to ask yourself why you are using web services to start with. They are designed, after all, for the purpose of interop/integration. If you are using them for that intended purpose then you want the generic XML solution.


On the other hand, if you are misusing web services for something other than interop/integration then you are already on thin ice and can do any darn thing you want. Seriously, you are on your own anyway, so go for broke.


Strongly typed DataSet objects are a different animal. To use them, both ends of the connection need the .NET assembly that contains the strongly typed code for the object. Obviously interop/integration isn’t your goal here, so you are implicitly misusing web services for something else already, so again you are on your own and might as well go for broke.


Personally my recommendation is to avoid passing DataSet objects of any sort via web services. Create explicit schema for your web service messages, then generate proxy classes in VB or C# or whatever language based on that schema. Then use the proxy objects in your web service code.


Your web service (asmx) code should be considered the same as any other UI code. It should translate between the “user” presentation (XML based on your schema/proxy classes) and your internal implementation (DataSet, business objects or whatever).
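That translation role can be sketched in a few lines (all names hypothetical): the service entry point behaves like UI code, converting between the external message shape the schema defines and whatever internal representation the application actually uses:

```python
# Sketch of the service-as-UI translation (hypothetical names).

def internal_lookup(customer_id):
    # Internal implementation: DataSet, business object, whatever.
    return {"id": customer_id, "display_name": "ACME CORP"}

def service_get_customer(external_message):
    # Translate inbound message -> internal call -> outbound message.
    internal = internal_lookup(external_message["CustomerId"])
    return {"CustomerName": internal["display_name"]}   # external schema only
```

Nothing from the internal representation leaks through the service boundary; only the schema-defined fields go out.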


I discuss this in Chapter 10 of my Business Objects books, but the concepts apply directly to data-centric programming just as they do to OO programming.

Sunday, January 23, 2005 9:06:20 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Sunday, January 2, 2005

Don Box put out his predictions for 2005, including #3 which states that the term SOA will be replaced by some new hype-word invented by the marketing wonks of various software and industry analysis companies.

As I said months ago, the S in SOA is a dollar sign. SOA is much more hype than reality, which means it is created and defined by marketing people far more than it is by actual computer scientists or engineers.

As is often the case with over-hyped concepts, the terms related to those concepts rapidly become meaningless. What is SOA? Depends on which vendor you ask. What is a service? Depends on which vendor you ask. It is all a meaningless jumble here at the start of 2005.

Of course the concept is too valuable to marketeers[1] for them to give up just because a term has been lost. So I am sure Don is right and marketing groups at Microsoft, IBM, Gartner and many other organizations are busily working to decide on the next term or terms to use for a while.

So for better or worse SOA itself isn’t dead, but I agree that the term is living on borrowed time.


[1] Yes, I know the term is marketers, but that extra “e” in there makes it sound so much more, well, Disney somehow :-)

Sunday, January 2, 2005 5:08:12 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Tuesday, December 21, 2004

In recent SOA-related posts I’ve alluded to this, but I wanted to call it out directly. SOA is founded on the principle of passive data. Dima Maltsev called this out in a recent comment to a previous blog entry:


In one of the posts Rockford is saying that SOA doesn't address the need of moving data and also the logic associated with that data. I think this is one of the main ideas of SOA and I would rather state it positively: SOA wants you to separate the logic and data.


Yes, absolutely. SOA is exactly like structured or procedural programming in this regard. All those concepts we studied way back in the COBOL/FORTRAN days while drawing flowcharts and data flow diagrams are exactly what SOA is all about.


Over the past 20-30 years two basic schools of thought have evolved. One says that data is a passive entity, the other attempts to embed data into active entities (often called objects or components).


Every now and then someone tries to merge the two concepts, providing a scheme by which sometimes data is passive, while other times it is contained within active entities. Fowler’s Data Transfer Object (DTO) design pattern and my CSLA .NET are examples of recent efforts in this space, though we are hardly alone.


But most people stick with one model or the other. Mixing the models is complex and typically requires extra coding.


Thus, most software is created using a passive data model, or an active entity model alone. And the reality is that the vast majority of software uses a passive data model.


In my speaking I often draw a distinction between data-centric and object-oriented design.


Data-centric design is merely a variation on procedural programming, with the addition of a formalized data container of some sort. In some cases this is as basic as a 2-dimensional array, but in most cases it is a RecordSet, ResultSet, DataSet or some variation on that theme. In .NET it is a DataTable (or a collection of DataTables in a DataSet).


The data-centric model is one where the application goes to the database and retrieves data. The data in the database is passive, and when the application gets it, the data is in a container – say a DataTable. This container is passive as well.


I hear you arguing already. “Both the database and DataTable make sure certain columns are numeric!”, or “They both make sure the primary key is unique!”. Sure, that is true. Over the past decade some tiny amount of intelligence has crept into our data containers, but nothing really interesting. Nothing that makes sure the number in the column is a good number – that it is in the right range, or that it was calculated with the right formula.


Anything along the lines of validation, calculation or manipulation of data occurs outside the data container. That outside entity is the actor, the data container is merely a vessel for passive data.


And that’s OK. That works. Most software is written this way, with the business logic in the UI or a function library (or maybe a rules engine), acting against the data.


The problem is that most people don’t recognize this as procedural programming. Since the DataSet is an object, and your UI form is an object, the assumption is that we’re object-oriented. Thus we don’t rigorously apply the lessons learned back in the FORTRAN days about how to organize our code into reusable procedures and organize those procedures into function libraries.


Instead we plop the code into the UI behind button clicks and key press events. Any procedural organization is a token effort, unorganized and informal.


Which is why I favor an active entity approach – in the form of object-oriented business entity objects.


In an active entity model, data is never left out “on its own” in a passive state. Well, except for when it is in the database, because we’re stuck using RDBMS databases and the passive data concept is so deeply embedded in that technology it is hopeless…


But once the data comes out of the database it is in an active entity. Again, this is typically an entity object, designed using object-oriented concepts. The primary point here is that the data is never exposed in a raw form. It is never passive. Any external entity using the data can count on the data being validated, calculated and manipulated based on a consistent set of rules that are included with the data.


In this model we avoid putting the logic in the UI. We avoid the need to create procedures and organize them into function libraries like in the days of COBOL. Instead, the logic is part and parcel with the data. They are one.
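The two models can be contrasted in a short sketch (the rule here is invented for illustration): in the passive model the container holds anything and an outside actor validates it later, if ever; in the active entity model the rule travels with the data, so raw, unvalidated data is never exposed:

```python
# Contrast sketch (hypothetical rule): passive container vs active entity.

# Passive model: the container holds anything; validation lives elsewhere.
passive_row = {"quantity": -5}                 # nothing stops a bad value

def validate_row(row):                         # the outside actor
    return row["quantity"] > 0

# Active entity: the rule is part and parcel with the data.
class OrderLine:
    def __init__(self, quantity):
        self._quantity = None
        self.quantity = quantity               # routed through the setter

    @property
    def quantity(self):
        return self._quantity

    @quantity.setter
    def quantity(self, value):
        if value <= 0:
            raise ValueError("quantity must be positive")
        self._quantity = value
```

With the passive row, every consumer must remember to call the validator; with the active entity, there is simply no way to hold a bad value.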


Most people don’t take this approach. Historically it has required more coding and more effort. With .NET 1.x some of the overhead is gone, since basic data binding to objects is possible. However, there’s still the need to map the objects to/from the database and that is certainly extra effort. Also, the data binding isn’t on a par with that available for the DataSet.


In .NET 2.0 the data binding of objects will be on a par (or better than) binding with a DataSet, so that end of things is improving nicely. The issue of mapping data to/from the database remains, and appears that it will continue to be an issue for some time to come.


In any case, along comes SOA. SOA is all about active entities sending messages to each other. When phrased like that it sounds almost object-oriented, but don’t be fooled.


The active entities are procedures. Each one is stateless, and each one is defined by a formal contract that specifies the name of the procedure, the parameters it accepts and the results it returns.


Some people will argue that they aren’t stateless. Indigo, for instance, will allow stateful entities just like DCOM and Remoting do today. But we all know (after nearly 10 years’ experience with DCOM) that stateful entities don’t scale and don’t lead to reliable systems. So if you really want to have an unscalable and unreliable solution then go ahead and use stateful designs. I’ll be over here in stateless land where things actually work. :-)


The point being, these service-entities are not objects, they are procedures. They accept messages. Message is just another word for data, so they are procedures that accept data.


The data is described by a schema – often an XSD. This schema information has about the same level of “logic” as a database or DataSet – which is to say it is virtually useless. It can make sure a value is numeric, but it can’t make sure the number is any good.
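The gap between schema-level checking and real validation is easy to demonstrate. This is a hypothetical Python sketch (the `discount` field and the 50% cap are invented for illustration) contrasting what an XSD can express with what only business logic can:

```python
# Schema-level validation (what an XSD can express): type and shape.
# Business-level validation (what it can't): whether the value makes sense.

def schema_valid(order):
    # An XSD can verify that 'discount' is present and numeric...
    return isinstance(order.get("discount"), (int, float))

def business_valid(order):
    # ...but only code that knows the rules can verify the number is any
    # *good* -- say, a discount may never exceed 50% (hypothetical rule).
    return schema_valid(order) and 0 <= order["discount"] <= 0.5

order = {"discount": 0.9}
print(schema_valid(order))    # True: the schema is satisfied...
print(business_valid(order))  # False: ...but the business rule is violated
```

The message passes the contract check and is still garbage, which is the point: the schema has about the same level of "logic" as a DataSet.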


So Dima is absolutely correct. SOA is all about separating the data (messages) from the logic (procedures aka services).


Is this a good thing? Well that’s a value judgement.


Did you like how procedural programming worked the first time around? Do you like how data-centric (aka procedural) programming works in VB or Powerbuilder? If so, then you’ll like SOA – at least conceptually.


Or do you like how object-oriented programming works? Do you appreciate the consistency and centralized nature of having active entities that wrap your data at all times? Do you feel that the extra effort of doing OO is worth it to gain the benefits? If so, then you’ll probably balk at SOA.


Note that I say that people happy with procedural programming will conceptually like SOA. That’s because there’s always the distributed parallel thing to worry about.


SOA is inherently a distributed design approach. Yes, you might configure your service-entities to all run on the same machine, or even in the same process. But the point of SOA is that the services are location-transparent. You don’t know that they are local, and at some point in the future they might not be. Thus you must design under the assumption that each call to a service is bouncing off two satellites as it goes half-way around the planet and back.


SOA (as described by Pat Helland at least) is also inherently parallel. Calls to services are asynchronous. Your proxy object might simulate a synchronous call, so you don’t even realize it was async. But a core idea behind SOA is that of transport-transparency. You don’t know how your message is delivered. Is it via web services? Is it via a queue? Is it via email? You don’t know. You aren’t supposed to care.


But even procedural aficionados may not be too keen to program in a distributed parallel environment. The distributed part is complex and can be slow. The parallel part is complex and, well, very complex.


My prediction? Tools like Indigo will avoid the whole parallel part and it will never come to pass. The idea of transport-transparency (or protocol-transparency) will go the way of the dodo before SOA ever becomes truly popular.


The distributed part of the equation can’t really go away though, so we’re kind of stuck with that. But we’ve dealt with that for nearly 10 years now with DCOM, so it isn’t a big thing.


In the end, my prediction is that SOA will fade away as people realize that the reality is that we’ll be using it to do exactly what we did with DCOM, RMI, IIOP and various other RPC technologies.


The really cool ideas – the ones with the power to be emergent – are already fading away. They aren’t included in (or at least aren’t at the forefront of) Indigo, and they’ll just slip away like a bright and shining dream and we’ll be left with a cool new way to do RPC.


And RPC is fundamentally a passive data technology. Active entities (procedures) send passive data between them – often in the form of arrays or more sophisticated containers like a DataSet or perhaps an XML document with an XSD. It is all the same stuff with tiny variations. Like ice cream – chocolate, vanilla or strawberry, it is all cold and fattening.


So welcome to the new world, same as the old world.

Tuesday, December 21, 2004 8:27:28 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Wednesday, December 15, 2004

If SOA really is a revolutionary force on a par with OO, it is because it has emergent behavior. Here's some more information on emergence as a field of study and what it really means.

Wednesday, December 15, 2004 5:40:35 PM (Central Standard Time, UTC-06:00)  #    Disclaimer

Earlier this week my employer, Magenic, worked with the Microsoft office in Atlanta to set up a series of speaking engagements with some Atlanta-based companies. I had a good time, as I got some very good questions and discussion during several of the presentations.


In one of the presentations about SOA, an attendee challenged my assertion that “pure” SOA would lead to redundant business logic. He agreed with my example that validation logic would likely be replicated, but asserted that replication of validation logic was immaterial and that it was business logic which shouldn’t be replicated.


Now I’m the first to acknowledge that there are different types of business logic. And this isn’t the first time that I’ve had people argue that validation is not business logic. But that’s never rung true for me.


Every computer science student (and I assume CIS and MIS students) gets inundated with GIGO early in their career. Garbage-In-Garbage-Out. The mantra that illustrates just how incredibly important data validation really is.


You can write the most elegant and wonderful and perfect business logic in the world, and if you mess up the validation all your work will be for naught. Without validation, nothing else matters. Of course without calculation and manipulation and workflow the validation doesn’t matter either.


My point here is that validation, calculation, manipulation, workflow and authorization are all just different types of business logic. None are more important than others, and a flaw in any one of them ripples through a system wreaking havoc throughout.


Redundant or duplicated validation logic is no less dangerous or costly than duplicated calculation or manipulation logic. Nor is it any easier to manage, control or replicate validation logic than any other type of logic.


In fact, you could argue that it is harder to control replicated validation logic in some ways. This is because we often want to do validation in a web browser where the only real options are javascript or flash. But our “real” code is in VB or C# on the web server and/or application server. Due to the difference in technology, we can’t share any code here, but must manually replicate and maintain it.


Contrast this with calculation, manipulation or workflow, which is almost always done as part of “back end processing” and thus would be written in VB or C# in a DLL to start with. It is comparatively easy to share a well-designed DLL between a web server and app server. Not trivial I grant you, but it doesn’t require manual porting of code between platforms/languages like most validation logic we replicate.


So given the comparable importance of validation logic to the other types of business logic, it is clear to me that it must be done right. And given the relative complexity of reusing validation logic across different “tiers” or services, it is clear that we have a problem if we do decide to replicate it.


And yet changing traditional tiered applications into a set of cooperative services almost guarantees that validation logic will be replicated.


The user enters data into a web application. It validates the data of course, because the web application’s job is to give the user a good experience. Google’s new search beta is reinforcing the user expectation that even web apps can provide a rich user experience comparable to Windows. Users will demand decent interactivity, and thus validation (and some calculation) will certainly be on the web server and probably also replicated into the browser.


The web application calls the business workflow service. This service follows SOA principles and doesn’t trust the web app. After all, it is being used by other front-end apps as well, and can’t assume any of them do a good job. We’re not talking security here either – we’re talking data trust. The service just can’t trust the data coming from unknown front-end apps, as that would result in fragile coupling.


So the service must revalidate and recalculate anything that the web app did. This is clear and obvious replication of code.


At this point we’ve probably got three versions of some or all of our validation logic (browser, web server, workflow service). We probably have some calculation/manipulation replication too, as the web server likely did some basic calculations rather than round-trip to the workflow service.


Now some code could be centralized into DLLs that are installed on both the web server and app server, so both the web app and workflow service can use the same code base. This is common sense, and works great in a homogenous environment.
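In a homogeneous environment the shared-DLL approach looks something like the following. This is a hedged Python sketch (module and function names are hypothetical; a simplistic email rule stands in for real validation): one module of rules imported by both tiers, instead of two hand-maintained copies.

```python
# rules.py -- the moral equivalent of the shared DLL: both the web tier
# and the workflow service import the same validation code.

def valid_email(value):
    # Deliberately naive rule, for illustration only.
    return (isinstance(value, str)
            and value.count("@") == 1
            and "." in value.split("@")[-1])

# web_app.py (web server tier)
def handle_form_post(form):
    if not valid_email(form["email"]):
        return "400: invalid email"
    return "202: forwarded to workflow service"

# workflow_service.py (app server tier) -- it still revalidates, because
# it cannot trust arbitrary front ends, but at least it reuses the rule
# rather than re-implementing it.
def process_order(message):
    if not valid_email(message["email"]):
        return "rejected"
    return "accepted"
```

Note what this sketch cannot cover: the browser-side JavaScript copy of `valid_email` still has to be written and maintained by hand, which is exactly the replication problem described above.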


But it doesn’t extend to the browser, where the replicated validation logic is in javascript. And it doesn’t work all that well if the web server is running on a different platform from the app server.


In any case, we have replication of code – all we’re doing is trying to figure out how to manage that replication in some decent way. I think this is an important conversation to have, because in my mind it is one of the two major hurdles SOA must overcome (the other being versioning).

Wednesday, December 15, 2004 5:14:03 PM (Central Standard Time, UTC-06:00)  #    Disclaimer

just published an article of mine titled “The Fallacy of the Data Layer”, in which I basically carry some SOA ideas to their logical conclusion to see what happens. Unfortunately the sarcasm in the article was pretty well hidden and so the comments on the article are kinda harsh - but funny if you actually see the sarcasm :-)


Wednesday, December 15, 2004 5:00:16 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Thursday, December 9, 2004

Last night a group of top developers from the Minneapolis area had dinner with David Chappell. Great company, great food and great conversation; what more could you want?


The conversation ranged wide, from whether parents should perpetuate the myth of the Tooth Fairy and Santa all the way down to how Whidbey’s System.Transactions compares to the transactional functionality in BizTalk Server.


At one point during the conversation David mentioned that there’s not only an impedance mismatch between objects and relational data, but also between users and objects and between services and objects. While the user-object mismatch is somewhat understood, we’re just now starting to learn the nature of the service-object mismatch.


(The OO-relational impedance mismatch is well-known and is discussed in many OO books, perhaps most succinctly in David Taylor’s book)


I think that his observation is accurate, and it is actually the topic of an article I submitted to a few days ago, so look for that to stir some controversy in the next few weeks :-)


I also heard that some people have read my blog entries on remoting (this one in particular I think) and somehow came to the conclusion that I was indicating that Indigo would be binary compatible with remoting. To be clear, as far as I know Indigo won’t be binary compatible with remoting. Heck, remoting isn’t binary compatible with remoting. Try accessing a .NET 1.1 server from a 1.0 client and see what happens!


What I said is that if you are careful about how you use remoting today you’ll have a relatively easy upgrade path to Indigo. Which is no different than using asmx or ES today. They’ll all have an upgrade path to Indigo, but that doesn’t imply that Indigo will natively interoperate with remoting or ES (DCOM).


(Presumably Indigo will interop with asmx because that’s just SOAP, but even then you’ll want to change your code to take advantage of the new Indigo features, so it is still an upgrade process)


What does “be careful how you use remoting” mean? This:


  1. Use the Http channel
  2. Host in IIS
  3. Use the BinaryFormatter
  4. Do not create or use custom sinks, formatters or channels


If you follow these rules you should have a relatively direct upgrade path to Indigo. Will you have to change your code? Of course. That’s what an upgrade path IS. It means you can, with relative ease, update your code to work with the new technology.
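For the IIS-hosted case, those four rules translate into a configuration roughly like the following web.config fragment. This is a sketch for .NET 1.x remoting; the type name, assembly and URI are placeholders, and details may vary by framework version:

```xml
<configuration>
  <system.runtime.remoting>
    <application>
      <service>
        <!-- MyApp.MyService / MyApp / MyService.rem are placeholders -->
        <wellknown mode="SingleCall"
                   type="MyApp.MyService, MyApp"
                   objectUri="MyService.rem" />
      </service>
      <channels>
        <!-- Http channel (rule 1); hosting in IIS (rule 2) means no port here -->
        <channel ref="http">
          <serverProviders>
            <!-- BinaryFormatter on the wire (rule 3); no custom sinks (rule 4) -->
            <formatter ref="binary" typeFilterLevel="Full" />
          </serverProviders>
        </channel>
      </channels>
    </application>
  </system.runtime.remoting>
</configuration>
```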


Thursday, December 9, 2004 1:24:47 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Wednesday, November 17, 2004

For some time now I’ve been writing relatively cautionary entries on service-orientation. Speculating whether it is really anything new, or is just another communication scheme not unlike RPC, DCOM, RMI and others before. It seems that Grady Booch is of a similar mind, suggesting that SOA is not the be-all-and-end-all of architectures, and that it may not be an architecture at all.


In a couple weeks I’m giving the keynote at the Heartland Developer’s Conference in Des Moines. I’ll be talking about service-orientation from a historical perspective: how it relates and compares to procedural, object-oriented and component-based schemes, as well as client/server and n-tier. In putting together the presentation, I had to formalize a lot of my thoughts on the topic, and the more I’ve considered it, the more skeptical I have become that service-orientation or SOA is a tangibly new thing.


If it is tangibly new, then it is a new way of thinking or conceptualizing the problem space, not a new technology solution. Web services, SOAP, XML and related technologies are merely incrementally new twists on very old ideas. In and of themselves they offer nothing terribly new from an architecture perspective.


But SOA itself isn’t about technology. It is about a new way of modeling reality. It is very data-centric and thus is diametrically opposed to object-orientation. It is much more similar to the older procedural world-view, where data is passive and procedures are the only active entities in a system.


But it isn’t really procedural either, at least not in a traditional sense. This is because procedural design has an underlying assumption that procedure calls are virtually free. In an SO model, service calls have very high overhead – or at least we must assume they do, because that is one possible (and likely) deployment scenario.


Ultimately SO appears to be procedural programming, with the addition of high overhead per call (thus forcing serious granularity changes in design) and dynamic negotiation of the communication semantics (protocol, orthogonal services, etc.) – though there’s no real implementation yet of this dynamic negotiation… And even that isn’t really an architecture thing as much as technological coolness.


So in the end, SO is procedural programming redux, with the caveat that the procedures are very expensive and should be called rarely. Kinda boring in that light…


Or, as I’ll say in my keynote, SO could be emergent (as defined here and here). It could be that the new combination of old concepts and simpler ideas combine to make something new and unexpected. Something that isn’t obvious. Much the same way the simple moves of individual chess pieces combine to create a highly sophisticated whole. The end result is certainly not obvious based on the simplicity of the core rules. And if SO is emergent, then it is far from boring!!


Only time will tell. The next decade will be quite interesting in this regard.

Wednesday, November 17, 2004 9:27:19 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Tuesday, November 2, 2004

Regardless of your political leanings, if you are a US citizen it is your civic duty to get out and vote, so please make sure to take the time today and make it to the polls.


If Minneapolis is any indication, allow extra time this year. There are already lines, and there are a lot of people doing same-day registration as well.


I realize that we are all autonomous entities (to tie this to SOA). This means we have self-determination and can make our own choices to do or not do things. Just like the entities that make up services.


But self-determination is the secondary definition. Autonomy's primary meaning is self-governance. And if you don't vote today then you are giving up that primary meaning to the rest of America. You are allowing others to choose your future and the future of our nation.


Just think what will happen when SOA becomes reality in our software. We’ll have autonomous entities (the implementation behind a service) deciding whether they want to do their “civic duty” or not. Because these entities are autonomous, they’ll be programmed with self-determination and self-governance. They can say “no, I don’t feel like waiting in that queue” and just blow off client requests.


For decades our industry has been trying to create software models of the real world. SOA finally acknowledges that software entities need to be autonomous to reflect the real world. Some 40% of US citizens are likely to exercise their autonomy today by choosing to avoid self-governance and to give that responsibility to the rest of us. If 40% of our software entities started exercising their autonomy in a similar fashion I wonder what would happen…


The point being that service-oriented design might allow us to model the real world better than procedural or OO or component-based design. But maybe that's not all it is cracked up to be, because real world systems aren't particularly reliable and certainly aren't deterministic...


Be a reliable and deterministic autonomous entity - get out and vote!

Tuesday, November 2, 2004 8:45:05 AM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Monday, November 1, 2004

Here is a good summary of service-oriented concepts and tenets.

The only quibble I have is that the first section of the document combines SOA and web services into a unified concept. I think this is too limiting, but at the same time it is workable, since services do imply a design that may be distributed. Whether you use web services or some other technology to do the distribution, the fact is that the principles listed in the document will apply.

Monday, November 1, 2004 12:06:36 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Thursday, October 21, 2004

This is actually a shameless plug regarding me being quoted in the press...

In the very same Enterprise Architect magazine whose article I maligned in my previous blog entry I am actually quoted in a different article. Check the Editor’s Note and you’ll find that Jim Fawcette does a good job of paraphrasing a statement I made about SOA during a discussion panel at VS Live in Orlando this fall.

Thursday, October 21, 2004 9:24:39 PM (Central Standard Time, UTC-06:00)  #    Disclaimer

The just-published Enterprise Architect magazine includes an article titled “SOA: Debunking 3 Common Myths”. The first and third myths I generally agree with. The second is problematic on several levels.

The myth in question is that SOA implies a distributed system. The author explains that SOA does not imply a distributed system, but rather is merely a message-based architecture of providers and consumers. So far I’m in agreement. SOA can be done in a single process. Whether that’s wise or not is another discussion, but it certainly can be done.

Then the author goes on to suggest that a good architecture should include a Locator service so service providers can be moved to other machines if needed. Again, so far so good. This kind of flexibility is very important.

But then we get to my point of disagreement. The closing statement is that “services (and SOA) do not imply any distribution semantics”. Say what?

If you are going to actually have the option of running a service on another machine, then you must design it to follow distributed semantics from day one. You must design it with coarse-grained, message-based interfaces. You must ensure that the service provider never trusts the service consumer and vice versa. You must ensure that there is no shared state between provider and consumer - including that they can't share a database or any other common resource.

In short, you must design the service provider and consumer with the assumption that they are running on separate machines. Then, having written them with distributed semantics, you can choose to collapse the physical deployment so they run in the same process on the same machine.
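The granularity difference is the heart of it. Here is a hedged Python sketch (the customer-update operation and its field names are invented for illustration) contrasting a chatty, in-process-style interface with the coarse-grained, message-based style distributed semantics demand:

```python
# Chatty, fine-grained interface: fine in-process, disastrous if each
# call crosses a network -- three round trips for one logical operation.
class ChattyCustomerService:
    def set_name(self, customer_id, name): ...
    def set_email(self, customer_id, email): ...
    def save(self, customer_id): ...

# Coarse-grained, message-based interface: one self-contained message,
# one round trip, and no shared state between consumer and provider.
def update_customer(message):
    # The provider never trusts the consumer: it revalidates everything.
    required = {"customer_id", "name", "email"}
    if not required <= message.keys():
        return {"status": "rejected", "reason": "incomplete message"}
    # ... persist here, then describe the outcome in the reply message ...
    return {"status": "ok", "customer_id": message["customer_id"]}
```

Written this way, the provider and consumer can be collapsed into one process today and pulled apart onto separate machines later without redesign.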

Failure to view SOA as a distributed architecture is a big mistake. In SOA there’s the absolute implication that the provider and consumer may be distributed. Philosophically we are designing for a distributed scenario. The fact that we might not be distributed in some cases is immaterial. SOA means you always design using distribution semantics.

Thursday, October 21, 2004 12:48:58 PM (Central Standard Time, UTC-06:00)  #    Disclaimer

I’ve been involved in yet another discussion about whether people should use Remoting (vs DCOM/Enterprise Services or ASMX/WSE), and I thought I’d share some more of my thoughts in this regard.


One key presupposition in this discussion is often that we’re talking about communication across service boundaries. The train of thought goes that Microsoft (at least the Indigo team) seems to indicate that the only cross-network communication we’ll ever do in the future is across service boundaries. In short, n-tier is dead and will never happen. Anywhere n-tier would exist should be treated as crossing a service (and thus trust) boundary.


I consider that assumption to be fundamentally flawed.


Just because the Indigo team sees a future entirely composed of services doesn't mean that is a correct answer. When MSMQ first came out the MSMQ team was telling everyone that all communication should be done through MSMQ. Filter through the absurd bias of the product teams and apply some basic judgment and it is clear that not all cross-network communication will be service-oriented.


The idea of crossing trust boundaries between parts of my application incurs a terribly high cost in terms of duplicate logic and overhead in converting data into/out of messages. The complexity and overhead is tremendous and makes no sense for communication between what would have been trusted layers in my application…


In a world where service-orientation exists, but is not the only story, there's a big difference between creating a n-layer/n-tier application and creating a set of applications that work together via services. They are totally different things. 


I totally agree that neither Remoting nor DCOM/ES are appropriate across service boundaries. Those technologies are not appropriate between applications in general.


However, I obviously do not for a minute buy the hype that service-orientation will obviate the need for n-tier architectures. And between tiers of your application, ASMX/WSE is a very limited and poor choice. As an example, ASMX/WSE uses the highly limited XmlSerializer, which is virtually useless for transferring objects by value. Between tiers of your application either Remoting or DCOM/ES are good choices.


Application tiers are just application layers that happen to be separated across a network boundary. By definition they are tightly coupled, because they are just layers of the same application. There are many techniques you can use here, all of which tend to be tightly coupled because you are inside the same service and trust boundary even though you are crossing tiers.


I’ve personally got no objection to using DCOM/ES between tiers running on different servers (like web server to app server). The deployment and networking complexity incurred is relatively immaterial in such scenarios.


But if you are creating rich or smart client tiers that talk to an app server then the loss of no-touch deployment and the increased networking complexity incurred by DCOM/ES is an overwhelming negative. In most cases this complexity and cost is far higher than the (rather substantial) performance benefit. In this scenario Remoting is usually the best option.


Of course, if you can’t or won’t use Remoting in this case, you can always hack ASMX/WSE to simulate Remoting to get the same result (with a small performance penalty). You can wrap both ends of the process manually so you can use the BinaryFormatter and actually get pass-by-value semantics with ASMX/WSE. The data on the wire is a Base64 byte array, but there's no value in being "open" between layers of your own application anyway so it doesn't matter.


There's a performance hit due to the Base64 encoding, but it may be worth it to gain the benefits of WSE. And it may be worth it so you can say that you are conforming to recommendations against Remoting.
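The wrapping trick itself is simple. As a language-neutral illustration, here is a Python sketch using `pickle` in the role the BinaryFormatter plays in .NET (this is an analogy, not the .NET mechanism itself): serialize the object graph to bytes with full fidelity, then Base64-encode so the bytes survive a text-based envelope.

```python
import base64
import pickle

# Sender side: full-fidelity binary serialization (the BinaryFormatter's
# role), then Base64 so the payload can travel inside a text envelope.
def wrap(obj):
    return base64.b64encode(pickle.dumps(obj)).decode("ascii")

# Receiver side: decode and rehydrate -- true pass-by-value semantics,
# at the cost of a payload roughly 4/3 the size of the raw bytes.
def unwrap(text):
    return pickle.loads(base64.b64decode(text))

payload = {"id": 42, "tags": ["a", "b"]}
assert unwrap(wrap(payload)) == payload
```

The same trade-off applies as in the text above: the wire format is opaque to outsiders, which is harmless between layers of your own application but unacceptable across a real service boundary.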


For better or worse, Remoting does some things that neither DCOM/ES nor ASMX/WSE do well. You have the choice of either using Remoting (with its commensurate issues) or of hacking to make DCOM or ASMX/WSE do what Remoting can do. Neither choice is particularly appealing, and hopefully Indigo will provide a better solution overall. Only time will tell.

Thursday, October 21, 2004 7:34:31 AM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Thursday, October 14, 2004

I'm listening to Pat Helland (a serious thought leader in the service-oriented space) speak and it has me seriously thinking.


I think that one fundamental difference between service-oriented thinking and n-tier is this:


We've spent years trying to make distributed systems look and feel like they run in a single memory space on a single computer. But they DON'T. Service-oriented thinking is all about recognizing that we really are running across multiple systems. It is recognition that you can’t trivialize, abstract or simplify this basic fact. It is an acknowledgement that we just can’t make that abstraction work.


Things like distributed transactions are designed to make many computers look like one. Heck, CSLA .NET is all about making multiple computers look like a single hosting space for objects.


And these things only fit into a service-oriented setting inside a service, never between them. The whole point of service-orientation is that services are autonomous and thus don’t trust each other. Transactions (including work-arounds like optimistic concurrency) are high-trust operations and thus cannot exist between autonomous services – they can only exist inside them.


Personally I don’t think this rules out n-layer or n-tier applications, or n-layer and n-tier services. But I do think it reveals how the overall service-oriented model may be fundamentally different from n-tier (or OO for that matter).

Thursday, October 14, 2004 1:12:59 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Tuesday, September 28, 2004

There’s a lot of talk about Service-Oriented Architecture (SOA), and Service-Oriented Programming-related constructs like web services and SOAP. There’s even a bit of talk about Service-Oriented Analysis. But where’s the discussion of Service-Oriented Design?


I am quite convinced that service-orientation (SO) is no less of a force than object-orientation (OO). For over 30 years our understanding of OO has been evolving and slowly, ever so slowly, gaining mindshare in the industry. I think we can apply a (slightly modified) Grady Booch OO quote to SO as well:


“Let there be no doubt that service-oriented design is fundamentally different than traditional structured or object-oriented design approaches: it requires a different way of thinking about decomposition, and it produces software architectures that are largely outside the realm of the structured or object-oriented design culture.”


The bits in bold are my alterations/additions to this famous quote.


OO includes architecture, analysis, design and programming. Understanding and use of all these components is required to make effective use of OO in an enterprise. Precious few organizations ever pull this off. It requires a major culture and skill shift, which is one key reason why OO is still a popular sideline approach rather than the mainstream approach. (The other key reason is lack of tool support for OO design and programming, but that’s another topic.)


If SO is as big a thing as OO, and I suspect it is, then SO has decades ahead of it on the way to becoming a popular sideline approach for software development. Maybe in that time OO will have become mainstream, but I’m not holding my breath…


As an aside, I very much hope that OOA/D/P do become mainstream at some point. After all, we will need to create services, and a service is just an application with an XML interface (vs HTML or GUI). Creating services requires programming in a more conventional sense, and we’ll end up choosing between data-centric and OO just like today. Personally I really want to see more OO, because I believe it is far superior. But I’m pragmatic enough to know that it is far more likely that services will be created using the same data-centric (Recordset, DataSet, XML document) mindsets that are prevalent today in Windows and Web applications…


In my wilder dreams I envision a new programming language that is specifically geared around the creation of services. It might be neither data-centric nor OO, but rather would directly include language constructs to represent service-oriented concepts. I know Microsoft is grafting some of these concepts into VB and C# through the WSE and Indigo technologies, but that’s like prototyping for something that should come later. Perhaps S#? A language with natural expressions for the ideas inherent in SOA/D/P?


In any case, I am convinced that over the next many years we’ll need to develop SO architecture, analysis, design and programming as disciplines. Processes, techniques and tools will need to be created around all these areas. Perhaps we can do a better job with SO than we did with OO, but that remains to be seen.


But the focus of my question today is this: where is service-oriented design? There’s lots of talk and debate about service-oriented architecture. There’s lots of movement around the creation of technologies (like web services, SOAP and the WS-* stuff) to support service-oriented programming. There’s tiny bits and pieces of discussion about service-oriented analysis. But I have yet to see any serious discussion of service-oriented design.


Service-oriented design is critical. Doing service-oriented programming without service-oriented design is sure to lead to all sorts of unfortunate and unproductive results. Any modern developer knows that you need to translate your architecture into a tactical design for the architecture to be useful. You also know that programming without a design is a sure-fire way to end up with lots of rework and wasted effort.


Service-oriented design isn’t about the enterprise messaging standards, or about the transport protocols (HTTP, MSMQ, Biztalk, etc.). Nor is it about the application of WS-this or WS-that.


Service-oriented design is about the design of service-based applications. It is philosophy. It includes the design of services, service clients and other service-related software constructs. It must focus on concepts like


  • relationships
  • encapsulation
  • message structures
  • data flow
  • process flow
  • coupling
  • cohesion
  • decomposition
  • functional grouping
  • abstraction
  • the service interface
  • the service implementation
  • granularity
  • shared context
  • and more…


Click here for an interesting discussion of procedural vs OO design. I think we need to have a similar discussion around SO design. What is it that makes SO unique and different from procedural/structured and OO design? Intuitively I tend to think that it is unique, but at this point it hasn’t been fleshed out and so we can’t have intelligent or objective discussions about how or when it is different.


In the end, I think that SO will require a comparable culture and skill shift to OO. In other words, moving to SO from either the procedural/structured or OO worlds will require major cultural and philosophical changes, and substantial retraining and relearning.


Does this mean OOA/D/P are dead? Not on your life. Neither are procedural/structured design or programming. Most software today is built using procedural techniques, mixed with half-hearted OO code. As we move (over the next 30 years) toward SO, you can bet that procedural and OO will be along for the ride as well.

Tuesday, September 28, 2004 8:54:07 AM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Tuesday, September 21, 2004

I read a lot of science fiction. I travel a lot (which you can tell from my list of speaking engagements) and I find it very hard to work on a plane. You might think I’d read computer books, but I regard computer books as reference material, so I very rarely actually read a computer book, as much as skim it or use the index to find answers to specific questions… Besides, they are all so darn big, and it gets heavy carrying them around airports :)


So what’s the point of this post? I recently read an excellent book that I believe has a great deal of meaning for the computer industry. Sure, it is science fiction, but it also paints a picture of what computing could look like if today’s service-oriented concepts come to true fruition.


The book is Vernor Vinge’s A Deepness in the Sky, which is a prequel to A Fire Upon the Deep. Fire is an excellent novel, and was written first. I highly recommend it. However, Deepness is the book to which I’m referring when I talk about the links to the computer industry.


I doubt the links are accidental. Vernor Vinge is a retired professor of Computer Science in San Diego, and obviously has some pretty serious understanding of computer science and related issues.


I should point out that what follows could give away key points in the story. The story is awesome, so if you are worried about having it lose some of its impact then STOP READING NOW. Seriously. Take my word for it, go buy and read the book, then come back.


The technology used by the Qeng Ho (the protagonist race, pronounced “cheng ho”) in the book is based on two core concepts: aggregate systems and the wrapping/abstraction of older subsystems.


First is the idea that all the systems are aggregates of smaller systems, which are aggregates of smaller systems and so forth. This is service-orientation incarnate. If you know the protocol/interface to a system, you can incorporate it into another system, thus creating a new system that is an aggregate of both. Extending this into something as complex as a starship or a fleet of starships gives you the effect Vinge describes in the book.
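The aggregation idea maps directly onto code. Here's a purely illustrative sketch (in Python for brevity; every name is invented, not from the book or any real system) of how systems that share a protocol can be composed into new systems that speak that same protocol:

```python
class System:
    """Anything that speaks the shared protocol: handle(message) -> reply."""
    def handle(self, message):
        raise NotImplementedError

class NavSystem(System):
    def handle(self, message):
        return {"from": "nav", "answer": "course plotted to " + message["target"]}

class CommSystem(System):
    def handle(self, message):
        return {"from": "comm", "answer": "hailing " + message["target"]}

class Aggregate(System):
    """An aggregate is itself a System built from smaller Systems,
    so aggregates compose recursively into ships, fleets, and so on."""
    def __init__(self, **subsystems):
        self.subsystems = subsystems

    def handle(self, message):
        # Route on the first hop of the address; the aggregate knows
        # only the protocol, never the subsystem internals.
        head, *rest = message["to"]
        return self.subsystems[head].handle(dict(message, to=rest))

ship = Aggregate(nav=NavSystem(), comm=CommSystem())
fleet = Aggregate(flagship=ship)   # a fleet aggregates ships

reply = fleet.handle({"to": ["flagship", "nav"], "target": "home"})
print(reply["answer"])
```

The key point is that `Aggregate` implements the same `handle` contract it consumes, so composition nests arbitrarily deep – exactly the effect Vinge describes.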


And this is a compelling vision. I have been in a bit of a funk over the past few months. It seems like our industry is lost in the same kind of rabid fanaticism that dominates the US political scene, and that is very depressing. You are either Java or .NET. You are either VB or C#. That gets very old, very fast.


But Vinge has reminded me why I got into computers in the first place. My original vision – way back when – was to actually link two Asteroids arcade games together so multiple people could play together. It may sound corny, but I have been all about distributed systems since before such things existed.


And Vinge’s vision of a future where massively complex systems are constructed by enabling communication between autonomous distributed systems is exactly what gets me excited! It is like object-oriented design meets distributed architecture in a truly productive and awesome manner. If this really is the future of service-orientation then sign me up!


The second main theme is the idea that most systems wrap older subsystems. Rather than rewriting or enhancing a subsystem, it is just wrapped by a newer and more useful system. Often the new system is more abstract, leaving hidden functionality available to those who know how to tap directly into the older subsystem.


This second theme enables the first. Unless systems (and subsystems) are viewed as autonomous entities, it is impossible to envision a scenario where service-oriented architecture is a dominant idea. For better or worse, this includes dealing with the fact that you may not like the way a system works. You just deal with it, because it is autonomous.


To digress a bit, what we’ve been trying to do for decades now is get computers to model real life. We tried with procedures, then objects, then components. They all miss the boat, because the real world is full of unpredictable behavior that we all just deal with.


We deal with the jerks that use the shoulder as an illegal turn lane. We deal with the fact that our federal tax cuts just get redirected so we can pay local taxes and school levies. We deal with the fact that some lady is so busy on the cell phone that she rear-ends you at 40 mph when you are stopped at a red light. We deal with the fact that the networks run R rated movie commercials during primetime when our kids are watching TV.


The point is that people and all our real world systems and devices and machines are autonomous. All our interactions in the real world are done through a set of protocols and we just hope to high heaven that the other autonomous entities around us react appropriately.


Service-oriented architecture/design/programming is the same thing. It has the potential to be the most accurate model of real life yet. But this won’t be painless, because any software we write must be ready to deal with the autonomous nature of all the other entities in the virtual setting. Those other autonomous entities are likely to do the direct equivalent of using the shoulder as a turn lane – they’ll break the law for personal advantage and from time to time they’ll cause an accident and every now and then they’ll kill someone. This is true in the real world, and it is the future of software in the service-oriented universe.


To bring this back to Vinge’s Deepness, the various combatants in the book make full use of both the distributed autonomous nature of their computer systems, and of the fact that the systems are wrapping old – even ancient – subsystems with hidden features. It isn’t understanding a computer language that counts; it is understanding the ancient, lost features of deeply buried subsystems that gives you power.


We are given a vision of a future where systems and subsystems are swapped and tweaked and changed and the overall big system just keeps on working. At most, there’s adaptation to some new protocol or interface, but thanks to autonomy and abstraction the overall system keeps working.


Is it perfect? Absolutely not. In fact I never saw the big twist coming – nor did the protagonists or antagonists in the book. The overall system was too complex, too autonomous. There was no way they could have anticipated or monitored what eventually occurred. Very cool (if a bit scary). But I don’t want to spoil it entirely, so I’ll leave it there.


Go read the book – hopefully it will inspire some of the same excitement I got from reading it. It certainly gave me an appreciation for the cool (and scary) aspects of a truly service-oriented world-view!


Tuesday, September 21, 2004 7:59:04 AM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Thursday, September 9, 2004

I just love a good discussion, and thanks to Ted I’m now part of one :-)


A few days ago I posted another blog entry about service-oriented concepts, drawing a parallel to procedural design. Apparently and unsurprisingly, this parallel doesn’t sit too well with everyone out there.


I agree that SO has the potential to be something new and interesting. Maybe even on par with the fundamental shifts that event-driven programming and object-oriented design have been over the past couple of decades. However, I am far from convinced that it really is such a big, radical shift – at least in most people’s minds.


After all, at its core SO is just about enabling my code to talk to your code – a problem space that has been hashed and rehashed for more years than most of us have been in the industry.


Most people view SO as SOAP over HTTP, which basically makes it the newest RPC/RMI/DCOM technology. And if that’s how it is viewed, then it is a pretty poor replacement for any of them…


Some people view SO as a technology independent concept that can be used to describe business systems, technology systems – almost anything. Pat Helland makes this case in some of his presentations. I love the analogy he makes by using SO to describe an old-fashioned experience of coming in and ordering breakfast in a diner. That makes all the sense in the world to me, and there’s not a computer or XML fragment involved in the whole thing.


Yet other people view SO as a messaging technology. Basically this camp is all about SOAP, but isn’t tied to HTTP (even though there are precious few other transports that are available for SOAP today).


Obviously for SO to be a radical new thing, one of the latter two views must be adopted, since otherwise RPC has it covered.


By far the most interesting view (to me) is of SO as a modeling construct, not a technological one. If SO is a way of describing and decomposing problem spaces in a new way, then that is interesting. Not that there’s a whole lot of tool/product money to be had in this view – but there’s lots of consulting/speaking/writing money to be made. This could be the next RUP or something :-)


Less interesting, but more tangible, is the messaging view. Having worked with companies who have their primary infrastructure built on messaging (using MQ Series or MSMQ), I find this view of SO to be less than inspiring.


It has been done, people! And it works very nicely – but let’s face it, this view of SO is no more interesting or innovative than the RPC view. The idea of passing messages between autonomous applications is not new. And adding angle brackets around our data merely helps the network hardware vendors sell more equipment; it doesn’t fundamentally change life.


Again, I call back to FORTRAN and procedural programming. The whole idea behind procedural programming was to have a set of autonomous procedures that could be called in an appropriate order (orchestrated) in order to achieve the desired functionality. If we had not cheated – if we had actually passed all our data as parameters from procedure to procedure, it might have actually worked. But we didn’t (for a variety of good reasons), and so it collapsed under its own weight.


Maybe SO can avoid this fate, since its distributed nature makes cheating much more difficult. But even so, the design concepts developed 20 years ago for procedural design and programming should apply to SO today, since SO is merely procedural programming with a wire between the procedures.
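To make the "cheating" point concrete, here's a toy sketch (Python, with invented names) contrasting a procedure that reaches into shared state with one that passes all of its data as parameters – the latter being exactly the shape of a message-based service call:

```python
# The way we cheated: procedures reach into shared state.
balance = 100

def withdraw_shared(amount):
    global balance
    balance -= amount            # hidden coupling through a global
    return balance

print(withdraw_shared(30))       # works, but only via the hidden global

# The disciplined procedural style: ALL data travels as parameters
# and return values - one message in, one message out. Put a wire
# between caller and procedure and this is a service call.
def withdraw(account, amount):
    return dict(account, balance=account["balance"] - amount)

account = {"balance": 100}
account = withdraw(account, 30)
print(account["balance"])
```

The second style is what we mostly didn't do 20 years ago; the network boundary in SO makes the first style much harder to fall back on.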


So in short, other than SO as an analysis tool, I continue to seriously struggle with how it is a transformative technology.


Either SO is a new RPC technology, enabling cross-network component access using XML, or SO is a new messaging technology, enabling the same autonomous communication we have with queuing technologies – but with the dubious advantage of angle brackets. Neither is overly compelling in and of itself.


And yet intuitively SO feels like a bigger thing. Maybe it is the vendor hype: IBM with their services and Microsoft with their products – everyone wants to turn the S in SOA into a $.


But there is this concept of emergent behavior, which is something I’ve put a fair amount of energy into. Emergent behavior is the idea that simple elements – even rehashed technology concepts – can combine in unexpected ways such that new and interesting behaviors emerge.


Maybe, just maybe, SO will turn out to be emergent by combining the concepts of RPC, messaging and angle brackets into something new and revolutionary. Then again, maybe it is a passing fad, and SO is really just another acronym for EDI or EAI.

Thursday, September 9, 2004 8:42:46 AM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Wednesday, July 7, 2004

I rarely read magazines or articles - I just skim them to see if there's anything I actually do want to invest the time to read. This is useful, because it means I can get the gist of a lot of content and only dive into the bits that I find actually interesting. It also means that I can (and do) read a lot of Java related stuff as well as .NET related material.

If you've followed my blog or my books, you know that I have a keen interest in distributed object-oriented systems - which naturally has spilled over into this whole SOA/service fad that's going on at the moment. So I do tend to actually read stuff dealing with these topic areas in both the .NET and Java space.

Interestingly enough, there's no difference between the two spaces. Both the .NET and Java communities are entirely confused and both have lots of vendors trying to convince us that their definitions are correct, so we'll buy the “right” products to solve our theoretical problems. What a mess.

Ted Neward has been engaged in some interesting discussions about the message-based or object-based nature of services. Now Daniel F. Savarese has joined the fray as well. And all this in the Java space, where they claim to be more mature and advanced than us lowly .NET guys... I guess not, since their discussions are identical to those happening in the .NET space.

I think the key point here is that the distributed OO and SOA issues and confusion totally transcend any technical differences between .NET and Java. What I find unfortunate is that most of the discussions on these topics are occurring within artificial technology silos. Too few discussions occur across the .NET/Java boundary.

When it comes to SOA and distributed object-oriented systems (which are two different things!), every single concept, issue, problem and solution that applies to .NET applies equally to Java and vice versa. Sure, the vendor implementations vary, but that's not where the problem (or the solution) is found. The problems and solutions are more abstract and are common across platforms.

What's needed is an intelligent, honest and productive discussion of these issues that is inclusive of people from all the key technology backgrounds. I imagine that will happen about the same time that we have any sort of intelligent, honest or productive discussion between Republicans and Democrats in the US government...


Wednesday, July 7, 2004 11:52:56 AM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Friday, June 11, 2004

I recently had a reader of my business objects book ask about combining CSLA .NET and SOA concepts into a single application.


Certainly there are very valid ways of implementing services and client applications using CSLA such that the result fits nicely into the SOA world. For instance, in Chapter 10 I specifically demonstrate how to create services based on CSLA-style business objects, and in some of my VS Live presentations I discuss how consumption of SOA-style services fits directly into the CSLA architecture within the DataPortal_xyz methods (instead of using ADO.NET).


But the reader put forth the following, which I wanted to discuss in more detail. The text in italics is the reader’s view of Microsoft’s “standard” SOA architecture. My response follows in normal text:


1. Here is the simplified MS’s “standard” layering,


(a) UI Components

(b) UI Process Components [UI never talk to BSI/BC directly, must via UIP]


(c) Service Interfaces

(d) Business Components [UIP never talk to D directly, must via BC]

(e) Business Entities


(f) Data Access Logic Components


I’m aware of its “transaction script” and “anemic domain model” tendency, but it is perhaps easier for integration and using tools.


To be clear, I think this portrayal of ‘SOA’ is misleading and dangerous. SOA describes the interactions between separate applications, while this would lead one into thinking that SOA is used inside an application between tiers. I believe that is entirely wrong – and if you pay attention to what noted experts like Pat Helland are saying, you’ll see that this approach is invalid.


SOA describes a loosely-coupled, message-based set of interactions between applications or services. But in this context, the word ‘service’ is synonymous with ‘application’, since services must stand alone and be entirely self-sufficient.


Under no circumstance can you view a ‘tier’ as a ‘service’ in the context of SOA. A tier is part of an application, and implements part of the application’s overall functionality. Tiers are not designed to stand alone or be used by arbitrary clients – they are designed for use by the rest of their application. I discussed this earlier in my post on trust boundaries, and how services inherently create new trust boundaries by their very nature.


So in my view you are describing at least two separate applications. You have an application with a GUI or web UI (a and b or ab), and an application with an XML interface (c, d, e and f or cdef).


To fit into the SOA model, each of these applications (ab and cdef) is separate and must stand alone. They are not tiers in a single application, but rather are two separate applications that interact. The XML interface from cdef must be designed to service not only the ab application, but any other applications that need its functionality. That’s the whole point of an SOA-based model – to make this type of functionality broadly available and reusable.


Likewise, application ab should be viewed as a full-blown application. The design of ab shouldn’t eschew business logic – especially validation, but likely also some calculations or other data manipulation. After all, it is the job of the designer of ab to provide the user with a satisfactory experience based on their requirements – which is similar to, but not the same as, the set of requirements for cdef.


Yes, this does mean that you’ll have duplicate logic in ab and cdef. That’s one of the side-effects of SOA. If you don’t like it, don’t use SOA.


But there’s nothing to stop you from using CSLA .NET as a model to create either ab or cdef or both.


To create ab with CSLA .NET, you’d design a set of business objects (with as little or much logic as you felt was required). Since ab doesn’t have a data store of its own, but rather uses cdef as a data store, the DataPortal_xyz methods in the business objects would call the services exposed by cdef to retrieve and update data as needed.
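As a rough illustration of that shape (the real DataPortal_xyz methods are VB.NET/C#; this is just a language-neutral Python sketch, and every name in it is invented), the business object's fetch logic simply treats the cdef service as its data store:

```python
class CustomerServiceClient:
    """Stand-in for the cdef application's XML interface. In reality
    this would be a SOAP/HTTP proxy, not an in-process object."""
    def get_customer(self, customer_id):
        return {"id": customer_id, "name": "Acme Corp"}

class Customer:
    """A business object in the ab application, sketched CSLA-style."""
    def __init__(self, service):
        self._service = service
        self.id = None
        self.name = None

    def data_portal_fetch(self, customer_id):
        # Plays the role of DataPortal_Fetch: instead of ADO.NET
        # talking to a database, the 'data store' is the cdef service.
        data = self._service.get_customer(customer_id)
        self.id = data["id"]
        self.name = data["name"]
        return self

cust = Customer(CustomerServiceClient()).data_portal_fetch(42)
print(cust.name)
```

The rest of the object – its validation, rules and UI-facing behavior – is unchanged; only the persistence plumbing knows that the "database" is actually another application.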


To create cdef with CSLA .NET, you’d follow the basic concepts I discuss in Chapter 10 in my VB.NET and C# Business Objects books. The basic concept is that you create a set of XML web services as an interface that provides the outside world with data and/or functionality. The functionality and data is provided from a set of CSLA .NET objects, which are used by the web services themselves.


Note that the CSLA-style business objects are not directly exposed to the outside world – there is a formal interface definition for the web services which is separate from the CSLA-style business objects themselves. This provides separation of interface (the web services) from implementation (the CSLA-style business objects).
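Sketched out (again in Python with invented names – the book's actual examples are VB.NET and C#), the idea is that the service function owns the published message shape and maps it onto the business object, which never crosses the wire:

```python
class Order:
    """Internal CSLA-style business object - never exposed directly."""
    def __init__(self, order_id):
        self.order_id = order_id
        self.total = 99.50
        self._audit_notes = "internal only"   # stays behind the boundary

def get_order_service(request):
    """The formal service interface. Its message shape is a published
    contract, deliberately separate from the object's shape."""
    order = Order(request["orderId"])
    # Map the implementation onto the published message structure.
    return {"orderId": order.order_id, "orderTotal": order.total}

reply = get_order_service({"orderId": 7})
print(reply)
```

Because the contract and the object are separate artifacts, the object can be refactored freely without breaking external callers – that's the separation of interface from implementation.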


Friday, June 11, 2004 6:25:44 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Saturday, May 29, 2004

This week at Tech Ed in San Diego I got an opportunity to talk to David Chappell and trade some thoughts about SOA, Biztalk and Web services. I always enjoy getting an opportunity to talk to David, as he is an incredibly intelligent and thoughtful person. Better still, he always brings a fresh perspective to topics that I appreciate greatly.


In this case we got to talking about his Biztalk presentation, which he delivered at Tech Ed. Unfortunately I missed the talk due to other commitments, but I did get some of the salient bits from our conversation.


David put forth that the SOA world is divided into three things: services, objects and data. Services interact with each other following SOA principles of loose coupling and message-based communication. Services are typically composed of objects. Objects interact with data – typically stored in databases.


All of this makes sense to me, and meshes well with my view of how services fit into the world at large. Unfortunately we had to cut the conversation short, as I had a Cabana session to deliver with Ted Neward, Chris Kinsman and Keith Pleas. Interestingly enough, Ted and I got into some great discussions with the attendees of the session about SOA, the role of objects and where (or if) object-relational mapping has any value.


But back to my SOA conversation with David. David’s terminology scheme is nice, because it avoids the use of the word ‘application’. Trying to apply this word to an SOA world is problematic because it becomes rapidly overloaded. Let’s explore that a bit based on my observations regarding the use of the word ‘application’.


Applications expose services, thus implying that applications are “service hosts”, or that services are a type of interface to an application. This makes sense to most people.


However, we are now talking about creating SOA applications. These are applications that exist only because they call services and aggregate data and functionality from those services. Such applications may or may not host services in and of themselves.


If that wasn’t enough, there’s the idea that tiers within an application should expose services for use by other tiers in the application. It is often postulated that these tier-level services should be public for use by other applications as well as the ‘hosting’ application.


So now things are very complex. An application is a service. An application hosts services. An application is composed of services. An application can contain services that may be exposed to other parts of itself and to others. Oy!


Thus, the idea of avoiding the word ‘application’ strikes me as being a Good Thing™.

The use of the trademark symbol for this phrase flows, as far as I know, from J. Michael Straczynski (jms). It may be a bit of Babylon 5 insider trivia, but every time I use the phrase, the ™ goes through my head...

Later in the conference, David and I met up again and talked further. In the meantime I’d been thinking that something was missing from David’s model. Specifically, the people. Not only users, but other people who might interact with the overall SOA system at large. David put forth that there are a couple of other things in the mix – specifically user interfaces and workflow processes – and that this is where the people fit in.


However, I think there’s a bit more to it than this. I think that for an SOA model to be complete, the people need to truly fit into the model itself. In other words, they must become active participants in the model in the same way that services, objects and data are active participants. Rather than saying that a user interface is another element of the model, I propose that the human become another element of the model.


As an aside, I realize that there’s something a bit odd or even scary about treating human beings as just another part of a system model, but I think there’s some value in this exercise. This is no different than what business process analysts do when they are looking at manufacturing processes. Humans are put at the same level as machines that are part of the process and are assigned run rates, costs, overhead and so forth. From that perspective, applying this same view to software seems quite valid to me.


So, the question then becomes whether a human is a new entity in the model, or is a sub-type of service, object or data. The way to solve this is to see if a human can fit the role of one of the three pre-existing model concepts.


My first instinct was to view a human as a service. After all, services interact with other services, and the human interacts with our system and vice versa. However, services interact using loosely coupled, message-based communication. Typically our systems interact with users through a Windows or Web interface. This is not message-based, but rather is quite tightly coupled. We give the user specific, discrete bits of data and we get back specific, discrete bits of data.


Certainly there are examples of message-based interaction with humans. For instance, some systems send humans an email or a fax, and the human later responds in some fashion. Even in this case, however, the human’s response is typically tightly coupled rather than message-based.


Ultimately, then, I don’t think we can view a human as a type of service due to the violation of SOA communication requirements.


How about having a human be an object? Other than the obvious puns about objectification of humans (women, men or otherwise), this falls apart pretty quickly. Objects live inside of services and interact with other objects within that service. While it is certainly true that objects use tightly coupled communication, and so do humans, I think it is hard to argue that a human is contained within a given service.


Only one left. Are humans data? Data is used by objects within a service. More generally, data is an external resource that our system uses by calling a service, which uses its objects to interact with this external resource. Typically this is done by calling stored procedures, where we send the resource a set of discrete data or parameters, and the resource returns a set of discrete data.


This sounds terribly similar to how our system interacts with humans. If we turn the idea of a user interface on its head, we could view our system as the actor, and the human as a resource. In such a case, when we display data on a screen, we are basically providing discrete data parameters to a resource (the human) in much the same way we provide parameters to a database via a stored procedure call. The human then responds to our system by providing a set of results in the form of discrete data. This could easily be viewed as equivalent to the results of a stored procedure call.


Following this train of thought, a human really can be viewed as a resource very much like a data source. Any user interfaces we implement are really nothing more than a fancy data access layer within a service. A bit more complex than ADO.NET perhaps, but still the same basic concept. Our system needs to retrieve or alter data and we’re calling out to an external resource to get it.
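To sketch that inversion (Python, invented names, with a hard-coded response standing in for real user input so it runs standalone), the database and the human can implement the very same 'resource' protocol:

```python
class Resource:
    """Anything the system hands discrete parameters to and gets
    discrete data back from."""
    def request(self, params):
        raise NotImplementedError

class StoredProcResource(Resource):
    def request(self, params):
        # Imagine a stored procedure call behind this.
        return {"approved": params["amount"] < 1000}

class HumanResource(Resource):
    """The user interface as a data access layer: the display is the
    outbound parameter set, the user's input is the returned data."""
    def request(self, params):
        print("Please approve amount:", params["amount"])
        # Hard-coded so the sketch runs without a real UI.
        return {"approved": True}

def approve(resource, amount):
    # The system neither knows nor cares which kind of resource answers.
    return resource.request({"amount": amount})["approved"]

print(approve(StoredProcResource(), 500))
```

From the system's point of view the two resources are interchangeable – which is exactly the claim that a UI is just a fancy data access layer.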


I put this idea forward to David, who thought it sounded interesting, but didn’t see where it helped anything. I can’t say that I’m convinced that it helps anything either, but intuitively I think there’s value here.


I think the value may lie in how we describe our overall system. And this is important, because SOA is all about the overall systems in our organizations.


Nothing makes this more evident than Biztalk 2004 and its orchestration engine. This tool is really all about orchestrating services together into some meaningful process to get work done. However, people are part of almost any real business process, which implies that somewhere in most Biztalk diagrams there should be people. But there aren’t. Maybe that’s because people aren’t services, so Biztalk can’t interact with them directly. Instead, Biztalk needs to interact with services that in turn interact with the people as a resource – asking for them to provide data, or to perform some external action and report the result (which is again, just data).


Then again, maybe my thought processes are just totally randomized after being on the road for four weeks straight and this last week being Tech Ed, which is just a wee bit busy (day and night)… I guess only time will tell. When we start seeing humans show up in our system diagrams we’ll get a better idea how and where we, as a species, fit into this new SOA world order.

Saturday, May 29, 2004 8:53:54 PM (Central Standard Time, UTC-06:00)  #    Disclaimer
 Thursday, April 8, 2004

Totally unrelated to my previous blog entry, I received an email asking me about the ‘future of remoting’ as it relates to web services and SOA.

It seems that the sender of the email got some advice from Microsoft indicating that the default choice for all network communication should be web services, and that instead of using conventional tiers in applications, that we should all be using SOA concepts instead. Failing that, we should use Enterprise Services (and thus presumably DCOM (shudder!)), and should only use remoting as a last resort.

Not surprisingly, I disagree. My answer follows…

Frankly, I flat out disagree with what you've been told. Any advice that starts with "default choices" is silly. There is no such thing - our environments are too complex for such simplistic views on architecture.

Unless you are willing to rearchitect your system to be a network of linked _applications_ rather than a set of tiers, you have no business applying SOA concepts. Applying SOA is a big choice, because there is a very high cost to its application (in terms of complexity and performance).

SOA is very, very cool, and is a very good thing when appropriately applied. Attempting to change classic client/server scenarios into SOA scenarios is typically the wrong approach.

While SOA is a much bigger thing than web services, it turns out that web services are mostly only useful for SOA models. This is because they employ an open standard, so exposing your 'middle tier' as a set of web services really means you have exposed the 'middle tier' to any app on your network. This forces the 'middle tier' to become an application (because you have created a trust boundary that didn't previously exist). As soon as you employ web services you have created a separate application, not a tier. This is a HUGE architectural step, and basically forces you to employ SOA concepts - even if you would prefer to use simpler and faster client/server concepts.

Remoting offers superior features and performance when compared to web services. It is the most appropriate client/server technology available in .NET today. If you aren't crossing trust boundaries (and don't want to artificially impose them with web services), then remoting is the correct technology to use.

Indigo supports the _concepts_ of remoting (namely the ability to pass .NET types with full fidelity, and the ability to have connection-based objects like remoting's activated objects). Indigo has to support these concepts, because it is supposed to be the future of both the open web service technology and the higher performance client/server technology for .NET.

The point I'm making is that using remoting today is good if you are staying within trust boundaries. It doesn't preclude moving to Indigo in the future, because Indigo will also provide solutions for communication within trust boundaries, with full .NET type fidelity (and other cool features).

Web services force the use of SOA, even if it isn't appropriate. They should be approached with caution and forethought. Specifically, you should give serious thought to where you want or need to have trust boundaries in your enterprise and in your application design. Trust boundaries have a high cost, so inserting them between what should have been tiers of an application is typically a very, very bad idea.

On the other hand, using remoting between applications is an equally bad idea. In such cases you should always look at SOA as a philosophy, and potentially web services as a technical solution (though frankly, today the better SOA technology is often queuing (MQ Series or MSMQ), rather than web services).


Thursday, April 8, 2004 7:13:51 PM (Central Standard Time, UTC-06:00)  #    Disclaimer

A number of people have asked me about my thoughts on SOA (service-oriented architecture). I have a great deal of interest in SOA, and a number of fairly serious thoughts and concerns about it. But my first response is always this.


The S in SOA is actually a dollar sign ($OA).


It is a dollar sign because SOA is going to cost a lot of people a lot of money, and it will make a lot of other people a lot of money.


It will make money for product vendors, as they slap the SOA moniker on all sorts of tools and products around web service creation, consumption and management. And around things like XML translators and transformers, and queue bridging software and transaction monitors and so forth. Nothing particularly new, but with the SOA label, these relatively old concepts become fresh and worthy of high price tags.


It will make money for service vendors (consultants) in a couple ways.


The first are the consultants who convince clients that SOA is the way to go – and that it is big and scary and that the only way to make it work is to pay lots of consulting dollars. SOA, in this world-view, involves business process reengineering, rip-and-replace of applications, massive amounts of new infrastructure (planning and implementation) and so forth.


The second are the consultants who come in to clean up after the first group makes a mess of everything. This second group will also get to clean up after the many clients who didn’t spend big consulting dollars, but bought into the product vendors’ hype around SOA and their quasi-related products.


Who loses money? The clients who buy the hyped products, or fall for the huge consulting price tags due to FUD or the overall hype of SOA.


Since I work in consulting, SOA means dollars flowing toward me, so you’d think I’d be thrilled. But I am not. This is a big train wreck just waiting to derail what should be a powerful and awesome architectural concept (namely SOA).


So, in a (most likely vain) effort to head this off, here are some key thoughts.


  1. SOA is not about products, or even about web services. It is an architectural philosophy and concept that should drive enterprise and application architecture.

  2. SOA is not useful INSIDE applications. It is only useful BETWEEN applications.

    If you think you can replace your TIERS with SOA concepts, you are already in deep trouble. Unless you turn your tiers into full-blown applications, you have no business applying SOA between them.

  3. Another way to look at #2 is to discuss trust boundaries. Tiers within an application trust each other. That’s why they are tiers. Applications don’t trust each other. They can’t, since there’s no way to know if different applications implement the same rules the same ways.

    SOA is an ideal philosophy and concept for crossing trust boundaries. If you don’t cross a trust boundary, you don’t want SOA in there causing complexity. But if you do cross a trust boundary, you want SOA in there helping you to safely manage those trust and connectivity issues.

  4. To turn #3 around, if you are using SOA concepts, then you are crossing a trust boundary. Even if you didn’t plan for, or intend to insert a trust boundary at that location in your design, you just did.

    The side-effect of this is that anywhere that you use SOA concepts between bits of code, those bits of code must become applications, and must stop trusting each other. Each bit of code must implement its own set of business rules, processing, etc.

    Thus, sticking SOA between the UI and middle tiers of an application, means that you now have two applications – one that interacts with the user, and one that interacts with other applications (including, but obviously not limited to your UI application). Because you’ve now inserted a trust boundary, the UI app can’t trust the service app, nor can the service app trust the UI app. Both apps must implement all your validation, calculation, data manipulation and authorization code, because you can’t trust that either one has it right.

    More importantly, you can’t trust that FUTURE applications (UI or service) will have it right, and SOA will allow those other applications to tap into your current applications essentially without warning. This, fundamentally, is why inserting SOA concepts means you have inserted a trust boundary. If not today, then in the future you WILL have untrusted applications interacting with your current applications, and they’ll try to do things wrong. If you aren’t defensive from the very start, you’ll be in for a world of hurt.
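To make that duplication concrete, here is a minimal sketch (in Python, with invented names – not any particular product’s API) of the same business rule living on both sides of a trust boundary. The UI application validates for the user’s benefit; the service application re-validates every message, because it cannot know whether the caller – this UI, or some future application – got it right.

```python
# Hypothetical sketch: once a trust boundary exists, each application
# carries its own copy of the business rules, because neither side can
# trust the other to have implemented them correctly.

# --- UI application: validates before calling, mostly for good UX.
def ui_validate_order(customer_id: str, quantity: int) -> list:
    errors = []
    if not customer_id:
        errors.append("customer_id is required")
    if not 1 <= quantity <= 1000:
        errors.append("quantity must be between 1 and 1000")
    return errors

# --- Service application: re-validates every incoming message, since
# any current or FUTURE application may send it bad data.
def service_place_order(message: dict) -> dict:
    customer_id = message.get("customer_id", "")
    quantity = message.get("quantity", 0)
    errors = []
    if not customer_id:
        errors.append("customer_id is required")
    if not 1 <= quantity <= 1000:
        errors.append("quantity must be between 1 and 1000")
    if errors:
        return {"accepted": False, "errors": errors}
    return {"accepted": True}  # persist the order, etc.
```

The rules really are written twice, on purpose: deleting either copy silently re-introduces trust across the boundary.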


In the end, I think the trust boundary mindset is the most valuable guide for SOA. Using SOA is expensive (in terms of complexity and performance). It is not a price you should be willing to pay to have your UI talk to a middle tier. There are far simpler, cheaper and faster philosophies, concepts and technologies available for such interaction.


On the other hand, crossing trust boundaries is expensive and complex all by itself. This is because we have different applications interacting with each other. It will always be expensive. In this case, SOA offers a very nice way to look at this type of inter-application communication across trust boundaries, and can actually decrease the overall cost and complexity of the effort.


Thursday, April 8, 2004 9:27:35 AM (Central Standard Time, UTC-06:00)
 Wednesday, February 11, 2004

Applications or Components?


When you create a system with web services, are your web services components or are they standalone applications?


While this might seem like a semantic question, it is actually a very big deal, with a lot of implications in terms of your overall application design, as well as the design of the web services themselves.


I specifically argue that web services are the ‘user interface’ to a standalone application. Any other view of web services is flawed and will ultimately lead to a seriously flawed application environment.


Another, simpler, way to put this is that “a web service is an application, not a tier in an application”.


Perhaps the key reason this is such an important thing to realize is that almost anyone can use a web service. It isn’t like we get to pick which other apps will call our web service. Web services are discoverable, and someone, somewhere in our organization (or outside) will discover our web service and will start using it.


If we designed the web service as a tier in our application, we would design it to trust our higher level tiers. If the higher level tier did a bunch of calculations or validations, it would be silly to replicate that functionality in a lower level tier.


On the other hand, if the web service is a standalone application that merely has an XML interface (as compared to a GUI or HTML interface), it must implement its own calculations and validations. It would be silly for an application not to implement this type of functionality.
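The contrast can be sketched in a few lines (Python, with hypothetical names). A tier inside an application assumes its caller already validated; a web service entry point, being an application in its own right, applies its own rules before touching anything:

```python
# Hypothetical sketch: a tier trusts its caller; a standalone service
# application does not.

_store = {}  # stand-in for the data tier

# Tier-style function: assumes the higher tier already validated the
# discount, so it just persists -- tiers within one app trust each other.
def save_discount_tier(customer: str, discount: float) -> None:
    _store[customer] = discount

# Service-style entry point: an application boundary, so it performs
# its own validation before touching data.
def save_discount_service(request: dict) -> dict:
    customer = request.get("customer", "")
    discount = request.get("discount", 0.0)
    if not customer:
        return {"ok": False, "error": "customer is required"}
    if not 0.0 <= discount <= 0.5:
        return {"ok": False, "error": "discount out of range"}
    _store[customer] = discount
    return {"ok": True}
```

The tier version is perfectly reasonable inside one application; exposed as a web service, it would happily persist whatever any caller sends.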


So let’s get back to the component vs application thing.


Components are generally viewed as entities that live within applications. A component provides some functionality that our application can use. But components often have a high degree of trust in the application, and the application certainly has a high degree of trust in the component.


Think of Windows Forms controls, ActiveX controls, encryption libraries, data access (like ADO.NET) and so forth. These are all examples of components, and it is pretty clear that there’s a high level of trust between them and their hosting applications.


Viewing a web service as a component implies that it is part of an application, rather than actually being an application. This implies a higher level of trust than can be granted to something that is essentially a public API that can be invoked by any application in our organization (or beyond).


Applications, on the other hand, don’t trust each other. They trade data, they send messages, but they don’t trust. We’ve probably all been burned (multiple times) by having too much trust between applications, and we know that it is a bad thing.


By viewing a web service as an application, we can apply full application design principles to its creation. It has a presentation tier (the XML web service itself), a business tier (hopefully a set of business objects, or at least a business DLL), and a data tier (whatever that might be). It is self-contained, doing its own calculation, validation, authentication and so forth.
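As a rough illustration of that layering (Python, with invented names standing in for real infrastructure), the service’s XML endpoint is only a presentation tier; the rules live in a business tier that trusts no caller, and persistence sits below that:

```python
# Hypothetical sketch of a web service designed as a full application:
# presentation tier (the message endpoint), business tier, data tier.

# --- Presentation tier: translates messages to and from the outside
# world, and nothing more.
def handle_get_balance(request: dict) -> dict:
    try:
        balance = AccountService.get_balance(request["account_id"])
    except KeyError:
        return {"status": "error", "message": "account_id is required"}
    except ValueError as exc:
        return {"status": "error", "message": str(exc)}
    return {"status": "ok", "balance": balance}

# --- Business tier: owns validation and rules; trusts no caller.
class AccountService:
    @staticmethod
    def get_balance(account_id: str) -> float:
        if not account_id:
            raise ValueError("account_id must not be empty")
        return AccountData.load_balance(account_id)

# --- Data tier: persistence only.
class AccountData:
    _store = {"A1": 250.0}  # stand-in for a real database

    @staticmethod
    def load_balance(account_id: str) -> float:
        if account_id not in AccountData._store:
            raise ValueError("unknown account")
        return AccountData._store[account_id]
```

Swap the dict-based endpoint for an actual XML (or HTML, or GUI) front end and nothing below the presentation tier changes – which is the point of treating the service as an application.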


Other applications can use our web service as a data source, or however they choose. But our web service will be internally consistent and self-protecting.


This ultimately means that the web service will be reliable, useful across multiple applications and in the end will be a good investment in terms of development dollars.

Wednesday, February 11, 2004 4:00:44 PM (Central Standard Time, UTC-06:00)
The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.

© Copyright 2019, Marimer LLC
