Rockford Lhotka's Blog


 Thursday, June 24, 2004

I got this question via email. I get variations on this question a lot, so I thought I’d blog my answer.

 

Hope you don't mind my imposing on you for a second. I actually spoke to you very briefly after one of the sessions, and you seemed to concur with me that for my scenario - which is a typical 3-tier scenario, all residing on separate machines, both internal and external clients - hosting my business components on IIS using HTTP/binary was a sensible direction. I've recently had a conversation with someone suggesting that Enterprise Services was a far better platform to pursue. His main point in saying this is the increased productivity - leveraging all the services offered there (transactions, security, etc.). And not only that, but that ES is the best migration path for Indigo, which I am very interested in. This is contrary to what I have read in the past, which has always been that ES involves interop, meaning slower performance (which this person also disputes, by the way), and Don Box's explicit recommendation that Web Services were the best migration path. I just thought I'd ask your indulgence for a moment to get your impressions. Using DCOM is a little scary; we've had issues with it in the past with load balancing, etc. Just wondering if you think this is a crazy suggestion or not, and if not, whether you know of any good best-practices examples or any good resources.

Here are some thoughts from a previous blog post.

 

People recommend against remoting because the way you extend remoting (creating remoting sinks, custom formatters, etc.) will change with Indigo in a couple of years. If you aren't writing that low-level type of code, then remoting isn't a problem.

 

Indigo subsumes the functionality of both web services and remoting. Using either technology will get you to Indigo when it arrives - again, assuming you aren't writing low-level plug-ins like custom sinks.

 

Enterprise Services (ES) provides a declarative, attribute-based programming model, and that is a good thing. Microsoft continues to extend and enhance its attribute-based models, and people should adopt them where appropriate.
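
To see what that declarative model looks like in practice, here is a minimal sketch (the class and method names are mine, purely illustrative): a serviced component asks for a distributed transaction with attributes rather than code.

// Sketch of the declarative ES model: attributes, not code, request
// services such as distributed transactions (names are illustrative).
using System.EnterpriseServices;

[Transaction(TransactionOption.Required)]
public class OrderProcessor : ServicedComponent
{
  // AutoComplete votes to commit on a normal return, abort on an exception.
  [AutoComplete]
  public void PlaceOrder(int customerId, int productId)
  {
    // data access here runs inside the DTC transaction
  }
}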

 

That isn't to say, however, that all code should run in ES. That's extrapolating the concept beyond its breaking point. Just because ES is declarative doesn't make it the be-all and end-all for all programming everywhere.

 

It is true that ES by itself causes a huge performance hit - in theory. In reality, that perf hit is lost in the noise of other things within a typical app (such as network communication or any use of XML). However, specific services of ES may carry larger perf hits. Distributed transactions, for instance, have a very large perf hit - one that is essentially independent of any interop issues lurking in ES. That perf hit is simply due to the high cost of two-phase commit transactions. Here's some performance info from MSDN supporting these statements.

 

The short answer is to use the right technology for the right thing.

 

  1. If you need to interact between tiers that are inside your application but across the network, then use remoting. Just avoid creating custom sinks or formatters (which most people don't do, so this is typically a non-issue). A configuration sketch follows below.
  2. If you need to communicate between applications (even .NET apps), then use web services. Note this is not between tiers, but between applications – as in SOA.
  3. If you need ES, then use it. This article may help you decide whether you need any ES features in your app. The thing is, if you do use ES, your client still needs to talk to the server-side objects. In most cases remoting is the simplest and fastest technology for this purpose. This article shows how to pull ES and remoting together.

Note that none of the above options use DCOM. The only case I can see for DCOM is where you want the channel-level security features it provides. However, WSE now provides those for web services, so even there I'm not sure DCOM is worth all the headaches. That leaves only the scenario where you need both stateful server-side objects and channel-level security, because DCOM is the only technology that really provides both features.
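
To make option 1 concrete, here is a minimal sketch of exposing and consuming a server-side object via remoting over an HTTP channel with the binary formatter. Every name here (IOrderService, OrderService, the port and the URI) is illustrative, not from anything above.

// Minimal .NET remoting sketch - HTTP channel plus binary formatter.
using System;
using System.Collections;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Http;

// Shared contract, deployed to both client and server.
public interface IOrderService
{
  decimal GetOrderTotal(int orderId);
}

// Server-side implementation.
public class OrderService : MarshalByRefObject, IOrderService
{
  public decimal GetOrderTotal(int orderId)
  {
    return 42m; // real data access would go here
  }
}

public class ServerHost
{
  public static void Main()
  {
    // Register an HTTP channel that uses the binary formatter for speed.
    IDictionary props = new Hashtable();
    props["port"] = 8080;
    ChannelServices.RegisterChannel(new HttpChannel(
      props,
      new BinaryClientFormatterSinkProvider(),
      new BinaryServerFormatterSinkProvider()));

    // Expose the object as a single-call well-known type.
    RemotingConfiguration.RegisterWellKnownServiceType(
      typeof(OrderService), "OrderService.rem",
      WellKnownObjectMode.SingleCall);

    Console.WriteLine("Server running; press Enter to exit.");
    Console.ReadLine();
  }
}

// A client obtains a proxy like this:
//   IOrderService svc = (IOrderService)Activator.GetObject(
//       typeof(IOrderService), "http://myserver:8080/OrderService.rem");
//   decimal total = svc.GetOrderTotal(7);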

 

Thursday, June 24, 2004 4:38:44 PM (Central Standard Time, UTC-06:00)
 Monday, June 21, 2004

Last week I had the privilege to speak at a series of Microsoft Architect Councils alongside Roger Sessions and Jack Greenfield. The cities where we spoke were Birmingham, Tampa, Orlando and Fort Lauderdale.

 

Roger Sessions is quite well-known as an expert in architecture. His latest book discusses architecture using a ‘software fortress’ metaphor, where each application is a Fortress, accepting input through a Guard and using Envoys to communicate with other applications. I found the metaphor to be very interesting and accurate in many ways. It dovetails very nicely with my viewpoint on most things, including application design, service design and the role of SOA, web services and remoting.

 

In particular, I think it fits nicely with my view on distributed object-oriented application design. CSLA .NET enables the creation of the code that goes inside one of Roger’s Fortress constructs – be that an application or a service. While XML messages pass back and forth between fortresses, inside the fortress we need some way to actually create functionality and implement business features. This is where object-oriented programming comes into play.

 

And Roger’s model allows for the use of distributed technologies inside a fortress as well. So where appropriate, it is perfectly realistic to employ distributed object-oriented concepts in such an environment.


Jack Greenfield is a very well-known personality in the patterns space and in the software modeling space. He currently works for Microsoft as an Architect for Enterprise Frameworks and Tools (the Visual Studio Team System product), focusing specifically on modeling tools, including the SOA designer, class designer, etc. Jack comes from Rational and really provides some excellent insight into how both that world and the Microsoft world work. His current focus is on ‘industrializing’ the software industry – moving us to a place where we create software factories rather than creating software directly. Very interesting stuff!

 

I had a great time discussing various topics with Jack, including whether industrialization of software is actually a good thing. After all, the industrial revolution was not particularly good for the people working in factories. In the long run, it enabled our society to move beyond an industrial economy and then things got better for everyone, but during the industrial period life pretty much sucked for all but an elite few. I think it would be a shame if the industrial metaphor for software included those details…

 

Jack doesn’t think this will occur, and I hope he’s right. I’m not entirely convinced, but I guess we’ll see.

 

The whole thing smells a lot like CASE to me, and that ultimately didn’t work out. CASE cost corporations a lot of money, and made a lot of money for some vendors and consultants, but in the end it didn’t deliver. I asked Jack why he thought the current crop of tools would work where CASE failed. His answer is that we’ve matured as an industry. Specifically, our ability to describe complex systems through metadata has matured, as has our ability to generate code based on that metadata. I guess we’ll find out one way or the other over the next 5-10 years. If he’s right, then the stuff he’s working on will have a far greater impact on software development than web services, SOAP or SOA could ever dream of.

 

In any case, I do think that the creation of more powerful modeling tools, more closely linked to our code and actual deployment environment, is very compelling. Certainly the designers coming in Team System are version 1.0 products, and will only go so far, but there’s no doubt in my mind that this type of close-to-the-code designer has a strong future.

 

Of course Jack had to fend off critical questions about why the Class Designer (and the others) deviate from UML. I thought he had very good answers for each question, and I personally agree with the direction Microsoft is going. But then again, I am coming from a VB background – very pragmatic and focused on getting actual work done.

 

Pure UML may be able to accurately describe .NET code through various arcane notations, but the reality is that most developers will never understand anything that must be described as “arcane”… The more obvious and comprehensible a notation, the better it will serve those of us trying to just build decent software. The whole idea behind academic research is to come up with concepts that can be adapted to the real world and made practical and useful. I think UML fits the academic definition nicely, and the Class Designer is a good shot at adapting it to something that’s tangibly useful for the majority of .NET developers.

 

Monday, June 21, 2004 11:11:44 AM (Central Standard Time, UTC-06:00)
 Friday, June 11, 2004

I recently had a reader of my business objects book ask about combining CSLA .NET and SOA concepts into a single application.

 

Certainly there are very valid ways of implementing services and client applications using CSLA such that the result fits nicely into the SOA world. For instance, in Chapter 10 I specifically demonstrate how to create services based on CSLA-style business objects, and in some of my VS Live presentations I discuss how consumption of SOA-style services fits directly into the CSLA architecture within the DataPortal_xyz methods (instead of using ADO.NET).

 

But the reader put forth the following, which I wanted to discuss in more detail. The text in italics is the reader’s view of Microsoft’s “standard” SOA architecture. My response follows in normal text:

 

1. Here is the simplified MS “standard” layering:

(a) UI Components
(b) UI Process Components [the UI never talks to BSI/BC directly; it must go via UIP]

(c) Service Interfaces
(d) Business Components [UIP never talks to D directly; it must go via BC]
(e) Business Entities

(f) Data Access Logic Components

I’m aware of its “transaction script” and “anemic domain model” tendencies, but it is perhaps easier for integration and for using tools.

 

To be clear, I think this portrayal of ‘SOA’ is misleading and dangerous. SOA describes the interactions between separate applications, while this would lead one into thinking that SOA is used inside an application between tiers. I believe that is entirely wrong – and if you pay attention to what noted experts like Pat Helland are saying, you’ll see that this approach is invalid.

 

SOA describes a loosely-coupled, message-based set of interactions between applications or services. But in this context, the word ‘service’ is synonymous with ‘application’, since services must stand alone and be entirely self-sufficient.

 

Under no circumstance can you view a ‘tier’ as a ‘service’ in the context of SOA. A tier is part of an application, and implements part of the application’s overall functionality. Tiers are not designed to stand alone or be used by arbitrary clients – they are designed for use by the rest of their application. I discussed this earlier in my post on trust boundaries, and how services inherently create new trust boundaries by their very nature.

 

So in my view you are describing at least two separate applications. You have an application with a GUI or web UI (a and b or ab), and an application with an XML interface (c, d, e and f or cdef).

 

To fit into the SOA model, each of these applications (ab and cdef) is separate and must stand alone. They are not tiers in a single application, but rather are two separate applications that interact. The XML interface from cdef must be designed to service not only the ab application, but any other applications that need its functionality. That’s the whole point of an SOA-based model – to make this type of functionality broadly available and reusable.

 

Likewise, application ab should be viewed as a full-blown application. The design of ab shouldn’t eschew business logic – especially validation, but likely also some calculations or other data manipulation. After all, it is the job of the designer of ab to provide the user with a satisfactory experience based on their requirements – which are similar to, but not the same as, the requirements for cdef.

 

Yes, this does mean that you’ll have duplicate logic in ab and cdef. That’s one of the side-effects of SOA. If you don’t like it, don’t use SOA.

 

But there’s nothing to stop you from using CSLA .NET as a model to create either ab or cdef or both.

 

To create ab with CSLA .NET, you’d design a set of business objects (with as little or much logic as you felt was required). Since ab doesn’t have a data store of its own, but rather uses cdef as a data store, the DataPortal_xyz methods in the business objects would call the services exposed by cdef to retrieve and update data as needed.
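
As a sketch of that idea (CustomerService and CustomerData here are assumed WSDL-generated proxy types, and the DataPortal_Fetch signature is only a CSLA-style approximation), the fetch might look something like this:

// Sketch: a business object in ab fetching its data from cdef's web
// service rather than from a database via ADO.NET.
private int _id;
private string _name;

protected override void DataPortal_Fetch(object criteria)
{
  Criteria crit = (Criteria)criteria;

  // CustomerService is an assumed WSDL-generated web service proxy.
  using (CustomerService svc = new CustomerService())
  {
    CustomerData data = svc.GetCustomer(crit.Id);
    _id = data.Id;      // load object fields from the service's reply
    _name = data.Name;
  }
}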

 

To create cdef with CSLA .NET, you’d follow the approach I discuss in Chapter 10 of my VB.NET and C# Business Objects books: create a set of XML web services as an interface that provides the outside world with data and/or functionality. That functionality and data is provided by a set of CSLA .NET objects, which are used by the web services themselves.

 

Note that the CSLA-style business objects are not directly exposed to the outside world – there is a formal interface definition for the web services which is separate from the CSLA-style business objects themselves. This provides separation of interface (the web services) from implementation (the CSLA-style business objects).
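
Here is a sketch of that separation in an ASMX-style service (CustomerService, CustomerInfo and the Customer factory method are illustrative names, not from the book):

// Sketch: the web service exposes a formal contract type (CustomerInfo)
// while a CSLA-style Customer object does the actual work.
using System.Web.Services;

// The contract - this, not the business object, is what crosses the wire.
public class CustomerInfo
{
  public int Id;
  public string Name;
}

[WebService(Namespace = "http://example.com/customers")]
public class CustomerService : WebService
{
  [WebMethod]
  public CustomerInfo GetCustomer(int id)
  {
    Customer cust = Customer.GetCustomer(id); // CSLA-style factory method

    CustomerInfo info = new CustomerInfo();
    info.Id = cust.Id;      // copy data into the contract type so the
    info.Name = cust.Name;  // business object is never exposed directly
    return info;
  }
}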

 

Friday, June 11, 2004 6:25:44 PM (Central Standard Time, UTC-06:00)
 Tuesday, June 08, 2004

My recent .NET Rocks! interview was fun and covered a lot of ground. Among the technologies we discussed were the Whitehorse designers - the SOA designer and the Class Designer - that are slated for Visual Studio 2005. In fact, Brian Randell and I co-authored an article on these designers that will come out in the July issue of MSDN Magazine.

Since the Class Designer is an extension of the UML static class diagram concept, the conversation looped briefly through a discussion of the value and history of UML. At least one blog discussion was spurred by this brief interlude.

I use UML on a regular basis, primarily in Visio. I use it as a design tool and for documentation. I don't use it for code-generation at all, because I have yet to find a UML tool that generates code I consider to be acceptable. I think UML is a decent high level modeling tool for software, and I strongly recommend that anyone doing object-oriented design or programming use UML to design and document their system. Having a decent OO diagram is totally comparable to having an ER diagram for your database. You shouldn't be caught without either one!

At the same time, I think UML has become rather dated. There are concepts in both Java and .NET that just can't be expressed in UML diagrams in a reasonable manner. For instance, component-level scoping (Friend in VB, internal in C#) just isn't there. Nor is the concept of a property as opposed to a field. Nor can you really express events or delegates.
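
For example, every member of this small class (an illustrative sketch) is ordinary .NET, yet none of the marked concepts has a first-class representation in standard UML:

// Ordinary .NET constructs with no clean UML notation (illustrative).
using System;

internal class OrderProcessor            // component-level (assembly) scoping
{
  private string _status;                // a field...

  public string Status                   // ...versus a property wrapping it
  {
    get { return _status; }
  }

  public event EventHandler Completed;   // an event, backed by a delegate
}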

If you are going to even try to do code-gen from a diagram, the diagram must include all the concepts of your platform/language. Otherwise you'll never be able to generate the kind of code that a programmer would actually write. In other words, you can use UML to provide abstract concept diagrams for .NET programs, but you can't use UML to express the details because UML simply doesn't have what it takes.

Recognizing this, Microsoft has created the new Class Designer tool for Visual Studio 2005. It is very comparable to a static class diagram. It uses the same symbology and notation as UML. However, it extends UML to include the concepts of field vs. property, events, component-level scoping and more. In short, the Class Designer allows us to create UML-style static class diagrams that can express all the details of a VB or C# program in .NET.

If you are familiar with UML static class diagrams and with .NET, learning the Class Designer notation will take you all of 10 seconds or less. Seriously - it is just UML with a handful of concepts we've all been wishing for over the past several years.

But the real benefit is the real-time code synchronization. The Class Designer is in your project and is directly generated from your code. It is not generated from meta-data, but instead is generated from your VB or C# (or J#) code directly. This means that changes to the diagram are changes to your code. Changes to your code are changes to the diagram. The two are totally linked, because the source for the diagram is the code.

This is not reverse-engineering like we have today with Visio or other tools; the designer directly uses the VB/C# code as the 'meta-data' for the diagram.

(To be fair, there is an XML file for the diagram as well. This file contains layout information about how you've positioned the graphical elements of the diagram, but it doesn't include anything about your types, methods, properties, fields, etc. - that all comes from the code.)

My only reservation about the Class Designer is that I've mostly moved beyond doing such simple code-gen. Since coming out with my VB.NET and C# Business Objects books, I've really started to appreciate the power of code generators that generate complete business classes, including basic validation logic, data access logic and more. Readers of my books have created CodeSmith templates for my CSLA .NET framework, Kathleen Dollard wrote an XSL-based generator in her code-gen book and Magenic has a really powerful one we use when building software for our clients.

While the Class Designer is nice for simple situations, for enterprise development the reality is that you'll want to code-gen a lot more than the basic class skeleton...

Tuesday, June 08, 2004 9:16:04 AM (Central Standard Time, UTC-06:00)
 Sunday, June 06, 2004

I am very happy to report that I'll be giving a full-day workshop on distributed object-oriented application design in .NET this fall at VS Live Orlando. If you are interested in this topic in general, or CSLA .NET specifically, then here's an opportunity to get a full day brain-dump as part of a great overall conference!

Over the past year I've given some two-hour sessions on this topic as part of the main VS Live conference, and I've had several people come up after the session asking why it isn't a full-day workshop. After all, it is a very big topic to cover in just a couple of hours!

Fawcette agrees, and so has decided to do a pre-con for the Orlando conference in September on the topic.

While I'll certainly cover some basic concepts and ideas around distributed object-oriented systems, the fact is that I'll be focusing mostly on how these concepts shaped CSLA .NET and how CSLA .NET addresses many of the issues we face when designing and building such systems. I'll discuss serialization, remoting, n-level undo, why reflection is sometimes very useful, abstraction of business logic and encapsulation of business data and more. I'll also discuss the creation of Windows, Web and Web services interfaces to the business objects.

I'll also dive into some of the more advanced issues that have come up on the CSLA .NET email forum - including items such as concurrency, near real-time notification of simultaneous edits, centralization of business logic (the RuleManager functionality), object-relational mapping issues (including dealing with inheritance) and some thoughts on batch processing and reporting.

Finally, I also plan to spend a bit of time discussing what .NET 2.0 means to object-oriented programming. I'll discuss the (very cool) new data binding enhancements in Windows Forms, and the (not quite as cool) enhancements in Web Forms. I'll also talk about what Indigo is likely to mean for distributed object-oriented systems - including a discussion about why using remoting today is not at all a bad thing!

 

Sunday, June 06, 2004 8:08:00 PM (Central Standard Time, UTC-06:00)
 Wednesday, June 02, 2004

Most people (including me) don’t regularly Dispose() their Command objects when doing data access with ADO.NET. The Connection and DataReader objects have Close() methods, and people are very much in the habit (or should be) of ensuring that either Close() or Dispose() or both are called on Connection and DataReader objects.

 

But Command objects do have a Dispose() method even though they don’t have a Close() method. Should they be disposed?

 

I posed this question to some of the guys on the ADO.NET team at Microsoft. After a few emails bounced around I got an answer: “Sometimes it is important.”

 

It turns out that the reason Command objects have a Dispose method is because they inherit from Component, which implements IDisposable. The reason Command objects inherit from Component is so that they can be easily used in the IDE on designer surfaces.

 

However, it also turns out that some Command objects really do have unmanaged resources that need to be disposed, and some don’t. How do you know which do and which don’t? You need to ask the dev who wrote the code.

 

It turns out that SqlCommand has no unmanaged resources, which is why most of us have gotten away with this so far. However, OleDbCommand and OdbcCommand do have unmanaged resources and must be disposed to be safe. I don’t know about OracleCommand, as that didn’t come up in the email discussions.

 

Of course that’s not a practical answer, so the short answer to this whole thing is that you should always Dispose() your Command objects just to be safe.

 

So, follow this basic pattern in VB.NET 2002/2003 (pseudo-code):

 

Dim cn As New Connection("…")
cn.Open()
Try
  Dim cm As Command = cn.CreateCommand()
  Try
    Dim dr As DataReader = cm.ExecuteReader()
    Try
      ' do data reading here
    Finally
      dr.Close() ' and/or Dispose() – though Close() and Dispose() both work
    End Try
  Finally
    cm.Dispose()
  End Try
Finally
  cn.Close() ' and/or Dispose() – though Close() and Dispose() both work
End Try

 


And in C# 1.0 and 2.0 (pseudo-code):

 

using (Connection cn = new Connection("…"))
{
  cn.Open();
  using (Command cm = cn.CreateCommand())
  {
    using (DataReader dr = cm.ExecuteReader())
    {
      // do data reading here
    }
  }
}

 

And in VB 8 (VB.NET 2005) (pseudo-code):

 

Using cn As New Connection("…")
  cn.Open()
  Using cm As Command = cn.CreateCommand()
    Using dr As DataReader = cm.ExecuteReader()
      ' do data reading here
    End Using
  End Using
End Using


Wednesday, June 02, 2004 10:33:20 AM (Central Standard Time, UTC-06:00)