Rockford Lhotka

 Tuesday, November 27, 2007

Every now and again I blog about the fact that Magenic needs good developers (.NET, SQL, SharePoint, BizTalk, CSLA .NET) in San Francisco, Minneapolis, Chicago, Atlanta and Boston. This continues to be true, and if you are at all interested, read on!

My colleague, Brant Estes, put together a little web site to try to seduce you into applying for work at Magenic. Even more fun, he put together a broad-reaching tech quiz to illustrate the kind of knowledge we expect people to have before we hire them.

I'll be brutally honest and say that I scored 84%, which entitles me to keep working at Magenic (whew!). I figure that's not too bad, given that the quiz covers areas (like HTML) that I try to avoid like the plague (and no, I didn't cheat and use Google while taking the test :) ). Brant tells me that passing is 70%.

Whether you are interested in working at Magenic or not, the quiz is a fun challenge, so feel free to take a look (just give yourself around 10-15 minutes to go through it).

Tuesday, November 27, 2007 3:10:21 PM (Central Standard Time, UTC-06:00)
 Friday, November 16, 2007

Next week, on 21 November, I'm speaking at a user group in Barcelona. I'll be giving an overview of CSLA .NET 3.0, with a little glimpse of 3.5 if I have time.

The organizers have also arranged for the session to be broadcast as a Live Meeting. The link for registering for and attending the Live Meeting session is: http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032360496&Culture=es-ES

Friday, November 16, 2007 1:25:15 PM (Central Standard Time, UTC-06:00)
 Thursday, November 15, 2007

I'm busy getting my work and home life in order, as I head off to Barcelona with my oldest son for a week. I'm speaking at the Barcelona .NET user group on Wednesday, 21 November, so if you are in the area stop by and say hello! Otherwise, we'll be around the Barcelona area, enjoying Spain - I really love Spain!

Part of what I'm trying to get organized is the set of upcoming changes to CSLA .NET in version 3.5. Thematically, of course, the focus is on support for LINQ, primarily LINQ to Objects (queries against CSLA objects) and LINQ to SQL. As usual, however, I'm also tackling some items off the wish list, and making other changes that I think will make things better/easier/more productive - and some that are just fun for me :)

When talking about LINQ to Objects, there are a couple of things going on. First, Aaron Erickson, a fellow Magenicon, is incorporating his i4o concepts into CSLA .NET to give CSLA collections the ability to support indexed queries. Second, there is some basic plumbing that needs to happen for LINQ to play well with CSLA objects, most notably ensuring that a result (other than a projection) from a query against a CSLA collection is a live, updatable view of the original collection.

By default, LINQ just creates a new IEnumerable(Of T) result containing the selected items. That can be horribly unproductive, because a user might add a new item to, or remove an existing item from, that result - which would have no impact at all on the original list. Aaron has devised an approach by which a query against a BusinessListBase or ReadOnlyListBase results in a richer list that is still connected to the original - much like SortedBindingList and FilteredBindingList are today. This way, when an item is removed from the view, it is also removed from the real list.

Back to the i4o work: allowing indexing of properties on the child objects contained in a list can make for much faster queries. If you only run a single query against a list, it might not be worth building an index. But if you run multiple queries to get different views of a list (projections or not), having an index on one or more child object properties can make a very big performance difference!
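
To make that concrete, here's a hypothetical VB 9 sketch. The Order/LineItem types and the Amount property are illustrative, and the Indexable attribute is borrowed from i4o - none of this API was final at the time of writing:

' Query a CSLA collection (order.LineItems derives from
' BusinessListBase). With the planned support, the result stays
' connected to the original list - much like FilteredBindingList
' today - so removing an item from the result also removes it
' from order.LineItems.
Dim bigItems = From item In order.LineItems _
               Where item.Amount > 100

' In the i4o style, marking a child property as indexable lets
' repeated queries over the same list reuse an index:
'   <Indexable()> _
'   Public ReadOnly Property Amount() As Decimal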

When talking about LINQ to SQL, the focus is really on data access. LINQ to SQL, in the CSLA world-view, is merely a replacement for raw ADO.NET. The same is true for the ADO.NET Entity Framework. Architecturally, both of these technologies simply replace the use of a DataReader or a DataSet/DataTable like you might use today. Alternatively, you can say that they compete with existing tools like NHibernate or LLBLGen Pro - tools that people already use instead of ADO.NET.

In short, this means that your DataPortal_XYZ methods may make LINQ queries instead of using an ADO.NET SqlCommand. They may load the business object's fields from a resulting LINQ object rather than from a DataReader. In my testing thus far, I find that some of my DataPortal_Fetch() methods shrink by nearly 50% by using LINQ. That's pretty cool!
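
For illustration, here's a hedged sketch of such a DataPortal_Fetch(). The generated ProjectTrackerDataContext, its Customers table, the Criteria class and the field names are all assumptions:

Private Overloads Sub DataPortal_Fetch(ByVal criteria As Criteria)
  ' LINQ to SQL replaces the SqlCommand/DataReader plumbing;
  ' the generated context picks up its connection string from config
  Using ctx As New ProjectTrackerDataContext()
    Dim data = (From c In ctx.Customers _
                Where c.Id = criteria.Id _
                Select c).Single()
    ' Load the business object's fields from the resulting
    ' entity object rather than from a DataReader
    _id = data.Id
    _name = data.Name
  End Using
End Sub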

Of course, nothing is faster than using a raw DataReader. LINQ loads its objects using a DataReader too - so using LINQ does incur overhead that you don't suffer when using raw ADO.NET. However, many applications aren't performance-bound as much as they are maintenance-bound. That is to say, it is often more important to improve maintainability and reduce coding than it is to eke out that last bit of performance.

To that end, I'm working on enhancing and extending DataMapper to be more powerful and more efficient. Since NHibernate, LINQ to SQL and the ADO.NET Entity Framework all return data transfer objects or entity objects, it becomes important to be able to easily copy the data from those objects into the fields of your business objects. And that has to happen without incurring too much overhead. You can do the copies by hand with code that copies each DTO property into a field, which is fast, but is a lot of code to write (and debug, and test). If DataMapper can do all that in a single line of code that's awesome, but the trick is to avoid incurring too much overhead from reflection or other dynamic mapping schemes.
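
For a sense of the goal, today's Csla.Data.DataMapper can already do a reflection-based copy in one line; the dto and customer instances here are hypothetical:

' Copy every matching public property from the DTO/entity into
' the business object; "Id" is excluded here as an example of a
' property that shouldn't be overwritten
Csla.Data.DataMapper.Map(dto, customer, "Id")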

One cool side effect of this DataMapper work is that it will help not only with LINQ and EF, but also with XML services and anything else where data comes back in some sort of DTO/entity object construct.

There are also some issues with LINQ to SQL and its context. Unfortunately, LINQ to SQL's DataContext wasn't designed to support n-tier deployments, and doesn't work well with any n-tier model, including CSLA. This means that across tiers LINQ doesn't/can't keep track of things like whether an object is new or old, or is marked for deletion. On the upside, CSLA already tracks all of that. On the downside, it makes doing inserts/updates/deletes with LINQ to SQL more complex than if you used LINQ in the simpler 2-tier world for which it was designed. I hope to come up with some ways to bridge this gap a little, so as to preserve some of the simpler LINQ syntax for database updates, but I'm not overly confident...

I'm also doing a bunch of work to minimize and standardize the coding required to build a business object. Each release of CSLA has tended to standardize the code within a business object more and more. Sometimes this also shrinks the code, though not always. At the moment I'm incorporating some methods and techniques to shrink and standardize how properties are implemented, and how child objects (collections or single objects) are managed by the parent.

By borrowing some ideas from Microsoft's DependencyProperty concept, I've been able to shrink the typical property implementation by about 35% - from 14 lines to 9:

Private Shared NameProperty As PropertyInfo(Of String) = RegisterProperty(Of String, Customer)("Name")
Private _name As String = NameProperty.DefaultValue
Public Property Name() As String
  Get
    Return GetProperty(Of String)(NameProperty, _name)
  End Get
  Set(ByVal value As String)
    SetProperty(Of String)(NameProperty, _name, value)
  End Set
End Property

Unlike a DependencyProperty, this technique continues to use a strongly typed local field, which I think is a better approach (faster, though slightly less abstract).

Child objects are similar:

Private Shared LineItemsProperty As PropertyInfo(Of LineItems) = RegisterProperty(Of LineItems, Order)("LineItems")
Public ReadOnly Property LineItems() As LineItems
  Get
    If Not ChildExists(LineItemsProperty) Then
      SetChild(Of LineItems)(LineItemsProperty, LineItems.NewList())
    End If
    Return GetChild(Of LineItems)(LineItemsProperty)
  End Get
End Property

In this case the implementation is more similar to a DependencyProperty, in that BusinessBase really does contain and manage the child object. There's value to that, though, because it means BusinessBase now automatically handles IsValid and IsDirty, so you don't need to override them.

The next step is to make BusinessBase automatically handle PropertyChanged/ListChanged/CollectionChanged events from the child objects and echo those as a PropertyChanged event from the parent. Along with that, I'll also make sure that the child's Parent property is automatically set. This should eliminate virtually all the easy-to-forget code related to child objects - resulting in a lot less code and much more maintainable parent objects.

As I noted earlier, there are a lot of things on the wish list, and I'll hit some of those as time permits.

One item that I've already done in VB and need to port to C# is changing authorization to call a pluggable IsInRole() provider method, allowing people to alter the algorithm used to determine whether the current user is in a specific role. The default continues to do what it has always done by calling principal.IsInRole(), but for some advanced scenarios this extensibility point is very valuable.
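
A custom provider might look something like this sketch; the signature and the registration mechanism are my assumptions, since the final API hadn't shipped when this was written:

' Hypothetical role-check provider; CSLA would call this instead
' of principal.IsInRole() once it is registered (registration
' details omitted - they weren't final at the time)
Public Module CustomAuthorization
  Public Function IsInRole(ByVal principal As System.Security.Principal.IPrincipal, _
                           ByVal role As String) As Boolean
    ' A custom provider could consult a database, cache or
    ' directory service; this one mimics the default behavior
    Return principal.IsInRole(role)
  End Function
End Module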

Beyond that, I'll tackle additional wish list items as time and energy permit.

However, I need to have all major work on 3.5 done by the end of January 2008, because that's when I'll start working on Expert 2008 Business Objects, the third edition of the Business Objects book for .NET. The tentative plan is to have the book out around the end of June, which will be a tall order given that it will probably expand by 300 pages or so by the time I'm done covering .NET 3.0 and 3.5...

Now I must do some packing, and get into the mindset required to survive 11 hours in the air - they just don't make airplane seats for people who are 6'5" tall.

Thursday, November 15, 2007 9:34:59 PM (Central Standard Time, UTC-06:00)
 Tuesday, November 13, 2007

I always encourage people to go with 2-tier deployments if possible, because more tiers increase complexity and cost.

But it is important to recognize when the value of 3-tier outweighs that complexity/cost. The three primary motivators are:

  1. Security – using an app server allows you to shift the database credentials from clients to the server; a would-be hacker (or savvy end user) would then need to break into the app server itself to get those credentials.
  2. Network infrastructure limitations – sometimes a 3-tier solution can help offload processing from CPU-bound or memory-bound machines, or it can help minimize network traffic over slow or congested links. Direct database communication is a chatty dialog; using an app server lets a single blob of data cross the slow link while the chatty dialog occurs over a fast link, so 3-tier can be a net performance win when the client-to-server link is slow or congested.
  3. Scalability – using an app server provides for database connection pooling. The trick here is that if you don’t need connection pooling, adding an app server will actually harm performance, so this is only useful if database connections are taxing your database server. Given the state of modern database hardware/software, scalability isn’t a big deal for the vast majority of applications anymore.

The CSLA .NET data portal is of great value here, because it allows you to switch from 2-tier to 3-tier with a configuration change - no code changes required*. You can deploy in the cheaper and simpler 2-tier model to start with, and if one of these motivators later justifies the complexity of moving to 3-tier, you can make that move with relative ease.

* disclaimer: if you plan to make such a move, I strongly suggest testing in 3-tier mode during development. While the data portal makes this 2-tier to 3-tier switch seamless, it only works if the code in the business objects follows all the rules around serialization and layer separation. Testing in 3-tier mode helps ensure that none of those rules get accidentally broken during development.
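
For reference, the switch really is just configuration. A sketch using the CSLA .NET 3.0 WCF data portal channel (the server URL is illustrative):

<!-- 2-tier: omit these settings and the data portal runs locally -->
<!-- 3-tier: point the data portal at an app server -->
<appSettings>
  <add key="CslaDataPortalProxy"
       value="Csla.DataPortalClient.WcfProxy, Csla" />
  <add key="CslaDataPortalUrl"
       value="http://myserver/DataPortal/WcfPortal.svc" />
</appSettings>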

Tuesday, November 13, 2007 4:37:50 PM (Central Standard Time, UTC-06:00)

Magenic is producing a series of webcasts on various IT-related topics. One of them is CSLA .NET, and I'll be the speaker for that webcast. Click here for details on the date/time and how to register.

Tuesday, November 13, 2007 2:49:57 PM (Central Standard Time, UTC-06:00)

I was recently asked whether CSLA incorporates thread synchronization code to marshal calls or events back onto the UI thread when necessary. The short answer is no, but the question deserves a bit more discussion to understand why it isn't the business layer's job to handle such details.

The responsibility for cross-thread issues resides with the layer that started the multi-threading.

So if the business layer internally utilizes multi-threading, then that layer must abstract the concept from other layers.

But if the UI layer utilizes multi-threading (which is more common), then it is the UI layer's job to abstract that concept.

It is unrealistic to build a reusable business layer around one type of multi-threading model and expect it to work in other scenarios. Were you to use Windows Forms components for thread synchronization, you'd be out of luck in the ASP.NET world, for example.
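
For example, if a Windows Forms UI runs the data access on a background thread, the UI owns the marshaling. A minimal sketch (the Customer factory and the binding source are hypothetical):

' The UI started the background thread, so the UI marshals the
' result back onto the UI thread before touching data binding
Private Sub FetchButton_Click(ByVal sender As Object, ByVal e As System.EventArgs) _
  Handles FetchButton.Click
  Dim worker As New System.Threading.Thread(AddressOf DoFetch)
  worker.IsBackground = True
  worker.Start()
End Sub

Private Sub DoFetch()
  Dim cust As Customer = Customer.GetCustomer(42) ' hypothetical factory
  Me.Invoke(New Action(Of Customer)(AddressOf BindCustomer), cust)
End Sub

Private Sub BindCustomer(ByVal cust As Customer)
  Me.CustomerBindingSource.DataSource = cust
End Sub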

So CSLA does nothing in this regard, at the business object level anyway. Nor does a DataSet, or the entity objects from ADO.NET EF, etc. It isn't the business/entity layer's problem.

CSLA does do some multi-threading, specifically in the Csla.Wpf.CslaDataProvider control, because it supports the IsAsynchronous property. But even there, WPF data binding does the abstraction, so there are no threading issues in either the business or UI layer.
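
For completeness, here's roughly what that looks like in XAML, assuming the csla namespace is mapped to Csla.Wpf (the object type and factory method are illustrative):

<!-- Fetches on a background thread; data binding handles the
     marshaling back to the UI thread -->
<csla:CslaDataProvider x:Key="CustomerData"
                       ObjectType="{x:Type my:Customer}"
                       FactoryMethod="GetCustomer"
                       IsAsynchronous="True" />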

Tuesday, November 13, 2007 11:09:54 AM (Central Standard Time, UTC-06:00)

As long as I'm doing simple blog links, here's another tool worth looking at if you need to skin a SharePoint or other web site:

SharePoint Skinner Overview - eLumenotion Blog

Tuesday, November 13, 2007 10:53:57 AM (Central Standard Time, UTC-06:00)

I haven't tried this yet, but it looks like a very nice tool for WPF developers:

Woodstock for WPF - The Code Project - Windows Presentation Foundation

Tuesday, November 13, 2007 10:48:41 AM (Central Standard Time, UTC-06:00)
 Monday, November 5, 2007

I was recently asked how to get the JSON serializer used for AJAX programming to serialize a CSLA .NET business object.

From an architectural perspective, it is better to view an AJAX (or Silverlight) client as a totally separate application. That code runs in an untrusted and terribly hackable location: the user’s browser, where someone could have created a custom browser to do anything to your client-side code. My guess is that there’s a whole class of hacks coming that most people haven’t even thought about yet…

So you can make a strong argument that objects from the business layer should never be directly serialized to/from an untrusted client (code in the browser). Instead, you should treat this as a service boundary – a semantic trust boundary between your application on the server, and the untrusted application running in the browser.

What that means in terms of coding is that you don’t want to do some sort of field-level serialization of your objects from the server. That would be terrible in an untrusted setting. To put it another way: the BinaryFormatter, NetDataContractSerializer or any other field-level serializer shouldn't be used for this purpose.

If anything, you want to do property-level serialization (like the XmlSerializer or DataContractSerializer or JSON serializer). But really you don’t want to do this either, because you don’t want to couple your service interface to your internal implementation that tightly. This is, fundamentally, the same issue I discuss in Chapter 11 of Expert 2005 Business Objects as I talk about SOA.

Instead, what you really want to do (from an architectural purity perspective anyway) is define a service contract for use by your AJAX/Silverlight client. A service contract has two parts: operations and data. The data part is typically defined using formal data transfer objects (DTOs). You can then design your client app to work against this service contract.

In your server app’s UI layer (the aspx pages and webmethods) you translate between the external contract representation of data and your internal usage of that data in business objects. In other words, you map from the business objects in/out of the DTOs that flow to/from the client app. The JSON serializer (or other property-level serializers) can then be used to do the serialization of these DTOs - which is what those serializers are designed to do.
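
In code, that translation might look like this sketch of a webmethod; the CustomerDto shape and the Customer factory are illustrative, and with ASP.NET AJAX a ScriptService would handle the JSON serialization of the DTO for you:

' Hypothetical DTO - this shape, not the business object,
' defines the data half of the service contract
Public Class CustomerDto
  Public Id As Integer
  Public Name As String
End Class

<System.Web.Services.WebMethod()> _
Public Function GetCustomer(ByVal id As Integer) As CustomerDto
  ' Use the business object on the server side only
  Dim cust As Customer = Customer.GetCustomer(id)
  ' Map the business object into the DTO that crosses the
  ' trust boundary to the browser
  Dim dto As New CustomerDto()
  dto.Id = cust.Id
  dto.Name = cust.Name
  Return dto
End Function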

This is a long-winded way of saying that you really don’t want to do JSON serialization directly against your objects. You want to JSON serialize some DTOs, and copy data in/out of those DTOs from your business objects – much the way the SOA stuff works in Chapter 11.

Monday, November 5, 2007 11:53:58 PM (Central Standard Time, UTC-06:00)
 Wednesday, October 24, 2007

I've worked with numerous clients who've done offshoring projects, and for the most part I've watched the potential savings evaporate due to various factors (typically a ton of rework on the back end, as the clients ended up spending months "fixing" the software so it would actually meet their needs).

Having watched that happen, I always wanted to write a blog post detailing the issues, risks and costs. Now I don't have to because a fellow Magenicon has done a really nice job with this article: 

Latest in Offshoring to India: Considerations in Evaluating Onshore vs. Offshore Software Development

Interestingly enough, I would bet that a lot of the rework I've seen happen at these clients would not have been necessary if they'd recognized the real costs and risks ahead of time - most of which are covered in Matt's article. Recognizing those risks up front would have made it possible to put procedures in place and assign local resources to the project to help mitigate or manage them. Sure, doing those things eats into the savings, but it would almost certainly help avoid the incredibly costly rework that is otherwise required at the back end of the project.

My brother manages a global team. His team exists in the US, France, India and Japan. They do 24 hour development. He has a lot of interesting stories to tell about cultural communication barriers (missing deadlines because a US phrase meaning "we'll get it done" means "we'll think about it" in another country, that sort of thing). And the joy of rotating conference calls through all hours of the day and night so the team can coordinate and so no one nation's team has to get up at 2 AM all the time. I've stayed at his house when he had to get up for a 2 AM conference call - what fun!

But his team is successful. After years of working through the various issues, and using a high level of process and formalization (he's a Six Sigma Black Belt and instructor), they have been able to get this all working. Prior to having that level of formalization and rigor of process though, I know they really struggled.

So the costs and risks associated with offshoring are very real. Whether offshore resources end up cheaper than local ones is a case-by-case thing, but it is clearly unrealistic to expect huge savings from offshoring once you factor in all the issues involved.

Wednesday, October 24, 2007 8:05:03 AM (Central Standard Time, UTC-06:00)
 Thursday, October 18, 2007

My email provider just emailed me to let me know that emails to my lhotka.net addresses (and in particular my admin @ lhotka.net address) may have been lost between October 10 and 14.

I am posting this on my blog because that admin address is where people email to get support for http://store.lhotka.net.

So if you emailed me for support or help with purchasing my ebooks within the past few days, and I haven't responded, it is probably because the emails were lost. Please email me again and I'll do what I can to help resolve any issues.

Thank you!

Thursday, October 18, 2007 7:47:17 PM (Central Standard Time, UTC-06:00)
 Wednesday, October 17, 2007


Now available in both VB and C# editions!

CSLA .NET version 3.0 adds support for Microsoft .NET 3.0 features. This ~120 page ebook covers how to use these new capabilities:

  • Windows Presentation Foundation (WPF)
    • Creating WPF forms using business objects
    • Using the new controls in the Csla.Wpf namespace
      • CslaDataProvider
      • Validator
      • Authorizer
      • ObjectStatus
      • IdentityConverter
    • Maximizing XAML and minimizing C#/VB code
  • Windows Communication Foundation (WCF)
    • Using the new WCF data portal channel to seamlessly upgrade from Remoting, Web services or Enterprise Services
    • Building WCF services using business objects
    • Applying WCF security to encrypt data on the wire
    • Sending username/password credentials to a WCF service
      • Including use of the new Csla.Security.PrincipalCache class
    • Using the DataContract attribute instead of the Serializable attribute
  • Windows Workflow Foundation (WF)
    • Creating activities using business objects
    • Invoking a workflow from a business object
    • Using the WorkflowManager class in the Csla.Workflow namespace

Version 3.0 is an additive update, meaning that you only need to use the .NET 3.0 features if you are using .NET 3.0. CSLA .NET 3.0 remains very useful for people on .NET 2.0! The enhancements that apply even on .NET 2.0 include:

  • Enhancements to the validation subsystem
    • Friendly names for properties
    • Better null handling in the RegExMatch rule method
    • New StringMinLength rule method
    • Help for code generation through the DecoratedRuleArgs class
  • Data binding fixes
    • Fixed numerous bugs in BusinessListBase to improve data binding behavior
    • Throw exception when edit levels get out of sync, making debugging easier
    • N-level undo changed to provide parity with Windows Forms data binding requirements
  • AutoCloneOnUpdate
    • Automatically clone objects when Save() is called, but only when data portal is local
  • Enhancements to the authorization subsystem
    • CanExecuteMethod() allows authorization for arbitrary methods

CSLA .NET 3.0 includes numerous bug fixes and some feature enhancements that benefit everyone. If you are using version 2.0 or 2.1, you should consider upgrading to 3.0 to gain these benefits, even if you aren't using .NET 3.0.

See the change logs for version 3.0, version 3.0.1 and version 3.0.2 for a more detailed list of changes.

Using CSLA .NET 3.0 is completely focused on how to use the new features in version 3.0. The book does not detail the internal changes to CSLA .NET itself, so all ~120 pages help you use the enhancements added since version 2.1.

Get the book at store.lhotka.net.

Download the 3.0.2 code from the CSLA .NET download page.

Wednesday, October 17, 2007 5:10:06 PM (Central Standard Time, UTC-06:00)
 Wednesday, October 10, 2007

Earlier this week I co-presented a session at ReMIX in Boston, with Anthony Handley, a fellow Magenicon. Anthony is a User Experience Specialist, and the two of us have been working on a WPF/Silverlight project (that also happens to use CSLA .NET of course :) ) and in this presentation we discussed our experiences using WPF from the perspectives of a developer and a designer.

Here are the links to the video of the session.

Real World Experiences Building Applications Using WPF and Silverlight

Part 1
http://channel9.msdn.com/Showpost.aspx?postid=346810
Part 2
http://channel9.msdn.com/Showpost.aspx?postid=346812
Part 3
http://channel9.msdn.com/Showpost.aspx?postid=346815
Part 4
http://channel9.msdn.com/Showpost.aspx?postid=346818

Wednesday, October 10, 2007 3:10:17 PM (Central Standard Time, UTC-06:00)

I recently branched the CSLA .NET codebase in the svn repository (www.lhotka.net/cslacvs for a web view).

Version 3.0.3, the maintenance line for the current 3.0.2 release, is now on its own branch. It is a VS 2005 solution (though ProjectTracker does include some VS 2008 projects for .NET 3.0 concepts).

Active development is still on the trunk, and is for version 3.5. I'll be switching that version to a VS 2008 solution in the near future.

Switching to VS 2008 is a big deal. I realize that many people will be using VS 2005 for a long time to come, and so I don't do this lightly.

However, CSLA .NET 3.5 will include features around LINQ and other .NET 3.5 concepts. As such, I must use VS 2008 to do that work, because that is the tool for .NET 3.5 (and really for .NET 3.0).

If possible, I'll continue to use the compiler directive approach to allow building Csla.dll for .NET 2.0, 3.0 and 3.5. Whether this is realistic or not depends on the level at which LINQ ends up integrating into the framework. If LINQ gets too deep, the complexity introduced by the compiler directive approach may become higher than I'm willing to deal with, at which point I'll drop that idea. But for now, my goal is to continue to allow backward targeting of the framework.
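
The compiler directive approach looks roughly like this; NET35 is a hypothetical conditional compilation constant, defined only in the .NET 3.5 build configuration (the real constant names in the Csla project may differ):

Public Module FrameworkInfo
  ' One code base, multiple targets: the constant selects which
  ' code compiles into each build of Csla.dll
#If NET35 Then
  Public Const TargetFramework As String = "3.5"
#Else
  Public Const TargetFramework As String = "2.0"
#End If
End Module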

Fortunately VS 2008 itself also supports backward targeting, so it is possible to use VS 2008 to build assemblies for .NET 2.0, 3.0 or 3.5. This may help mitigate some of the pain of having the Csla project switch to 2008, because you can still use 2008 to build for a .NET 2.0 deployment environment.

Wednesday, October 10, 2007 11:02:08 AM (Central Standard Time, UTC-06:00)

In CSLA .NET 3.0 I implemented Csla.Wpf.Validator. This control provides functionality similar to the Windows Forms ErrorProvider control, only in WPF. Of course the ErrorProvider relies on the standard IDataErrorInfo interface, which CSLA supports on behalf of your business objects, and so my Validator control used that same interface.

While I was researching and designing the Validator control, I was in contact with the WPF product team. As a result, I had (shall we say) a "strong suspicion" that my control was a temporary stop-gap until Microsoft provided a more integrated solution. And that's fine - we needed something that worked, and Validator was the ticket.

This blog post provides some good, detailed, insight into the real solution in .NET 3.5: 

Windows Presentation Foundation SDK : Data Validation in 3.5

A couple people have emailed me, asking what I think about this. My answer: I'm happy as can be!

As I say, I knew Validator was temporary. WPF is a version 1.0 technology, and it is very clear that Microsoft will be evolving it rapidly over the next few years. And it is equally clear that WPF must evolve to catch up to, and hopefully exceed, Windows Forms. That means more robust data binding support, including an ErrorProvider equivalent.

So to me, this just means that CSLA .NET 3.5 can drop the Validator control, because there's now a directly supported solution in .NET itself. While this will require changing XAML when moving from .NET 3.0 to 3.5, it is a worthwhile change to make.
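
In .NET 3.5 the binding system honors IDataErrorInfo directly, so the XAML change is small. A minimal sketch (the bound property name is illustrative):

<!-- .NET 3.5: the binding itself checks IDataErrorInfo, so no
     wrapping Validator control is needed -->
<TextBox Text="{Binding Path=Name,
                        ValidatesOnDataErrors=True,
                        UpdateSourceTrigger=PropertyChanged}" />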

Wednesday, October 10, 2007 9:31:01 AM (Central Standard Time, UTC-06:00)