Rockford Lhotka's Blog


 Monday, June 23, 2008

A couple people have suggested that I might be abandoning VB. Not so!

My first love was Pascal - VAX Pascal actually, which was more like Modula II in many ways. What an awesome language!

My next love was VAX Basic. Now that VB has structured exception handling in .NET, it has finally caught up to where VAX Basic was in the early 90's. No kidding.

Of course after VAX Basic came VB.

And after VB came .NET. I love .NET. I love .NET with VB and C#. C# is just VB with semi-colons, but VB is just C# without semi-colons too. I gave up on the silly language war thing a couple of years ago, and am happy to let each language live or die by the hand of its users. The language thing was distracting me from truly enjoying .NET.

When it comes to writing books, it is really important to remember that they fund CSLA .NET. As much as I love what I do, I've got kids that will go to college in the (scarily near) future, so I can't work for free. So when I write a book, I can't ignore that C# books outsell VB books around 3:1 (it used to be 2:1, but the market has continued to shift). I still think it is worth writing a VB edition to get that 25% of the market, but you must admit that it makes a lot of sense to go for the 75% first!

It takes several weeks to port a book from one language to the other. The current plan for Expert 2008 Business Objects in C# is October (though I fear that may slip), and with the conversion time and publication schedule constraints, that pushes the VB edition into early 2009. Apress just hasn't put the VB book on their public release list yet, but that doesn't mean I don't plan to do that edition.

When it comes to CSLA Light, I'm doing it in C# because of the 3:1 split, and so again am focusing on C# first.

Whether I do a VB version of the framework or not depends on whether I decide to write a book on the creation and design of CSLA Light. I may or may not. If I don't write a book on the design of the actual framework, I won't port (and then maintain) the framework into a second language.

It is a ridiculous amount of work to maintain CSLA .NET twice, and I really don't like the idea of maintaining CSLA Light twice too. You have no idea how much writing, testing and debugging everything twice slows down progress (and eliminates fun). As wonderful as Instant C# and Instant VB are, the dual effort is a continually increasing barrier to progress.

I might write an ebook on using CSLA Light, in which case I'd leave the framework in C#, but create reference apps in both VB and C# so I can do both editions of the ebook. I think this is the most likely scenario. Certainly VB-compatibility has shaped a couple CSLA Light design decisions already - I won't allow a design that precludes the use of VB to build a CSLA Light app.

(The lack of multi-line lambdas and anonymous delegates in VB is a real barrier though... even worse than the poor way C# handles implementation of interfaces...)

In the end though, like all of us, I need to be where the market is vibrant. Where I can make money from my hard work. Just now, there's more money to be had from C# content and so that takes priority. But there are a lot of people using VB, and (assuming the sales ratio doesn't slip further) in my view it is worth producing content in the VB space as well.

Monday, June 23, 2008 9:18:31 AM (Central Standard Time, UTC-06:00)
 Saturday, June 14, 2008

We're nearly done with the final serialization implementation. At this point it is possible to clone objects on Silverlight and on .NET using the same MobileFormatter. By extension, this means that objects can be cloned from Silverlight to .NET and vice versa, though we don't have tests for that yet. (and there's a bunch of cleanup and detail work to be done for completeness - but the core engine is there and working)

The approach we've taken is automatic for managed backing fields in a subclass of BusinessBase or ReadOnlyBase. It is also automatic for BusinessListBase, ReadOnlyListBase, etc. It is not automatic for custom criteria classes, but is automatic for SingleCriteria.
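For context, a managed backing field in CSLA .NET 3.5 looks something like this - a minimal sketch of the RegisterProperty/GetProperty/SetProperty pattern (exact overloads vary a bit by version):

[Serializable]
public class CustomerEdit : BusinessBase<CustomerEdit>
{
  // Managed backing field: the value lives in the FieldManager rather
  // than in a private field, so the MobileFormatter can serialize it
  // with no extra code in this class.
  private static PropertyInfo<string> NameProperty =
    RegisterProperty<string>(typeof(CustomerEdit),
      new PropertyInfo<string>("Name"));
  public string Name
  {
    get { return GetProperty<string>(NameProperty); }
    set { SetProperty<string>(NameProperty, value); }
  }
}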

The approach does allow for serialization of private backing fields, as long as you override a couple methods in your business class to get/set the field values. This is quite comparable to the PropertyBag scheme from VB6, and is conceptually similar to implementing ISerializable (though not the same due to some required features that aren't in Silverlight - there's a reason Microsoft didn't implement something like the BinaryFormatter).

What this means is that when using only managed backing fields, you don't have to do any work at all. Your objects will just serialize. If you are using one or more private backing fields, you need to do something like this:

namespace MyApp
{
  [Serializable]
  public class CustomerEdit : BusinessBase<CustomerEdit>
  {
    // Private backing fields, which the MobileFormatter can't discover
    // on its own (no reflection against private fields in Silverlight).
    private int _id;
    private string _name;

    protected override void OnGetState(SerializationInfo info)
    {
      base.OnGetState(info);
      info.AddValue("MyApp.CustomerEdit._id", _id);
      info.AddValue("MyApp.CustomerEdit._name", _name);
    }

    protected override void OnSetState(SerializationInfo info)
    {
      base.OnSetState(info);
      _id = info.GetValue<int>("MyApp.CustomerEdit._id");
      _name = info.GetValue<string>("MyApp.CustomerEdit._name");
    }
  }
}

Assuming, of course, that you have two private backing fields, _id and _name. The one important restriction is that the types of all fields must be primitive types, or types that can be coerced using Csla.Utilities.CoerceValue() (which means the value can be converted to/from a string as required by the DataContractSerializer).

Of course if you have a field value that can't be automatically converted to/from a string, you can do the conversion yourself in the OnGetState/OnSetState methods.
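For example, here is a minimal sketch of such a manual conversion, assuming a hypothetical _webSite field of type Uri that is converted to and from a string by hand:

protected override void OnGetState(SerializationInfo info)
{
  base.OnGetState(info);
  // Uri isn't a primitive type, so store it as a string.
  info.AddValue("MyApp.CustomerEdit._webSite",
    _webSite == null ? null : _webSite.ToString());
}

protected override void OnSetState(SerializationInfo info)
{
  base.OnSetState(info);
  // Rebuild the Uri from the stored string.
  string webSite = info.GetValue<string>("MyApp.CustomerEdit._webSite");
  _webSite = string.IsNullOrEmpty(webSite) ? null : new Uri(webSite);
}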

This will get a little more complex when we implement UndoableBase (which is the next item on the list), because your overrides must differentiate between "NonSerialized" and "NotUndoable" fields - by hand of course. I expect the final result will look more like this:

namespace MyApp
{
  [Serializable]
  public class CustomerEdit : BusinessBase<CustomerEdit>
  {
    private int _id;      // participates in serialization only
    private string _name; // participates in serialization and undo

    protected override void OnGetState(SerializationInfo info, StateModes mode)
    {
      base.OnGetState(info, mode);
      if (mode == StateModes.Serialization)
        info.AddValue("MyApp.CustomerEdit._id", _id);
      info.AddValue("MyApp.CustomerEdit._name", _name);
    }

    protected override void OnSetState(SerializationInfo info, StateModes mode)
    {
      base.OnSetState(info, mode);
      if (mode == StateModes.Serialization)
        _id = info.GetValue<int>("MyApp.CustomerEdit._id");
      _name = info.GetValue<string>("MyApp.CustomerEdit._name");
    }
  }
}

The StateModes enum will have two values: Serialization and Undo. This will allow you to exclude certain fields that are serialization-only or perhaps undo-only. I think this is an edge case for the most part, but the option will be there.

Child objects in CSLA .NET 3.5 and later should always be kept in managed backing fields, and so will be handled automatically. If you have a pressing need to store a child object reference in a private backing field, there are also OnGetChildren() and OnSetChildren() methods you can override. The code is slightly more complex, but is very prescriptive (you either do it right or it won't work :) ) - but I see this as an edge case too, and so I'm not worried if it is a little more complex.
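To give a rough feel for it, a purely hypothetical sketch of such an override might look like the following - the actual method signatures and helpers may well differ in the final framework:

// Hypothetical sketch only: assumes a private _addresses child field and
// that SerializationInfo exposes AddChild/GetChild helpers keyed by name.
protected override void OnGetChildren(SerializationInfo info)
{
  base.OnGetChildren(info);
  info.AddChild("MyApp.CustomerEdit._addresses", _addresses);
}

protected override void OnSetChildren(SerializationInfo info)
{
  base.OnSetChildren(info);
  _addresses = info.GetChild<AddressList>("MyApp.CustomerEdit._addresses");
}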

On the whole I'm pretty pleased with this approach. It automates as many cases as can be automated without using reflection, but provides a great deal of extensibility and flexibility for those who don't want to use managed backing fields.

Saturday, June 14, 2008 8:58:12 PM (Central Standard Time, UTC-06:00)
 Thursday, June 12, 2008

This is a cool site, that I hope takes off. In the words of its creator:

I wanted to drop you all a note about a project I’ve been working on called Community Megaphone. Community Megaphone is a web site I started building about a year ago to fill what I saw as a need in the local community: simple, low-friction promotion and discovery of developer events, particularly community-run events. Initially, the site was limited to a portion of the east coast, but this week I’ve opened the site up to events from the entire United States.

The need for a central location where you can find community events is real. While it is good that each event has its own web site for details and discussions, the value of a central index would be incredible! Please support Andrew's initiative!

Thursday, June 12, 2008 2:44:17 PM (Central Standard Time, UTC-06:00)

The MobileFormatter serializer being created for CSLA Light serializes objects that implement IMobileObject. CSLA Light and CSLA .NET classes will implement this interface so the business developer doesn't normally need to worry about it.

(the exception being when a business object uses private backing fields, in which case the business object developer will need to override OnGetState() and OnSetState() to get/set the private fields they've declared)

To organize the implementation, we are planning to introduce two new base classes to CSLA: MobileObject and MobileList<T>. The following diagram shows how the new inheritance hierarchy will work:

[Class diagram: MobileObject and MobileList<T> at the top of the CSLA inheritance hierarchy]

Note that Silverlight doesn't have a BindingList<T> type, and so CSLA Light will supply that type. CSLA .NET already uses BindingList<T> extensively, and so this will provide parity on both sides of the data portal.

In other words, this will be the inheritance hierarchy for both CSLA .NET and CSLA Light.

It is also the case that FieldDataManager, the object that manages all the managed backing field values in CSLA 3.5, must serialize itself as well. And so must the BrokenRulesCollection.

[Class diagram: FieldDataManager and BrokenRulesCollection in the serialization hierarchy]

The overall hierarchy remains as it is in CSLA .NET, we're just inserting MobileObject and MobileList at the top of the hierarchy (or as high as possible in the case of .NET).

This will allow the MobileFormatter to interact with any CSLA-derived business class, automatically serializing all managed backing fields and the child objects contained in the FieldManager of a business object, or the inner list of any MobileList (BindingList<T>).
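To make that interaction concrete, the interface presumably looks something along these lines - a hypothetical sketch inferred from these posts, so the real members may differ:

public interface IMobileObject
{
  // Copy the object's field values into the serializer's DTO.
  void Serialize(SerializationInfo info);
  // Restore the object's field values from the serializer's DTO.
  void Deserialize(SerializationInfo info);
}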

Thursday, June 12, 2008 2:28:48 PM (Central Standard Time, UTC-06:00)

WCF is very cool, but configuring WCF can virtually derail a project. Even relatively simple-seeming configurations can take hours or days to get working. It is frustrating! And the most complex part is getting security working.

The Microsoft Patterns and Practices group recently released beta guidance for WCF security (http://www.codeplex.com/wcfsecurityguide), and it is probably the single best resource for information about configuring WCF security you'll find anywhere.

Thursday, June 12, 2008 9:30:48 AM (Central Standard Time, UTC-06:00)

I've been engaged in a discussion around CSLA .NET on an MSDN architecture forum - you can see the thread here. I put quite a bit of time/effort into one of my replies, and I wanted to repost it here for broader distribution:

I don’t think it is really fair to relate CSLA .NET to CSLA Classic. I totally rewrote the framework for .NET (at least 4 times actually – trying different approaches/concepts), and the only way in which CSLA .NET relates to CSLA Classic is through some of the high level architectural goals around the use of mobile objects and the minimization of UI code.

The use of LSet in VB5/6, btw, was the closest approximation to the concept of a union struct from C or Pascal or a common block from FORTRAN possible in VB. LSet actually did a memcopy operation, and so wasn’t as good as a union struct, but was radically faster than any other serialization option available in VB at the time. So while it was far from ideal, it was the best option available back then.

Obviously .NET provides superior options for serialization through the BinaryFormatter and NetDataContractSerializer, and CSLA .NET makes use of them. To be fair though, a union struct would still be radically faster :)

Before I go any further, it is very important to understand the distinction between ‘layers’ and ‘tiers’. Clarity of wording is important when having this kind of discussion. I discuss the difference in Chapter 1 of my Expert 2005 Business Objects book, and in several blog posts – perhaps this one is best:

http://www.lhotka.net/weblog/MiddletierHostingEnterpriseServicesIISDCOMWebServicesAndRemoting.aspx

The key thing is that a layer is a logical separation of concerns, and a tier directly implies a process or network boundary. Layers are a design constraint, tiers are a deployment artifact.

How you layer your code is up to you. Many people, including myself, often use assemblies to separate layers. But that is really just a crutch – a reminder to have discipline. Any clear separation is sufficient. But you are absolutely correct, in that a great many developers have trouble maintaining that discipline without the clear separation provided by having different code in different projects (assemblies).

1)

CSLA doesn’t group all layers into a single assembly. Your business objects belong in one layer – often one assembly – and so all your business logic (validation, calculation, data manipulation, authorization, etc) are in that assembly.

Also, because CSLA encourages the use of object-oriented design and programming, encapsulation is important. And other OO concepts like data hiding are encouraged. This means that the object must manage its own fields. Any DAL will be working with data from the object’s fields. So the trick is to get the data into and out of the private fields of the business object without breaking encapsulation. I discussed the various options around this issue in my previous post.

Ultimately the solution in most cases is for the DAL to provide and consume the data through some clearly defined interface (ADO.NET objects or DTOs) so the business object can manage its own fields, and can invoke the DAL to handle the persistence of the data.

To be very clear then, CSLA enables separation of the business logic into one assembly and the data access code into a separate assembly.

However, it doesn’t force you to do this, and many people find it simpler to put the DAL code directly into the DataPortal_XYZ methods of their business classes. That’s fine – there’s still logical separation of concerns and logical layering – it just isn’t as explicit as putting that code in a separate assembly. Some people have the discipline to make that work, and if they do have that discipline then there’s nothing wrong with the approach imo.

2)

I have no problem writing business rules in code. I realize that some applications have rules that vary so rapidly or widely that the only real solution is to use a metadata-driven rules engine, and in that case CSLA isn’t a great match.

But let’s face it, most applications don’t change that fast. Most applications consist of business logic written in C#/VB/Java/etc. CSLA simply helps formalize what most people already do, by providing a standardized approach for implementing business and validation rules such that they are invoked efficiently and automatically as needed.

Also consider that CSLA’s approach separates the concept of a business rule from the object itself. You then link properties on an object to the rules that apply to that object. This linkage can be dynamic – metadata-driven. Though the rules themselves are written as code, you can use a table-driven scheme to link rules to properties, allowing for SaaS scenarios, etc.
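For instance, here is a minimal sketch of linking a rule to a property in code, in the style of CSLA .NET 3.x validation rules (overloads vary by version); the same linkage could just as easily be read from a metadata table at this point:

protected override void AddBusinessRules()
{
  // Link the common StringRequired rule to the Name property.
  ValidationRules.AddRule(
    Csla.Validation.CommonRules.StringRequired, "Name");
}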

3)

This is an inaccurate assumption. CSLA .NET requires a strong separation between the UI and business layers, and allows for a very clear separation between the business and data access layers, and you can obviously achieve separation between the data access and data storage layers.

This means that you can easily have UI specialists that know little or nothing about OO design or other business layer concepts. In fact, when using WPF it is possible for the UI to only have UI-specific code – the separation is cleaner than is possible with Windows Forms or Web Forms thanks to the improvements in data binding.

Also, when using ASP.NET MVC (in its present form at least), the separation is extremely clear. Because the CSLA-based business objects implement all business logic, the view and controller are both very trivial to create and maintain. A controller method is typically just the couple lines of code necessary to call the object’s factory and connect it to the view, or to call the MVC helper to copy data from the postback into the object and to have the object save itself. I’m really impressed with the MVC framework when used in conjunction with CSLA .NET.
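As a sketch of what I mean, a typical controller action ends up being little more than this (using a hypothetical CustomerEdit factory; MVC member names may shift between the current previews):

public ActionResult Edit(int id)
{
  // The business object does all the real work; the controller
  // just fetches it and hands it to the view.
  CustomerEdit customer = CustomerEdit.GetCustomer(id);
  return View(customer);
}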

And it means that you can have data access specialists that only focus on ADO.NET, LINQ to SQL, EF, nHibernate or whatever. In my experience this is quite rare – very few developers are willing to be pigeonholed into such a singularly uninteresting aspect of software – but perhaps your experiences have been different.

Obviously it is always possible to have database experts who design and implement physical and logical database designs.

4)

I entirely agree that the DTO design pattern is incredibly valuable when building services. But no one pattern is a silver bullet and all patterns have both positive and negative consequences. It is the responsibility of professional software architects, designers and developers to use the appropriate patterns at the appropriate times to achieve the best end results.

CSLA .NET enables, but does not require, the concept of mobile objects. This concept is incredibly powerful, and is in use by a great many developers. Anyone passing disconnected ADO Recordsets, or DataSets or hashtables/dictionaries/lists across the network uses a form of mobile objects. CSLA simply wraps a pre-existing feature of .NET and makes it easier for you to pass your own rich objects across the network.

Obviously only the object’s field values travel across the network. This means that a business object consumes no more network bandwidth than a DTO. But mobile objects provide a higher level of transparency in that the developer can work with essentially the same object model, and the same behaviors, on either side of the network.

Is this appropriate for all scenarios? No. Decisions about whether the pattern is appropriate for any scenario or application should be based on serious consideration of the positive and negative consequences of the pattern. Like any pattern, mobile objects has both types of consequence.

If you look at my blog over the past few years, I’ve frequently discussed the pros and cons of using a pure service-oriented approach vs an n-tier approach. Typically my n-tier arguments pre-suppose the use of mobile objects, and there are some discussions explicitly covering mobile objects.

The DTO pattern is a part of any service-oriented approach, virtually by definition. Though it is quite possible to manipulate your XML messages directly, most people find that unproductive and prefer to use a DTO as an intermediary – which makes sense for productivity even if it isn’t necessarily ideal for performance or control.

The DTO pattern can be used for n-tier approaches as well, but it is entirely optional. And when compared to other n-tier techniques involving things like the DataSet or mobile objects the DTO pattern’s weaknesses become much more noticeable.

The mobile object pattern is not useful for any true service-oriented scenario (note that I’m not talking about web services here, but rather true message-based SOA). This is because your business objects are your internal implementation and should never be directly exposed as part of your external contract. That sort of coupling between your external interface contract and your internal implementation is always bad – and is obviously inappropriate when using DTOs as well. DTOs can comprise part of your external contract, but should never be part of your internal implementation.

The mobile object pattern is very useful for n-tier scenarios because it enables some very powerful application models. Most notably, the way it is done in CSLA, it allows the application to switch between a 1-, 2- and 3-tier physical deployment merely by changing a configuration file. The UI, business and data developers do not need to change any code or worry about the details – assuming they’ve followed the rules for building proper CSLA-based business objects.
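For example, in CSLA .NET 3.5 that switch is just the data portal proxy setting in the config file - something along these lines (the WcfProxy type name is from CSLA's WCF support; details vary by version, and the remote endpoint is configured separately):

<!-- 1- or 2-tier: run the data portal in-process -->
<add key="CslaDataPortalProxy" value="Local" />

<!-- 3-tier: route data portal calls to an app server via WCF -->
<add key="CslaDataPortalProxy"
     value="Csla.DataPortalClient.WcfProxy, Csla" />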

Thursday, June 12, 2008 8:58:29 AM (Central Standard Time, UTC-06:00)

Dunn Training is working with some people in Ireland to put together a CSLA .NET training class. If you are in Ireland, the UK or perhaps Europe, you may be interested. This CSLA forum post has some more information.

Thursday, June 12, 2008 8:14:43 AM (Central Standard Time, UTC-06:00)
 Tuesday, June 10, 2008

I've been prototyping various aspects of CSLA Light (CSLA for Silverlight) for some time now. Enough to be confident that a decent subset of CSLA functionality will work just fine in Silverlight - which is very exciting!

The primary area of my focus is serialization of object graphs, and I've blogged about this before. This one issue is directly on the critical path, because a resolution is required for the data portal, object cloning and n-level undo.

And I've come to a final decision regarding object serialization: I'm not going to try and use reflection. Silverlight turns out to have some reasonable support for reflection - enough for Microsoft to create a subset of the WCF DataContractSerializer. Unfortunately it isn't enough to create something like the BinaryFormatter or NetDataContractSerializer, primarily due to the limitations around reflecting against non-public fields.

One option I considered is to say that only business objects with public read-write properties are allowed. But that's a major constraint on OO design, and still doesn't resolve issues around calling a property setter to load values into the object - because property setters typically invoke authorization and validation logic.

Another option I considered is to actually use reflection. I discussed this in a previous blog post - because you can make it work as long as you insert about a dozen lines of code into every class you write. But I've decided this is too onerous and bug-prone. So while reflection could be made to work, I think the cost is too high.

Another option is to require that the business developer create a DTO (data transfer object) for each business object type. And all field values would be stored in this DTO rather than in normal fields. While this is a workable solution, it imposes a coding burden not unlike that of using the struct concepts from my CSLA Classic days in the 1990's. I'm not eager to repeat that model...

Yet another option is to rely on the concept of managed backing fields that I introduced in CSLA .NET 3.5. In CSLA .NET 3.5 I introduced the idea that you could choose not to declare backing fields for your properties, and that you could allow CSLA to manage the values for you in something called the FieldManager. Conceptually this is similar to the concept of a DependencyProperty introduced by Microsoft for WF and WPF.

The reason I introduced managed backing fields is that I didn't expect Silverlight to have reflection against private fields at all. I was excited when it turned out to have a level of reflection, but now that I've done all this research and prototyping, I've decided it isn't useful in the end. So I'm returning to my original plan - using managed backing fields to avoid the use of reflection when serializing business objects.

The idea is relatively simple. The FieldManager stores the property values in a dictionary (it is actually a bit more complex than that for performance reasons, but conceptually it is a dictionary). Because of this, it is entirely possible to write code to loop through the values in the field manager and to copy them into a well-defined data contract (DTO). In fact, it is possible to define one DTO that can handle any BusinessBase-derived object, another for any BusinessListBase-derived object, and so forth. Basically one DTO per CSLA base class.
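Conceptually, the copy looks something like this - a sketch only, with hypothetical member names, since the real implementation differs in detail:

// Inside a CSLA base class: copy managed field values into the DTO.
SerializationInfo dto = new SerializationInfo();
foreach (IPropertyInfo field in FieldManager.GetRegisteredProperties())
  dto.AddValue(field.Name, ReadProperty(field));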

The MobileFormatter (the serializer I'm creating) can simply call Serialize() and Deserialize() methods on the CSLA base classes (defined by an IMobileObject interface that is implemented by BusinessBase, etc.) and the base class can get/set its data into/out of the DTO supplied by the MobileFormatter.

In the end, the MobileFormatter will have one DTO for each business object in the object graph, all in a single list of DTOs. The DataContractSerializer can then be used to convert that list of DTOs into an XML byte stream, as shown here:

[Diagram: object graph copied into a list of DTOs, then serialized to an XML byte stream]

The XML byte stream can later be deserialized into a list of DTOs, and then into a clone of the original object graph, as shown here:

[Diagram: XML byte stream deserialized into a list of DTOs, then into a clone of the object graph]

Notice that the object graph shape is preserved (something the DataContractSerializer in Silverlight can't do at all), and that the object graph is truly cloned.
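Here is a minimal sketch of those two steps, using the DataContractSerializer that does exist in Silverlight (the CslaDto type stands in for the real per-base-class DTOs):

// Serialize the list of DTOs to an XML byte stream...
DataContractSerializer serializer =
  new DataContractSerializer(typeof(List<CslaDto>));
byte[] buffer;
using (MemoryStream stream = new MemoryStream())
{
  serializer.WriteObject(stream, dtoList);
  buffer = stream.ToArray();
}

// ...and later deserialize it back into a list of DTOs.
List<CslaDto> cloneList;
using (MemoryStream stream = new MemoryStream(buffer))
{
  cloneList = (List<CslaDto>)serializer.ReadObject(stream);
}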

This decision does impose an important constraint on business objects created for CSLA Light, in that they must use managed backing fields. Private backing fields will not be supported. I prefer not to impose constraints, but this one seems reasonable because the alternatives are all worse than this particular constraint.

My goal is to allow you to write your properties, validation rules, business rules and authorization rules exactly one time, and to have that code run on both the Silverlight client and on your web/app server. To have that code compile into the Silverlight runtime and the .NET runtime. To have CSLA .NET and CSLA Light provide the same set of public and protected members so you get the same CSLA services in both environments.

By restricting CSLA Light to only support managed backing fields, I can accomplish that goal without imposing requirements for extra coding behind every business object, or the insertion of arcane reflection code into every business class.

Tuesday, June 10, 2008 8:01:50 PM (Central Standard Time, UTC-06:00)
 Friday, May 30, 2008

I have hesitated to publicly discuss my experiences with Vista, acting under the theory that if you can't say something nice you shouldn't say anything at all. But at this point I have some nice things to say (though not all nice), and I think there's some value in sharing my experiences and thoughts.

On the whole, my experience with Vista has been decidedly mixed.

Vista is very pretty. It is clearly the future in many ways (especially around IIS 7 and WAS and security in general). And it has some nice usability features – like a far better replacement for ntbackup, and pre-enabled shadowing (so you can retrieve old files if you lose/overwrite them). And quite a few OS features are easier to find/use than in XP (once you get used to the changes).

However, it is slower and more resource-intensive than XP. So you can’t upgrade from XP and expect the same level of performance or responsiveness on the same hardware. If your hardware is more than a few months old, I really can't recommend an upgrade.

I upgraded my (now 2 year old) laptop when Vista came out, and have been generally displeased with the results. It is running Vista Business. However, as Vista has aged Microsoft has issued patches/fixes/updates that have helped with stability and performance. I would say that it is now tolerable, or at least I’ve learned to live with it. While it is workable, it isn't really satisfying - to me at least. My laptop is a dual core machine with 3 gigs of RAM and a low-end GPU.

One thing I'll note is that upgrading the laptop from 2 to 3 gigs of RAM made a huge difference in performance. Vista really likes memory, and the more you can get in your machine the happier you'll be.

I didn’t upgrade my desktop to Vista until I replaced my desktop machine. I do almost all my work on this machine, and wasn't about to deal with the performance issues on a constant basis.

My new desktop machine, which is running Vista Ultimate, is a quad core with 4 gigs of RAM and a high end GPU. I find that Vista runs quite adequately on this (admittedly high-end) machine. My current bottlenecks are memory speed (but DDR3 is too expensive) and disk IO (but 10k RPM disks are too loud – I’m one of those “silent computer” nuts).

I have colleagues who are running Vista 64 bit. Apparently that is faster and more stable (partially due to fewer iffy drivers, and because 64 bit gets all your 4+ gigs of RAM). But I'm a gamer, so I'm kind of stuck with 32 bit until there are 64 bit versions of Battlefield 2142, Supreme Commander, Sim City 4 and Civ IV :)

One thing that has really helped my Vista experience is the discovery of TeraCopy. Vista is notorious for slow file copies (especially when copying multiple files). It is a bit sad that Vista's slow file copies have enabled a product niche for something like a file copy utility (how 1990 is that!), but whatever, it works.

I have UAC turned on. I realize many devs turn it off. But if our users are to live with it, I think developers should too. And personally I think it should be illegal for Microsoft employees to turn it off – they should know what they are doing to their customers.

The thing is, I have not found UAC to be overly troublesome. Yes, there are some extra dialogs when installing software – but that’s not a big deal imo, and is an acceptable trade-off for the security. The bigger frustrations with UAC are simpler things like trying to create a favorite in IE, or copy a shortcut to the Programs menu – both of which turn out to be really hard due to UAC.

I have done some work with VS 2005 under Vista. You have to run as admin to debug web apps, which means you can't double-click sln files. There may have been some other quirks too, I don't recall. Long ago I created a virtual machine with XP where I have VS 2005 installed, and any 2005 work is done there (for .NET 2.0/3.0 - primarily maintenance for CSLA .NET 3.0).

For months now I've been using VS 2008, largely under Vista. The experience is quite smooth. You do have to run 2008 as admin to debug a web app running in IIS, but not in the dev web server. It really isn’t a big deal to run as admin for VS if you need to debug a web app. This is the intended “escape hatch” for developers that need to do things a normal user should not be able to do. It is a little frustrating to not be able to double-click a sln file, but I can deal with that small issue.

And for WPF or Windows Forms work (and a lot of web work where the dev server can be used) you don't need to run as admin at all.

In the final analysis, if you have a relatively new machine with high-end hardware and lots of RAM, then I think Vista is a fine OS, even for a developer. But if your machine is more than a few months old, has less than 3 gigs of RAM or has an older GPU, I'd hesitate to leave XP.

Friday, May 30, 2008 10:45:18 AM (Central Standard Time, UTC-06:00)
 Wednesday, May 28, 2008
Magenic is holding a full-day, two-track mini-conference on June 20.
That is just 3 weeks away, but there are still some open seats, so reserve yours now!

Our first keynote speaker is Jay Schmelzer, GPM of the RAD tools group at Microsoft. He'll be talking about the future of RAD tools (Visual Studio and more).

Our second keynote speaker is me, Rockford Lhotka. I'll be talking about the future of CSLA .NET and CSLA Light, a version of CSLA that will run in Silverlight.

The rest of the day is divided into two tracks for a total of 8 high-quality technical sessions. This will be a full day of hard-core technical content and fun!

This FREE event is being held in Downers Grove near Chicago, IL. It starts at 8:30 AM, we're providing lunch, and the event runs through to a reception at the end of the day at around 5 PM.

If you'd like an invitation to attend, please email info@magenic.com.

Click here for more information about the event.

Wednesday, May 28, 2008 10:36:55 AM (Central Standard Time, UTC-06:00)
 Tuesday, May 27, 2008

Sue, my wife's best friend, and mother of my goddaughter, was diagnosed with breast cancer late last year. Fortunately they caught it early and with surgery and radiation she beat the cancer. It was a rough period of time, but she got through it.

Sue has convinced my wife to do a three day, 60 mile, Walk for the Cure this fall. It is a fundraiser for cancer research, and a worthy cause. I would think there must be better ways to raise money than to walk 20 miles a day for three days, but apparently not...

Here's my wife Teresa's post about the walk, including links to a web site where people (you perhaps?) can donate to the cause.

In fact, just to make it as easy as possible, here's the direct link to the donation page.

I rarely post personal items on my blog, preferring to keep it focused on cool technical stuff. But cancer is such an important issue. Sue beat it through good fortune and perseverance. My mother is living in an ongoing battle against a different type of cancer. A battle that appears unwinnable, and which has altered our lives dramatically over the past few years.

Any help in raising money for this cancer research effort is most appreciated! Thank you!

Tuesday, May 27, 2008 9:36:26 PM (Central Standard Time, UTC-06:00)
 Wednesday, May 21, 2008

OK, this is really cool: http://www.devfish.net/articles/inbetween/

It appears that Microsoft, having reserved the convention center for the two weeks of Tech Ed (Dev week and IT Pro week), is allowing the user groups in the region to take advantage of the idle space in the intervening weekend. That is so cool!!

But now, to be fair, Microsoft should provide two days of conference center space and AV in <insert your city here>, don't you think? :)

Wednesday, May 21, 2008 8:12:30 PM (Central Standard Time, UTC-06:00)
 Tuesday, May 20, 2008

I was just in a discussion about ClickOnce with several people, including Brian Noyes, who wrote the book on ClickOnce.

I was under the mistaken impression, and I know quite a few other people who have this same misconception, that ClickOnce downloads complete new versions of your application each time you publish a new version. In fact, I know of at least a couple companies who specifically chose not to use ClickOnce because their app is quite large, and re-downloading the whole thing each time a new version is published is unrealistic.

It turns out though, that ClickOnce does optimize the download. When you publish a new version of your app, all the new files are written to the server, that is true. But the client only downloads changed files. All unchanged files are copied from the previous install folder on the client to the new install folder on the client.

In other words, all unchanged files are reused from the copy already on the client, and so are not downloaded again. Only changed files are downloaded from the server.

The trick to making this work is to only rebuild assemblies that have actually changed before you do a publish. Don't rebuild unchanged assemblies, because that could change the assembly - and even a one byte change in the assembly would cause it to be downloaded because the file hash would be different.

Saying that gives me flashbacks to binary compatibility issues with VB6, but it makes complete sense that they'd have to use something like a file hash to decide whether to re-download each file.
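Just to illustrate the idea (this is not ClickOnce's actual mechanism), a file-hash comparison looks something like this:

using System;
using System.IO;
using System.Security.Cryptography;

class FileHashDemo
{
  // Hash a file's contents; any byte-level difference changes the hash.
  static string HashFile(string path)
  {
    using (SHA1 sha = SHA1.Create())
    using (FileStream stream = File.OpenRead(path))
      return Convert.ToBase64String(sha.ComputeHash(stream));
  }

  static void Main()
  {
    // A rebuilt-but-unchanged assembly can still differ by a few bytes,
    // so its hash - and therefore its download status - changes.
    bool mustDownload =
      HashFile(@"v2\MyApp.dll") != HashFile(@"v1\MyApp.dll");
    Console.WriteLine(mustDownload ? "download" : "reuse cached copy");
  }
}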

Tuesday, May 20, 2008 9:00:29 AM (Central Standard Time, UTC-06:00)