Rockford Lhotka's Blog


 Friday, July 31, 2009

There’s no DataContextChanged event in Silverlight. And that’s a serious problem for creating certain types of controls, because sometimes you just need to know when your data context has changed!

In my case I have the Csla.Silverlight.PropertyStatus control, which listens for events from the underlying data context object so it knows when to refresh the UI. If the data context changes, PropertyStatus needs to unhook the events from the old object, and hook the events on the new object. That means knowing when the data context has changed.

In version 3.6 we implemented a Source property and required that the data source be explicitly provided. That was unfortunate, but since it was our own property we could detect when the value was changed.

I recently did a bunch of research into this topic, trying to find a better answer. I’d rather hoped that Silverlight 3 would provide an answer, but no such luck…

It turns out that there’s a hack solution that does work. Actually there are a few, but most of them seem terribly complex, and this one seems somewhat easier to follow.

The basic idea is to create a data binding connection between your DataContext property and another of your properties. Then when DataContext changes, data binding will change your property, and you can detect that your property has changed. Rather a hack, but it works.

In its distilled form, the structure looks like this. First there’s the control class, which inherits from some framework base class:

public class PropertyStatus : ContentControl
{

Then there’s the constructor, which includes a handler for the Loaded event, which is where the real magic occurs:

public PropertyStatus()
{
  Loaded += (o, e) =>
  {
    var b = new System.Windows.Data.Binding();
    this.SetBinding(SourceProperty, b);
  };
}

When the control is loaded, a new Binding object is created. This Binding object picks up the default DataContext value. The Source property is then bound to this new Binding, which effectively binds it to the default DataContext. Definitely a bit of hackery here…

Obviously this means there’s a Source property – or at least the shell of one. I don’t really want a visible Source property, because that would be confusing, but I must at least implement the DependencyProperty as a public member:

public static readonly DependencyProperty SourceProperty = DependencyProperty.Register(
  "Source",
  typeof(object),
  typeof(PropertyStatus),
  new PropertyMetadata((o, e) => ((PropertyStatus)o).SetSource(e.OldValue, e.NewValue)));

I do have a private Source property, though I’ve never found that it is invoked (breakpoints in the get/set blocks are never hit, since the data binding engine calls GetValue()/SetValue() directly rather than going through the CLR property wrapper):

private object Source
{
  get { return GetValue(SourceProperty); }
  set
  {
    object old = Source;
    SetValue(SourceProperty, value);
    SetSource(old, value);
  }
}

The lambda in the static DependencyProperty declaration calls SetSource(), and this is my “data context changed” notification:

private void SetSource(object old, object @new)
{
  DetachSource(old);
  AttachSource(@new);
...
}

I suppose this could be generalized into a base class, where SetSource() really raises a DataContextChanged event. I didn’t find the need for that, as I’ve only got a couple controls that need to know the data context has changed, and they inherit from different framework base classes.

The point being, with a little data binding hackery, it is not terribly hard to implement a method that is invoked when the data context changes. Not as nice as having the actual event like in WPF, but much nicer than forcing the XAML to explicitly set some data source property on every control.

(for those interested, this will be changed in CSLA .NET 3.7.1, and it will be a breaking change because the Source property will go away)

Friday, July 31, 2009 3:41:57 PM (Central Standard Time, UTC-06:00)

The current contender for the pattern to displace IoC as “the trendy pattern” is MVVM: Model-View-ViewModel.

MVVM, often called the ViewModel pattern, is most closely associated with Silverlight and, to a lesser degree, WPF, though its origins are murky (maybe Expression Blend, Smalltalk or aliens from the 8th dimension). It applies well to Silverlight and WPF, because those environments use XAML to describe the actual user presentation, and there’s a very valid desire to avoid having any code behind a XAML page. A traditional MVC or MVP model isn’t sufficient to accomplish that goal (at least not efficiently), because it is often necessary to have some “glue code” between the View and the Model, even when using data binding.

The challenge is that some Models aren’t shaped correctly to support the View, so something needs to reshape them. And ideally there’d be some way (WPF commanding or similar) to “data bind” button click events, and other events, to some code other than code-behind. The idea being that when a user selects an item in a ListBox, the code that runs in response isn’t in the code-behind.

The idea behind ViewModel is to create a XAML-friendly object that sits between the View and the Model, so there’s a place for what would have been code-behind to go into a testable object. The ViewModel becomes the container for the code-behind, but it isn’t code-behind. Clearly the ViewModel object is part of the UI layer, so the architecture is something like this.

[architecture diagram]

It turns out that there are various ways of thinking about the role of a ViewModel. I think there are two broad approaches worth considering.

ViewModel as Sole Data Source

You can set up the ViewModel to be the sole data source for the View. In this case the ViewModel exposes all the properties and methods necessary to make the View function. This can work well with an anemic Model, where the Model is composed of a bunch of dumb data container objects (think DataTable, DTO and most entity objects).

With an anemic data-centric Model it is common to reshape the Model to fit the needs of the View. And since the Model is anemic, something needs to apply any business, validation and authorization rules and it surely won’t be the Model itself.

Creating this type of ViewModel is non-trivial, because the ViewModel must use containment and delegation (a concept familiar to VB6 developers) to literally wrap, reshape and alter/enhance the behavior of the underlying Model.

[diagram]

Every Model property must be reimplemented in the ViewModel, or the View won’t have access to that property. The ViewModel must implement INotifyPropertyChanged, and very possibly the other data binding interfaces (IDataErrorInfo, IEditableObject, etc).
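To make that concrete, here’s a minimal sketch of the containment-and-delegation approach, using a hypothetical anemic ProductDto (all the names here are illustrative, not CSLA types):

```csharp
using System.ComponentModel;

// Hypothetical anemic Model object - just a dumb data container.
public class ProductDto
{
  public int Id { get; set; }
  public string Name { get; set; }
}

// The ViewModel must reimplement each Model property so the View
// can bind to it, and must raise PropertyChanged itself because
// the anemic Model won't.
public class ProductViewModel : INotifyPropertyChanged
{
  private readonly ProductDto _model;

  public ProductViewModel(ProductDto model)
  {
    _model = model;
  }

  public event PropertyChangedEventHandler PropertyChanged;

  public string Name
  {
    get { return _model.Name; }
    set
    {
      // Any validation or business rules would also have to be
      // applied right here, since the Model itself has none.
      _model.Name = value;
      OnPropertyChanged("Name");
    }
  }

  private void OnPropertyChanged(string propertyName)
  {
    var handler = PropertyChanged;
    if (handler != null)
      handler(this, new PropertyChangedEventArgs(propertyName));
  }
}
```

Repeat that for every Model property, then layer in IDataErrorInfo, IEditableObject and friends, and it becomes clear how quickly this style of ViewModel grows.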

In short, the ViewModel will almost certainly become quite large and complex to overcome the fact that the Model is anemic.

What’s really sad about this approach is that the end result will almost certainly require more code than if you’d just used code-behind. Arguably the result is more testable, but even that is debatable, since the ViewModel now implements all sorts of data binding goo and you’ll need to test that as well.

ViewModel as Action Repository

Another way to think about a ViewModel is to have it be a repository for actions/commands/verbs. Don’t have it reimplement all the properties from the Model, just have it augment the Model.

This works well if you already have a rich Model, such as one created using CSLA .NET. If the Model is composed of objects that already fully support data binding and have business, validation and authorization rules, it seems silly to reimplement large chunks of that functionality in a ViewModel.

Instead, have the ViewModel expose the Model as a property, alongside any additional methods or properties exposed purely by the ViewModel itself.

[diagram]
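A sketch of that shape, where the ViewModel exposes the rich Model directly and adds only presentation-layer members (ViewModel&lt;T&gt; here is illustrative, not the actual CSLA implementation):

```csharp
using System.ComponentModel;

// Illustrative "action repository" ViewModel base class.
public class ViewModel<T> : INotifyPropertyChanged
{
  private T _model;

  // The View binds straight to Model (and to Model's own
  // properties); the rich Model already supplies data binding
  // support plus business/validation/authorization rules.
  public T Model
  {
    get { return _model; }
    set
    {
      _model = value;
      OnPropertyChanged("Model");
    }
  }

  public event PropertyChangedEventHandler PropertyChanged;

  protected void OnPropertyChanged(string propertyName)
  {
    var handler = PropertyChanged;
    if (handler != null)
      handler(this, new PropertyChangedEventArgs(propertyName));
  }

  // Subclasses add verbs (ShowItem, Save, Cancel, etc.) here,
  // rather than reimplementing Model properties.
}
```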

Again, this presupposes the Model is powerful enough to support direct data binding to the View, which is the case with CSLA .NET business objects, but may not be the case with simpler DTO or entity objects (which probably don’t implement IEditableObject, etc).

The value to this approach is that the ViewModel is much simpler and doesn’t replicate large amounts of code that was already written in the Model. Instead, the ViewModel augments the existing Model functionality by implementing methods to handle View requirements that aren’t handled by the Model.

For example, the Model may be a list of objects that can be bound to a ListBox control in the View. When the user double-clicks an item in the ListBox it might be necessary for the UI to navigate to another form. Clearly that’s not a business layer issue, so the Model knows nothing about navigation between forms. Typically this would be handled by a MouseDoubleClick event handler in code-behind the XAML, but we want no code-behind, so that option is off limits.

Since neither the View nor the Model can handle this double-click scenario, it is clearly in the purview of the ViewModel. Assuming some way of routing the MouseDoubleClick event from the XAML to a method on the ViewModel, the ViewModel can simply implement the method that responds to the user’s action.

This is nice, because the View remains pure XAML, and the Model remains pure business. The presentation layer concept of navigation is handled by an object (the ViewModel) whose sole job is to deal with such presentation layer issues.

Routing XAML Events to a ViewModel

Regardless of which kind of ViewModel you build, there’s a core assumption that your XAML can somehow invoke arbitrary methods on the ViewModel in response to arbitrary actions by the user (basically in response to arbitrary events from XAML controls). WPF commanding gets you part way there, but it can’t handle arbitrary events from any XAML control, and so it isn’t a complete answer. And Silverlight has no commanding, so there’s no answer there.

When we built CSLA .NET for Silverlight, we created something called InvokeMethod, which is somewhat like WPF commanding, but more flexible. In the upcoming CSLA .NET 3.7.1 release I’m enhancing InvokeMethod to be more flexible, and porting it to WPF as well. My goal is for InvokeMethod to be able to handle common events from a ListBox, Button and other common XAML controls to invoke methods on a target object in response. For the purposes of this blog post, the target object would be a ViewModel.

The ListBox control is interesting to work with, because events like SelectionChanged or MouseDoubleClick occur on the ListBox control itself, not inside the data template. There’s no clear or obvious way for the XAML code to pass the selected item(s) as a parameter to the ViewModel, so what you really need to do is pass a reference to the ListBox control itself so the ViewModel can pull required information from the control in response to the event. In my current code, the solution looks like this:

<ListBox ItemsSource="{Binding Path=Model}"
         ItemTemplate="{StaticResource DataList}"
         csla:InvokeMethod.TriggerEvent="SelectionChanged"
         csla:InvokeMethod.Resource="{StaticResource ListModel}"
         csla:InvokeMethod.MethodName="ShowItem"/>

Notice that the ItemsSource is a property named Model. This is because the overall DataContext is my ViewModel object, and it has a Model property that exposes the actual CSLA .NET business object model. In fact, I have a CslaViewModel<T> base class that exposes that property, along with a set of actions (Save, Cancel, AddNew, Remove, Delete) supported by nearly all CSLA .NET objects.

For the InvokeMethod behaviors, the ListModel resource is the ViewModel object, and it has a method called ShowItem(), which is invoked when the ListBox control raises a SelectionChanged event:

public void ShowItem(object sender, object parameterValue)
{
  var lb = (System.Windows.Controls.ListBox)sender;
  SelectedData = (Data)lb.SelectedItem;
}

The ShowItem() method gets the selected item from the ListBox control and exposes it via a SelectedData property. I have a detail area of my form that is databound to SelectedData, so when the user clicks an item in the ListBox, the details of that item appear in that area of the form. But the ShowItem() method could navigate to a different form, or bring up a dialog, or do whatever is appropriate for the user experience.

The point is that the SelectionChanged, or other event, from a XAML control is used to invoke an arbitrary method on the ViewModel object, thus eliminating the need for code-behind the XAML.

Why this isn’t Ideal

My problem with this implementation is that the View and ViewModel are terribly tightly coupled. The ShowItem() implementation only works if the XAML control is a ListBox. It feels like all I’ve done here is moved code-behind into another file, which is not terribly satisfying.

What I really want is for the XAML to pick out the selected item – something like this pseudocode:

<ListBox ItemsSource="{Binding Path=Model}"
         ItemTemplate="{StaticResource DataList}"
         csla:InvokeMethod.TriggerEvent="SelectionChanged"
         csla:InvokeMethod.Resource="{StaticResource ListModel}"
         csla:InvokeMethod.MethodName="ShowItem"
         csla:InvokeMethod.MethodParameter="{Element.SelectedItem}"/>

Where “Element” refers to the current XAML control, and “SelectedItem” refers to a property on that control.

Then the ShowItem() code could be like this:

public void ShowItem(object parameterValue)
{
  SelectedData = (Data)parameterValue;
}

Which would be much better, since the ViewModel is now unaware of the XAML control that triggered this method, so there’s much looser coupling.

There’s no direct concept for what I’m suggesting built into XAML, so I can’t quite do what I’m showing above. The “{}” syntax is reserved by XAML for data binding. But I’m hopeful that I can make InvokeMethod do something similar by having a special syntax for the MethodParameter value, using characters that aren’t reserved by XAML data binding. Perhaps:

csla:InvokeMethod.MethodParameter="[[Element.SelectedItem]]"

Or maybe more elegant would be a different property:

csla:InvokeMethod.ElementParameter="SelectedItem"
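Either syntax implies the same machinery: parse the token, then use reflection to read the named property off the control that raised the event. A rough sketch of that resolution step (the [[...]] syntax and all names here are hypothetical, not shipped CSLA code):

```csharp
using System;

public static class ParameterResolver
{
  // Resolves a token like "[[Element.SelectedItem]]" against the
  // element (control) that raised the event.
  public static object Resolve(string token, object element)
  {
    if (token.StartsWith("[[") && token.EndsWith("]]"))
    {
      var expr = token.Substring(2, token.Length - 4);
      var parts = expr.Split('.');
      if (parts.Length == 2 && parts[0] == "Element")
      {
        // Use reflection to read the named property off the
        // control, so the ViewModel never sees the control itself.
        var prop = element.GetType().GetProperty(parts[1]);
        if (prop != null)
          return prop.GetValue(element, null);
      }
    }
    // Not a recognized token - pass the literal value through.
    return token;
  }
}
```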

Anyway, I think this is an important thing to solve, otherwise the ViewModel is just a more complicated form of code-behind, which seems counter-productive.

Friday, July 31, 2009 9:44:09 AM (Central Standard Time, UTC-06:00)
 Monday, July 27, 2009

Who’d have thought I’d get spammed by my own ads?

On a couple of my sites (this blog and my forum) I use Google AdSense to generate ads. That helps pay for my Internet costs and helps keep CSLA .NET free, which is good.

As a provider of technical content on both my forum and blog, I rather expect that I’ll get reasonably targeted ads – that’s the whole point of Google’s “context sensitive” rule-the-world marketing engine after all…

So I was entirely taken aback this morning, when I received an email from a forum user that included a screen shot of a buxom young lady as part of an ad on my site. In fact, I rather suspected some proxy agent between his client and my server did some ad replacement or something – I mean who would think Google would put near-porn content on my technical forum?

Well it turns out that Google really would put near-porn on my technical forum.

It turns out that the advertiser in question is apparently working around the edges of Google’s policies – or may have crossed them and Google has just ignored it for a month or more – I really don’t know.

Thanks to help from fellow users of Google (not Google themselves – they are apparently too busy “doing no harm” to help us stop them from doing harm), I think I have the ads filtered out.

However, I no longer trust Google AdSense, and I wasted enough time on this to pretty much eat any income I’ve generated thus far from having their ads on my site… I figure I’ll leave their ads online in the hopes that Google gets a clue and addresses the issue – but next time something like this happens my “filter” solution will almost certainly be to just drop Google entirely, and look for a more reputable ad provider.

Monday, July 27, 2009 1:35:00 PM (Central Standard Time, UTC-06:00)
 Tuesday, July 21, 2009

I have released CSLA .NET version 3.7, with support for Silverlight 3.

This is really version 3.6.3, just tweaked slightly to build on Silverlight 3. It turns out that the Visual Studio upgrade wizard didn’t properly update the WCF service reference from Silverlight 2 to Silverlight 3, so I had to re-reference the data portal service. I also took this opportunity to fix a couple of minor bugs, so check the change logs.

  1. If you are a CSLA .NET for Windows user, there’s just one minor change to CslaDataProvider, otherwise 3.7 really is 3.6.3.
  2. If you are a CSLA .NET for Silverlight user, there are a couple minor bug fixes in CslaDataProvider and Navigator, and 3.7 builds with Silverlight 3. Otherwise 3.7 is 3.6.3.

In other words, moving from 3.6.3 to 3.7 should be painless.

Downloads are here:

At this point the 3.6.x branch is frozen, and all future work will occur in 3.7.x until I start working on 4.0 later this year.

Tuesday, July 21, 2009 9:45:08 AM (Central Standard Time, UTC-06:00)
 Sunday, July 19, 2009

Every year Magenic gives its employees a cool tech gift around the holidays. This year it was a Kindle 2, and so I’ve been using mine for several months now and have some thoughts.

On the upside, the Kindle allows me to bring a number of books with me everywhere I go. This is very nice, given how much I travel for work, and the fact that I do most of my reading while sitting in airplanes.

Also, I like the form factor of the device, including the size, shape and weight. And the screen is easy on the eyes. I really think I read faster on the Kindle than with a paper book because of the clarity, consistency and the nice way the device moves forward from page to page.

But there are some things I dislike too.

I like to share books. My wife and I have a respectable library in our home, and we’re constantly loaning books to friends. And borrowing books in return – it is a great way to interact socially and share common interests.

The Kindle entirely destroys the concept of book sharing. With a real book I spend $8-$25 to get something I can read and share with friends. With the Kindle I pay the same price, but only I can read the result. All I can do is tell my friends it was good, or not.

As a content creator, I suppose I should be cheering on the idea of books that can’t be shared, but I’m afraid I’m a reader first, and author second, and this is a really serious drawback for me. To the point that, since getting the Kindle, I’ve purchased a couple paper books because I know I’ll want to share them. Obviously I didn’t waste the money to buy them on the Kindle too, as that seems rather silly to me, so there I was back to lugging around paper books on the airplane.

The other major problem I have with the Kindle is the same one I have with buying music online.

While Amazon (and music vendors) portray the transaction as a “purchase”, it is really a “lease”.

I’ve lost several CDs worth of music over the years, as a music vendor went out of business, their licenses expired, and the music I “bought” was rendered unplayable. I’ve long since decided that I’ll never buy DRM “protected” music. Such a “purchase” is a hoax – a total scam. Personally I think it should be illegal to portray it as a purchase transaction (false advertising or whatever), but I guess we live in a laissez-faire enough world that it is up to each of us to get ripped off a few times before we rebel against the dishonest corporate entities.

I hadn’t really thought about the Kindle being in the same category until I read this recent article. It turns out that Amazon can yank your license to read a book if they desire. And of course it is true that if Amazon folds, or gets bored with the Kindle idea, that all the books I “purchased” will disappear.

With music I’ve been paying a monthly fee to the Zune service to lease access to all the music they have. And that makes me reasonably happy, because it is an honest, up-front transaction that is what it says it is. I get access to amazing amounts of music as long as I keep paying my lease. If I stop paying, or if the Zune service goes away, the music disappears. This makes me happy overall, and I view this as a reasonable value.

Maybe Amazon should do this with the Kindle too. Be more honest and up-front about the nature of the transaction and the relationship. Charge a monthly fee for access to their entire online library of books, and have the books disappear off the device if/when I stop paying the monthly fee.

Of course this still doesn’t solve the problem that I can’t share books I “purchase” with my friends; I don’t know of a good solution to that issue.

Sunday, July 19, 2009 5:35:26 PM (Central Standard Time, UTC-06:00)
 Friday, July 10, 2009

You’ve probably noticed that Microsoft released Silverlight 3 today. More information can be found at

I’ve put a very stable beta of CSLA .NET 3.7 online as well.

Version 3.7 is really the same code as 3.6.3, but it builds with the Silverlight 3 tools for Visual Studio 2008. So while I’m calling it a beta, it is really release-quality code for all intents and purposes.

If you are a Windows/.NET user of CSLA .NET, you’ll find that all future bug fixes, enhancements and so forth will be in version 3.7. Version 3.6.3 is the last 3.6 version.

If you are a Silverlight user of CSLA .NET, and continue to use the Silverlight 2 tools, 3.6.x is still for you, but I plan to address only the most critical bugs in that version – and I’m not currently aware of anything at that level. So 3.6.3 is really the end for Silverlight 2 as well.

In short, 3.7 is now the active development target for any real work, and you should plan to upgrade to it when possible.

Friday, July 10, 2009 11:46:31 AM (Central Standard Time, UTC-06:00)
 Wednesday, July 08, 2009

I’ve observed, in broad strokes, a pattern over the past 20 years or so.

At one time there was DOS (though I was a happy VAX/VMS developer at the time, and so didn’t suffer the pain of DOS).

Then there was Windows, which ran on DOS (sadly I did have to deal with this world).

Then we booted into Windows, which had a “DOS command prompt” window, so DOS ran on Windows (and the world got better).

Windows lasted a long time, but then came .NET, which ran on Windows.

I always expected that we’d see the time come when we’d boot into .NET, and Windows would run on .NET.

I no longer think that’s likely. Which is too bad, because that would have been cool.

But in the late 1990’s there was the very real possibility that the browser would become the OS. The dot-bomb crash eliminated that debate, robbing the browser-based companies of their funding and allowing Windows, Linux and Mac OS to continue to dominate.

In the past 2-3 years though, we’ve seen something different. Flash/Flex/Air, Silverlight and Google Gears have shown that the “browser as the OS” idea didn’t go away, it just went to sleep for a while.

Now I’ll suggest that there are really two camps here. There’s the Google camp that really wants the browser as the OS. And there’s the Adobe/Microsoft camp (can I put them together?) that realizes that the technology of the web is really a stack of hacks, and that it is high time to move on to the next big thing. My bias is clearly toward the next big thing, and has been since the web started being misused as an app platform in the 1990’s. They are proposing the mini-runtime-as-the-OS concept, using the browser as a simple launch-pad for the real runtime.

But either way, the concept of a “client OS” has been fading for some years now in my view. Or conversely, the concept of “all my apps run in ‘the browser’” has been ascending steadily and rapidly over the past 2-3 years.

And I’m OK with this, because we’re not talking about reducing all apps to stupid browser-based mainframe-with-color architectures like the web has tried to do for over a decade.

No! We’re talking about full-blown smart-client technologies like Silverlight, where the client application can be an edge application in a service-oriented system, or the client tier of an n-tier application. We’re talking about the resurgence of the incredible productivity and scalability offered by the n-tier technologies of 1997-98. We’re talking about getting our industry back on track, so we can look back at 1999-2009 as nothing more than a “lost decade”.

(can you tell I think the terminal-based browser model is really lame?)

Does this mean that Windows, Mac OS and Linux become less relevant? I think so. Does this mean that Silverlight is the most important technology Microsoft is currently building? I think so.

Consider that Silverlight, in a <5 meg runtime, provides everything you need to build most types of powerful smart-client business applications. And consider that these applications auto-deploy and auto-update in a way that the user never sees the deploy/update process. Totally smooth and transparent.

Consider that you can write your C# or VB code and run it on the client. Or on the server. Or both (see CSLA .NET for Silverlight if you want to know more). You don’t need to learn JavaScript. You don’t need a totally different language and runtime on the client from the server. Silverlight provides a decent level of consistency between client and server, so you can reuse code, and absolutely reuse skills, knowledge and tools.

Google is totally on the bandwagon of course. Many people have seen their strategy for months, if not years. And we’re now seeing it come through, as they start to make their browser-is-the-OS play in earnest.

What’s interesting to me is that either approach – the browser-as-OS (Google) or the mini-runtime-as-OS (Silverlight) – leaves the previous generation OS concept (Windows/Mac/Linux) out of the loop.

Windows 7, perhaps the best version of Windows ever created, might be the last great hurrah for the legacy OS concept.

I mean all we need now is a Silverlight-based version of Office (to compete with Google Apps) and the underlying “OS” beneath Silverlight becomes rapidly irrelevant...

Wednesday, July 08, 2009 10:45:44 AM (Central Standard Time, UTC-06:00)

As anyone who’s used VB any time after 1994 knows, dynamic languages are really powerful for certain problems. Of course it used to be called “late binding”, but the point is that you can write code against a type you don’t know at development time (or compile time).

This concept is all the rage now, and it is fun (imo) to be able to nod sagely and say “Yes, I’ve been saying this for 14 years now.”

Not that I’d say that to Chris, but it is really nice that he is now an “it getter”*:

http://www.sellsbrothers.com/news/showTopic.aspx?ixTopic=2290

But the concept is really, really powerful. And many of the related concepts and capabilities that have come in more recent languages, including VB 9, 10 and C# 4, really enhance the capabilities of “late binding” and open up entire new scenarios.

This is along the lines of the C# people who talk about the couple of features that are not in VB, and then dismiss XML literals as being of little or no value. That’s just ignorance talking. XML literals do for XML coding what dynamic languages do for using objects to abstract service/system boundaries.

I have been doing nearly all C# for over a year, but when I need to mess with XML I still create a VB project. Who’d choose to put themselves through the pain incurred in C# to do something that is so much simpler in VB?

In the end, my point is simple: don’t be a language bigot. Every language out there, VB, Python, C# and more, brings something unique and beautiful to the world. We can only hope that our language of choice (whatever that is for you) has an architect with an open mind, who is willing to bring the best ideas into each language…

(and yes, I hope that means things like XML literals, WithEvents and the Implements clause work their way into C# – because those concepts are really powerful and would benefit the language greatly)

Regardless, it is nice to see C# gaining “late binding”, which should provide great benefit to a lot of people.
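For the curious, the C# 4 version of “late binding” looks something like this; member resolution is deferred until runtime, so the same method works against any type that happens to expose the member:

```csharp
using System;

public static class LateBindingDemo
{
  // "dynamic" defers the .Length lookup until runtime, much like
  // classic VB late binding against an Object variable.
  public static int GetLength(dynamic value)
  {
    return value.Length;
  }
}
```

Pass in a string, an array, or any object with a Length property and it just works; pass in something without one and you get a runtime exception instead of a compile error, which is the classic late binding trade-off.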

 

* thank you Stephen Colbert

Wednesday, July 08, 2009 9:50:15 AM (Central Standard Time, UTC-06:00)
 Tuesday, July 07, 2009

If you’ve read the news in the last day or two, you’ve probably run across an article talking about an “unprecedented step” taken by Microsoft, in that they are talking about a Windows vulnerability before they have a patch or fix available.

When I read the article on msnbc.com it mentioned that there is a workaround (not a fix – but a way to be safer), and that information could be found on Microsoft’s web site.

So off I went to www.microsoft.com – Microsoft’s web site. Where I found nothing on the topic, but I did find a link to the security home page.

So off I went to the security home page. Where I found nothing that was obviously on the topic. Yes, there’s a lot of information there, including some information on viruses, infection attacks and an apparent rise in fake attacks (so I started wondering if MSNBC had been faked out?).

At no point in here did I realize that one of the articles on the security home page actually was the article I was looking for! It turns out that this particular vulnerability is through an ActiveX video component, a fact not mentioned in the MSNBC article. So while I saw information about such a thing on the Microsoft site, I had no way to link it to the vague mainstream press article that started this whole adventure…

Fortunately I know people :)

The vulnerability is an ActiveX video component issue. And the workaround is documented here:

http://support.microsoft.com/kb/972890

And now that I know I’m looking for information related to an ActiveX video component issue, it is clear that there are relevant bits of information on these sites too:

Microsoft Security Response Center blog:

http://blogs.technet.com/msrc/default.aspx

Microsoft TechNet Security alerts:

http://www.microsoft.com/technet/security/advisory/default.mspx

I still think the communication here is flawed. The mainstream press screwed up by providing insufficient and vague information, making it virtually impossible to find the correct documentation from Microsoft on the issue. But perhaps Microsoft was vague with the press too – hard to say.

And I think Microsoft could have been much more clear on their sites, providing some conceptual “back link” to indicate which bits of information pertain to this particular issue.

There’s no doubt in my mind that my neighbors, for example, would never find the right information based on the mainstream articles in the press. So Microsoft’s “unprecedented step” of talking about this issue will, for most people, just cause fear, without providing any meaningful way to address that fear. And that’s just sad – lowering technology issues to the level typically reserved for political punditry.

Tuesday, July 07, 2009 9:06:16 AM (Central Standard Time, UTC-06:00)
 Wednesday, July 01, 2009

I’ve updated my prototype MCsla project to work on the “Oslo” May CTP. The update took some effort, because there are several subtle changes in the syntax of “Oslo” grammars and instance data. What complicated this a little is that I am using a custom DSL compiler, because the standard mgx.exe utility can’t handle my grammar.

Still, I spent less than 8 hours getting my grammar, schemas, compiler and runtime fixed up and working with the CTP (thanks to some help from the “Oslo” team).

At this point I chose to put the MCsla project into my public code repository. You can use the web view to browse the various code elements if you are interested.

The prototype has limited scope – it supports only the CSLA .NET editable root stereotype, which means it can be used to create simple CRUD screens over single records of data. But even that is pretty cool I think, because it illustrates the end-to-end flow of the whole “Oslo” platform concept.

A business developer writes DSL code like this:

Object Product in Test
{
  Public ReadOnly int Id;
  Public string Name;
  Public double ListPrice;
} Identity Id;

(this is the simplest form – the DSL grammar also allows per-type and per-property authorization rules, along with per-property business and validation rules)

Then they run a batch file to compile this code and insert the resulting metadata into the “Oslo” repository.

The user runs the MCslaRuntime WPF application, which reads the metadata from the repository and dynamically creates a XAML UI, CSLA .NET business object and related data access object that talks to a SQL Server database.

[Figure: the MCslaRuntime WPF application displaying a dynamically generated Product edit screen]
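To make the flow concrete, here is a minimal sketch of the metadata-driven idea (Python for brevity; the real MCsla runtime is .NET, and all names here are invented for illustration, not the actual repository schema or runtime API): the runtime reads property metadata produced by the DSL compiler and builds a model from it, which then drives UI generation.

```python
from dataclasses import dataclass

# Hypothetical metadata rows, standing in for what the runtime
# would read from the "Oslo" repository after compiling the
# Product DSL code shown above.
METADATA = [
    {"name": "Id",        "type": int,   "readonly": True},
    {"name": "Name",      "type": str,   "readonly": False},
    {"name": "ListPrice", "type": float, "readonly": False},
]

@dataclass
class PropertyInfo:
    name: str
    type: type
    readonly: bool

def build_model(metadata):
    """Turn metadata rows into property descriptors, roughly as a
    runtime might before generating UI and business-object plumbing."""
    return [PropertyInfo(m["name"], m["type"], m["readonly"]) for m in metadata]

def editable_fields(model):
    # Only writable properties would get input controls in the generated UI;
    # read-only properties (like Id) would render as labels.
    return [p.name for p in model if not p.readonly]

model = build_model(METADATA)
print(editable_fields(model))  # ['Name', 'ListPrice']
```

Reloading the metadata and rebuilding this model is all a “Refresh App” action would need to do, which is why recompiled DSL code shows up in the UI without redeploying the runtime.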

The basic functionality you get automatically from CSLA .NET is all used by the runtime. This includes application of authorization, business and validation rules, automatic enable/disable for the Save/Cancel buttons based on the business object’s rules and so forth.

If the business developer “recompiles” their DSL code, the new metadata goes into the repository. The user can click a Refresh App button to reload the metadata, immediately enjoying the new or changed functionality provided by the business developer.

The point is that the business developer writes that tiny bit of DSL code instead of pages of XAML and C#. If you calculate the difference in terms of lines of code, the business developer writes perhaps 5% of the code they’d have written by hand. That 95% savings in effort is what makes me so interested in the overall “Oslo” platform story!

Wednesday, July 01, 2009 5:15:07 PM (Central Standard Time, UTC-06:00)