Rockford Lhotka's Blog


 Monday, March 26, 2007

I have put an early test version of CSLA .NET version 3.0, along with some updates to the ProjectTracker app, online for download at www.lhotka.net/cslanet/download.aspx.

At this time the code is C# only. I'll port to VB once I'm more comfortable that the code is stable. And that statement alone should reinforce that this is early test code!! :)

The CSLA framework code includes WCF and WPF support. I have not yet found a case where I've needed to do anything to CSLA itself to support WF - invoking workflows and using CSLA objects in activities works as-is. You can look at the change log document to see what I've done for WCF and WPF.
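
For reference, invoking a workflow from code follows the standard WF hosting pattern, which is all that "works as-is" really requires. Here is a minimal sketch; ProjectWorkflow and the "ProjectId" parameter are placeholder names for this example, not necessarily what the PTWorkflow project uses:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Workflow.Activities;
using System.Workflow.Runtime;

// Placeholder workflow type so the sketch compiles; the real PTWorkflow
// project defines its own workflow and activities.
class ProjectWorkflow : SequentialWorkflowActivity { }

class WorkflowHost
{
  static void Main()
  {
    using (var runtime = new WorkflowRuntime())
    {
      var done = new AutoResetEvent(false);
      runtime.WorkflowCompleted += (s, e) => done.Set();
      runtime.WorkflowTerminated += (s, e) => done.Set();

      // parameters are passed as a dictionary keyed by property name
      var args = new Dictionary<string, object> { { "ProjectId", Guid.NewGuid() } };
      WorkflowInstance instance =
        runtime.CreateWorkflow(typeof(ProjectWorkflow), args);
      instance.Start();

      done.WaitOne();   // wait for the workflow to finish
    }
  }
}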

The ProjectTracker folder now has a PTWpf project and a PTWorkflow project.

Neither of these is in the solution! That is because PTWpf is a March 2007 Orcas CTP project, so it won't load under VS 2005, and PTWorkflow requires the WF additions for VS 2005, which not everyone has installed. To put this another way: if you have the WF additions installed on VS 2005, you can open the workflow project, but to open the PTWpf project you need to be running the March 2007 Orcas CTP.

That said, if you want to play with the WPF forms under 2005, you can certainly use the xaml and cs files - just grab the bits you need and use them in whatever tool you are using (xamlpad, notepad, the Dec CTP or whatever). The PTWpf project isn't complete, but the ProjectList and ProjectEdit forms both work and illustrate a couple ways to interact with CSLA-style business objects for both read-only and read-write purposes.

If you do try the 3.0 version, please let me know of any issues you find. I'm actively working on this code, and appreciate any bug reports or other input!

Monday, March 26, 2007 12:18:02 PM (Central Standard Time, UTC-06:00)
 Thursday, March 15, 2007

I’ve been spending a lot of time in WPF-land over the past few weeks, and thought I’d share some of what I’ve learned. I haven’t been learning styles or UI layout stuff – Microsoft says that’s the job of the turtleneck-wearing, metrosexual GQ crowd, so I’ll just roll with that. Instead, I’ve been learning how to write data source provider controls, and implement Windows Forms-like behaviors similar to the ErrorProvider and my Csla.Windows.ReadWriteAuthorization control.

You know, manly programming :)

The data source provider control is, perhaps, the easiest thing I’ve done. Like ASP.NET, WPF likes to use a data provider control concept. And like ASP.NET, the WPF data provider controls are easy to create, because they don’t actually do all that much at runtime. (I do want to say thank you to Abed Mohammed for helping to debug some issues with the CslaDataProvider control!)

I’m sure that, as designer support for data provider controls matures in tools like Visual Studio and Expression Blend, life will get far more complex. Certainly the Visual Studio designer support for Csla.Web.CslaDataProvider has been the single most time-consuming part of CSLA .NET, even though I was able to create the runtime support in an afternoon…

What I have today is a Csla.Wpf.CslaDataProvider control that works similarly to the ObjectDataProvider. The primary difference is that CslaDataProvider understands how to call Shared/static factory methods to get your objects, rather than calling a constructor. The result is that you can create/fetch CSLA .NET business objects directly from your XAML.

The really cool part of this is that CslaDataProvider supports asynchronous loading of the data. To be fair, the hard work is done by the WPF base class, DataSourceProvider. Even so, supporting async is optional and requires a bit of extra work in the control – work that is worth it, though. If you construct a form that has multiple DataContext objects for different parts of the form, loading all of them asynchronously should give some nice performance benefits overall.
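
To make the factory-method idea concrete, here is a minimal sketch of a DataSourceProvider subclass that invokes a Shared/static factory method by name, optionally on a background thread. This is not the actual Csla.Wpf.CslaDataProvider code, and the property names (ObjectType, FactoryMethod, FactoryParameters, IsAsynchronous) are just illustrative choices for the sketch:

using System;
using System.Reflection;
using System.Threading;
using System.Windows.Data;

// Sketch of a provider that calls a static factory method instead of a
// constructor, and can complete the query asynchronously.
public class FactoryDataProvider : DataSourceProvider
{
  public Type ObjectType { get; set; }
  public string FactoryMethod { get; set; }
  public object[] FactoryParameters { get; set; }
  public bool IsAsynchronous { get; set; }

  protected override void BeginQuery()
  {
    if (IsAsynchronous)
      ThreadPool.QueueUserWorkItem(_ => DoQuery());   // load off the UI thread
    else
      DoQuery();
  }

  private void DoQuery()
  {
    object result = null;
    Exception error = null;
    try
    {
      // find and invoke the Shared/static factory method by name
      MethodInfo factory = ObjectType.GetMethod(
        FactoryMethod, BindingFlags.Public | BindingFlags.Static);
      result = factory.Invoke(null, FactoryParameters);
    }
    catch (Exception ex)
    {
      error = ex;
    }
    // hand the result (or error) back to the binding system
    base.OnQueryFinished(result, error, null, null);
  }
}

The real control is configured from XAML, but the factory invocation plus the BeginQuery/OnQueryFinished pattern is the essence of what is described here.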

On the other hand, if your form is bound to a single business object the value isn’t clear at all. Though the data load is async, the form won’t actually display until all the async loads are complete, so for a single data source on a form my guess is that async is actually counter-productive.

The validation/ErrorProvider support is based on some work Paul Stovell published on the web. My approach is conceptually similar: I have created a ValidationPanel control that uses IDataErrorInfo to determine whether any bindings of any controls contained in the panel are invalid.

The panel loops through all the controls contained inside it. On each control it loops through the DependencyProperty elements defined for that control (in WPF controls, normal properties aren’t bindable – only dependency properties are). It then loops through any Binding objects attached to each DependencyProperty. I discovered that those Binding objects are complex little buggers, and that there are different kinds of binding I need to filter out. Specifically, relative bindings and control-to-control bindings must be ignored, because they don’t represent a binding to the actual business object.
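
Here is a rough sketch of that binding walk, using only standard WPF APIs (this is not the actual ValidationPanel code): enumerate the child controls, enumerate each control's locally set dependency properties, pull out the BindingExpression for each, skip relative and element-name bindings, and then ask the IDataErrorInfo source whether the bound property is currently valid.

using System.ComponentModel;
using System.Windows;
using System.Windows.Data;

// Sketch of the binding walk described above.
static class BindingWalker
{
  public static bool HasInvalidBindings(DependencyObject root)
  {
    foreach (object child in LogicalTreeHelper.GetChildren(root))
    {
      var element = child as FrameworkElement;
      if (element == null) continue;

      // enumerate the dependency properties that have local values set
      LocalValueEnumerator properties = element.GetLocalValueEnumerator();
      while (properties.MoveNext())
      {
        BindingExpression expression =
          BindingOperations.GetBindingExpression(
            element, properties.Current.Property);
        if (expression == null) continue;

        Binding binding = expression.ParentBinding;

        // ignore bindings that don't point at the business object
        if (binding.RelativeSource != null ||
            !string.IsNullOrEmpty(binding.ElementName))
          continue;

        var source = expression.DataItem as IDataErrorInfo;
        if (source == null || binding.Path == null) continue;

        // a non-empty IDataErrorInfo result means the property is invalid
        string error = source[binding.Path.Path];
        if (!string.IsNullOrEmpty(error))
          return true;
      }

      // recurse into nested controls
      if (HasInvalidBindings(element))
        return true;
    }
    return false;
  }
}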

To optimize performance the panel does a bunch of caching of binding information, and is relatively sparing in how often it refreshes the validation data – but it is comparable to ErrorProvider, in that changing one business object property does trigger rechecking of all other properties bound within the same ValidationPanel. I think this is necessary, because so many data source objects are constructed around the Windows Forms model. Not following that model would cause a lot of headache when moving to WPF.

The authorization support is implemented in a manner similar to the validation. Csla.Wpf.AuthorizationPanel uses the CSLA .NET IAuthorizeReadWrite interface and scans all bindings for all controls contained in the panel to see if the business object will allow the current user to read or write to the bound property. If the user isn’t allowed to read the property, the panel either hides or collapses the data bound control (your choice). If the user isn’t allowed to write to the property, the panel sets the control’s IsReadOnly property to true, or if the control doesn’t have IsReadOnly, it sets IsEnabled to false.
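
Conceptually, the per-control work looks something like the sketch below. It assumes the CSLA .NET IAuthorizeReadWrite interface exposes CanReadProperty and CanWriteProperty methods and lives in Csla.Security, which matches the description above but should be verified against the framework itself; the reflection lookup for IsReadOnly is also just one way to express the fallback.

using System.Reflection;
using System.Windows;
using Csla.Security;   // IAuthorizeReadWrite (namespace assumed)

// Sketch of applying authorization to a single bound control - not the
// actual Csla.Wpf.AuthorizationPanel implementation.
static class AuthorizationHelper
{
  public static void Apply(
    FrameworkElement control,
    IAuthorizeReadWrite businessObject,
    string propertyName,
    bool collapseWhenUnreadable)
  {
    // hide or collapse the control if the current user can't read the property
    if (!businessObject.CanReadProperty(propertyName))
    {
      control.Visibility = collapseWhenUnreadable
        ? Visibility.Collapsed
        : Visibility.Hidden;
      return;
    }
    control.Visibility = Visibility.Visible;

    if (!businessObject.CanWriteProperty(propertyName))
    {
      // prefer IsReadOnly when the control has one (e.g. TextBox),
      // otherwise fall back to disabling the control
      PropertyInfo isReadOnly = control.GetType().GetProperty("IsReadOnly");
      if (isReadOnly != null && isReadOnly.CanWrite)
        isReadOnly.SetValue(control, true, null);
      else
        control.IsEnabled = false;
    }
  }
}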

AuthorizationPanel doesn’t include the same level of caching as ValidationPanel, but I don’t think it is necessary. Where ValidationPanel refreshes on every property change, AuthorizationPanel only refreshes if the data source object is changed (replaced) or if you explicitly force a refresh – probably because the current user’s principal object has changed.

I want to pause here and point out that I’ve had a lot of help in these efforts from Paul Czywczynski. He’s spent a lot of time trying these controls and finding various holes in my logic. And the next control addresses one of those holes...

The IsValid, IsDirty, IsSavable, IsNew and IsDeleted properties on a CSLA .NET business object are marked as Browsable(false), meaning they aren’t available for data binding. In Windows Forms you can work around this easily by handling a simple event on the BindingSource object. But in WPF the goal is to write no code – to do everything through XAML (or at least to make that possible), so such a solution isn’t sufficient.

Enter ObjectStatusPanel, which takes the business object’s status properties and exposes them in a way that WPF data binding can consume. Using this panel, your object’s status properties (for a single object or a collection) become available for binding to WPF controls, and if the object’s properties change, those changes are automatically reflected in the UI. The most common scenario is to bind a button’s enabled state to the IsSavable property of your editable root business object.
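
The underlying idea can be sketched as a simple bindable wrapper that re-exposes the status flags and raises PropertyChanged when the underlying object changes. This is only an illustration of the concept, not the actual ObjectStatusPanel, which does its work through the WPF data context rather than requiring a wrapper written in code:

using System.ComponentModel;

// Sketch: re-expose the object's status flags so XAML can bind to them
// (e.g. bind a Save button's IsEnabled to IsSavable).
public class ObjectStatus : INotifyPropertyChanged
{
  private readonly object _source;

  public ObjectStatus(object businessObject)
  {
    _source = businessObject;
    var npc = businessObject as INotifyPropertyChanged;
    if (npc != null)
      npc.PropertyChanged += (s, e) => OnStatusChanged();
  }

  public bool IsValid   { get { return GetFlag("IsValid"); } }
  public bool IsDirty   { get { return GetFlag("IsDirty"); } }
  public bool IsSavable { get { return GetFlag("IsSavable"); } }
  public bool IsNew     { get { return GetFlag("IsNew"); } }
  public bool IsDeleted { get { return GetFlag("IsDeleted"); } }

  private bool GetFlag(string name)
  {
    // read the same-named property from the business object, if it exists
    var property = _source.GetType().GetProperty(name);
    return property != null && (bool)property.GetValue(_source, null);
  }

  private void OnStatusChanged()
  {
    var handler = PropertyChanged;
    if (handler == null) return;
    // an empty property name tells WPF to refresh all bindings on this object
    handler(this, new PropertyChangedEventArgs(string.Empty));
  }

  public event PropertyChangedEventHandler PropertyChanged;
}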

Most recently I’ve started refactoring the code in these controls. For the most part, I’ve now consolidated the common code from all three into a base class: Csla.Wpf.DataPanelBase. This base class encapsulates the code to walk through and find relevant Binding objects on all child controls, and also encapsulates all the related event handling to detect when the data context, data object, object property, list or collection have changed. It turns out that a lot of things can happen during data binding, and detecting all of them means hooking, unhooking and responding to a lot of events.
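
To give a feel for how much notification plumbing that involves, here is a stripped-down sketch of the kind of event hooking a base class like this has to manage. It is not the actual DataPanelBase code, just an illustration of the moving parts: the data context can be replaced, and the object it points at can raise property or collection change notifications.

using System.Collections.Specialized;
using System.ComponentModel;
using System.Windows;

// Sketch of the event plumbing described above.
public class DataListener
{
  private object _dataObject;

  public DataListener(FrameworkElement panel)
  {
    // the data context itself can be replaced at any time
    panel.DataContextChanged += (s, e) => Rebind(e.NewValue);
    Rebind(panel.DataContext);
  }

  private void Rebind(object newData)
  {
    // unhook the old object before hooking the new one
    var oldNpc = _dataObject as INotifyPropertyChanged;
    if (oldNpc != null) oldNpc.PropertyChanged -= OnPropertyChanged;
    var oldNcc = _dataObject as INotifyCollectionChanged;
    if (oldNcc != null) oldNcc.CollectionChanged -= OnCollectionChanged;

    _dataObject = newData;

    var npc = _dataObject as INotifyPropertyChanged;
    if (npc != null) npc.PropertyChanged += OnPropertyChanged;      // property changes
    var ncc = _dataObject as INotifyCollectionChanged;
    if (ncc != null) ncc.CollectionChanged += OnCollectionChanged;  // list/collection changes

    Refresh();
  }

  private void OnPropertyChanged(object sender, PropertyChangedEventArgs e) { Refresh(); }
  private void OnCollectionChanged(object sender, NotifyCollectionChangedEventArgs e) { Refresh(); }

  private void Refresh()
  {
    // a real panel would re-run its validation/authorization scan here
  }
}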

I wrote all these controls originally using the Dec 2006 CTP, and just started using them in the Mar 2007 CTP. As a pleasant surprise there was no upgrade pain – they just kept working.

In fact, they work better in the March CTP, because the Cider designer (the WPF forms designer) is now capable of actually rendering my custom controls. What I find very interesting is that the designer actually runs the factory method of the CslaDataProvider control, so the form shows real data from the real objects right there in Visual Studio. I’m not sure this is a good thing, but that’s what happens.

There’s no doubt that I’ll find more issues with these controls, and they’ll change over the next few weeks and months.

But the exciting thing is that I’m now able to create WPF forms that have functional parity with Windows Forms, including validation, authorization and object status binding. And it can all be done in either XAML or code, running against standard CSLA .NET business objects.

People attending VS Live at the end of this month will be the first to see these controls in action – both in my workshop on Sunday the 25th and in my sessions during the week. And I plan to put a test version of CSLA .NET 3.0 online that week as well, so anyone who wants to play with it can give it a go.

Right now, if you aren’t faint of heart, you can grab the live code from my svn repository. Keep in mind, of course, that this is an active repository, so the code in trunk/ may or may not actually work at any given point in time.

Thursday, March 15, 2007 9:37:24 PM (Central Standard Time, UTC-06:00)
 Tuesday, March 13, 2007

String formatting in .NET is a pain. Not that it has ever been easy (even COBOL formatting masks can get out of hand), but there's no doubt that the .NET system is harder to grasp and remember than the VB 1-6 scheme...

I just had a need to format an arbitrary value using a user-supplied format string. You'd think that

obj.ToString(format)

would do the trick. Except that System.Object doesn't have that overload of ToString(), so it isn't a universal solution. String.Format() is the obvious next choice, except that I need to somehow take a format string like 'N' or 'd' and turn it into something valid for String.Format()...

Brad Abrams has some good info. But his problem/solution isn't quite what I needed. Close enough to extrapolate though:

outValue = string.Format(string.Format("{{0:{0}}}", format), value);

Given a format string of 'N', the inner Format() returns "{0:N}", which is then used by the outer Format() to format the actual value.
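
For example, wrapping the nested call in a small helper (the sample results assume a US English culture):

using System;

class FormatDemo
{
  // wrap the nested Format() call from above in a helper
  static string FormatValue(object value, string format)
  {
    return string.Format(string.Format("{{0:{0}}}", format), value);
  }

  static void Main()
  {
    Console.WriteLine(FormatValue(1234.5678, "N"));       // 1,234.57
    Console.WriteLine(FormatValue(19.99, "C"));           // $19.99
    Console.WriteLine(FormatValue(DateTime.Today, "d"));  // e.g. 3/13/2007
  }
}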

Tuesday, March 13, 2007 9:30:21 PM (Central Standard Time, UTC-06:00)

I've been working with the Visual Studio Orcas CTP from March 2007 for a few days now. My focus had been on WCF and WPF, and on ensuring my CSLA .NET 3.0 development worked in the new CTP.

Then earlier today I tried building a workflow. And the workflow designer wouldn't open. Instead I got a concise little error dialog saying "Microsoft.VisualStudio.Shell.WindowPane.GetService(System.Type)". This was the case with both C# and VB projects.

Worse, attempting to close VS after that point caused a complete VS crash. It turns out that other designers (like project properties) fail as well, with similar errors.

In talking to some people at Microsoft, I discovered that the problem wasn't universal. But after talking to more people, the root of the issue emerged.

I downloaded the huge 9-part VSTS/TFS edition of the Orcas VPC. Other people downloaded the smaller 7-part VSTS-only edition. The problem only occurs in the big 9-parter, and is due to a side-by-side issue. Some VS 2005 components are included in the bigger VPC to support SQL Server, and they are causing the issue. The 7-part VPC doesn't have that functionality, or those components, so the SxS problem doesn't occur.

So I'm about 80% done downloading the 7-parter; then I can get back to work.

Tuesday, March 13, 2007 9:15:14 PM (Central Standard Time, UTC-06:00)

So here’s a good question: is computer science dead?

I have a degree in computer science, but a large number of people (probably a majority) I’ve worked with over the years have not had such a degree. Instead, they’ve had certificates from tech schools or they migrated to computing from other disciplines like astrophysics, literature, theatre, etc.

Our industry really attracts an eclectic group of people. Personally I think this is because some people “get it” and some don’t. Education is incredibly helpful, but that ephemeral mental twist that allows some people to grok programming can’t be taught, and without that mental twist a person can only go so far – no matter how much education they have. At the same time, people with that mental twist often find their progress slowed by a lack of education, because they waste time rediscovering solutions to problems solved long ago.

Still, the question is interesting: is CS dead? Certainly it is in trouble. Enrollment in university CS programs is down across the board.

But I think the bigger question is whether CS has remained relevant. And if not, can it become relevant again?

Computer Science, as a discipline, is really only useful if it pushes the boundaries and advances our understanding of the science. I was recently privileged to attend Microsoft’s annual TechFest event. This is an event where Microsoft Research shows off their stuff to mainstream Microsoft – to the product groups. Because this is their 15th anniversary, they invited some media and other special guests as well.

Before going further here, it is important to realize that Microsoft Research is the single largest provider of CS research funding, and they fund groups in several locations and universities around the planet. While there are obviously many smart people not funded by MSR, some of the smartest researchers out there are funded by MSR.

Some of what they showed was really cool. Some of it, however, was barely at the level of what you’ll find from vendors in industry. (I’m bound by NDA, so I can’t give specifics – sorry)

In other words, the top academic minds are, in many cases, barely keeping up with the top industry minds who are building salable products. And to be fair, some of the top academic minds are exploring things that are barely even on the radar in the mainstream industry.

To me, this is the crux of the matter though: CS can’t be relevant if it is merely keeping up with industry. If product teams at component vendors or product groups in Microsoft are at or ahead of the researchers, then the researchers are wasting their time.

Consider the idea that a Microsoft product team could be working on a very cool set of functionality. This functionality involves a set of new language features, some runtime capabilities and directly addresses a very important pain point we all face today, and one that will get rapidly more painful over the next 2-5 years. Suppose, for grins, that they even have a working prototype that demonstrates the basic viability of this solution. One that’s robust enough for serious experimentation in development scenarios you or I might be involved with.

Then consider a bunch of researchers, who are working on basically the same issue. They’ve undoubtedly got a lot of thinking and formalization going on that’s hard to quantify. As a result of that analysis they have a prototype that demonstrates some of their thinking. One that can only be run by one of the researchers, and which is too incomplete for broad experimentation.

In other words, these researchers are behind a non-research group.

Now for all I know, the researchers will ultimately come up with a better, more complete solution to the problem. But if they come up with that solution after the product group ships the product then it is rather too little, too late.

My point is this: I think CS is in trouble because academia has lost track of where the industry has gone and is going. As I noted in my last post, the rate of change in our industry is incredible, and it is accelerating. While this may have some negative side-effects, there’s no doubt that it has a lot of positive side-effects as well, and that we certainly live in exciting times!

I do think, like the author of the original article, that one of the negative side-effects at the moment is that Computer Science is struggling to remain relevant in the face of this rate of change. Too many of them are teaching outdated material, and are researching problems that have already been solved.

However, I don’t see a collapse of CS as a foregone conclusion. Nor do I see it as a good thing. The author of the original article appears to suggest that it is a good thing for CS departments to stop focusing on programming, and rather to become focused on “interdisciplinary studies” to better promote the needs of IT organizations. You know, the remnants of what’s left after all the fun work moves to India, then China and then wherever-is-cheapest-next.

But I see it differently. If the technology we have today is the best mankind can achieve, then that is really depressing. And let’s face it, industry-based research is pragmatic. In my example earlier, the industry-based team is solving a very real problem we face today, and they are doing it in response to a market need.

Effective research, however, must address problems we don’t even know we have. Problems industry won’t solve because there’s no market need – at least not yet.

I absolutely reject the idea that computer scientists can abdicate the responsibility to move the science forward. Or that we’ve somehow reached the pinnacle of human achievement in computing.

Worse, from an American nationalistic perspective, this sort of attitude is very dangerous (fortunately the author is British, not American :) ). Why? Because our economists tell us that it is a good thing that the fun work is being outsourced, because it gives us more time and energy to “innovate” (and allows us to buy super-cheap tube socks and plastic toys). Well, in computing that “innovation” comes through research.

In other words, for the US itself to remain relevant, we must ensure that we have a vibrant Computer Science focus, and that our researchers really are pushing the envelope. Because if innovation is all we have left, then we’d damn well better do a great job with it!

The bar has been raised. For CS to remain relevant, computer scientists need to recognize this fact and step up to meet the new expectations.

Tuesday, March 13, 2007 9:59:42 AM (Central Standard Time, UTC-06:00)
 Sunday, March 11, 2007

As I’ve mentioned before, my mother is battling cancer. Due to this, I’ve spent more time in hospitals over the past few months than I have in my life before now. And in so doing, I’ve drawn some scary conclusions about where our industry may be headed.

When I got into the industry 20 years ago (yes, it’s true…), it was quite realistic to think that a hot-shot programmer could be an expert in everything they touched.

In my case, I got hired to write software on a DEC VAX using VAX Basic. Now, I was a dyed-in-the-wool Pascal fanatic, and the idea of using “Basic” was abhorrent to me. But not as abhorrent as not having a job (this was during the Reagan-era recession, after all), and besides, I loved the VAX and this was a VAX job.

What I learned was that VAX Basic wasn’t “Basic” at all. It was a hybrid between FORTRAN (the native VAX language) and Pascal. A couple of months later, I was an expert not only in the VMS operating system, but in this new language as well. It was my sixth programming language, and after a while learning a new language just isn’t that hard.

My point is this: as a junior (though dedicated) programmer, I knew pretty much everything about my operating system and programming language. For that matter, I was versed in the assembly language for the platform too, though I didn’t need to use it.

Fast-forward 8 years to 1995: the beginning of the end. In 1995 it was still possible for a dedicated programmer to know virtually everything about their platform and language. I’d fortunately anticipated the fall of OpenVMS some years earlier, and had hitched my wagon to Windows NT and Visual Basic. Windows NT was, and is, at its core, the same as OpenVMS (same threading, same virtual memory, etc.) and so it wasn’t as big a shift as it could have been. The bigger shift was from linear/procedural programming to event-driven/procedural programming…

But, in 1995 it was quite realistic to know the ins and outs of VB 3.0, and to fully understand the workings of Windows NT (especially if you read Helen Custer’s excellent book). The world hadn’t changed in any substantive way, at least on the surface, from 1987.

Beneath the surface the changes were happening however. Remote OLE Automation arrived with VB 4.0, rapidly followed by real DCOM. SQL Server appeared on the scene, offering an affordable RDBMS and rapidly spreading the use of that technology.

(Note that I’d already worked with Oracle at this point, but its adoption was restrained by its cost. Regardless of the relative technical merits of SQL Server, its price point was an agent for change.)

And, of course, the HTTP/WAIS/Gopher battle was resolved, with HTTP the only protocol left standing. Not that anyone really cared a lot in 1995, but the seeds were there.

Now step forward just 3 years, to 1998. Already, in 1998 it was becoming difficult for even a dedicated developer to be an expert in everything they used. Relational databases, distributed programming technologies, multiple programming languages per application, the rapidly changing and unreliable HTML and related technologies. And at the same time, it isn’t like Windows GUI development went away or even stood still either.

Also at this time we started to see the rise of frameworks. COM was the primary instigator – at least in the Microsoft space. The fact that there was a common convention for interaction meant that it was possible for people to create frameworks – for development, or to provide pre-built application functionality at a higher level. And those frameworks could be used by many languages and in many settings.

I smile at this point, because OpenVMS had a standardized integration scheme in 1987 – a concept that was lost for a few years until it was reborn in another form through COM. What comes around, goes around.

The thing is, these frameworks were beneficial. At the same time, they were yet another thing to learn. The surface area of technology a developer was expected to know now included everything about their platform and programming tools, and one or more frameworks that they might be using.

Still, if you were willing to forego having friends, family or a real life, it was technically possible to be an expert in all these things. Most of us started selecting subsets though, focusing our expertise on certain platforms, tools and technologies and struggling to balance even that against something resembling a “normal life”.

Now come forward to today. What began in 1995 has continued through to today, and we're on the cusp of some new changes that add even more complexity.

Every single piece of our world has grown. The Vista operating system is now so complex that it isn’t realistic to understand the entire platform – especially when we’re expected to also know Windows XP and Server 2003 and probably still Windows 2000.

For many of us, the .NET Framework has replaced the operating system as a point of focus. Who cares about Windows when I’ve got .NET? But the .NET Framework now contains well over 10,000 classes and some totally insane number of methods and properties. It is impractical to be an expert on all of .NET.

Below the operating system and .NET, the hardware is undergoing the first meaningful change in 20 years: from single processor to multiple processors and/or cores. Yes, I know multiprocessor boxes have been around forever – our VAX was dual CPU in 1989. But we are now looking at desktop computers having dual core standard. Quad core within a couple years.

(I know most people went from 16 to 32 bits – but the VAX was 32 bit when I started, and 64 bit when I moved to Windows, so I can’t get too excited over how many bits are in a C language int. After you've gone back and forth on the bit count a couple times it doesn't seem so important.)

But this dual/quad processor hardware isn’t uniform. Dual processor means separate L1/L2 caches. Dual core means separate L1, but sometimes combined L2 caches. AMD is working on a CPU with a shared L3 cache. And this actually matters in terms of how we write software! Write your software wrong, and it will run slower rather than faster, thanks to cache and/or memory contention.

(This sort of thing, btw, is why understanding the actual OS is so important too. The paging model used by OpenVMS and Windows can have a tremendous positive or negative impact on your application’s performance if you are looping through arrays or collections in certain ways. Of course Vista changes this somewhat, so yet again we see expansion in what we need to know to be effective…)
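
As a concrete (if simplified) illustration of the loop-order point, the two loops below do identical work, but the first walks the array in the order it is laid out in memory while the second strides across rows, which typically costs far more cache misses and page faults on a large array. Actual timings depend entirely on the hardware and array size:

using System;
using System.Diagnostics;

class LoopOrderDemo
{
  // C# rectangular arrays are stored in row-major order, so the inner
  // loop should walk the last index to touch memory sequentially.
  static void Main()
  {
    const int size = 4000;
    var data = new int[size, size];

    var watch = Stopwatch.StartNew();
    for (int row = 0; row < size; row++)        // row-major: sequential access
      for (int col = 0; col < size; col++)
        data[row, col] = 1;
    Console.WriteLine("row-major:    {0} ms", watch.ElapsedMilliseconds);

    watch = Stopwatch.StartNew();
    for (int col = 0; col < size; col++)        // column-major: strides a full row per step
      for (int row = 0; row < size; row++)
        data[row, col] = 1;
    Console.WriteLine("column-major: {0} ms", watch.ElapsedMilliseconds);
  }
}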

At least now we only have one real programming language: VB/C# (plus or minus semi-colons), though that language keeps expanding. In .NET 2.0 we got generics, in 3.5 there’s a whole set of functional language additions to support LINQ. And I’m not even mentioning all the little tweaky additions that have been added here and there – like custom events in VB or anonymous delegates in C#.

And how many of us know the .NET “assembly language”: CIL? I could code in the VMS macro assembly language, but personally I struggle to read anything beyond the simplest CIL…

I could belabor the point, but I won’t. Technology staples like SQL Server have grown immensely. Numerous widely used frameworks and tools have come and gone and morphed and changed (Commerce Server, SharePoint, Biztalk, etc).

The point is that today it is impossible for a developer to be an expert in everything they need to use when building many applications.

As I mentioned at the beginning, I’ve been spending a lot of time in hospitals. And so I’ve interacted with a lot of nurses and doctors. And it is scary. Very, very scary.

Why?

Because they are all so specialized that they can’t actually care for their patients. As a patient, if you don’t keep track of what all the specialists say and try to do to you, you can die. An oncologist may prescribe treatment A, while the gastro-intestinal specialist prescribes treatment B. And they may conflict.

Now in that simple case, the specialists might (might!) have collaborated. But if you are seriously ill, you can easily have 4-8 specialists prescribing treatments for various subsystems of your body. And the odds of conflict are very high!

In short, the consumer (patient) is forced to become their own general physician or risk serious injury or death at the hands of the well-meaning, but incredibly-specialized physicians surrounding them.

And I think this is where our industry is headed.

I know people who are building their entire career by specializing on TFS, or on SharePoint Server, or on SQL Server BI technologies, or Biztalk. And I don’t blame them for a second, because there’s an increasing market demand for people who have real understanding of each of these technologies. And if you want to be a real expert, you need to give up any pretense of expertise in other areas.

The same is true with the Windows/Web bifurcation. And now WPF is coming on the scene, so I suppose that’s a “trifurcation”? (apparently that’s a real word, because the spell checker didn’t barf!)

What amazes me is that this insane explosion in complexity has occurred, and yet most of my customers still want basically the same thing: to have an application that collects and processes data, generates some reports and doesn’t crash.

But I don’t think this trend is reversible.

I do wonder what the original OO people think. You know, the ones who coined the “software crisis” phrase 20 or so years ago? Back then there was no real crisis – at least not compared to today…

So what does this mean for us and our industry?

It is an incredible shift for an industry that is entirely built on generalists. Companies are used to treating developers like interchangeable cogs. We know that’s bad, but soon they will know it too! They’ll know it because they’ll need a bigger staff, and one that has more idle time per person, to accomplish things a single developer might have done in the past.

Consulting companies are built on utilization models that pre-suppose a consultant can fill many roles, and can handle many technology situations. More specialization means lower utilization (and more travel), which can only result in higher hourly rates.

Envision a computer industry that works like the medical industry. “Developers” employed in corporate settings become general practitioners: people who know a little about a lot, and pretty much nothing about anything. Their role is primarily to take a guess about the issue and refer the customer to a specialist.

These specialists are almost always consultants. But those consultants have comparatively low utilization. They focus on some subset of technology, allowing them to have expertise, but they’re largely ignorant of the surrounding technologies. Because their utilization is low, they are hopping from client to client, never focused entirely on any one – and this reduces their efficiency. Yet they are experts, and they are in demand, so they command a high hourly rate – gotta balance the lower hours somehow to get the same annual income…

Many projects require multiple specialists, and the consumer is the only one who knows what all is going on. Hopefully that “general practitioner” role can help – but that’s not the case in the medical profession. So we can extrapolate that in many cases it is the end consumer, the real customer, that must become tech-savvy enough to coordinate all these specialists as they accidentally conflict with each other.

Maybe, just maybe, we can head this off somewhat. Maybe, unlike in the medical industry, we can develop generalists as a formal role. Not that they can directly contribute much to the effort, but at least the people in this role can effectively coordinate the specialists that do the actual work.

Just think. If each cancer patient had a dedicated general practitioner focused on coordinating the efforts of the various specialists, how much better would the results be? Of course, there’s that whole issue of paying for such a dedicated coordinator – someone whose continuing education and skills would have to be extraordinary…

Then again, maybe I’m just being overly gloomy and doomy. Maybe we’ll rebel against the current ridiculous increases in complexity. Maybe we’ll wake up one day and say “Enough! Take this complexity and shove it!”

Sunday, March 11, 2007 12:01:25 AM (Central Standard Time, UTC-06:00)
 Thursday, March 08, 2007

A few months ago I blogged about Paul Sheriff's experiment with selling content on the web, through something he calls the Inner Circle.

Like all such experiments, there are lessons to be learned. And any good experiment will evolve over time based on feedback and observation. Recently, Paul changed his pricing model from a recurring subscription, to a one-time fee for a lifetime membership. While I know a fair number of people did go for the subscription model, I think Paul decided to continue the experiment by trying a different model, and this is it.

I know Paul recently put a framework for accessing data, configuration settings, cryptography, exception handling and key management online there, so he's not only doing articles and webcasts, but is providing code and components as well.

If you were leery of the recurring subscription model, you might want to take another look and see if this new model fits you better.

Thursday, March 08, 2007 11:23:57 PM (Central Standard Time, UTC-06:00)
 Friday, March 02, 2007

Dunn Training has done a great job with the three-day CSLA .NET training class, and the feedback so far has been very positive!

Due to the success of the training, Dunn is taking the training on the road, bringing it to a number of cities across the US, including:

  • Boston
  • Minneapolis
  • San Francisco
  • Orlando
  • Washington, DC
  • Dallas
  • Seattle
  • San Diego
  • Las Vegas

Click here for all the information.

"I came to this class with two things, a very large problem to solve and a very modest knowledge of CSLA.  What I left with was a solution to the problem using CSLA and enough knowledge to create a prototype on my airplane trip home.  This class was worth every penny and every minute of time.  The instructor, Miguel Castro, was awesome.  DUNN Training Rocks!!"

Leah Bialic, Cambridge Soft

"The approach taken with this class was refreshing. Instead of the typical instructor/student environment, I felt like I was in a room of colleagues working together to learn; this was a definite plus.  I rate this class 10 out of 10.  Overall, this class was everything I needed.  Miguel's knowledge and passion for CSLA made it worth every penny."

Alan Gamboa, CCG Systems

"This class provides a wonderful, concrete clarification to the CSLA .Net Framework."

Theo Moore, Magenic

"This class was a huge help. Definitely money well spent. "

Chris Williams, Magenic

Friday, March 02, 2007 3:03:54 PM (Central Standard Time, UTC-06:00)