Rockford Lhotka's Blog


 Tuesday, December 21, 2004

In recent SOA-related posts I’ve alluded to this, but I wanted to call it out directly. SOA is founded on the principle of passive data. Dima Maltsev called this out in a recent comment to a previous blog entry:

 

In one of the posts Rockford is saying that SOA doesn't address the need of moving data and also the logic associated with that data. I think this is one of the main ideas of SOA and I would rather state it positively: SOA wants you to separate the logic and data.

 

Yes, absolutely. SOA is exactly like structured or procedural programming in this regard. All those concepts we studied way back in the COBOL/FORTRAN days while drawing flowcharts and data flow diagrams are exactly what SOA is all about.

 

Over the past 20-30 years two basic schools of thought have evolved. One says that data is a passive entity; the other attempts to embed data into active entities (often called objects or components).

 

Every now and then someone tries to merge the two concepts, providing a scheme by which sometimes data is passive, while other times it is contained within active entities. Fowler’s Data Transfer Object (DTO) design pattern and my CSLA .NET are examples of recent efforts in this space, though we are hardly alone.

 

But most people stick with one model or the other. Mixing the models is complex and typically requires extra coding.

 

Thus, most software is created using either a passive data model or an active entity model, but not both. And the reality is that the vast majority of software uses a passive data model.

 

In my speaking I often draw a distinction between data-centric and object-oriented design.

 

Data-centric design is merely a variation on procedural programming, with the addition of a formalized data container of some sort. In some cases this is as basic as a 2-dimensional array, but in most cases it is a RecordSet, ResultSet, DataSet or some variation on that theme. In .NET it is a DataTable (or a collection of DataTables in a DataSet).

 

The data-centric model is one where the application goes to the database and retrieves data. The data in the database is passive, and when the application gets it, the data is in a container – say a DataTable. This container is passive as well.
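To make that concrete, here is a minimal sketch of the data-centric model in C# (the form, the query, the connection string and the range check are all hypothetical):

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Windows.Forms;

    public class CustomerForm : Form
    {
        private string _connectionString = "..."; // hypothetical
        private DataTable _customers;

        private void LoadButton_Click(object sender, EventArgs e)
        {
            // retrieve passive data into a passive container
            SqlDataAdapter da = new SqlDataAdapter(
                "SELECT Id, Name, CreditLimit FROM Customers",
                _connectionString);
            _customers = new DataTable("Customers");
            da.Fill(_customers);

            // any real business rule lives out here in the UI code --
            // the actor -- not in the DataTable, which is merely a vessel
            foreach (DataRow row in _customers.Rows)
            {
                if ((decimal)row["CreditLimit"] < 0)
                    MessageBox.Show("Bad credit limit for " + row["Name"]);
            }
        }
    }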

 

I hear you arguing already. “Both the database and DataTable make sure certain columns are numeric!”, or “They both make sure the primary key is unique!”. Sure, that is true. Over the past decade some tiny amount of intelligence has crept into our data containers, but nothing really interesting. Nothing that makes sure the number in the column is a good number – that it is in the right range, or that it was calculated with the right formula.

 

Anything along the lines of validation, calculation or manipulation of data occurs outside the data container. That outside entity is the actor, the data container is merely a vessel for passive data.

 

And that’s OK. That works. Most software is written this way, with the business logic in the UI or a function library (or maybe a rules engine), acting against the data.

 

The problem is that most people don’t recognize this as procedural programming. Since the DataSet is an object, and your UI form is an object, the assumption is that we’re object-oriented. Thus we don’t rigorously apply the lessons learned back in the FORTRAN days about how to organize our code into reusable procedures and organize those procedures into function libraries.

 

Instead we plop the code into the UI behind button clicks and key press events. Any procedural organization is a token effort, unorganized and informal.

 

Which is why I favor an active entity approach – in the form of object-oriented business entity objects.

 

In an active entity model, data is never left out “on its own” in a passive state. Well, except for when it is in the database, because we’re stuck using RDBMS databases and the passive data concept is so deeply embedded in that technology it is hopeless…

 

But once the data comes out of the database it is in an active entity. Again, this is typically an entity object, designed using object-oriented concepts. The primary point here is that the data is never exposed in a raw form. It is never passive. Any external entity using the data can count on the data being validated, calculated and manipulated based on a consistent set of rules that are included with the data.
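Here is a minimal sketch of what I mean, with a hypothetical Customer entity (the rules themselves are invented for illustration):

    using System;

    public class Customer
    {
        private string _name = string.Empty;
        private decimal _creditLimit;

        public string Name
        {
            get { return _name; }
            set
            {
                // the validation rule travels with the data
                if (value == null || value.Length == 0)
                    throw new ArgumentException("Name is required");
                _name = value;
            }
        }

        public decimal CreditLimit
        {
            get { return _creditLimit; }
            set
            {
                // every consumer gets the same rule; the data is never raw
                if (value < 0 || value > 100000)
                    throw new ArgumentException("Credit limit out of range");
                _creditLimit = value;
            }
        }
    }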

 

In this model we avoid putting the logic in the UI. We avoid the need to create procedures and organize them into function libraries like in the days of COBOL. Instead, the logic is part and parcel with the data. They are one.

 

Most people don’t take this approach. Historically it has required more coding and more effort. With .NET 1.x some of the overhead is gone, since basic data binding to objects is possible. However, there’s still the need to map the objects to/from the database and that is certainly extra effort. Also, the data binding isn’t on a par with that available for the DataSet.

 

In .NET 2.0 the data binding of objects will be on a par (or better than) binding with a DataSet, so that end of things is improving nicely. The issue of mapping data to/from the database remains, and appears that it will continue to be an issue for some time to come.
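As a rough sketch of where that’s headed, reusing the hypothetical Customer entity from above (BindingSource, BindingList<T> and DataGridView are among the new 2.0 binding types):

    using System.ComponentModel;
    using System.Windows.Forms;

    public class CustomerListForm : Form
    {
        private DataGridView _grid = new DataGridView();
        private BindingSource _source = new BindingSource();

        public CustomerListForm()
        {
            // bind directly to business objects rather than to a DataSet
            _source.DataSource = new BindingList<Customer>();
            _grid.DataSource = _source;
            Controls.Add(_grid);
        }
    }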

 

In any case, along comes SOA. SOA is all about active entities sending messages to each other. When phrased like that it sounds almost object-oriented, but don’t be fooled.

 

The active entities are procedures. Each one is stateless, and each one is defined by a formal contract that specifies the name of the procedure, the parameters it accepts and the results it returns.
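In asmx terms, for example, a service-entity is little more than this (all names invented for illustration):

    using System.Web.Services;

    public class OrderRequest { public string ProductId; public int Quantity; }
    public class OrderResult { public bool Accepted; }

    public class OrderService : WebService
    {
        // the contract: a name, a message in, a message out -- no state
        [WebMethod]
        public OrderResult PlaceOrder(OrderRequest request)
        {
            OrderResult result = new OrderResult();
            result.Accepted = request.Quantity > 0;
            return result;
        }
    }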

 

Some people will argue that they aren’t stateless. Indigo, for instance, will allow stateful entities just like DCOM and Remoting do today. But we all know (after nearly 10 years’ experience with DCOM) that stateful entities don’t scale and don’t lead to reliable systems. So if you really want to have an unscalable and unreliable solution then go ahead and use stateful designs. I’ll be over here in stateless land where things actually work. :-)

 

The point being, these service-entities are not objects, they are procedures. They accept messages. Message is just another word for data, so they are procedures that accept data.

 

The data is described by a schema – often an XSD. This schema information has about the same level of “logic” as a database or DataSet – which is to say it is virtually useless. It can make sure a value is numeric, but it can’t make sure the number is any good.
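For instance, a schema fragment along these lines (hypothetical) can constrain the type and range of a value, but it has no idea whether a given quantity makes business sense for this customer and this product:

    <xs:element name="Quantity">
      <xs:simpleType>
        <xs:restriction base="xs:int">
          <xs:minInclusive value="1"/>
          <xs:maxInclusive value="100"/>
        </xs:restriction>
      </xs:simpleType>
    </xs:element>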

 

So Dima is absolutely correct. SOA is all about separating the data (messages) from the logic (procedures aka services).

 

Is this a good thing? Well that’s a value judgement.

 

Did you like how procedural programming worked the first time around? Do you like how data-centric (aka procedural) programming works in VB or PowerBuilder? If so, then you’ll like SOA – at least conceptually.

 

Or do you like how object-oriented programming works? Do you appreciate the consistency and centralized nature of having active entities that wrap your data at all times? Do you feel that the extra effort of doing OO is worth it to gain the benefits? If so, then you’ll probably balk at SOA.

 

Note that I say that people happy with procedural programming will conceptually like SOA. That’s because there’s always the distributed parallel thing to worry about.

 

SOA is inherently a distributed design approach. Yes, you might configure your service-entities to all run on the same machine, or even in the same process. But the point of SOA is that the services are location-transparent. You don’t know that they are local, and at some point in the future they might not be. Thus you must design under the assumption that each call to a service is bouncing off two satellites as it goes half-way around the planet and back.

 

SOA (as described by Pat Helland at least) is also inherently parallel. Calls to services are asynchronous. Your proxy object might simulate a synchronous call, so you don’t even realize it was async. But a core idea behind SOA is that of transport-transparency. You don’t know how your message is delivered. Is it via web services? Is it via a queue? Is it via email? You don’t know. You aren’t supposed to care.

 

But even procedural aficionados may not be too keen to program in a distributed parallel environment. The distributed part is complex and can be slow. The parallel part is complex and, well, very complex.

 

My prediction? Tools like Indigo will avoid the whole parallel part and it will never come to pass. The idea of transport-transparency (or protocol-transparency) will go the way of the dodo before SOA ever becomes truly popular.

 

The distributed part of the equation can’t really go away though, so we’re kind of stuck with that. But we’ve dealt with that for nearly 10 years now with DCOM, so it isn’t a big thing.

 

In the end, my prediction is that SOA will fade away as people realize that the reality is that we’ll be using it to do exactly what we did with DCOM, RMI, IIOP and various other RPC technologies.

 

The really cool ideas – the ones with the power to be emergent – are already fading away. They aren’t included in Indigo (at least not in the forefront), and they’ll just slip away like a bright and shining dream and we’ll be left with a cool new way to do RPC.

 

And RPC is fundamentally a passive data technology. Active entities (procedures) send passive data between them – often in the form of arrays or more sophisticated containers like a DataSet or perhaps an XML document with an XSD. It is all the same stuff with tiny variations. Like ice cream – chocolate, vanilla or strawberry, it is all cold and fattening.

 

So welcome to the new world, same as the old world.

Tuesday, December 21, 2004 8:27:28 PM (Central Standard Time, UTC-06:00)

One of the areas where the non-Microsoft world has long criticized Microsoft is in their use of a closed bug and issue tracking system. The open-source world, in particular, claims that having a public submission process is one of their key benefits.

With Visual Studio 2005 and .NET 2.0, Microsoft has launched what is apparently a little-publicized and little-known public submission web site for bugs and suggestions. It was code-named Ladybug, and is now called the Product Feedback Center.

Not that it is totally unheard of, or unused. Ladybug is rumored to be the primary motivation in C# getting edit-and-continue. E&C wasn't even on the C# feature list, but large numbers of ex-VB developers who are now enjoying semi-colons apparently voted E&C to the top of the Ladybug wish list, putting pressure on Microsoft to add the feature.

True story? I don't know, but that's the rumor.

The point being, Microsoft pays attention to the stuff on this site. If you have a bug or issue in VB, C#, ASP.NET or whatever you should make sure to report it on Ladybug!!!

Tuesday, December 21, 2004 5:12:23 PM (Central Standard Time, UTC-06:00)
 Saturday, December 18, 2004

I was relaxing today, visiting with my wife and she read me the text from this link. It got me thinking… (and yes, I am way too geeky for my own good sometimes ;-) )

 

The ancient Egyptians had a pretty solid understanding of decomposition into object-oriented models à la CRC-style analysis.

 

(Stargate SG-1’s general misuse of the goddess Maat notwithstanding, and totally ignoring the real world speculation that Maat was an alien)

 

Look at the description of the goddess Maat’s role. She has a very clearly delineated and focused responsibility. As an object, Maat’s job is to measure justice vs evil. Arguably this is the job of the scale, but given that there’s a goddess involved, I’m giving more weight (pun intended) to her than to the inanimate scale.

 

She doesn’t decide what to do about evil, or how to mete out justice. This is not her role.

 

Osiris, on the other hand, doesn’t measure justice or evil, nor does he perform punishment. His responsibility is also clearly delineated as a single “business rule” wherein he passes the Judged off to the appropriate “handler”.

 

To do this he collaborates with Maat (and technically with the scale) and with the Judged. He has a using relationship with Ahemait, where he passes the Judged if they fail to meet the business requirement by having an excess quantity of evil (as measured by Maat).

 

The fact that the Egyptians realized that justice had two parts, or two responsibilities, is very interesting. They properly separated the responsibility of measuring evil (as a single, discrete concept) from the responsibility of judging outcomes based on that measurement. A distinction that was fortunately carried forward into the US justice system. The measurement of evil is the responsibility of the jury, while the judging is a separate concern as is the actual implementation of punishment. Next time you are called for jury duty, keep in mind that you are filling the role of the goddess.

 

Not only did the Egyptians get the separation of responsibilities, but they understood the idea of object collaboration. This is illustrated by the fact that Osiris can’t do his job without collaborating with other “objects” in the system.
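Rendered as a tongue-in-cheek (and entirely invented) class model, the myth decomposes about like this:

    public class Soul { public decimal Evil; }

    public class Maat
    {
        // Maat's single responsibility: measure evil, nothing more
        public bool HasExcessEvil(Soul judged) { return judged.Evil > 0m; }
    }

    public class Ahemait
    {
        // punishment is her concern, not Osiris's
        public void Devour(Soul judged) { }
    }

    public class Osiris
    {
        private Maat _maat = new Maat();
        private Ahemait _ahemait = new Ahemait();

        // Osiris's single business rule: route the Judged based on
        // Maat's measurement -- collaboration, not duplication
        public void Judge(Soul judged)
        {
            if (_maat.HasExcessEvil(judged))
                _ahemait.Devour(judged);
        }
    }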

 

And to think. All this time I’ve been saying that object-orientation has been around for about 30 years. I guess I was off by a couple orders of magnitude. It is more like 3000 years :-)

Saturday, December 18, 2004 10:27:43 PM (Central Standard Time, UTC-06:00)
 Friday, December 17, 2004

Sorry for the blatantly commercial nature of this entry, but as we close out 2004 and head into 2005 it turns out that the Microsoft-oriented consulting business is doing pretty well. Thus my employer, Magenic, is in need of more consultants (see below for details).

 

As most people are only too aware, the computer field (and in particular consulting) had a very rough time from 2001 to 2003 (give or take a bit). Not only was there the dot-bomb, but there was the post-9/11 effect and the overall economic recession. This was all amplified in the computer field because of the overly optimistic positions taken by many companies during the Clinton-era economic boom.

 

But the reality is that over the past 18-24 months the consulting business stabilized and has started to grow once again. At least that has been my observation watching Magenic’s business slow and then grow over the past 4+ years.

 

I think there are many factors going into the return of growth. Certainly the weak dollar is making offshoring less attractive. But so is the slow realization among many companies that they really aren’t getting the savings they’d hoped for (given the high cost of management, requirements gathering, communications and rework).

 

Add to that the slow, but persistent recovery of the economy overall. While it is true that a lot of people (in the general pool of workers) are working multiple odd jobs to make ends meet, the fact is that companies are now resuming investment in IT in order to improve efficiency. This is triggering renewed interest in line-of-business software development, and in turn is causing demand for consulting.

 

Finally there’s the maturation of .NET. Many organizations wait until a couple service packs and maybe a point release come out before moving aggressively to any technology. And I don’t blame them in the least. But .NET, and all of Microsoft’s related (or quasi-related) server products, now have at least a point release and/or some service packs that have been out for a while. Even conservative organizations are starting to seriously look at moving some or all of their IT into the .NET space.

 

This is even leading to a lot of coexistence with Java and J2EE. I can’t count the number of organizations I’ve talked to that switched to the big J in reaction to Windows DNA, and are now choosing (yes, choosing!) to bring in .NET in addition to Java. Every now and then they are even replacing Java, but more often it is a clear strategy of coexistence and integration.

 

Ultimately this growth means that Magenic actually has more work than we have people. The fact that Magenic is pretty selective in its hiring makes finding people more difficult, but we’d rather lose work before compromising on who we hire. Even so, we’re the largest privately held Microsoft-focused consulting company in the US (possibly in the world).

 

Magenic is looking for .NET architects and developers. But even harder to find are people with good experience in BizTalk Server, Commerce Server, SharePoint and Business Intelligence (or other SQL Server skills). If you are interested, Magenic has offices in Boston, Atlanta, Chicago, Minneapolis and San Francisco.

Friday, December 17, 2004 2:58:05 PM (Central Standard Time, UTC-06:00)
 Wednesday, December 15, 2004

If SOA really is a revolutionary force on a par with OO, it is because it has emergent behavior. Here's some more information on emergence as a field of study and what it really means.

Wednesday, December 15, 2004 5:40:35 PM (Central Standard Time, UTC-06:00)

Earlier this week my employer, Magenic, worked with the Microsoft office in Atlanta to set up a series of speaking engagements with some Atlanta-based companies. I had a good time, as I got some very good questions and discussion through some of the presentations.

 

In one of the presentations about SOA, an attendee challenged my assertion that “pure” SOA would lead to redundant business logic. He agreed with my example that validation logic would likely be replicated, but asserted that replication of validation logic was immaterial and that it was business logic which shouldn’t be replicated.

 

Now I’m the first to acknowledge that there are different types of business logic. And this isn’t the first time that I’ve had people argue that validation is not business logic. But that’s never rung true for me.

 

Every computer science student (and I assume CIS and MIS students) gets inundated with GIGO early in their career. Garbage-In-Garbage-Out. The mantra that illustrates just how incredibly important data validation really is.

 

You can write the most elegant and wonderful and perfect business logic in the world, and if you mess up the validation all your work will be for naught. Without validation, nothing else matters. Of course without calculation and manipulation and workflow the validation doesn’t matter either.

 

My point here is that validation, calculation, manipulation, workflow and authorization are all just different types of business logic. None are more important than others, and a flaw in any one of them ripples through a system wreaking havoc throughout.

 

Redundant or duplicated validation logic is no less dangerous or costly than duplicated calculation or manipulation logic. Nor is it any easier to manage, control or replicate validation logic than any other type of logic.

 

In fact, you could argue that it is harder to control replicated validation logic in some ways. This is because we often want to do validation in a web browser, where the only real options are JavaScript or Flash. But our “real” code is in VB or C# on the web server and/or application server. Due to the difference in technology, we can’t share any code here, but must manually replicate and maintain it.

 

Contrast this with calculation, manipulation or workflow, which is almost always done as part of “back end processing” and thus would be written in VB or C# in a DLL to start with. It is comparatively easy to share a well-designed DLL between a web server and app server. Not trivial I grant you, but it doesn’t require manual porting of code between platforms/languages like most validation logic we replicate.

 

So given the comparable importance of validation logic to the other types of business logic, it is clear to me that it must be done right. And given the relative complexity of reusing validation logic across different “tiers” or services, it is clear that we have a problem if we do decide to replicate it.

 

And yet changing traditional tiered applications into a set of cooperative services almost guarantees that validation logic will be replicated.

 

The user enters data into a web application. It validates the data of course, because the web application’s job is to give the user a good experience. Google’s new search beta is reinforcing the user expectation that even web apps can provide a rich user experience comparable to Windows. Users will demand decent interactivity, and thus validation (and some calculation) will certainly be on the web server and probably also replicated into the browser.

 

The web application calls the business workflow service. This service follows SOA principles and doesn’t trust the web app. After all, it is being used by other front-end apps as well, and can’t assume any of them do a good job. We’re not talking security here either – we’re talking data trust. The service just can’t trust the data coming from unknown front-end apps, as that would result in fragile coupling.

 

So the service must revalidate and recalculate anything that the web app did. This is clear and obvious replication of code.

 

At this point we’ve probably got three versions of some or all of our validation logic (browser, web server, workflow service). We probably have some calculation/manipulation replication too, as the web server likely did some basic calculations rather than round-trip to the workflow service.

 

Now some code could be centralized into DLLs that are installed on both the web server and app server, so both the web app and workflow service can use the same code base. This is common sense, and works great in a homogeneous environment.
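Something like this hypothetical shared rule, compiled into a DLL referenced by both servers:

    public sealed class OrderRules
    {
        private OrderRules() { } // static rule library; no instances

        // one copy of the rule for every .NET tier that references the DLL
        public static bool IsValidQuantity(int quantity)
        {
            return quantity > 0 && quantity <= 100;
        }
    }

The web app calls it before submitting, and the workflow service calls it again on every incoming message, but at least both calls hit the same code base.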

 

But it doesn’t extend to the browser, where the replicated validation logic is in JavaScript. And it doesn’t work all that well if the web server is running on a different platform from the app server.

 

In any case, we have replication of code – all we’re doing is trying to figure out how to manage that replication in some decent way. I think this is an important conversation to have, because in my mind it is one of the two major hurdles SOA must overcome (the other being versioning).

Wednesday, December 15, 2004 5:14:03 PM (Central Standard Time, UTC-06:00)

TheServerSide.net just published an article of mine titled “The Fallacy of the Data Layer”, in which I basically carry some SOA ideas to their logical conclusion to see what happens. Unfortunately the sarcasm in the article was pretty well hidden and so the comments on the article are kinda harsh - but funny if you actually see the sarcasm :-)

 

Wednesday, December 15, 2004 5:00:16 PM (Central Standard Time, UTC-06:00)
 Friday, December 10, 2004

According to this article, quoting Microsoft VP Muglia, WinFS probably won't show up for real until sometime in the next decade - the version of Windows Server after Longhorn Server.

And people were wondering why I wasn't so excited about WinFS and the related ObjectSpaces technology...

Now it looks like the primary feature of Longhorn Server will be componentization. Which is nothing to sneeze at and is probably far more important on a day-to-day basis than WinFS anyway.

Friday, December 10, 2004 3:33:08 PM (Central Standard Time, UTC-06:00)
 Thursday, December 09, 2004

I frequently seem to have non-technical things I'd like to blog about. I even dipped into them a bit by encouraging people to vote in November.

And while it is often said that blogs have no rules, I submit that blogs do have self-imposed rules and/or themes.

My motivation in establishing this blog was to have a forum for editorial comments on technology - .NET, SOA or whatever other technical areas catch my interest. If you are a reader of my blog, I assume you have overlapping interests around technology.

I don't assume you share all my other interests. And thus I've set up a separate personal blog. It is a forum for all my non-professional interests (things like religion, politics, hobbies, etc). This way I can keep posts on this blog entirely focused on technology, and still have a forum for my diverse other interests.

Now back to our regularly scheduled technology discussions...

Thursday, December 09, 2004 5:21:47 PM (Central Standard Time, UTC-06:00)

Last night a group of top developers from the Minneapolis area had dinner with David Chappell. Great company, great food and great conversation; what more could you want?

 

The conversation ranged wide, from whether parents should perpetuate the myth of the Tooth Fairy and Santa all the way down to how Whidbey’s System.Transactions compares to the transactional functionality in BizTalk Server.

 

At one point during the conversation David mentioned that there’s not only an impedance mismatch between objects and relational data, but also between users and objects and between services and objects. While the user-object mismatch is somewhat understood, we’re just now starting to learn the nature of the service-object mismatch.

 

(The OO-relational impedance mismatch is well-known and is discussed in many OO books, perhaps most succinctly in David Taylor’s book)

 

I think that his observation is accurate, and it is actually the topic of an article I submitted to TheServerSide.net a few days ago, so look for that to stir some controversy in the next few weeks :-)

 

I also heard that some people have read my blog entries on remoting (this one in particular I think) and somehow came to the conclusion that I was indicating that Indigo would be binary compatible with remoting. To be clear, as far as I know Indigo won’t be binary compatible with remoting. Heck, remoting isn’t binary compatible with remoting. Try accessing a .NET 1.1 server from a 1.0 client and see what happens!

 

What I said is that if you are careful about how you use remoting today you’ll have a relatively easy upgrade path to Indigo. Which is no different than using asmx or ES today. They’ll all have an upgrade path to Indigo, but that doesn’t imply that Indigo will natively interoperate with remoting or ES (DCOM).

 

(Presumably Indigo will interop with asmx because that’s just SOAP, but even then you’ll want to change your code to take advantage of the new Indigo features, so it is still an upgrade process)

 

What does “be careful how you use remoting” mean? This:

 

  1. Use the Http channel
  2. Host in IIS
  3. Use the BinaryFormatter
  4. Do not create or use custom sinks, formatters or channels

 

If you follow these rules you should have a relatively direct upgrade path to Indigo. Will you have to change your code? Of course. That’s what an upgrade path IS. It means you can, with relative ease, update your code to work with the new technology.
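For what it’s worth, here is a minimal sketch of what those rules look like in an IIS-hosted web.config (the type and URI names are hypothetical):

    <configuration>
      <system.runtime.remoting>
        <application>
          <service>
            <wellknown mode="SingleCall"
                       type="MyApp.MyService, MyApp"
                       objectUri="MyService.rem" />
          </service>
          <channels>
            <channel ref="http">
              <serverProviders>
                <formatter ref="binary" typeFilterLevel="Full" />
              </serverProviders>
            </channel>
          </channels>
        </application>
      </system.runtime.remoting>
    </configuration>

Hosting in IIS covers rules 1 and 2, the binary formatter provider covers rule 3, and leaving the default sink chain alone covers rule 4.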

 

Thursday, December 09, 2004 1:24:47 PM (Central Standard Time, UTC-06:00)
 Friday, December 03, 2004

The organizers of HDC 04 should be proud. For their first outing in organizing a conference they've outdone themselves.

There are around 150 people here from the midwest area (Minnesota, Iowa, Nebraska, etc.) and also a few from as far away as Boston. I gather that they sold out in just 3 weeks and couldn't take more people because the venue just can't handle any more.

Next year they'll need a bigger hotel, because you just can't beat a day of dual-track content for $18 - lunch included.

As a Minnesotan, I am thrilled to see this kind of event and this level of interest in the midwest. When you consider that recent VS Live shows in Chicago only drew around 500 people, and that (venue permitting) the HDC's first outing would have topped 200 people, this is really impressive!

Last night was a pre-event party, and I enjoyed several great conversations ranging from SOA to VB to generics to hunting and fishing. Got to love it! Sam Gentile and I had a long running and very enjoyable discussion that covered SOA and the future of VB. As he said, I got him to spend more time talking about VB than he’s spent in the last three or more years total - glad I could broaden some horizons :-)

Friday, December 03, 2004 9:59:33 AM (Central Standard Time, UTC-06:00)
You scored as alternative. You're partially respected for being an individual in a conformist world yet others take you as a radical. You have no place in society because you choose not to belong there - you're the luckiest of them all, even if your parents are completely ashamed of you. Just don't take drugs ok?


What Social Status are you?
created with QuizFarm.com
Friday, December 03, 2004 12:09:05 AM (Central Standard Time, UTC-06:00)
 Wednesday, December 01, 2004

I just finished reading through Ted Neward’s Effective Enterprise Java book (ISBN 0321130006) and it is a must read for any enterprise .NET developer.

 

I hear it now. “What?! A Java book for .NET developers? Rocky, you’ve gone over the edge!”

 

No, I haven’t gone over the edge, and YES, absolutely a Java book for .NET developers.

 

It would be extremely foolish for any enterprise developer to read only .NET or Java information. It isn’t like either technology has a monopoly on smart, eloquent technical experts. On top of that, the two platforms are so similar at an architecture and software design level that the same guidelines and best practices apply equally to both.

 

So yes, go out and read Ted’s book. Every single item in the book has direct applicability to .NET. Sure you’ll need to translate some terminology to the .NET world, but that’s a minor thing. The important thing is that you’ll learn a whole lot about the good, the bad and the ugly when it comes to creating enterprise applications.

 

The truisms for minimizing network costs (latency, bandwidth, etc.) are universal. The tradeoffs when dealing with shared state and mapping state between objects, databases and XML are universal. The need to reduce locking and blocking is universal. The OO design patterns that address these issues in Java are exactly the patterns that apply in the .NET space as well.

 

And since this is a book about enterprise issues, there’s relatively little code and you can safely ignore virtually all of it. No offense to Ted, but this book doesn’t need any of the Java code. I’m sure he put it in there to make it actually feel like a Java book. No, the value is in the sometimes irreverent yet thorough discussion of the issues and the tradeoffs involved in addressing those issues.

 

Honestly, if this book used more universal terminology and skipped the few code examples it has, it would be a universal must-have resource for any modern enterprise platform – Java, .NET or whatever. I contend that it still fills this role, albeit with the need for .NET readers to mentally translate some terminology into our world. And that’s a small price to pay.

 

Wednesday, December 01, 2004 4:05:20 PM (Central Standard Time, UTC-06:00)