Rockford Lhotka's Blog


 Friday, March 31, 2006

During the early banter-and-have-fun part of my recent dotnetrocks interview I dissed TDD (Test Driven Development, not the telecommunications device for the deaf). Of course this wasn't the focus of the real interview, and prior to sitting down behind the mic I hadn't thought about TDD at all, so we were all just having some fun. As such, my comments were flippant and short - I was having fun, and I surely didn't have the time or opportunity to really express my thoughts on design or testing. That would be an entirely different show all by itself.

 

Nor do I have time to write a long discussion of my thoughts on design, or on testing, just at the moment. But I did respond to some of the comments on Jeffrey Palermo's blog - and here are some more thoughts:

 

As a testing methodology, TDD (as I understand it) is totally inadequate. I was unable to express this point in the interview due to the format, but the idea that you'd have developers write your QA unit tests is unrealistic. And it was pointed out on Palermo's blog that TDD isn't about writing comprehensive tests anyway, but rather about writing a single test per method - which is exceedingly limited from a testing perspective.

 

Of course you could argue that, since the vast majority of applications have no tests at all, it is a huge win if TDD can get companies to write just one test per method. That is infinitely better than the status quo (that whole division-by-zero thing), and so TDD is hugely beneficial regardless of whether it is actually the best approach or not. I'd go with this: some testing is better than no testing. But one-test-per-method is pretty lame and certainly doesn't qualify as real testing.

 

But the real key is that developers are terrible testers (with perhaps a few exceptions). This is because developers test to make sure something works. Actual testers, on the other hand, test to find ways something doesn't work. Testers focus on the edge cases, the exceptional cases, the scenarios that a developer (typically?) ignores. Certainly there's no way to provide that level of comprehensive testing in a single test against a method - which really supports another person's comment on Palermo's blog: that TDD isn't about testing.

 

Having spent a few months being employed specifically to do that type of QA unit testing, I can tell you that I suck at it. I just don't think in that "negative" manner, and it takes serious effort for me to methodically and analytically work my way through every possible permutation in which a method can be used and misused. I've observed that this is true for virtually all developers. As a group, we tend to be optimists - testing only to make sure things work, not to find out where they fail.

 

But as someone on Palermo's blog pointed out, TDD is mis-named. TDD isn't about testing (thankfully), nor apparently is it about development. It is, he said, a design methodology.

 

That's fine, but I don't buy into TDD as a "design methodology" either. You can't "design" a system when you are focused at the method level. There's that whole forest-and-trees thing... Of course I am not a TDD expert - by a long shot - so for all I know there's some complementary part of TDD that looks at the forest level.

 

But frankly I don't care a whole lot, because I use a modified CRC (class, responsibility and collaboration) approach. This approach is highly regarded within the agile/XP community, and is to my mind the best way to do OOD that anyone has come up with to this point. (David West's Object Thinking book really illustrates how CRC works in an agile setting.) The CRC approach I use sees the forest, then carefully focuses in on the trees, pruning back the brush as needed.
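
Purely to illustrate what a CRC card captures (this is a made-up sketch, not something from CSLA .NET), a card for a hypothetical Project class might look like this - note that at this stage the design lives in the responsibilities and collaborators, not in code:

```csharp
/// <summary>
/// CRC card: Project (illustrative example only)
///
/// Responsibilities:
///   - Track the project's name, start date and end date
///   - Ensure the end date never precedes the start date
///
/// Collaborators:
///   - ProjectResources (the people assigned to this project)
///   - Roles (what each resource does on the project)
/// </summary>
public class Project
{
    // At the CRC stage there is no implementation yet; the design work is in
    // naming the responsibilities and collaborations above, not in writing code.
}
```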

 

Now I could see where TDD (as I understand it) would be complementary to CRC, in that it could be used to write the prove-it-works tests for the objects' methods based on the OO model from CRC - but I'm speculating that this isn't what the TDD people are after. Nor do I see much value at that point - because the design is done, so whether you write the test before or immediately after doesn't really matter - the test isn't driving the design.

 

But to my mind writing the tests is a must. In the interview I mentioned that I (like all developers) have always written little test harnesses for my code. What I didn't get to say (due to the banter format) was that for years now I've been writing those tests in NUnit, so they provide a form of continual regression testing. In talking to Jim Newkirk about TDD and his views on this, he said that this is exactly what he does and recommends doing.
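
To give a concrete (and deliberately trivial) picture of what I mean, here is a minimal NUnit sketch. The Calculator class is purely hypothetical; the first test is the typical "prove it works" test a developer writes, and the second is the kind of edge case a tester-minded person would insist on:

```csharp
using System;
using NUnit.Framework;

// A hypothetical class under test - illustrative only
public static class Calculator
{
    public static int Divide(int dividend, int divisor)
    {
        return dividend / divisor;
    }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void DivideReturnsQuotient()
    {
        // The "prove it works" test a developer naturally writes
        Assert.AreEqual(2, Calculator.Divide(10, 5));
    }

    [Test]
    [ExpectedException(typeof(DivideByZeroException))]
    public void DivideByZeroThrows()
    {
        // The kind of edge case a tester-minded person adds
        Calculator.Divide(10, 0);
    }
}
```

Run under NUnit (or VSTS), a growing pile of little fixtures like this becomes exactly the continual regression suite I'm describing.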

 

And as long as we all realize that these are developer tests, and that some real quality assurance tests are also required (which should be written by test-minded people), then that is good. Certainly this is my view: developers must write NUnit tests (or use VSTS or whatever), and I strongly encourage clients to hire real testers to complete the job - to flesh out the testing to cover edge and exceptional cases.

 

None of which, of course, is part of the design process, because that’s done using a modified CRC approach.

Friday, March 31, 2006 10:05:09 AM (Central Standard Time, UTC-06:00)
 Thursday, March 30, 2006

That was fast! Just this past weekend I was in New London, CT and did a live-in-the-studio interview for dotnetrocks on CSLA .NET 2.0. Somehow I thought it would air in a week or two, but just like that it is online and ready to hear!

While I was there I also recorded two episodes for DNR TV - the new dotnetrocks initiative that includes video of the computer screen. In those programs I walked through the basic structure of an editable root object and discussed how to bind it to a Windows Forms interface. If you want a quick intro to the use of CSLA .NET, these should be well worth watching when they become available.

Thursday, March 30, 2006 3:37:42 PM (Central Standard Time, UTC-06:00)

Yea! I have completed the final proof/review of all 12 chapters of the VB book. The Apress schedule calls for the book to go to print next week, which means it should be available in the 2-3 week timeframe (given time for printing, binding, shipping and all that stuff). It is so nice to be done, done, done!! :)

Thursday, March 30, 2006 7:02:20 AM (Central Standard Time, UTC-06:00)
 Wednesday, March 22, 2006

I have sent off the final code for the book to Apress, which officially means that I have RTM code at this point. You can go here to download the release code for VB or C#.

As a bonus, I also put a test release of a WCF data portal channel up for download - you can get it from this page.

Update: Today (March 22) I received my author's copies of the Expert C# 2005 Business Objects book - so it is most certainly in print. What a joy it is to actually see the results of the past several months of work!!

Wednesday, March 22, 2006 12:38:56 PM (Central Standard Time, UTC-06:00)
 Sunday, March 19, 2006

A number of people have asked why the C# edition of my new Business Objects book is coming out before the VB edition. And yes, that is the case. The C# book should be out around the end of March, and the VB book about 2-3 weeks later.

As I've said before, I wrote the books "concurrently". Which really means I wrote one edition, knowing full well that I'd be coming through later to swap out all the code bits and change a few of the figures. It really didn't matter to me which one I did first, because I was going to have to go through every chapter to do this swapping process either way.

I did write the actual framework and sample app in VB first. The initial port to C# was handled by a fellow Magenicon, Brant Estes. Believe it or not, he had the initial compilable code done in about three days! I am thinking he didn't do a whole lot of sleeping (because he did the port by hand, not with a tool - which is awesome, because it means the code style and quality are far higher).

I could give you a fluffy (if somewhat true) answer, that the reason I did the C# book first was to ensure that the C# code was fully tested and brought entirely into line with the VB code. And there really is some truth to that. By doing the C# edition first, I was able to go through every line and recheck, tweak and enhance that code. Several large chunks of functionality were actually added or altered in C# first (during the writing process) and I back-ported them into the VB version of the code.

But the real reason is what a few people have speculated: dollars. For better or worse, the fact is that the .NET 1.0 C# book is outselling the VB book by quite a lot. Nearly 2:1 actually. Due to this, my publisher really wanted to get the C# edition out first. I initially pushed back, but I personally have no rational reason to do one before the other. I do love VB very much, but that's an irrational and emotional argument that simply doesn't hold a lot of sway...

(That said, if there had been more than a couple weeks' difference in release dates, I would have insisted on VB first, and I would have won that argument. But you must pick your battles, and it made no sense to me to have a fight with my publisher over which book came out 3 weeks before the other...)

For what it's worth, porting the book is far, far easier than porting (and maintaining) the code. I've said it before, and I'm sure I'll say it again: maintaining the same code in two languages is just a pain. Converting the book was a relatively rote process of copy-paste, copy-paste, copy-paste. But converting the code means retesting, double-checking and almost certainly still missing a thing or two...

Anyway, I thought I'd blog this just to make the order of release totally clear. For better or worse, it was simply a business decision on the part of my publisher.

Sunday, March 19, 2006 9:36:08 PM (Central Standard Time, UTC-06:00)
 Monday, March 13, 2006

Business people hold the common belief that computer people are all the same, just cogs in a machine. Don’t like the color, shape, odor or whatever of one cog, just replace it with another cog. Domestic cogs are too expensive? You can find low-priced cogs overseas.

 

Those of us actually in the computer field know that this is quite incorrect. One "cog" is not at all like another. In fact, the skills of good cogs are orders of magnitude better than the skills of average cogs. Similarly, the skills of bad cogs are orders of magnitude worse than the average.

 

Unfortunately, the array of skills falls on a truncated bell curve; and I think that’s a primary contributor to the way business people perceive us cogs. That, and another issue I’ll get to later in this entry.

 

But the bell curve issue is a real and serious one. And one that isn’t likely to go away. The fact is that the vast majority of computer professionals are competent. Not stellar, not terrible; just competent. And, for technical ability, one competent professional really is much like another.

 

There are also “professionals” who are a standard deviation or two down the skill slope. We’ve all interacted with some of these people. But the fact is that they simply don’t last long; not most of them anyway. That’s why the bell curve is truncated. You have to truncate it, because to ignore this low end would distort the curve, making it useless.

 

The reality is that the vast majority of practicing computer professionals are in the middle of the skill range, but artificially appear to be near the bottom. At least this is true for the business people who don’t exist inside the industry and don’t see that bottom third (or so) of the skill range like we do.

 

This vast majority of very competent and generally hard-working professionals tend to be Mort (in Microsoft's persona scheme). They are as focused on the business as on the technology. Sure they are technologists, but that knowledge is focused on solving the issues of the business in the most efficient manner possible.

 

What’s important to realize though, is that the technology skills of Mort aren’t a differentiator. Even subjectively, the tech skills of most computer professionals really are pretty interchangeable (within a platform at least, such as Java or .NET). But the business domain understanding and knowledge are most certainly less interchangeable. Yet in the end, it is the intersection between the technology and business knowledge that truly differentiates one computer professional from another. It is in this nebulous area of overlap that the cogs become specialized and substantially more valuable than just any other random cog.

 

If business people understood the true value of this intersection of skills and knowledge they’d understand why they can’t just outsource or offshore all the computer work to save money. While the technical skills might be comparable, that area of business-technology overlap can only be gained by in-house staff (or perhaps by consultants who focus on a vertical area of business).

 

Years ago I managed a small group of programmers at a bio-medical manufacturing company. Our small team (of 5) built and maintained software that is a pipe dream for almost every client I’ve encountered from a consulting perspective. But our small team had this intersection in spades. We all understood the business, the people, the politics and we understood our entire technology infrastructure and standards. Coupled with being competent professionals, this little team did what I can say with certainty (having been in consulting for 12 years now) would take a team of consultants 2-3 times the size and at least twice as long.

 

But it is also important to remember that the bell curve and the non-linear nature of skill differentiation combine to create some interesting effects.

 

One standard deviation up the skill curve and you have far fewer professionals. Yet they are likely an order of magnitude better than average. Two standard deviations up the curve and there are precious few people, but they are really good with technology.

 

This is the part business people just don’t get. Hire someone one standard deviation up the curve, get them the business domain knowledge and they’ll out-produce a number of average developers. In theory, it would be even better if you got someone two standard deviations up the curve.

 

But what really happens is this: moving up the skill curve requires increased focus and specialization on the technology. There’s simply less time/room for learning the business domain.

 

The group one standard deviation up still has enough time to grasp the business domain effectively, and that group really is much better than average. We've all worked with people like this. People who really grok both the business and the technology and are able to come up with answers to problems in a fraction of the time it takes many others with whom they work. These are the Elvis persona in Microsoft-speak.

 

Of course the business people often don’t recognize this value, so these people don’t get paid according to their production and so they often leave and become consultants. A simple case of the business people shooting themselves in the foot, turning a would-be gold mine into an adequate resource.

 

Why "adequate"? Well, Elvis as a consultant loses the business domain expertise, and even their increased technical skills can't entirely compensate. The fact is, Elvis as a consultant is not much (if any) better than Mort as a full-time employee (FTE). That near-mystical business/technology intersection enjoyed by a typical Mort usually offsets most or all of the benefit Elvis gets from their technology understanding.

 

This is why Elvis-as-an-FTE is a real catch, and why business people should get a clue and work to keep these people!

 

Then there are the people who are two or more standard deviations up the curve. They are so busy keeping up with technology that they don't have time for the business domain expertise. Hell, they often don't have time for families, stable interpersonal relationships or anything else. Let's face it: these are the uber-geeks. In Microsoft's persona system these are the Einstein persona.

 

An Einstein may work in a regular company, but honestly that’s pretty rare. The lack of business domain focus, coupled with the narrow technologies used by any one company tend to make Einstein a very poor fit for a single business. Of course very large businesses have the variety to overcome this, giving an Einstein enough different technologies to avoid boredom. Oddly, the same is true for some very small companies, where dabbling in random technologies seems to be the norm rather than the exception.

 

So most Einstein types are in consulting, working for software companies (like Microsoft or Oracle) or they are in very large or very small business environments.

 

And everywhere but in a software company, they are trouble. The Einstein types directly contribute to making all computer professionals look like cogs. They can (and do) create software that can only be understood and maintained by people at their level. But they are easily bored, and generally don’t do maintenance, so they move on, leaving the maintenance to Mort.

 

But of course Mort can’t maintain this stuff, so it slowly (or rapidly) degrades and becomes a nightmare. You’ve probably seen this: a system that was obviously once elegant, but which has been hacked and warped over time into a hideous monster. The remnants of an Einstein. Perhaps a better name for this persona would be Frankenstein…

 

Just check out this “overview of languages” post, but make special note of what happened when the Einstein types moved on and the Mort/Elvis types tried to maintain their masterwork…

 

Now we know what happened here. But to business people it is likewise “quite obvious” what happened: They hired some bad cogs to start with (the Einstein types), who created a monster and left. Then they hired some more bad cogs later (the Mort/Elvis types) who were simply unable to fix the problem. See, all cogs are the same; so let’s just offshore the whole thing, because some of those foreign cogs are really cheap!

 

In summary, what needs to be done is to figure out a way to get business people to understand that not all cogs are the same. That there are wide variations in technical skill, and wide variations in business skill. Perhaps most of all, they need to understand the incredible value in that mystical intersection where business and technical skill and knowledge overlap to create truly productive computer professionals.

 

The alternative is scary. When there’s thought of having Microsoft unionize, you know we’re in trouble. Unionization certainly has its good and bad points, but it is hard to deny that people inside a union embrace the idea of being interchangeable cogs and use it to their advantage. To me, at least, that’s a terrifying future…

Monday, March 13, 2006 4:14:18 PM (Central Standard Time, UTC-06:00)

Somebody woke up on the wrong side of the bed; fortunately it wasn't me this time :)

This is a nice little rant about the limitations of OO-as-a-religion. I think you could substitute any of the following for OO and the core of the rant would remain quite valid:

  • procedural programming/design
  • modular programming/design
  • SOA/SO/service-orientation
  • Java
  • .NET

Every concept computer science has developed over the past many decades came into being to solve a problem. And for the most part each concept did address some or all of that problem. Procedures provided reuse. Modules provided encapsulation. Client/server provided scalability. And so on...

The thing is, very few of these new concepts actually obsolete any previous concepts. For instance, OO doesn't eliminate the value of procedural reuse. In fact, using the two in concert is typically the best of both worlds.

Similarly, SO is a case where a couple of ideas ran into each other while riding bicycles. "You got messaging in my client/server!" "No! You got client/server in my messaging!" "Hey! This tastes pretty good!" SO is good stuff, but it doesn't replace client/server, messaging, OO, procedural design or any of the previous concepts. It merely provides an interesting lens through which we can view these pre-existing concepts, and perhaps some new ways to apply them.

Anyway, I enjoyed the rant, even though I remain a staunch believer in the power of OO.

Monday, March 13, 2006 11:44:39 AM (Central Standard Time, UTC-06:00)
 Thursday, March 09, 2006

Ron Jacobs and I had a fun conversation about the state of object-oriented design and the use of business objects in software development. You can listen to it here: http://channel9.msdn.com/Showpost.aspx?postid=169693

Thursday, March 09, 2006 5:56:37 PM (Central Standard Time, UTC-06:00)
 Saturday, March 04, 2006

Here's another status update on the books.

I am done with the C# book, and it is still on schedule to be available at the end of March.

I am in the final revision stages with the VB book, so it is still on schedule to be available mid-April.

Since posting the beta code, people have pointed out a couple bugs (which I've fixed), one bug which can't be fixed without breaking the book (so it will be fixed later - and it isn't a show-stopper anyway) and a few feature requests (which will obviously come later).

I expect to be able to put the final version 2.0 code for both books online around March 13.

I'm also working with Magenic to have them host a new online forum. The plan is to use Community Server 2.0, hosted on a server at Magenic and most likely available through forums.lhotka.net (though that's not for sure at this point).

So the next few weeks look to be exciting (anticipated):

  • March 13: CSLA .NET 2.0 code available
  • End of March: Expert C# 2005 Business Objects available
  • End of March: New CSLA .NET forums available
  • Mid-April: Expert VB 2005 Business Objects available

Saturday, March 04, 2006 11:51:10 AM (Central Standard Time, UTC-06:00)
 Wednesday, March 01, 2006

I was recently asked whether I thought it was a good idea to avoid using the Validation events provided by Microsoft (in the UI), in favor of putting the validation logic into a set of objects. I think the answer to the question (to some degree) depends on whether you are doing Windows or Web development.

 

With Windows development, Windows Forms provides all the plumbing you need to put all your validation in the business objects and still have a very rich, expressive UI - with virtually no code in the UI at all. This is particularly true in .NET 2.0, but is quite workable in .NET 1.1 as well.
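
As a stripped-down illustration of that plumbing (this is a sketch, not CSLA .NET itself), a business object can report its broken rules through IDataErrorInfo; a hypothetical Customer object might look like this:

```csharp
using System.ComponentModel;

// A minimal sketch of a business object that exposes validation results through
// IDataErrorInfo; the Customer class and its single rule are illustrative only.
public class Customer : IDataErrorInfo
{
    private string _name = string.Empty;

    public string Name
    {
        get { return _name; }
        set { _name = value; }
    }

    // Object-level error text
    string IDataErrorInfo.Error
    {
        get { return ((IDataErrorInfo)this)["Name"]; }
    }

    // Per-property error text - this is what the data binding plumbing asks for
    string IDataErrorInfo.this[string columnName]
    {
        get
        {
            if (columnName == "Name" && _name.Length == 0)
                return "Name is required";
            return string.Empty;
        }
    }
}
```

In the form itself, a BindingSource bound to the object plus an ErrorProvider whose DataSource is that BindingSource is essentially all that's needed - the error icons show up without any validation code in the UI.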

 

With Web development life isn't as nice. Of course there's no way to run real code in the browser, so you are stuck with JavaScript. While the real authority for any and all business logic must reside on the web server (because browsers are easily compromised), many web applications duplicate validation into the browser to give the user a better experience. While this is expensive and unfortunate, it is life in the world of the Web.

 

(Of course what really happens with a lot of Web apps is that the validation is only put into the browser - which is horrible, because it is too easy to bypass. That is simply a flawed approach to development...)

 

At least with ASP.NET there are the validation controls, which simplify the process of creating and maintaining the duplicate validation logic in the browser. You are still left to manually keep the logic in sync, but at least it doesn't require hand-coding JavaScript in most cases.
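
For what it's worth, here is a sketch of that duplicate logic using the validation controls. I'm creating the validator in code-behind purely to keep the example in C# (normally it would be declared in the .aspx markup), and the CustomerEdit page and txtName control are assumptions made for the sake of illustration:

```csharp
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// Hypothetical code-behind for a page that already declares a TextBox with
// ID "txtName" inside its server-side form.
public partial class CustomerEdit : Page
{
    protected void Page_Init(object sender, EventArgs e)
    {
        RequiredFieldValidator nameRequired = new RequiredFieldValidator();
        nameRequired.ControlToValidate = "txtName";
        nameRequired.ErrorMessage = "Name is required";
        nameRequired.EnableClientScript = true; // emits the duplicate browser-side check
        Page.Form.Controls.Add(nameRequired);
    }
}
```

The same rule still has to be enforced in the business object on the server; the validator just keeps the browser-side duplicate from requiring hand-written JavaScript.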

 

 

Obviously Windows Forms is an older and more mature technology (or at least flows from an older family of technologies), so it is no surprise that it allows you to do more things with less effort. But in most cases the effort to create Web Forms interfaces isn't bad either.

 

In any case, I do focus greatly on keeping code out of the UI. There's nothing more expensive than a line of code in the UI - because you _know_ it has a half-life of about 1-2 years. Everyone is rewriting their ASP.NET 1.0 UI code to ASP.NET 2.0. Everyone is tweaking their Windows Forms 1.0 code for 2.0. And all of it is junk when WinFX comes out, since WPF is intended to replace both Windows and Web UI development in most cases. Thus code in the UI is expensive, because you'll need to rewrite it in less than 2 years in most cases.

 

Code in a business object, on the other hand, is far less expensive because most business processes don't change nearly as fast as the UI technologies provided by our vendors... As long as your business objects conform to the basic platform interfaces for data binding, they tend to flow forward from one UI technology to the next. For instance, WPF uses the same interfaces as Windows Forms, so reusing the same objects from Windows Forms behind WPF turns out to be pretty painless. You just redesign the UI and away you go.
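
The key interface for a single object is INotifyPropertyChanged (collections add IBindingList on top of that). Here's a minimal sketch, with a purely illustrative Invoice class, of the kind of object that binds to Windows Forms today and should bind to WPF tomorrow:

```csharp
using System.ComponentModel;

// A minimal sketch of a bindable business object; the Invoice class is
// illustrative only, not part of CSLA .NET.
public class Invoice : INotifyPropertyChanged
{
    private string _customerName = string.Empty;

    public event PropertyChangedEventHandler PropertyChanged;

    public string CustomerName
    {
        get { return _customerName; }
        set
        {
            _customerName = value;
            OnPropertyChanged("CustomerName");
        }
    }

    protected virtual void OnPropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}
```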

 

A co-worker at Magenic has already taken my ProjectTracker20 sample app and created a WPF interface for it – based on the same set of CSLA .NET 2.0 business objects as the Windows Forms and Web Forms interfaces. Very cool!

 

So ultimately, I strongly believe that validation (and all other business logic) should be done in a set of business objects. That’s much of the focus in my upcoming Expert VB 2005 and C# 2005 Business Objects books. While you might opt to duplicate some of the validation in the UI for a rich user experience, that’s merely an unfortunate side-effect of the immature (and stagnant) state of the HTML world.

Wednesday, March 01, 2006 3:44:33 PM (Central Standard Time, UTC-06:00)

This recent MSDN article talks about SPOIL: Stored Procedure Object Interface Layer.

This is an interesting, and generally good, idea as I see it. Unfortunately this team, like most of Microsoft, apparently just doesn't understand the concept of data hiding in OO. SPOIL allows you to use your object's properties as data elements for a stored procedure call, which is great as long as you only have public read/write properties. But data hiding means you will have some private fields that simply aren't exposed as public read/write properties. If SPOIL supported using fields as data elements for a stored procedure call it would be totally awesome!

The same is true for LINQ. It works against public read/write properties, which means it is totally useless if you want to use it to load "real" objects that employ basic concepts like encapsulation and data hiding. Oh sure, you can use LINQ (well, DLinq really) to load a DTO (data transfer object - an object with only public read/write properties and no business logic) and then copy the data from the DTO into your real object. Or you could try to use the DTO as the "data container" inside your real object rather than using private fields. But frankly those options introduce complexity that should simply be unnecessary...

While it is true that loading private fields requires reflection - Microsoft could solve this. They do own the CLR after all... It is surely within their power to provide a truly good solution to the problem, that supports data mapping and also allows for key OO concepts like encapsulation and data hiding.
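
For the curious, the reflection approach isn't complicated - the frustrating part is that the framework makes you write it yourself. Here's a minimal sketch; the Customer class, its _name field and the FieldLoader helper are all hypothetical:

```csharp
using System;
using System.Reflection;

// Hypothetical business object with a hidden field exposed read-only
public class Customer
{
    private string _name = string.Empty;

    public string Name
    {
        get { return _name; }
    }
}

// Hypothetical helper that loads data directly into private fields
public static class FieldLoader
{
    public static void SetField(object target, string fieldName, object value)
    {
        FieldInfo field = target.GetType().GetField(
            fieldName, BindingFlags.Instance | BindingFlags.NonPublic);
        if (field == null)
            throw new ArgumentException("No such field: " + fieldName);
        field.SetValue(target, value);
    }
}
```

With something like FieldLoader.SetField(customer, "_name", someValue), a mapping layer could populate real objects without forcing every field to become a public read/write property - which is exactly the capability I wish SPOIL and LINQ offered natively.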

Wednesday, March 01, 2006 10:00:01 AM (Central Standard Time, UTC-06:00)