Rockford Lhotka

 Tuesday, April 6, 2010

Software design and development is called many things. Sometimes engineering, sometimes art, sometimes magic. And sometimes science, though for some reason that is less common in my experience.

Which is a little odd, since a lot of us have Computer Science degrees, which are usually Bachelor of Science or Master of Science degrees.

Today I applied science in a way so obvious it just kind of hit me that we really do science, at least from time to time.

I’m working on the CSLA 4 business rules system, trying to exercise it to locate bugs and issues, as well as writing some blog posts (here and here) on the topic. I was writing a post about some thoughts I have around dependent properties, and it occurred to me that the current code doesn’t work like I think it should.

That’s a hypothesis. An assertion that something does (or in this case doesn’t) work a certain way.

So I wrote a test to establish that the current implementation doesn’t work the way I think it should. And I ran my test. And the test passed.

Now this isn’t good. My hypothesis was a negative, and my test was a positive. In other words, I wrote the test for the way I thought it should work, and I’m pretty convinced that the current implementation doesn’t work that way. So why did my test pass?
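The hypothesis-as-test idea can be sketched with a toy example (hypothetical classes, not CSLA’s actual API): the test asserts the behavior I believe a dependent property should exhibit, so a pass or fail directly confirms or refutes the hypothesis.

```java
// A minimal sketch (not CSLA's actual API) of encoding a hypothesis
// as an executable test: "changing a property should also re-run the
// business rules for its dependent properties."
import java.util.ArrayList;
import java.util.List;

class Order {
    private int quantity;
    private int total;                       // dependent on quantity
    private final List<String> ruleLog = new ArrayList<>();

    void setQuantity(int q) {
        quantity = q;
        runRules("Quantity");
        runRules("Total");                   // the behavior under test
    }

    private void runRules(String property) {
        ruleLog.add(property);               // record which rules ran
        if (property.equals("Total")) {
            total = quantity * 10;           // toy "rule": Total = Quantity * 10
        }
    }

    int getTotal() { return total; }
    List<String> getRuleLog() { return ruleLog; }
}

public class DependentPropertyTest {
    public static void main(String[] args) {
        Order o = new Order();
        o.setQuantity(3);
        // The assertions state the way we *think* it should work.
        // If they pass against code we believe is broken, the test
        // itself becomes the next suspect.
        if (!o.getRuleLog().contains("Total"))
            throw new AssertionError("dependent rule did not run");
        if (o.getTotal() != 30)
            throw new AssertionError("dependent rule produced a stale value");
        System.out.println("rules ran: " + o.getRuleLog());
        // prints: rules ran: [Quantity, Total]
    }
}
```

The point of the sketch is the methodology, not the classes: an unexpected pass means either the hypothesis is wrong or the test doesn’t actually exercise what you think it does.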

At this point I could do two things. I could say “well, it works, so I should be happy”, or I could be a scientist and figure out how or why it is working.

There are lots of parallels to this in real life, especially in medicine, but in many other areas as well. People observe something that appears to work (an herb seems to fix headaches, say), an approach often described as being “evidence-based”. Some people just accept it and live with the ill-defined, unsubstantiated belief that the result is real, repeatable, and reliable. Scientists don’t necessarily reject that it works, but they need to understand why it works, and to ensure it is actually repeatable.

In my case I spent a couple hours digging into how and why my test was passing. I found two things. First, my test was slightly faulty. Second, there really is a bug in my current implementation. In other words my original hypothesis is correct.

This was all really subtle. The flawed implementation half-works, but only because of a second bug within it; without that bug it would have failed completely. It is like a double-bug… And my test wasn’t thorough enough to fully exercise the scenario, so it didn’t catch the failure, which means my testing methodology was also to blame.

Ultimately my application of science has been beneficial, though it revealed a bunch more work I need to do.

The cool thing, though, is that this reinforces how important science is in daily life, for computing and everything else. I surely hope that the medicines I take, the airplanes in which I fly, and the cars I drive go through rigorous scientific testing, because the last thing I want is to trust my life to merely evidence-based, unsubstantiated beliefs that everything is rosy…
