Reliable code: building in robustness

Hi! You've found a page that was previously published on OpenSourceSmall.biz, a web site associated with the book John wrote called Open Source Solutions for Small Business Problems. This book is available for purchase at Amazon (affiliate link), but we've rolled all the web site content into John's business site.

Don't hesitate to drop us a line if you need anything!

Sat, 01/19/2008 - 05:08 -- John Locke

OK, last post in the quality code series. One of the downsides of getting older is realizing you do have shortcomings. You know how, when you're young, the toughest job interview question is the one about your weaknesses? We're all quite blind to our weaknesses until experience forces us to admit we're not perfect. Sometimes this happens early, sometimes late, but it happens to everyone eventually.

My coding weakness, it turns out, is reliability. I'm terrible at handling errors, building test frameworks, and doing unit testing. I find all of that quite boring. But it's essential to building a reliable application.

Reliability and security go hand in hand. In security, you're looking at the attacks and making sure your code is secure against them. In reliability, you're identifying what each chunk of code expects to receive, and then defining how to handle exceptions and unexpected input. Done correctly, reliable code is secure. But it's a total pain to do, and it takes a lot longer to get there.

One of the code samples I examined recently was set up in a completely class-driven way, though I would not call it object oriented, because none of the classes extended other classes. It was a rather simple, flat collection of objects, helpers, and interfaces. It was not powerful. My guess is it was not fast. It did not look very customizable. But it was certainly clear, and every single method inspected every single parameter, making sure the input was valid. Calls to other objects had extensive error handling built in -- this application looked like it could not fail without notifying the programmer exactly where the failure was, with helpful feedback.
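That defensive style can be sketched in a few lines. This is a hypothetical example of my own, not from the code I examined: an `Invoice` class whose every public entry point checks its inputs and fails with a specific, informative error rather than limping along with bad data.

```python
# A sketch of "every method inspects every parameter": validate first,
# raise a specific error with helpful feedback, then do the work.
# Invoice and apply_discount are made-up names for illustration.

class Invoice:
    def __init__(self, total):
        if not isinstance(total, (int, float)):
            raise TypeError(f"total must be a number, got {type(total).__name__}")
        if total < 0:
            raise ValueError(f"total must be non-negative, got {total}")
        self.total = float(total)

    def apply_discount(self, percent):
        """Return the total after a percentage discount, validating input first."""
        if not isinstance(percent, (int, float)):
            raise TypeError(f"percent must be a number, got {type(percent).__name__}")
        if not 0 <= percent <= 100:
            raise ValueError(f"percent must be between 0 and 100, got {percent}")
        return self.total * (1 - percent / 100)
```

It's verbose, and most of the lines are checks rather than logic -- which is exactly the trade the class-driven code above was making: clarity and loud failures over brevity.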

This is tedious work. I tend to save it for the polishing phases of a project, focusing first on getting things working at all. But there's a strong argument for building reliability into each module from the start. It's a very different style of programming, and it takes a lot longer, but the end result will inevitably be more secure, less buggy, and better able to account for every possible scenario--even if it handles a scenario by saying "I can't do that yet."

I think there's a personality difference between these development styles. The artist figures out some innovative way of solving the problem, gets a proof-of-concept working brilliantly quickly, and cranks through code producing a huge amount in a short amount of time. The craftsman takes a slower, methodical approach, crafting each module individually, building unit tests to make sure it works correctly as he goes, and building a system piece by polished piece.

Successful projects need both. The artist/hacker provides vision, drive, and momentum. The craftsman makes sure the system can handle the load, and can prove it's doing what it's designed to do.

The 80/20 rule comes into play here. 80% of the features can be hacked together very quickly, in the first 20% of the project. To make the project stand the test of time, handle everything that might be thrown at it, and serve as a foundation for a business or a mission-critical process, you need the craftsman to do the remaining 80% of the work to finish the job and deliver that final 20% of the functionality.

So here's a checklist for evaluating reliability of a project:

  • Is the program broken up into discrete modules that can be completely tested one at a time?
  • Are there unit tests built for each module, testing the output for normal and exceptional conditions?
  • Is the input to each module validated, and tested against everything that might be passed to it?
  • Does the module handle non-normal input, and raise the appropriate errors?
  • Are there regular tests of the software as a whole, and each module, to identify tests that fail, or regressions in the code?
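To make the second and fourth items on that checklist concrete, here's a hypothetical unit test that exercises both the normal path and the exceptional path of a single module. The `parse_quantity` helper is invented for this sketch; the point is the shape of the test, not the function.

```python
import unittest

def parse_quantity(text):
    """Parse a quantity field from a form: must be a positive integer string."""
    if not isinstance(text, str):
        raise TypeError("quantity must be a string")
    stripped = text.strip()
    if not stripped.isdigit():
        raise ValueError(f"not a positive integer: {text!r}")
    value = int(stripped)
    if value == 0:
        raise ValueError("quantity must be at least 1")
    return value

class ParseQuantityTest(unittest.TestCase):
    def test_normal_input(self):
        # The happy path: well-formed input comes back as an int.
        self.assertEqual(parse_quantity("12"), 12)
        self.assertEqual(parse_quantity(" 3 "), 3)

    def test_exceptional_input(self):
        # The error path: every kind of bad input raises a specific error.
        for bad in ("", "abc", "-1", "0", "1.5"):
            with self.assertRaises(ValueError):
                parse_quantity(bad)
        with self.assertRaises(TypeError):
            parse_quantity(None)
```

Notice the exceptional-input test is longer than the normal one -- that ratio is typical once you take the checklist seriously.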

The only way to ensure reliability is through rigorous testing. Some of the newer programming practices rely on test-driven development--first you define what a module does, then you write a test for it, and only then do you develop the module until it passes all the tests.
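The test-first cycle can be shown in miniature. In this hypothetical sketch (the `slugify` function is invented for the example), the test is written down before the function exists, and the function is then filled in until the test passes.

```python
# Test-driven development in miniature: the test below was written first,
# and slugify() was then implemented until it passed.
import re

def test_slugify():
    # Define what the module does, as executable expectations.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  everywhere ") == "spaces-everywhere"
    assert slugify("") == ""

def slugify(title):
    """Turn a title into a lowercase, hyphen-separated URL slug."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify()  # passes silently once the implementation is complete
```

The test doubles as documentation: anyone reading it knows exactly what the module promises, before reading a line of the implementation.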

In a small business environment, this all may be too much overhead. 80% of an application may be enough, and at 20% of the cost, much more in line with the budget. But when you need something to be completely reliable, take a look at the testing framework, how much it covers, and how much of the application passes the tests.


  1. Again, good job on the site. Unfortunately, most people won't be able to tell just how cool it really is. There is definitely a better look and feel on the outside, but where it really shines is under the hood. In today's world of crappy software vendors who provide crappy products and next to zero service at premium prices, it's refreshing to work with someone who is honest, thorough, reasonable and willing to do what it takes to meet the customer's needs.

    Eric Leung, Director of Information Systems
    Outdoor Research

About Freelock

We are located in Pioneer Square, in downtown Seattle: 83 Columbia Street #401, Seattle, WA 98104, USA. [P] 206.577.0540. ©1995-2014 Freelock Computing