Alan Shalloway’s Hat Trick

Why does design seem so effortless in the hands of a master, and why do beginners find design so difficult? Alan Shalloway‘s talk about Emergent Design at Software Development Best Practices last September demonstrated three ways to get to an identical good design: apply design patterns, use commonality-variability analysis, and practice test-driven development techniques. The problem he used to illustrate his talk was fairly simple: design classes to support monitoring microwave chips and cards by requesting their status over either a TCP/IP connection or via SMTP. Messages may optionally be encrypted with either 64- or 128-bit encryption. Cards queue information and send it out no more than every 10 minutes unless there is an error (then they send it immediately). Chips send results immediately.

Alan started by describing a tall, ugly hierarchy with intermediate abstract classes and 24 concrete subclasses, one for each combination of hardware type, encryption mechanism, and transmission protocol. This design is clearly bad because it contains redundant behaviors and a combinatorial explosion of leaf classes.

Next he applied the GoF authors’ advice of favoring composition over inheritance to create a design with three families of classes, one for each variation: hardware type, transmission protocol, and encryption. The specific hardware class plugged the appropriate encryption and transmission protocol helper classes together to implement the appropriate behavior variations.

Alan then showed how test-driven development starts with a simpler game plan. Instead of understanding all variations upfront, create a clean design that supports just one partial feature (or story) at a time. Refactor as you add more capabilities, following a few good design principles. The idea with TDD is to let the design emerge, one story at a time. The first story Alan specified was to request the status of a chip with 64-bit encryption transmitted via TCP/IP. The next story added the flexibility to support different encryptions, the third transmission via SMTP, and so forth.

He concluded his design demonstration by applying commonality-variability analysis to arrive at the same end result. In a nutshell, commonality-variability analysis involves identifying common abstractions and variations and the relationships between them, assigning them responsibilities, and then linking them together. It differs fundamentally from TDD in that you analyze variations upfront before creating classes that are then configured to work together.
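The composed design Alan showed might be sketched roughly like this. All the class and method names here are my own illustrative guesses, not Alan’s actual code; the point is simply that three small families of plug-in classes replace one subclass per combination:

```python
# Composition over inheritance: a hardware object is configured with an
# encryption helper and a transmission helper, instead of there being one
# concrete subclass for every hardware/encryption/protocol combination.
# (All names here are hypothetical, for illustration only.)

class Encrypter:                      # the encryption family
    def __init__(self, bits):
        self.bits = bits

    def encrypt(self, message):
        return f"<{self.bits}-bit>{message}"


class TcpTransmitter:                 # the transmission family
    def send(self, payload):
        return f"TCP:{payload}"


class SmtpTransmitter:
    def send(self, payload):
        return f"SMTP:{payload}"


class Chip:                           # the hardware family: chips send immediately
    def __init__(self, encrypter, transmitter):
        self.encrypter = encrypter
        self.transmitter = transmitter

    def report_status(self, status):
        return self.transmitter.send(self.encrypter.encrypt(status))


# Any combination is a configuration, not a new leaf class:
chip = Chip(Encrypter(64), TcpTransmitter())
```

Adding a new encryption strength or transport means writing one small class, not doubling the number of leaves in a hierarchy.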

Voila! Identical designs following different approaches and a handful of design heuristics. Is this a realistic expectation outside of a canned talk? Can mere mortals, students new to design, or developers faced with considerably larger, more complicated design variations perform such clean design factorings in a real-world setting? I’m highly skeptical. I’ve seen so many different designs for a more complex problem I present to students of my design class that I’ve stopped believing I know every reasonable solution. I am constantly amazed by the sheer number of different acceptable design solutions students create. Sure, there are recurring patterns and themes among a range of acceptable design solutions. But I don’t expect identical solutions.

Alan’s demonstration that a good design can be achieved three different ways—an amazing hat trick—really hit home the point that you needn’t always start by knowing everything upfront. But toss in a few more wrinkles—add a communications port, a mechanism to gain access to that port, define different card and chip types (with different reports), cards that report at different time intervals, cards and chips that can be programmed to report or have to be polled (or both)—and I expect design solutions to really diverge. One designer may embed an if-then-else decision into a method while another may factor out a variation into a family of classes. Some designers check a condition before asking a helper to perform an action, rather than find the polymorphic way to hide those details. Instead of having an “empty encrypter” class, designers may use a variable to represent bits of encryption, and only invoke the encrypter if the variable’s value isn’t zero. Even though I strive for squeaky clean polymorphism and to eliminate external checks, I often can’t convince students or some designers of the benefits of using null objects. To them, creating another (null) class seems like more work than it’s worth.
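The null-object move I keep lobbying for can be sketched like this. The NullEncrypter class and all the names here are my own, hypothetical; the contrast is with keeping a bits-of-encryption variable and testing it for zero at every call site:

```python
# Null object sketch: the "no encryption" case gets a class of its own,
# so control code never checks whether encryption is turned on.
# (Class names are hypothetical, for illustration only.)

class Encrypter:
    def __init__(self, bits):
        self.bits = bits

    def encrypt(self, message):
        return f"<{self.bits}-bit>{message}"


class NullEncrypter:
    """Same interface, does nothing: stands in when encryption is off."""
    def encrypt(self, message):
        return message


def transmit(encrypter, message):
    # No "if bits != 0" check here; the encrypter object decides.
    return encrypter.encrypt(message)
```

With a NullEncrypter plugged in, `transmit` passes the message straight through, and the control code stays a single polymorphic call either way.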

So while I don’t expect different designers to produce identical results, I firmly believe that abstractions can either be developed as you go or plotted out ahead of time. Alan advises designers to selectively use these approaches. Work out pattern-oriented solutions ahead of time when you know what variations are prevalent. When design variations aren’t so straightforward, start simply and add support for variations as you go. By using test-driven development and rigorously refactoring before adding new functionality, you can keep an emerging design clean. Just don’t expect everyone’s design aesthetics to line up. Commonality-variability analysis seems to be a useful analytic technique for laying out the facts whenever I consider how to support a related set of variations.

Little things add up

Being in the country, surrounded by fields and trees, my perennial garden requires constant weeding. A few nights ago I was out in the garden digging out some tenacious crab grass. At first I noticed a pinprick or two on my arm and felt my legs itching. I ignored those slights and kept working. But after being bitten several times and feeling a persistent roving itching sensation around my ankles, I stopped to investigate. A small army of teeny ants was streaming over my shoes, crawling up my pant legs, into my socks, and over my gloves onto my bare arms. I ditched my shoes, socks, pants…and dashed inside. That ended weeding for the day.

I’m like that. I ignore little annoyances. Bug bites or a few scratches don’t stop me from weeding. I block out most distractions when intent on a task. It’s only when irritations exceed some threshold that I turn my attention to what’s bugging me. Otherwise small distractions, if I notice them at all, are easily brushed aside.

Software developers often have an unrelenting focus when tackling big meaty design problems. Little annoyances get brushed aside in the rush of making the design work. Who cares about those little bugaboos when exciting stuff is unfolding right before your eyes? And yet, those little things add up.

Why is it that we wait until there’s a persistent itch before we scratch it? I suppose it’s human nature. If you let every little thing distract you, you’d never get anything done. But design bugs don’t have six legs and initiative, so they tend to hang around. If you don’t knock ’em down and clean them up, things can get really ugly. The more of them you have, the more difficult it is to get your design working. Yet when you’ve invested so much in your design, you have to keep working at it (even if it would be better to scrap it and start afresh).

That’s why test-driven development is a big win. Writing tests first forces you to focus on thinking about the interface before you design and code it. Making those tests pass becomes a relentless way of getting observable behavior to work rather than letting crufty untested code pile up. But it isn’t enough. Writing a test isn’t the same as testing the quality of a design choice. Maybe that little design choice you just made should bug you (but it doesn’t) because you can’t tell whether it is just OK or what might make it better. If you are in this situation, I have one bit of advice on how not to insert a little thing into your design that may end up biting you in the end:
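A test-first step looks something like this sketch, using Python’s unittest. The device class and its names are my own hypothetical inventions; the point is only the ordering, with the test pinning down the interface before the implementation exists:

```python
import unittest

# Test-first sketch (hypothetical names): this test was written before any
# device class existed, and it pins down the interface we want.

class TestReportInterval(unittest.TestCase):
    def test_device_remembers_its_report_interval(self):
        device = ProgrammableDevice()
        device.set_report_interval(600)  # 10 minutes, in seconds
        self.assertEqual(device.report_interval, 600)


# The simplest implementation that makes the test pass:
class ProgrammableDevice:
    def __init__(self):
        self.report_interval = None

    def set_report_interval(self, seconds):
        self.report_interval = seconds
```

Running the test (for example with `python -m unittest`) confirms the behavior works before any further capability is layered on.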

If you find yourself writing code that asks an object whether it is in a certain state or whether it is capable of handling a request, and then turns around and asks it to do the right thing (based on its answer), think twice. This should bug you. There’s usually a better design choice that places the decision-making responsibilities inside the object that is being directed. A small example from my design course’s problem can illustrate. The problem is to create a design that sets the report interval for any kind of sensing device. Some sensing devices are “intelligent” and can be programmed to report at specific time intervals. Others have to be polled on a schedule. How should you design a controller that handles either case? A straightforward design might have a controller ask:

If sensingDevice.isProgrammable() then sensingDevice.setReportInterval(timeInterval)
else { /* code to add the sensing device to the poller at the timeInterval */ }

My choice, instead, would be to have the control code simply ask any device to set its report interval, and have the device decide how to do the work (based on whether it is programmable). One line of control code, with the decision-making done in the sensing devices:

sensingDevice.setReportInterval(timeInterval)

Instead of making explicit control decisions, I always prefer delegating those responsibilities to objects that do things based on what kind of thing they are. It makes for simpler control code and localized (encapsulated) decisions. Objects that behave differently based on who they are and what capabilities they have can greatly simplify control code. Making objects smart can eliminate external decision-making.
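Applied to the sensing-device example, that preference might look like the following sketch. The Poller class, and the choice to hand the poller to the device, are my own hypothetical decisions, not from the course materials:

```python
# Polymorphic version (hypothetical names): each kind of device decides for
# itself how to honor a report interval, so the controller never asks
# isProgrammable().

class Poller:
    """Polls non-programmable devices on a schedule."""
    def __init__(self):
        self.schedule = {}

    def add(self, device, interval):
        self.schedule[device] = interval


class ProgrammableDevice:
    """An 'intelligent' device: programs itself to report on an interval."""
    def __init__(self):
        self.report_interval = None

    def set_report_interval(self, interval, poller):
        self.report_interval = interval  # ignores the poller entirely


class PolledDevice:
    """A device that can't be programmed: arranges to be polled instead."""
    def set_report_interval(self, interval, poller):
        poller.add(self, interval)


# The controller's single line, identical for every kind of device:
def set_interval(device, interval, poller):
    device.set_report_interval(interval, poller)
```

Adding a third kind of device means writing one new class; the controller’s one line never changes.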

I’m going to continue compiling a list of my favorite garden-variety design bugs and will be writing about more of them. If you have some seemingly innocuous design choices that bug you, I’d like to hear from you.