Can’t I Just Be Reasonable?

“Don’t it always seem to go
That you don’t know what you’ve got
Till it’s gone” –Joni Mitchell, Big Yellow Taxi

My husband “loaned” my unused iPad to my father-in-law. I hadn’t used it in a year, and he thought it might expand his dad’s horizons and bring the Internet to him for the first time. But my father-in-law didn’t use the iPad either. After a couple of months, upon finding it stashed in a drawer with its battery drained, I demanded it back.

I proceeded to load it with some New Yorkers to read on a trip.

It was great…for a very short while.

But this past week I read a physical New Yorker instead of its electronic cousin. There was something extremely satisfying about shuffling and folding its pages. Sure, I can listen to poets read their poems and enjoy the extra photos on my iPad. But the video clips? Not that interesting.

My iPad remains underutilized. And I am feeling a bit guilty about asking for it back. Why did I want it back? Why did I react so strongly to “losing” my unused iPad?

Daniel Kahneman, in Thinking, Fast and Slow, gives insights into how we react to perceived gains and losses. We respond to a loss far more strongly than we do to an equivalent gain. Take something away and we’ll pine for it even more than its perceived value. And we are driven more strongly to avoid losses than to achieve gains. No, that isn’t rational. But it’s how we are wired.

Sigh. So chalk up my reaction to my loaned iPad to petty possessiveness and an ingrained reaction to perceived loss.

Even more distressing, Kahneman points out that we take on extra risks when faced with a loss. We continue to press on in spite of mounting losses. Losing gamblers keep gambling. Homeowners are reluctant to sell a house that is underwater in value and move on. And additional time and resources get allocated to late, troubled software projects with little or no hope for success. It’s easier than deciding to pull the plug.

Not surprisingly, our aversion to loss increases as the stakes increase. But not dramatically. Only when things get really, really bad do we finally pull back and stop taking avoidable risks. And to top that off, loaded, emotional words heighten our perception of risk (conjuring up scary imaginary risks that we then don’t react to rationally).

So knowing these things, how can I become a better decision-maker? Right now, I don’t see any easy fixes. Awareness is a first positive step. When I feel a pang of loss I’m going to try to dig deeper to see whether I need to shift my perspective (which might be hard to do in the heat of the moment, but nonetheless…). Especially when I suddenly become aware of a loss. Knowing about loaded, emotional words, I’m going to be sensitive to any emotional “negative talk” that could distort my perceptions of actual risks.

Still, I’m searching for more concrete actions to take that can help me react more rationally to perceived losses. Is this a hopeless cause? I’m interested in your thoughts.

Distinguishing between testing and checking

At Agile 2013 Matt Heusser presented a history of how agile testing ideas have evolved in “Twelve Years of Agile Testing: And What Do We Do Now?” The most intellectually challenging idea I took away from Matt’s talk was the notion that testing and checking are different. I’m still trying to wrap my head around this distinction.

Disclosure: I’m not a testing insider. However, along with effective design and architecture practices, pragmatic testing is a passion of mine. I have presented talks at Agile conferences with my colleague Joe Yoder on pragmatic test-driven design and quality scenarios.

Like most, I suspect, I have a hard time teasing out a meaningful distinction between checking and testing. When I looked up definitions for testing and checking there was significant overlap. Consider these two definitions:

Testing: the means by which the presence, quality, or genuineness of anything is determined.

Testing: a particular process or method for trying or assessing.

And these for checking:

Checking: to investigate or verify as to correctness.

Checking: to make an inquiry into, search through, etc.

Using the first definition for testing, I can say, “By testing I determine what my software does.” For example, a test can determine the amount of interest calculated for a late payment or the number of transactions that are processed in an hour. Using the second meaning of testing, I can say that, “I perform unit testing by following the test first cycle of classic TDD” or that, “I write my test code to verify my class’ behavior after I’ve completed a first cut implementation that compiles.” Both are particular testing processes or methods.
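The two senses can be made concrete with a small sketch. The `late_payment_interest` function below is hypothetical, invented purely for illustration; the point is that running the code once determines what the software does, while the assertion verifies it against an expectation:

```python
# Hypothetical function, invented for illustration only.
def late_payment_interest(balance, annual_rate, days_late):
    """Simple interest accrued on an overdue balance."""
    return round(balance * annual_rate * days_late / 365, 2)

# First sense: a test *determines* what the software does.
# Run it and observe the result.
observed = late_payment_interest(1000.00, 0.18, 30)
print(observed)

# Second sense: a test *verifies* behavior against an expectation.
assert observed == round(1000.00 * 0.18 * 30 / 365, 2)
```

The same code serves both purposes; what differs is whether I run it to find out what happens or to confirm what I already expect.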

I can say, “I check that my software correctly behaves according to some standard or specification” (first meaning). I can also perform a check (using the second definition) by writing code that measures how many transactions can be performed within a time period.

I can check my software by performing manual procedures and observing results.

I can check my software by writing test code and creating an automated test suite.
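That throughput check can itself be automated. Here is a rough sketch under stated assumptions: `process_transaction` is a stand-in for real work, and a real check would compare the measurement against the project’s actual performance target rather than a placeholder threshold:

```python
import time

def process_transaction():
    """Stand-in for real transaction-processing work."""
    return sum(range(100))

def transactions_per_second(duration=0.1):
    """Count how many transactions complete in a fixed time window."""
    count = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        process_transaction()
        count += 1
    return count / duration

# The check: compare observed throughput against a required threshold.
measured = transactions_per_second()
assert measured > 0  # a real check would use the project's actual target
```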

I might want to assess how my software works without necessarily verifying its correctness. When tests (or evaluations) are compared against a standard of expected behavior they also are checks. Testing is in some sense a larger concept or category that encompasses checking.

Confused by all this word play? I hope not.

Humans (and speakers of any native language) explore the dimensions and extent of categories by observing and learning from concrete examples. One thing that distinguishes a native speaker from a non-native speaker is that she knows the difference between similar categories, and uses the appropriate concept in context. To non-native speakers the edges and boundaries of categories seem arbitrary and unfathomable (meanings aren’t found by merely reading dictionary definitions).

I’ve been reading about categories and their nuances in Douglas Hofstadter and Emmanuel Sander’s Surfaces and Essences. (Just yesterday I read about the subtle difference between the phrases “Letting the cat out of the bag” and “Spilling the beans.”)

So what’s the big deal about making a distinction between testing and checking?

Matt pointed us to Michael Bolton’s blog entry, Testing vs. Checking. Along with James Bach, Michael has nudged the testing world to distinguish between automated “checks” that verify expected behaviors versus “testing” activities that require human guided investigation and intellect and aren’t automatable.

In James Bach’s blog, Testing and Checking Refined, they make these distinctions:

“Testing is the process of evaluating a product by learning about it through experimentation, which includes to some degree: questioning, study, modeling, observation and inference.
(A test is an instance of testing.)

Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.
(A check is an instance of checking.)”

My first reaction was to throw up my hands and shout “Enough!” My reaction was that of a non-native speaker trying to understand a foreign idiom! But then I calmed down, let go of my urge to precisely know James and Michael’s meanings, accepted some ambiguity, and looked for deeper insight.

When Michael explained,

“Checking is something that we do with the motivation of confirming existing beliefs” while, “Testing is something that we do with the motivation of finding new information.”

it suddenly became clearer. We might be doing what appears to be the same activity (writing code to probe our software), but if our intentions are different, we could either be checking or testing.

Why is this important?

The first time I write test code and execute it I learn something new (I also might confirm my expectations). When I repeatedly run that test code as part of a test suite, I am checking that my software continues to work as expected. I’m not really learning anything new. Still, it can be valuable to keep performing those checks. Especially when the code base is rapidly changing.

But I only need to execute checks repeatedly on code that has the potential to break. If my code is stable (and unchanging), perhaps I should question the value of (and false confidence gained by) repeatedly executing the same tired old automated tests. Maybe I should write new tests to probe even more corners of my software.

And if tests frequently break (even though the software is still working), perhaps I need to readjust my checks. I’m betting I’ll find test code that verifies details that should be hidden/aren’t really essential to my software’s behavior. Writing good checks that don’t break so easily makes it easier to change my software design. And that enables me to evolve my software with greater ease.
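Here is a small sketch of that kind of readjustment, using a hypothetical `ShoppingCart` class invented for illustration. The first (commented-out) check couples the test to a hidden representation and breaks whenever that representation changes; the second verifies only the behavior callers rely on, leaving the design free to evolve:

```python
# Hypothetical class, used only for illustration.
class ShoppingCart:
    def __init__(self):
        self._items = {}  # internal detail: name -> (price, qty)

    def add(self, name, price, qty=1):
        _, old_qty = self._items.get(name, (price, 0))
        self._items[name] = (price, old_qty + qty)

    def total(self):
        return sum(price * qty for price, qty in self._items.values())

cart = ShoppingCart()
cart.add("book", 10.00, 2)

# Brittle check: verifies a detail that should stay hidden. It fails
# if _items is renamed or restructured, even though observable
# behavior is unchanged.
# assert cart._items == {"book": (10.00, 2)}

# Robust check: verifies the behavior callers actually depend on.
assert cart.total() == 20.00
```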

When test code becomes stale, it is precisely because it isn’t buying any new information. It might even be holding me back.

I have a long way to go to become a fluent native testing speaker. And I wish that James and Michael could have chosen different phrases to describe these two categories of “testing” (perhaps exploration and verification?).

But they didn’t.
Fair enough.

Evangelizing New (Software Architecture) Ideas and Practices

Last December I spoke at the YOW Conferences in Australia on Why We Need Architects (and Architecture) on Agile Projects.

I wanted to convince agile developers that emergent design doesn’t guarantee good software architecture. And often, you need to pay extra attention to architecture, especially if you are working on a large project.

There can be many reasons for paying attention to architecture: Meaningful progress may be blocked by architectural flaws. Some intricate, tricky technical work may need to be sorted out before you can implement functionality that relies on it. Critical components outside your control may need to be integrated. Achieving performance targets may be difficult, and you need to explore what you can do before choosing among expensive or hard-to-reverse alternatives. Many other developers may depend on some new technical bit working well (whether it be infrastructure, that particular NoSQL database, or that unproven framework). Some design conventions need to be established before a large number of developers start whacking away at gobs of user stories.

I characterized the role of an agile architect as hands-on: a steward of sustainable development who balances technical concerns with other perspectives. One difference between large and small agile projects is that on larger ones you often need to do more significant wayfinding and architectural risk mitigation.

I hoped to inspire agile developers to care more about sustainable architecture and to consider picking up some more architecture practices.

Unfortunately, my talk sorely disappointed a thoughtful architect who faces an entirely different dilemma: He needs to convince non-agile architects to adopt agile architectural practices. And my talk didn’t give him any arguments that would persuade them.

My first reaction to his rant was to want to shout: Give up! It is impossible to convince anyone to adopt a new way of working that conflicts with his or her deeply held values.

But then again, how do new ways of working ever take hold in an organization? By having some buzz around them. By being brought in (naively or conscientiously) by leaders and instigators who know how to build organizational support for new ideas. By being new and sexy instead of dull and boring. By demonstrating results. By capturing people’s imagination or assuaging their fears. Surreptitiously, quietly replacing older practices when reasons for doing them are no longer remembered. When the old guard dies out or gives up.

Attitudes rarely change through compelling discussions or persuasive argumentation. I look to Mary Lynn Manns and Linda Rising’s Fearless Change: Patterns for Introducing New Ideas for inspiration.

I take stock of how much energy I want to invest in changing attitudes and how much investment people have in the way they are doing things now. I don’t think of myself as a professional change agent. Yet, as a consultant I am often brought in when organizations (not necessarily every person, mind you) want to do things differently.

People generally aren’t receptive to new ideas, practices, or technologies when they feel threatened, dismissed, disrespected, underappreciated, or misunderstood. I am successful at introducing new techniques when they are presented as ways to reduce or manage risk, increase productivity or reliability, or improve performance: whatever hot button the people I am exposing to these new ways of working are receptive to. Labeling techniques as “agile” or “lean” may create a buzz in those who are already receptive. But the reaction can be almost allergic in those who are not. The last thing I want to do is foster divisiveness. Labels do that. If I can get people comfortable taking just that next small step, that is often enough for them to carry on and make even more significant changes. Changing practices takes patience and persistence. At the end of the day I can only convince, demonstrate, and empathize; I cannot compel people to change.

Software Decision Making Under Stress

I recently blogged about my discomfort with making software design decisions at “the last responsible moment” and suggested that deciding at the “most responsible moment” might be a better approach. To me, a slight semantic shift made a difference in how I felt. Deciding at the most responsible moment made me feel less stressed and more in control of the situation.

But is this because I am basically lazy, preferring to put off decisions until they absolutely, positively must be made (and then hating that gut wrenching feeling when I finally realize that I don’t have enough time to think things through)? Or is something else going on?

I admit that the decisions we make about software development on a day-in/day-out basis aren’t always made under extreme stress; yet I thought I’d see what researchers say about decision-making under stress. As a developer I benefit from some stress, but not too much. That’s why (reasonable) deadlines and commitments are important.

But first, a disclaimer. I have not exhaustively researched the literature on this topic. I suspect there are more relevant and recent publications than what I found. But the two papers I read got me thinking. And so, I want to share them.

Giora Keinan, in a 1987 Journal of Personality and Social Psychology article, reports on a study that examined whether “deficient decision making” under stress was largely due to not systematically considering all relevant alternatives. He exposed college student test subjects to “controllable stress”, “uncontrollable stress”, or no stress, and measured how it affected their ability to solve interactive decision problems. In a nutshell, being stressed didn’t affect their overall performance. However, those who were exposed to stress of any kind tended to offer solutions before they considered all available alternatives. And they did not systematically examine the alternatives.

Admittedly, the test subjects were college students doing word analogy puzzles. And the uncontrollable stress was the threat of a small random electric shock; but still, the study demonstrated that once you think you have a reasonable answer, you jump to it more quickly under stress. (Having majored in psychology and personally performed experiments on college students, I can anecdotally confirm that while college students are good test subjects, one should take care not to over-generalize any results.)

So, is stress “good” or “bad”? Is systematically considering all viable alternatives before making a decision a better strategy (or not)? Surely, we in the software world know we never have enough time to make perfectly researched decisions. And big-upfront-decision-making, without confirmation is discouraged these days. Agile developers believe that making just-in-time design decisions result in better software.

But what are the upsides or downsides to jumping to conclusions too hastily? What happens if you feel too stressed when you have to make a decision? To gain some insight into that, I turned to a summary article, Judgment and decision making under stress: an overview for emergency managers, by Kathleen M. Kowalski-Trakofler, Charles Vaught, and Ted Sharf of the National Institute for Occupational Safety and Health. These authors raised many questions about the status quo of stress research and the need for more grounded studies. However, they also drew three interesting conclusions:

1. Under extreme stress [think natural disasters, plane crashes, war and the like], successful teams tend to communicate among themselves. As the emergency intensifies, a flatter communication hierarchy develops with more (unsolicited) information coming from the field to the command centre. Under stressful, emergency situations, communication becomes streamlined and localized. Also, people take personal initiative to inform leaders about the situation.

2. Stress is affected by perception; it is the perceived experience of stress that an individual reacts to. When you perceive a situation as stressful, you start reacting as if it were stressful. What is stressful to you is what’s important. And not everyone reacts well under stress. Training helps emergency responders not to freak out in an emergency, but those of us in the software world aren’t nearly so well trained to respond to software crises. When was the last time you had software crisis training?

3. Contrary to popular opinion, judgment is not always compromised under stress. Although stress may narrow the focus of attention (the data are inconclusive), this is not necessarily a negative consequence in decision making. Some studies show that the individual adopts a simpler mode of information processing that may help in focusing on critical issues. So, we can effectively make reasonable decisions if we find and focus on the critical issues. If we miss out on a critical issue, well, things may not work out so well.

Reading these papers confirmed some suspicions I had: Stress is something we perceive. It doesn’t matter whether others share your perception or not. If you feel stressed, you are stressed. And you are more likely to make decisions without considering every alternative. That can be appropriate if your decisions are localized, you have experience, and you have a means of backing out of a decision if it turns out to be a bad one. But under extreme stress things start to break down. And then, if you haven’t had emergency training, how you respond is somewhat unpredictable.

I hope that we can start some ongoing discussions within the software community about design decisions and decision-making in general. How do you, your team, or your management react to, avoid, or thrive on stress? Do you think agile practices help or hinder decision-making? If so, why? If not, why not?

Giving Design Advice

In an ideal work environment software designers freely ask for and offer constructive criticism and openly discuss issues. They don’t take criticism as personal affronts, and they and their managers make intelligent, informed decisions.

OK, so how do design discussions play out where you work? In my latest IEEE Software design column I discuss some effective ways to give advice as well as hurdles you may have to overcome before your advice is heeded. Being an effective critic, advisor, and design colleague isn’t purely a matter of being on top of your technical game. Cognitive biases affect how people naturally (and often illogically) receive and process information. They can have a big impact on how your advice is received and interpreted. If you want to get better at communicating suggestions to others, you should become more aware of these biases and look for ways to mitigate or avoid them. Wikipedia has a good overview of cognitive biases in how people assess risk, make decisions, and rationalize them after the fact.

To whet your appetite for this topic I’ll mention one bias that has probably bitten every programmer at least once. A confirmation bias is when a person looks for what confirms a strongly held belief while ignoring or undervaluing contradictory evidence. Have you ever found yourself unable to isolate a software bug because you insisted that your software just couldn’t work that way? Your confirmation bias might have prevented you from noticing facts that would lead you more swiftly to identifying the bug’s root cause. I like the idea, presented in Debugging by Thinking by Robert Charles Metzger, that by taking on the different mindset of a detective, mathematician, safety expert, psychologist (one of my personal favorites), computer scientist, or engineer, you can get a better slant on tracking down a bug. According to the author, “Each way has an analogy, a set of assumptions that forms a worldview, and a set of techniques associated with it.” In designing as well as debugging, having a variety of worldviews for tackling a problem helps you avoid getting stuck in a rut and pick the right strategy based on the context. One way to get around a confirmation bias is to shake yourself out of the normal way of doing business.

Cognitive biases aren’t good or bad. They just are. And if you work with others it helps if you can identify biases that lead you (or others) to jump to conclusions, hold onto an idea when it should probably be discarded, or ignore risks. That knowledge can help you tune your message and become aware of and avoid bias traps.

Barely good enough doesn’t cut it

About a year ago Scott Ambler wrote an article stating, “One of Agile Modeling’s more controversial concepts is the dictum that models and documents should be just barely good enough.” Scott characterized barely good enough models as having just enough accuracy, consistency and detail to remain understandable and provide value and pointed out that barely good enough is context dependent—what’s good enough for your situation may not be barely good enough for mine.

I don’t like the words “just barely good enough”. Even after a year, those four little words stick in my craw. I don’t scrape by on barely good enough planning or preparation in other aspects of my life. So why should that be the mantra for my modeling, design, or design documentation activities? As a thought experiment, where would you mark a point on a hypothetical value-per-effort curve that marks what you consider “just barely good enough” modeling or design? Now place another mark for where you’d like to be on that curve for your current project.

I don’t want my checking account to have just barely enough money or get just barely good enough sleep. After a few weeks I’d be a walking zombie or the meanest bitch on the planet. If I allotted barely enough time to get to the airport, sooner or later I’d miss a flight. Since my local movie theatre is just 6 minutes from my door on a good traffic day, I do allot just barely enough time. Most of the time I make it to the movies. (I’m quite willing to trade off having to occasionally go to a later show against sitting through those annoying pre-movie ads.)

In most aspects of my personal and professional life I make choices about how much planning, preparation, and cushion or slack I need based on my preferences, style, and the consequences of failure. When things matter I take extra care. When they don’t, well…I’m as casual as the next person. But I prefer to think that I take adequate measures most of the time: adequate and good enough. Not just barely good enough. To me this subtle shift in attitude and outlook is huge. And an appropriate one for agile developers to take.

Barely good enough thinking can too easily be used as an excuse for taking the easier, inadequate way out. Sure, we can discuss what’s “barely good enough”, but that misses the point—don’t we all want to do adequate, good work, and not do things that aren’t needed? Barely good enough thinking will always make us question whether we’ve done too much. And as a consequence, any design before coding gets a bad rap.

I propose we get rid of that barely scraping by attitude and replace it with a healthier one. I propose we start using words like “enough”, “adequate”, or “meets the project’s needs” to describe how much design we do. Or perhaps “just enough” modeling as Susan Burk described in her Practical Process talk at SD Best Practices. These words are far less edgy. But that is a good thing. Without feeling desperate, I’m hopeful people can feel safe enough to start having meaningful conversation about what’s enough given their current situation.

Scott’s edgy phrase emphasizes the point that agilists strive to minimize unnecessary work. Design just in time. Do just enough. Don’t go overboard. Push towards minimizing if you’ve had a tendency to over design or document. But I say do an adequate job. Meet your objectives. Maybe you need to do even more designing than you have been doing. Nothing edgy about that message, just common sense.

The “just barely good enough” message can be a real turn off to teams who come from traditions of producing high-quality, high value designs. What? You mean to be agile I should turn away from what I find to be valuable? No, you don’t.

Adequate design means you may need to sort through options and gain enough understanding before you pound out much production code. You may even need to design and build a few prototypes to understand the problem before you launch into something. That’s OK if that’s what you need to do. You can be agile without having to barely scrape by. In fact, conscientious developers know that and insist on taking whatever measures they need to build the right stuff.

I have known agile teams who feel guilty about taking any time to model and explore their design options. And they feel inadequate when they find it useful to write down a simple use case to clarify a story card. Good grief! Clarity is what we’re after. Others feel they aren’t clever (or edgy) enough when they need to think about their problem or sketch out a potential design before writing tests or code. I say stop feeling guilty! If writing, thinking, and talking about your design with others helps you clarify your ideas, keep it up. Most people need to think a bit before they code.

If you want to keep permanent representations of key design or architectural views in addition to your code, don’t feel guilty about that either. It just means that you find value in more than simply the working code. (If you don’t, then stop it!) The right kind of adequate design documentation can go a long way in communicating key ideas to new team members, support, and operations. Your code doesn’t always speak clearly to others about your design.

At Software Development Best Practices, Scott gave a talk on agile modeling where he explained that barely good enough was the point where the value per effort is maximized. His point on his hypothetical curve is way higher than where I’d mark it. I’m betting your sense of what’s barely good enough differs from Scott’s view, too. I’d label Scott’s point as “optimal” and mark just barely good enough way to the left.

I’m not sure what an ideal value/effort distribution curve looks like, and I suspect a value-per-effort curve will vary by project type and organization. I’m skeptical that it looks anything like the curve Scott drew (Scott doesn’t make any claims that he’s discovered an ideal shape for this curve, only that you should find the “barely good enough” point and not overdo design). Curious about various distribution curves and their properties, I took a look on Wikipedia. Nothing seemed to fit my expectations. I suspect that the value of design gradually tails off rather than steeply declines (on most projects). Have you found the value/effort for modeling to follow a normal distribution curve or some other known distribution function? Have you found it to vary by project type? If so, in what ways? Do you find that value decays at some point (after hitting some maximal value-per-effort point)? Or have you found it to remain steady or keep increasing at a slower rate? I’d be interested in your thoughts.

False Dichotomies and Forced Divisions

Last week I received an email with this tagline:

“Replacing an on-site customer with some use cases is about as effective
as replacing a hug from your Mom with a friendly note.”

I enjoy this person’s funny, witty, and constantly changing taglines. They certainly add zest to mail messages. But this one bugged me. It set up a false dichotomy. A false dichotomy occurs when someone sets up choices so that it appears there are only two possible conclusions when in fact there are further alternatives. Consider the phrase “if you’re not for me you must be against me.” Most of the time this is a false dichotomy. There are other possibilities. You may be totally indifferent to the person’s proposed idea, or undecided. There may be several unmentioned possibilities (and they may not be mutually exclusive).

Driving to a Portland SPIN meeting last night I saw this bumper sticker: “I don’t have to like George Bush to love my country”. Wow. A false dichotomy pointed out in the political arena. What a novelty!

But back to what bugs me about this tagline. It first set up the false dichotomy that “mom’s hug” is better than “friendly note”. But wait! Mom’s hugs aren’t always better than friendly notes. Maybe you need that friendly note to help you through a tough day. Maybe that friendly note includes a useful reminder. In that case a friendly hug might be a good start, but it’s not enough. Mom can always give you a friendly hug and write you a friendly note.

The tagline then makes the powerful analogy between mom and onsite customer, and friendly note and use cases. If you don’t think this through you could end up being swayed to believe that use cases and notes are never as good as mom or onsite customers or apple pie (and that you have to pick one). But use cases and onsite customers can co-exist if you need them to. There are legitimate reasons to write things down. Maybe writing helps a customer sort through what she really wants. There can be value in recording what was said because it needs remembering by more than the development team. The next time someone tries to sway you by setting up a false dichotomy don’t get caught in their faulty reasoning. Stop. Think things through. Then decide what your position is or whether you see more possibilities.

Loaded words

What do you do when people react negatively to terms you use to describe ideas? If you are like one clever manager I met at Software Development Best Practices, you turn around and let the team take ownership of the way they are going to speak about things. This manager of managers in a health care company recounted how he introduced Scrum into his organization. After talking about Scrum values and practices, he got pushback on the names of Scrum activities. “Scrum? Sounds like a fight. We don’t like that. Sprints? Why the goofy terminology? We don’t like the sound of it. Sounds like people are always running hard. And besides, we’re not athletic.” So he asked his group to propose alternative names. Instead of sprints, his group calls them iterations. And yeah, they know they need to be short. They are following scrum practices; they just don’t call a spade a spade. He’s convinced that they are the better for it. It isn’t so important what they’re called as how they’re applied. They’ve even renamed daily standups. And they have them mid-morning so everyone can attend (as the team’s work hours are staggered).

Another case in point. At Agile 2005, Jon Spence from Medtronic presented an experience report recounting how he got his company to adopt agile practices on a project. Medtronic makes defibrillators and pacemakers, so introducing agile concepts into his organization was somewhat tricky. Jon had to tone down the edginess of the agile message. He can’t imagine the Agile Manifesto hanging in the hallways at his company. For one thing, one of its tenets, favoring “working software over comprehensive documentation,” would be highly controversial. Medtronic builds FDA-regulated products that require extensive documentation. According to Jon, the Agile Manifesto would cause an “allergic reaction” at Medtronic. He said he wasn’t going to bring back copies of it to pass around (they were handed out at the conference). No sir. Those would be fighting words. And Jon wants to avoid controversy so he can focus on introducing agile practices. What proved effective was talking about delivering code incrementally with higher quality using a balanced set of practices that provide a safety net. Those were the right words to convince management. His project delivered on its promises, and he and others are now spreading agile practices to other project teams.

I appreciate powerful words that people can rally around. But they don’t have to be edgy. By avoiding loaded words you can more effectively get your message across. If the Agile Manifesto doesn’t have the right words for your organization (and you don’t want to be branded a radical), you may need to discover different ways to talk about agile practices. It isn’t always necessary to use inflammatory words and shake people up to cause change.

It’s not OK…or is it?

Inspired by the TV show Starved, which chronicles the lives of friends with eating disorders who attend meetings with other food-challenged folks (where inappropriate behavior is censured with the chant, “it’s not OK”), I imagine a support group for software agilists gone astray:

“Hi, I’m Dave and I don’t like to pair program. If I spend a few quiet hours alone before everyone shows up, I can get a whole lot more done.”
“Dave, it’s not OK.”

“Hi, I’m Beth and I prefer to sketch out my design before I write any code.”
“Beth, it’s not OK.”

“Hi, I’m Rick and I plan the work for my team and then show them my plan.”
“Rick, it’s not OK.”

Or is it?

I recoil from absolutes. The chant “It’s not OK!” grates against my core values. Sure, sometimes behaviors may be inappropriate, but there’s got to be a better way to address the issue. Imagine another world:

“Dave, you don’t like pair programming. I want our team to really try pairing. Maybe as a group we should tone down all our chatter. It can get pretty loud sometimes, and that makes it hard to focus. If you really don’t think pairing is going to work for you, you can still be agile, but you might find it more to your style to work on the mark project. They’re writing unit tests, doing daily builds, and have short iterations, but they’re not following all XP practices. One thing the mark team insists on instead of pair programming is paired check-ins for all new modules and for any changes close to the end of an iteration.”

“Beth, I like that you know UML and use it effectively. When you draw design ideas, everyone seems to understand things better. I think you have a knack for making ideas understandable. When you take the time to sketch out what’s really needed, I suspect you save rework time. Maybe we should consider doing even more design pre-work for complex functionality. Let’s set up an experiment to measure the time we spend refactoring vs. the time we spend doing some upfront design for a couple of challenging user stories in our next sprint.”

“Rick, I like that you want to plan ahead. But instead of planning for your group, why not get them involved in planning? They’ll be more committed if they set their own goals.”

Hard-line agilists used to say that until you know how to play by the rules, don’t break them. But I think that hard-line stance is changing. In the second edition of Extreme Programming Explained, Kent Beck talks about core XP practices and ways to move toward your values. At Agile 2005, in a keynote, Bob Martin talked about the trend of adopting agile practices from a “Chinese menu.” It is a better strategy to adopt agile practices that fit your development lifestyle. As any dieter knows, a successful eating plan has to fit into your lifestyle and work to your strengths. Some dieters can succeed eating a little chocolate each day. Some can’t. No food should be censored or out of bounds unless it’s too difficult to handle.

The same goes for agile development practices. Deviations from typical (published) agile practices shouldn’t automatically be censured in a knee-jerk fashion. That’s counterproductive. But don’t cheat on your agile goals, either. If you find a particular recommended practice too hard to adopt, ask why and dig deeper. Maybe that practice doesn’t fit with your team or your company or with the way you work. Maybe you’ve got to change something first. Or maybe it just isn’t a good fit. But if a particular practice causes you to stray from your goals, take a long hard look at why it’s counterproductive and how you might clean up your act. Sorting it out will require some honest thinking, experimentation, and reflection. And that’s OK.

The Cost of Inertia

Last week I closed out a safe deposit box that I had rented but hadn’t touched for over 20 years. In theory, I paid for the box to hold tax returns and valuables, but I never visited it after placing some “starter” something into it when I opened the account (what was it?). The bank branch in our town had closed years ago, and the box had been relocated to a branch in a nearby town. So I had to drive 6 miles for this little errand. I was damned if I was going to pay $39 for another year’s rental! For some reason, I decided to take action. I’d rather donate the fee to the Oregon Food Bank than line the pockets of a bank. To my amusement, the box held 3 coins: 2 Susan B. Anthony dollars and a 500 lire piece. Over the years those coins have cost me $600.

Why did it take me so long to stop paying my annual fee and clean out the box? Plain and simple: inertia. The dictionary defines it as:

The tendency of a body to resist acceleration; the tendency of a body at rest to remain at rest or of a body in straight line motion to stay in motion in a straight line unless acted on by an outside force.
Resistance or disinclination to motion, action, or change.

In the busyness of life it’s all too easy to let things slide rather than fix ’em. I didn’t miss that $39 in my pocket, but I didn’t like paying for something I wasn’t using, either. As a small act of, well, determination, I took an hour out of my busy day and “fixed” the problem. Over the past year I’ve shut down my unused dialup service (saving $10/month) and readjusted my banking to eliminate most monthly fees. Now if I could figure out how to get my phone company to coalesce my home and DSL lines (that’s a long story not worth recounting here), I’d save myself another $300 per year. But two painfully long phone calls to my phone company haven’t fixed things yet. I’ve got to muster some determination before I try again. Every so often I get the itch to cut out waste.

I had lunch today with a software manager who described some costs of inertia in her organization. Inertia that makes for tedious retyping of data from one system into another instead of writing software that could handle the majority of data transfers. Inertia that keeps a 30 year old system chugging along, even though it hasn’t aged gracefully. Most software, as it ages, gets more complex and more difficult to maintain.

There’s a cost to inertia. In my case, a few hundred bucks a year. In companies with inefficient processes and creaky software, the cost can be quite high, both in dollars and in the expense of people laboring at tedious, unnecessary tasks. Lean software development practices aim to strip waste out of processes. But applying lean thinking to legacy systems and maintenance projects isn’t nearly as cool as applying it to new initiatives. I wish it were. There may be no glory in chipping away at cruft built up through inertia. But there should be. At the end of the day you will have left the world in a slightly better state. I know I have. My check to the food bank is in the mail.