Nothing ever goes exactly by the book

In Designing Object-Oriented Software, we used the design of an ATM (Automated Teller Machine) to illustrate how to design object-oriented software. We worked through a simplistic ATM design and produced a CRC (Class-Responsibility-Collaborator) card design.

Several years after our book was published I received an email from an instructor at a company. Although he liked our book, he felt he was missing something important about Responsibility-Driven Design. No matter how hard he tried, his students never exactly reproduced our design! And worse yet, in his opinion, all their designs were different.

I was astonished that he expected to teach principles, practices, and design thinking and then have his students magically produce identical designs. Our minds don’t work alike, so why should we produce identical designs?

A couple of years ago I received a gift of a Blue Apron subscription. For those who are not familiar with Blue Apron, you select the meals you want for a week, then along with the recipes they ship you a box of ingredients. All you add is your own salt and olive oil, cooking utensils, stove, and voila—you make a tasty dish.

The first meal I cooked was Za’atar-Roasted Broccoli Salad. The picture of the dish on the recipe card looked like this:
[image: the recipe card’s photo of the dish]
And here’s a photo of the dish as I prepared it:
[image: the dish as I prepared it]
Not bad! But I didn’t blindly follow the recipe. I used my brain, my eyes, and my cooking sense, as well as past experiences, when making that dish.

If you look closely at the recipe card below, you’ll see that the instructions are not equally precise. While the eggs should be boiled for exactly 9 minutes, there’s more tolerance in the time to roast the broccoli. Just how much is a drizzle? And there are so many points during the recipe when you are asked to add salt! Over time (and after following several recipes’ instructions too closely) I learned that if you added salt every time they advised you to, the meals would be too salty. Blue Apron has since adjusted their recipes to not include so many salt instructions.
[image: the recipe card’s instructions]

There is no substitute for learning from direct experience and observation. A beginning cook who expects to follow instructions exactly and get a well-made dish is bound to be frustrated.

And people who expect to follow someone else’s software design heuristics and end up with an optimal solution to their specific design problem will invariably be disappointed.

Sure, you may initially learn some design heuristics from a book, by reading code, from someone who is more senior, from Stack Overflow, or wherever. But never expect things to go exactly “by the book.” There are many details that experts never share that you’ll have to fill in. And don’t get frustrated when experts change their minds or design approaches, or disagree with each other (they do, all the time). But don’t be silent, either, if you are puzzled or curious. Ask them about their thinking. They may have encountered an edge case or an exception to some “general” heuristic. Or there may have been more to the design problem than they had initially thought. By having that conversation you just might learn something from each other.

Writing, Remembering, and Sharing Design Heuristics

I’ve been experimenting with simple ways to record software design heuristics. I want to find ways to vividly bring to mind the design heuristics I use or learn about from others. Ideally, any written heuristic should be shorter than a software pattern and somewhat more detailed and informative than a single pithy phrase.

There are three reasons it is important to physically write down heuristics instead of just talking about them or clipping online sources into a note-taking app:

1. Writing helps us remember the important bits
For a good overview of the benefits of note-taking, read this Lifehack article by Dustin Wax. If nothing else, it should inspire you to jot down and/or draw by hand the key bits of the next talk or lecture you listen to. What we write, we remember.

2. Writing in longhand helps us process our experiences
Research shows that the act of writing in longhand is quite different from typing. This Medical Daily article by Lizette Borreli summarizes research into the memory-boosting benefits of writing longhand:

“…writing by hand allows the brain to receive feedback from a person’s motor actions, and this specific feedback is different than those received when touching and typing on a keyboard… Overall, it seems those who type their notes may potentially be at risk for ‘mindless processing.’ The old fashioned note taking method of pen and paper boosts memory and the ability to understand concepts and facts.”

3. Written heuristics can be shared
Once you’ve written some tried and true heuristics in your own words, you can share them with others. Even better, use these heuristics to stimulate a deeper discussion into the design heuristic space you are exploring. Don’t shy away from discussions about nuances, counter-examples, edge cases, and competing heuristics (alternative ways to tackle a specific design problem). You may uncover a wealth of new heuristics to ponder and open up your mind to new ways of solving familiar problems.

If the person you are conversing with doesn’t know much about the topic area of your heuristics, the questions they ask will likely lead you to clarify your thinking (or at least improve your explanations). They won’t get what you are saying if you speak too much in shorthand. If they know more about the topic, you may have a lively conversation about heuristics and lessons they’ve learned over time. They may validate your heuristics as well as give you some new ideas.

A Simple Idea: QHE Cards
I’ve been playing around with using index cards as a means to quickly capture a heuristic. I structure a heuristic in three parts: a question, the answer (which can then be polished into a heuristic of a sentence or two), and an example or two to help me remember it. I call them QHE (Question-Heuristic-Example) or “Q-Hee” cards, for lack of a better name. Using index cards to record design heuristics is inspired by CRC (Class-Responsibility-Collaborators) cards, invented by Ward Cunningham and Kent Beck and popularized in my first design book. I like index cards.

Here are two QHE cards describing heuristics found in a conversation with Mathias Verraes about designing event records:
[image: two annotated QHE cards]

QHE cards are simple to write. I don’t worry about precision or formality. With a little bit of effort I can turn a question and answer into a short heuristic statement. For example, here’s my first cut at a heuristic for what information to include in an event record:

Heuristic: Only include information necessary to “replay” an event and achieve the same results. Don’t include sensitive information or extra information simply because you think it might be useful.
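To make that heuristic concrete for myself, here is a small, made-up example of an event record trimmed down according to it (the event name and fields are invented, not taken from Mathias’s domain):

```javascript
// A hypothetical "payment received" event record, trimmed per the heuristic:
// just enough information to replay the event and get the same result.
const paymentReceivedEvent = {
  eventType: 'PaymentReceived',
  eventId: 'evt-20190412-0001',      // identity, so a replay can be de-duplicated
  occurredAt: '2019-04-12T14:03:00Z',
  accountId: 'ACCT-1234',
  amount: { value: 250.00, currency: 'USD' },
  paymentMethod: 'bank-transfer'
  // deliberately omitted: the full card number (sensitive), the account's new
  // balance (derivable by replaying events), and "might be useful someday" fields
};
```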

An example or two to jog my memory is essential. And just like CRC cards, they can be too terse for others to understand (or to remember, if I wait too long). To make a heuristic memorable, I need to actively integrate that heuristic into my design heuristic gestalt. This is especially true if it is about a design topic I am not that familiar with.

To create a stronger memory and deeper understanding I can write more about the heuristic, sketch out a more detailed example, write some code, and/or draw a diagram…whatever it takes to go a bit deeper.

Heuristic gists, a slightly longer written form, are especially useful when you want to share heuristics with others. Other people may need a bit more context to understand what you mean. I like to write them in a form similar to pattern gists. Here’s an example of a pattern gist from Fearless Change Patterns by Mary Lynn Manns and Linda Rising:
[image: a pattern gist from Fearless Change Patterns]

And here’s how I rewrote the QHE card for deciding when to generate different events from a process as a gist:

Multiple Events for a Single Process
You need to balance passing along information needed by downstream processes in a single business event with creating multiple event records, each designed to convey specific information needed by a specific downstream process.

Summary of Problem
How do you know how many events to generate for a single business process?

Summary of Solution
If different processes downstream react differently, generate different events. For example, handling a “rental car return” request might generate two events and event records: “car returned” and “mileage recorded.” Even though the mileage is recorded at the time a car is returned, mileage could be recorded at any other time as well. It is a cleaner design to generate two events, rather than cram information into a single, overloaded “car returned” event.
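Here is a rough sketch of that example in code (the names and the publish mechanism are hypothetical); the point is two focused events rather than one overloaded one:

```javascript
// Hypothetical handler for a "rental car return" request.
// It publishes two focused events instead of one overloaded "car returned" event.
function handleRentalCarReturn(request, publish) {
  publish({
    eventType: 'CarReturned',
    rentalAgreementId: request.rentalAgreementId,
    returnedAt: request.timestamp,
    locationId: request.locationId
  });
  publish({
    eventType: 'MileageRecorded',    // mileage can also be recorded at other times
    vehicleId: request.vehicleId,
    odometerReading: request.odometerReading,
    recordedAt: request.timestamp
  });
}
```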

The act of writing a bit more detail in a gist often leads to further questions which lead to even more design heuristics. For example, these questions quickly come to mind: When is it OK to pass along extra information in an event? When is it not OK? What kinds of information should never be passed along in an event record? What happens if a new business process needs slightly different information?

And that’s the point. There are a myriad of details to work out for any real design. And many more heuristics for solving design problems and learning to live with their consequences. By writing down your heuristics, you’ll remember what you were thinking at the time. Note to future self: That effort just might be worth it!

What do typical design heuristics look like?

In my previous post I introduced the topic of design heuristics. Billy Vaughn Koen, in Discussion of The Method: Conducting the Engineer’s Approach to Problem Solving, defines a heuristic as, “anything that provides a plausible aid or direction in the solution of a problem but is in the final analysis unjustified, incapable of justification, and potentially fallible.”

Common phrases that mean roughly the same thing as heuristic in English include “rule of thumb,” “practical method,” “useful shortcut,” or “approximation.” I don’t think approximation or shortcut convey the essence of what makes a heuristic handy or helpful. Wikipedia defines heuristic as “any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals.”

Heuristics describe practical actions or attitudes we take in order to make design progress. Billy gives several examples of general engineering heuristics in his book:
[image: examples of Billy Vaughn Koen’s general engineering heuristics]
Reading these heuristics you might think, “Ah, heuristics are merely simple statements of what to do.”

Although heuristics can be boiled down into pithy advice, e.g. “Do this…” or “Don’t do that…,” I find that there usually is a lot more behind any simple phrase. Ask any design expert about how to tackle a specific design problem and she’ll add caveats and constraints and assumptions. Her heuristics might sound more like, “Do this when…and… unless … and here’s how I might …”. Or, “Try this first…and then…until…”.

Which leads me to observe that written design patterns–specifically software design and architecture patterns–are a handy form for sharing meaty, complex, nuanced design heuristics with others.

Consider this summary of the pattern Do a Mock Installation, from Object-Oriented Reengineering Patterns, which illustrates some of the extra useful advice a pattern can convey.

  • Pattern: Do a Mock Installation
  • Intent: Check whether you have the necessary artifacts available by installing the system and recompiling the code.
  • Problem: How can you be sure that you will be able to (re)build the system?
  • Difficulties:
    • The system is new to you, so you do not know which files you need.
    • The system may depend on libraries, frameworks, and patches, and you’re uncertain you have the right versions available.
    • The system is large and complex, and the exact configuration under which the system is supposed to run is unclear.
    • Maintainers may answer these questions, or you may find answers in documentation, but you still must verify whether this information is complete.
  • Solving this is feasible because:
    • You have access to the source code and the necessary build tools
    • You have the ability to reinstall the system in an environment that is similar to that of the running system
    • Maybe the system includes some kind of self-test
  • Solution: Try to install and build the system in a clean environment taking a limited amount of time (at most one day).
  • After the build prepare a report:
    • Version number of libraries, frameworks and patches used
    • Dependencies between the infrastructure
    • Problems you encountered and how you tried to solve them
    • Suggestions for improvement
    • Assessment of the situation including possible solutions/workarounds (if failed)
  • Tradeoffs:
    • Pros:
      • It is an essential prerequisite
      • Demands precision
      • Increases your credibility
    • Cons:
      • Tedious activity
      • No certainty
    • Difficulties: Easy to get carried away.

When written well, patterns include just the right amount of extra advice: the kind of advice a competent designer gains through experience and reflection. Patterns can even warn us about potential roadblocks or advise us on what to try next if the pattern’s solution isn’t a good fit for our specific problem.

But in addition to those larger-grained design problems typical of patterns, there are many smaller design decisions and gnarly design details we have to deal with. And as experienced designers, we too, have heuristics for addressing them rattling around in our brains. Wouldn’t it be great if we could find easy ways (easier than writing patterns) to record and share our heuristics with each other?

Here’s a photo of some heuristic snippets captured in a one-day workshop I held at DDD Europe:
[image: heuristic snippets captured at the workshop]
I’ve been experimenting with simple ways to record, share, and communicate design heuristics that require far less effort to create than full-fledged patterns yet convey more than can be shared on a sticky note. Somewhere between full-fledged written patterns and pithy phrases is the sweet spot I’m striving for. That’s the topic of my next post.

Growing Your Personal Design Heuristics Toolkit

Billy Vaughn Koen’s Discussion of The Method has had a profound influence on my thoughts about design. This book has inspired me to explore the role that software patterns play alongside other design heuristics and led me to muse about design uncertainty. I’ve been inspired to experiment with simple ways to capture design heuristics and I’ve spent time heuristics hunting with other designers.

In this blog post I’ll introduce you to Billy Vaughn Koen’s ideas about problem solving and heuristics. Billy defines a heuristic as

“anything that provides a plausible aid or direction in the solution of a problem but is in the final analysis unjustified, incapable of justification, and potentially fallible.”

There are no guarantees. Heuristics can fail. The tenacity that defines an engineer is when she steps back, regroups, and finds a different heuristic to try next. Another important thing to consider is that there are always competing heuristics to choose from. There simply isn’t a single right way to solve any complex design problem. Based on our assessment of the situation, our judgment, and the fit of the heuristics we know to the problem at hand, we decide what to do next. And when designers disagree, it may be because they are focusing on different aspects of the problem, or that they have a different collection of cherished heuristics in their toolkit (whether or not they can articulate them).

Courtesy of xkcd.com https://xkcd.com/309/

Billy describes three different types of heuristics:

    1. Heuristics we use to solve a specific problem;
    2. Heuristics that guide our use of other heuristics (meta-heuristics, if you will); and
    3. Heuristics that determine our attitude and behavior towards design or the world and the way we work.

Let’s take a closer look at each type.

Heuristics we apply to solve a problem. In his book, Billy gives several examples of general engineering heuristics. Heuristic: Always be prepared to give an answer (and when you need to give an answer, use the best approach you know for figuring it out, given the time constraints and resources you have at hand). As an example, imagine the approach you might take for determining the number of Ping-Pong balls that fill up a room if you had to give an answer in 2 seconds. What if you had 2 minutes, an hour, or a week to come up with an answer? Would your approaches differ?

Heuristic: Use feedback to stabilize your design. Billy Vaughn Koen is a Professor Emeritus of Nuclear Physics as well as a philosopher of engineering. He taught people who designed nuclear reactors. I’m glad he taught them about the importance of feedback loops to verify design assumptions. And that is one reason why frequent delivery of software, retrospectives, and reviews are valued activities in agile development.

And oh yes, one last heuristic: Always give yourself a chance to retreat. Backing out of a half-baked or unworkable design is important if you don’t want to paint yourself into a corner.

As software designers we have many heuristics in our toolkit. If we know about design patterns, we use them. If we’ve built high-performance systems, we may have a wealth of heuristics for tuning cache performance. If we are familiar with Domain Driven Design concepts, our preferred way to structure a complex system is by identifying bounded contexts. For those unfamiliar with Domain Driven Design, a bounded context might be considered to be roughly equivalent to a subsystem or subdomain. The distinguishing characteristic of a bounded context is that within it there is a consistent, single, unambiguous meaning for business concepts and events. And we may visualize event flows and relationships between bounded contexts by drawing a context map.
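A small, invented sketch of what that means in code: the word “product” carries a different, internally consistent meaning in each bounded context, and the translation at the boundary is exactly the kind of relationship a context map calls out.

```javascript
// In the Catalog context, a "product" is about marketing and description.
const catalogProduct = {
  sku: 'SKU-42',
  displayName: 'Espresso Machine',
  description: 'A 15-bar pump espresso maker',
  categories: ['kitchen', 'coffee']
};

// In the Shipping context, a "product" is a physical thing with weight and size.
const shippingProduct = {
  sku: 'SKU-42',
  weightKg: 4.2,
  dimensionsCm: { l: 35, w: 28, h: 30 },
  hazardous: false
};

// A translation at the boundary -- part of what a context map documents.
function toShippingProduct(product, physicalAttributes) {
  return { sku: product.sku, ...physicalAttributes };
}
```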

Heuristics that guide our use of other heuristics. These heuristics tell us what to try next. The best examples I know of these heuristics are in Object-Oriented Reengineering Patterns. Each chapter in this book is a small pattern language, describing a rough sequence of actions for solving a particular system-reengineering goal. For example, chapter 3 is a pattern language for making “first contact” with a legacy system. Based on what you have just learned, the language spells out options for what to explore next.

[image: the First Contact design pattern language]

Heuristics that determine our attitude and behavior. Agile software designers value frequent feedback. That’s the premise behind frequently deploying working software in order to have its functionality exercised by actual customers. And one heuristic for reducing the risk of downtime when frequently deploying software is to use a blue/green deployment environment.
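For what it’s worth, here is a minimal sketch of the blue/green idea (the deploy, health-check, and routing functions are stand-ins; in practice this is usually a load balancer or router configuration):

```javascript
// Two identical production environments; only one receives live traffic at a time.
const environments = {
  blue:  { url: 'https://blue.example.internal',  live: true  },
  green: { url: 'https://green.example.internal', live: false }
};

// Deploy to the idle environment, verify it, then switch traffic over.
// If the health check fails, traffic stays where it is (a built-in "retreat").
async function deployAndSwitch(deploy, healthCheck, routeTrafficTo) {
  const idle = environments.blue.live ? 'green' : 'blue';
  const live = environments.blue.live ? 'blue' : 'green';

  await deploy(environments[idle].url);            // new version goes to the idle side
  if (await healthCheck(environments[idle].url)) {
    await routeTrafficTo(environments[idle].url);  // cut over
    environments[idle].live = true;
    environments[live].live = false;
  }
}
```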

While we may have learned design patterns (or other published patterns, for that matter) by reading about them, most nitty-gritty design heuristics come from on-the-job experience. We learn a lot by living with the myriad design choices we make (and revisit) as our system’s functionality grows. We learn even more as we support a system over its lifetime. Our heuristics are shaped not only by the kinds of software systems we have designed, built, and maintained but also by the design culture at our places of work.

How do we improve as designers? Through designing and building software. By reflecting on that experience. And by polishing and refining our heuristics and sharing our cherished heuristics with each other.

What’s going on here?

Question: What is more exasperating than reading design documentation that doesn’t synch up with the code?
One answer: Writing such useless design documentation.
Another answer: Writing documents that don’t stand a chance of being aligned with the code.
My answer: Reading design documentation that is at the wrong level of abstraction or detail to help me do the task at hand.

Courtesy xkcd https://xkcd.com/license.html


A few years ago I worked with someone, who, when asked for a high level overview of some complex bits of code we were going to refactor, produced a detailed Visio diagram that went on for 40+ pages! In that document were statements directly lifted from his code. The intent was to represent the control logic and branching structure of some very complicated algorithms (he copied conditional statements into decision diamonds and copied code that performed steps of the algorithm directly into action blocks). To him, the exercise of creating this documentation had really clarified his design. I was amazed he went to such an effort. To me, the control structure was evident by simply reading the code. Moreover, what I was looking for, and obviously didn’t communicate very well, was that I would appreciate some discussion at a higher level, explaining what algorithm variations there were and how code and data were structured in order to handle myriad typical and special cases.

I wanted to be oriented to the code and data and parts that were configurable and parameterized.

Later, I learned from one of his colleagues that “Bob” always explained things in great detail and wasn’t very good at “boiling things down.” If you wanted to ask “Bob” a question about the code you’d better plan on at least spending half an hour with him.

Another time a colleague created a sequence diagram explaining how the data used by a nightly cron job was created. What was missing was any description of what that cron job did. Maybe it was so obvious to him that he didn’t think it was important. But I lacked the context to get the full picture. So I had to dig into the cron job script to understand what was really going on.

When I want to get oriented to a large body of code, I like to see a depiction of how it is structured and organized and the major responsibilities of each significant part. Not the package or file structure (that will be useful, eventually), but how it is organized into significant modules, functions, classes and/or call structures. I want to know what the major parts of the system are and why they are there and relationships between them. And at a high level, how they interact. And then I want to understand key aspects of the system dynamics. Just by staring at code I’ll never fathom all the ins and outs of its execution model (e.g. what threads or processes there are, what important information is consumed, produced, or passed around).

Simon Brown gave a talk at XP2016 on The Art of Visualizing Software Architecture. Simon travels the world giving advice, training, and conference talks about how to do this. He has also built a tool called Structurizr which:

“ blend[s] together the best parts of the various approaches and allow software developers to easily create software architecture diagrams that remain up to date when the code changes. Structurizr allows you to take a hybrid “extract and supplement” approach. It provides you with a way to create software architecture models using code. This means you can extract information from your code using static analysis and reflection, supplementing the model where information isn’t readily available. The resulting diagrams can be visualized and shared via the web.”

I had the opportunity to spend an hour with Simon getting a personalized demo of Structurizr and discussing the ins and outs of how his tool works, how it has been used, and some aspirations for the future.

I was initially surprised by how you specify what your “model elements and relationships” are: you write code specifications for this. To me, it seemed that this could be more easily accomplished by writing some parseable external description or by using a diagramming tool that let me link to the relevant code.

However, I know why Simon made this choice: he deeply believes that code should be the reference point for all structural architecture information. So he thinks it a natural extension that developers write a bit more code to specify the important elements of the models they want to see and how to depict them. To Simon, it doesn’t make sense to just “add an element” to a diagram unless there is some attachment to actual code. For example, you may define a Hibernate specification to represent data access to a repository; that can then be specified as a data source. With external descriptions that are not based on code, there is a greater chance of the description becoming disconnected from what the code actually does. So be it. This is where my values diverge a bit from Simon’s. I favor richer or higher-level documentation that isn’t code based if it tells me something I need to know to do a task (or something I shouldn’t do), or if it gives me insights I wouldn’t find otherwise. I also like the freedom to embellish my diagrams with extra annotations, colors, highlights, and so on, so I can focus viewers’ attention. And consequently, I also like to create a key that describes the elements of any diagram so casual readers can understand my notation, too.

After processing these model descriptions, Structurizr spits out JSON descriptions that represent aspects of a C4 model: Context, Containers, Components, and Class diagrams. These are then submitted to a service, which creates various model visualizations that can directly link to parts of your source code. There’s also a rudimentary ability to add additional bits of information that textually describe your architecture and architecture requirements. I’m glad that, recognizing the limits of what code can tell about architecture, the tool provides an easy entry point for developers to tell more about the architecture in text.
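To make the “model described in code” idea concrete, here is a deliberately hypothetical sketch. This is not Structurizr’s actual API; it only illustrates declaring elements and relationships in code, versioned alongside the production code, and emitting a description a tool could render:

```javascript
// Hypothetical model-building helpers, for illustration only.
const model = { elements: [], relationships: [] };

function addContainer(name, description, technology) {
  const element = { name, description, technology };
  model.elements.push(element);
  return element;
}

function uses(source, destination, description) {
  model.relationships.push({ source: source.name, destination: destination.name, description });
}

// Declared in code, so it can be versioned right alongside the production code.
const webApp = addContainer('Web Application', 'Serves the UI', 'Java + Spring MVC');
const database = addContainer('Database', 'Stores orders and customers', 'MySQL');
uses(webApp, database, 'Reads from and writes to');

// The resulting JSON description could then be rendered as context/container diagrams.
console.log(JSON.stringify(model, null, 2));
```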

In our conversation, Simon emphasized that one important benefit of linking architecture documentation to code is that it never gets out of synch. The code for your architecture documentation can be versioned, right along with your production code.

Fair enough. Useful even. But how helpful is the documentation that is generated in understanding what’s going on in your system?

I think this is one way (certainly not the only way, however) to provide an orientation to the overall structure of your system and the major relationships between its parts. But there are definite limits to what you can describe with models generated from static analysis of code. Such models don’t tell me the responsibilities of a component. I have to glean that from supplementary text (which I didn’t see an easy way to attach to the model descriptions) or abstract it by reading code. And they certainly won’t explain any dynamic system behavior or system qualities.

While I find Structurizr’s direct connection to the code intriguing, it is also its biggest limitation. Structurizr is intended to be used by developers who want to create some architecture documentation, which stays in synch with their code and that also can be shared via websites or wikis or embedding it into other documents.

Structurizr has succeeded with this fairly modest goal: assist developers who don’t ordinarily generate any useful architecture diagrams whatsoever to do something.

But this isn’t enough. As designers or architects, only you can answer my larger questions about why your code is structured the way it is, and which aspects of the system and its runtime execution are intentional, crucial to understand, and worth preserving. No diagram generated by a tool will ever replace your insights and wisdom. No amount of code comments can convey all that, either. I need to hear (and read) and see more of this kind of stuff from you.

Being Agile About Documenting and Communicating Architecture


Software architects and developers often need to defend, critique, define, or explain many different aspects of the design of a complex system. Yet agile teams favor direct communication over documentation. Do we still need to document our designs?

Of course we do.

We won’t always be around to directly communicate our design to all current and future stakeholders. Personally, I’ve never found working code (or tests) to be the best expression of a software design. Tests express expectations about observable system behavior (not about the design choices we made in implementing that behavior). And the code doesn’t capture what we were thinking at the time we wrote that code or the ideas we considered and discarded as we got that code fully functioning. Neither tests nor code capture all our constraints and working assumptions or our hopes and aspirations for that code.

So what kind of design documentation should we create, how much should we create, for whom, and when? And what is “good” enough documentation?

Eoin Woods gave a talk at XP 2016 titled “Capturing Design (When You Really Have To)” that got me to revisit my own beliefs on the topic and to think about the state of current agile practices on documenting architecture.

One takeaway from Eoin’s talk is to consider the primary purpose of any design description: is it primarily to communicate immediately or to be a long-term record? If your primary goal is to communicate on the fly, then Eoin claims your documentation should be short-lived, tailored to your audience, throwaway, and informal. On the other hand, design descriptions that serve as records are likely to be long-lived, preserve information, be maintainable and organized, and be more formal (or well-defined).

Since we are charged with delivering value through working software, it is often hard to justify any perceived “slowing down” to describe our systems as we build them. But if we’re building software that is expected to live long (and prosper), it makes sense to invest in documenting some aspects of that system—if for nothing more than to serve as breadcrumbs useful to those working on the system in the future, or to our future selves.

So what can we do to keep design descriptions useful, relevant, unambiguous, and up to date? Eoin argues that to be palatable to agile projects, design documentation should be minimal, useful, and significant. It should explain what is important about the design and why it is important, what design decisions we made (and when), and what the major system pieces are, along with their responsibilities and key interactions. Because of my Responsibility-Driven Design values and roots, I like that he considers system elements and their responsibilities to be minimally useful descriptions of a system. But to me this is just a starting point for getting an initial sense of what a system is and does and does not do. There certainly is room for more description, and more details, when warranted.

And that gets us back to pragmatics. A design description isn’t the first thing that developers think of doing (not everyone is a visual thinker or a writer). I know I’m atypical because, early in my engineering career, I enjoyed spending three weeks writing a document on how my universal linker worked, how to extend it, and what its limitations were. I was nearly as happy producing that document as I was designing and implementing that linker. It pleased me that sustaining engineering found that document useful years after I had left for another job.

So for the rest of you who don’t find it natural to create documentation, here’s some advice from Eoin:

  • Do it sprint-by-sprint (or little by little). Don’t do it all at once.
  • Be aware of tradeoffs between fidelity and maintainability. The more precise it is, the harder descriptions will be to keep up to date.
  • Know the precision needed by your document’s users. If they need details, they need details. The more details, the more effort required to keep them up to date.
  • Consider linking design descriptions to your code (more on that in another blog post)
  • Create a “commons” where design descriptions are accessible and shared
  • Focus on the “gaps”—describe things that are poorly understood
  • Always ask what’s good enough. Don’t settle for less when more is needed or more when less is needed.

To this list I would add: if design descriptions are important to your company and your product or project, make it known. Explain why design documentation is important, respond to questions and challenges of that commitment, and then give people the support they need to create these kinds of descriptions. Let them perform experiments and build consensus around what is needed.

Be creative and incremental. One company I know made short video recordings of designers and architects giving short talks about why things worked the way they did. They were really short—five minutes or less. Another team created lightweight architecture documentation as they enhanced and made architectural improvements to the 300+ working applications they had to support. It was essential for them that there be more than just the code as the knowledge about these systems was getting lost and decaying over time. Rather than throw up their hands and give up, they just created enough design documentation using simple templates and only as new initiatives were started.

Find a willing documenter. Sometimes a new person (who is new to the system and to the company) is a good person to pair with another old hand to create a high-level description of the system as part of “getting their feet wet.” But don’t just stick them with the documentation. Have them write code and tests, too. From the start.

Beware of Dogma. No. Be aware of dogma

Dogma has several different meanings. I’m going to purposefully split hairs in this post, because I don’t want to attach negative connotations to dogma in a knee jerk fashion. I want to be more thoughtful about my choice of words and my reactions to them.

Here are four meanings for dogma:

“1. an official system of principles or tenets concerning faith, morals, behavior, etc., as of a church.
2. a specific tenet or doctrine authoritatively laid down.
3. prescribed doctrine proclaimed as unquestionably true by a particular group.
4. a settled or established opinion, belief, or principle”
from dictionary.com

At first, these subtle differences in meanings annoyed me. But I wanted to push through that to see what I can learn about dogma. So here goes…

An official set of principles or tenets concerning faith, morals and behavior.
As a software professional, do I have an “official” set of principles and tenets that I believe in?

I have a set of guiding principles and practices for how I work, think about design, write code and tests that I’ve built up over 20+ years of practice. They have become part of how I prefer to operate. I’ve changed and refined them over time, discarding some practices, fine tuning others.

The guiding principles I follow weren’t handed down by authorities. I discovered them working alongside smart people and interacting with thoughtful designers who cared deeply about how they built and implemented software. I wanted to understand how productive people thought and worked, and try to incorporate what I saw as good practices and beliefs into my own beliefs and ways of working.

In the process, I co-wrote two object design books that shared a way of thinking about objects that I still find effective and powerful. Maybe writing books made me an authority. But I also have become a seeker of new and better ways of working. Over the years I have “blended” into my personal set of practices and beliefs about design some powerful ideas of others. This process of incorporating these ways of thinking and problem solving to me feels highly integrative rather than just “accepting” them as unchallenged beliefs or tenets. I have to sort through them, adjust them and then make them part of who I am and what I do. I am not one to blindly accept dogma.

The 3rd definition of dogma has negative connotations: a prescribed doctrine proclaimed as unquestionably true by a particular group.

Hm. I don’t hold many things about software design as being unquestionably true. I find it disconcerting when groups and factions form around the latest truth or discovery. For example, some fervent agile developers I know unquestionably believe that test-first development is the only way to design software. (I’m more of a test-frequent designer by nature). Those who refuse to acknowledge that there are other effective pathways to producing well-written, well-designed, maintainable code are trying to push a dogma of the 3rd meaning.

I find myself questioning any software doctrine that is held as being “universally” true. How presumptuous! There are so many different ways to solve problems and build great software.

I try to keep an open mind. My most strongly held beliefs are ones I should challenge from time to time. To do that, I have to push myself out of my comfort zone. For example, I have discovered a few things by letting go of several strongly held beliefs and performing some interesting experiments: How much code that checks expected behaviors do I really need to keep around to keep software from regressing? How many tests does any organization really need to keep? How many comments do I need in my code? How much of my code should check for well-formed arguments? Is it better to fail fast or fail last? What’s the effect on my code of putting in all those checks? What’s the effect of leaving them out?

Not all dogma is handed down from on high or authoritatively laid down…nor is it necessarily bad to hold a common set of beliefs and opinions (the fourth definition of dogma). I’ve been in dysfunctional groups where we couldn’t agree on anything. It was extremely stressful and unproductive.

If as a group we establish and hold a common set of beliefs and practices, then we can just get on with our jobs without all the friction of jockeying over who is right and what the right way to do things is.

But, here’s the rub…if you accept a certain amount of dogma (and I’m not saying what kind of dogma that might be…if you are an agile software developer I am sure you hold certain beliefs on testing, task estimation, collaboration, specification, keeping your code clean, whatever…) be wary of becoming complacent. Dogma needs to be challenged and re-examined from time to time. But don’t toss your current dogma aside on a whim, either. Old beliefs can get stale. But they may still be valid. We need to try out new ideas. But not simply discard older beliefs because shiny new ones are there to distract us.

Why Process Matters

I’ve been working on a talk for Smalltalks 2014 about Discovering Alexander’s Properties in Your Code and Life.

I don’t want it to be an esoteric review of Alexander’s properties.

That won’t satisfy my audience or me.

I want to impart information about how Alexander’s physical properties might translate to properties of our software code as well as illustrate poignant personal examples in the physical world.

But equally important, I want to impress upon my audience that process is vital to making lively things (software and physical things). In The Process of Creating Life (The Nature of Order, Book 2), Alexander states,

“Processes play a more fundamental role in determining the life or death of the building than does the ‘design’.”

Traditionally, building architects hand off their designs as a set of formal drawings for others to build. Does this remind you of waterfall software development? There isn’t anything inherently wrong with constructing formal architectural drawings…but they never end up accurately reflecting what was built. Due to errors in design, situational decisions based on new discoveries made as things are built, better construction techniques, changing requirements, and limitations in tools or materials, a building is never constructed exactly as an architect draws it up.

Builders know that. Good ones exercise their judgment as they make on the spot tactical re-design decisions. Architects who are deeply involved in the building process know that.

Alexander is rather unhappy with how buildings are typically created and suggests that any “living” process (whether it be for building design or software or any other complex process) incorporate the following ten characteristics.

He challenges us software makers to do better, too:

“The way forward in the next decades, towards programs with highly adapted human performance, will be through programs which are generated through unfolding, in some fashion comparable to what I have described for buildings.”

As software designers and implementers we know that nothing is ever built exactly as initially conceived. Not even close. Over the past decade or so we have made significant strides in our processes and tools, strides that enable us to be more effective at adaptively and incrementally building software. My thoughts on some of the ways we have tackled these characteristics are interspersed in italics, below.

Characteristics of Living Processes

1. Step-by-step adaptive. Small increments with opportunity for feedback and correction.
Incremental delivery, retrospectives, stakeholder reviews
Repetitive incremental design cycles:
Design a little – implement – refactor, rework, refine – design again…
Design/test cycles: Write specifications of behavior, write some code that correctly works according to the specification, test and adapt…
Tests and production code equally valued

2. Whatever the greater whole is, it is always the main focus of attention and the driving force.
Working deployable software, minimally-marketable features

3. The entire process is governed and guided by the formation of living centers (that help each other)
Code with defined boundaries, separate responsibilities, and planned for interconnections

4. Steps take place in a specific sequence to control the unfolding.
We have a rhythm to our work. Whether it is test-first or test-frequent development, conversations with customers to define behavioral “specifications”, or other specific actions. In order to control unfolding we need to understand what we need to build, build it, then refine as we go. And we have tools that let us manage and incrementally build and record our changes.

5. Parts created must become locally unique.
Build the next thing so it fits with and expands the wholeness of what we are building. Consider our options. Refactor and rework our design. Make functions/classes/code cohesive. Bust up things that are too big into smaller elements. Revise.

6. The formation of generic centers is guided by patterns.
We have in mind a high-level software architecture that guides our design and implementation.

7. Congruent with feeling and governed by feeling.
Instead of just making a test pass, see if what you just wrote feels right (or if it feels like an ugly hack). Reflect on how and what we are building. Don’t be merely satisfied with making your code work. How do you feel about what you’ve just built? How do those using your software react to it? How do those who have to maintain and live with your code feel about it?

8. For buildings, the formation of structure is guided by the emergence of an aperiodic grid, which brings coherent geometric order
Software is structured, too…we’ve got to be aware of how we are structuring our code.

9. Oriented by a form language that provides concrete methods of implementing adapted structure through simple combinatory rules
We use accepted “schemas” to create coherent software systems. We have software architecture styles, framework support, and even pattern languages emerging…

10. Oriented by the simplicity transformation, and is pruned steadily
We can consistently refactor and rework our code with the goal of simplifying in order to enable building more functionality. We rebuild to create sustainable software structures. Even if we come back to some old working code and see how to simplify it, we can rework it taking into consideration what we’ve learned in the meantime.

Yet, let’s not be complacent. Agile or Lean or Clean Code or Scrum practices don’t address every process characteristic Alexander mentions. I am not sure that all these characteristics are important for building lively software. Alexander is not a builder of software systems, although he spent a lot of time talking with some pioneers and leaders of the software patterns movement.

Some of Alexander’s process ideas sound expensive and time consuming. Do we always need to reflect on how we feel about what we code? Sometimes we need to build quickly, not painstakingly. We need to prove its worth, and then refine our software. Our main thought may be simply making it work, not how it makes us or others feel. So how do we add liveliness to this quickly fashioned software? What’s a good process for that? Michael Feathers wrote about Working Effectively with Legacy Code, but there is a lot more to consider. Maybe that quickly fashioned software has tests, maybe it doesn’t; maybe some parts have a reasonable structure, and maybe other parts should be tossed.

We often build disposable and hopefully short-lived software. Problems crop up when that code gets rudely hacked to extend its capabilities and live past its expiration date.

There are most likely different processes for creating lively software, based on where you start, where you think you are headed, and how lively it needs to be (not everything needs to be fashioned with such care).

People are continually building new and better tools and libraries. There is a rich and growing ecosystem of innovative open source software. Process matters. I think we have a lot still to learn about building lively software. It is a heady time to be building complex software systems.

Making Strong, Lively Centers

Making things with lively, cohesive centers (whether software, buildings, landscapes, educational experiences, or artfully designed bento boxes) involves hard work, practice, skill, reflection, and the development of a discriminating eye.

One great example of hard work over a long period of time was this bonsai boat tree I saw in Kyoto. This tree is over 600 years old!

Can you imagine the effort and attention the bonsai gardeners spent over the centuries to create, grow, and maintain this beautiful shape with its many centers?

I wish I could sit with great software designers and architects, soak up their wisdom, and then effortlessly incorporate that wisdom into my own code. I would love to write lively code without breaking a sweat. But that hasn’t been my experience.

My first Smalltalk code wasn’t very good. I didn’t immediately get the shift from procedural thinking, where I had to worry about controlling every aspect of the call chain, to that flowing object-oriented style where learning how to delegate responsibility was key.

To understand how to make my Smalltalk code lively (because of stronger centers) took practice and experimentation, reflection, and more practice. And letting go of preconceived notions that no longer fit.

As I program in a yet another programming language, I can’t avoid bringing along techniques I learned earlier. Some fit. Some do not. (I keep re-framing my notions of how to implement a good design). And I keep adding useful programming techniques to my toolkit.

Techniques for constructing well-designed code are programming-language specific, even though the underlying good design principles seem universal.

It took a while for me to realize that to become a better Smalltalk programmer I had to let go of my incessant urge to understand and control every little detail (I had to do that as an 8086 assembly language programmer, my prior language). Trust in polymorphism. Delegate. Don’t try to do too much in any one method. Don’t pass in too many arguments. Let objects take responsibility for their actions.

Even as I learned to let go of details, I still made dumb mistakes.

Initially I didn’t understand the difference between elegant and overly clever code (I liked Smalltalk blocks—er, closures). I didn’t realize the overhead of lots of closures that held on to context. I thought it was clever that my font management code held blocks that could read fonts from the file system (embedding references to external files in them for goodness sakes).

Seasoned Smalltalkers don’t make these mistakes. See this wiki page for a short discussion of Smalltalk and Closures and this Stack Overflow posting.

Was I tone deaf when it came to using blocks? I don’t think so. I just wasn’t paying attention to the right details. And I wasn’t looking in the right places for inspiration or guidance.

Instead of performing my own experiments, ideally, I should’ve been studying and emulating good examples. Such as the Smalltalk collection hierarchy’s use of closures. There, code blocks are used elegantly to execute differential behavior. The Smalltalk collection hierarchy is one of the most beautiful set of classes I’ve ever seen.
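The same shape shows up in JavaScript’s array methods, which is how I think of it now: the collection owns the iteration, and the small closures you pass in supply the differential behavior. A sketch (not Smalltalk, but the analogous idea):

```javascript
const orders = [
  { id: 1, total: 120.00, status: 'shipped' },
  { id: 2, total: 45.50,  status: 'pending' },
  { id: 3, total: 310.25, status: 'shipped' }
];

// Each method handles the looping; the closures express only what varies.
const shippedTotal = orders
  .filter(order => order.status === 'shipped')   // like Smalltalk's select:
  .map(order => order.total)                     // like collect:
  .reduce((sum, total) => sum + total, 0);       // like inject:into:

console.log(shippedTotal); // 430.25
```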

Fortunately, I had people around me who took the time to rewrite my code and explain to me why they did what they did. Consequently, I learned to write simpler, less clever, less resource intensive, more maintainable Smalltalk code.

Recently I have been programming in JavaScript. I was motivated to develop JavaScript code as a front end to the Java reference app we developed and use in our Enterprise Application Design course. For that initial programming exercise I took the stance that I’d use pretty much “stock” JavaScript libraries (hence my learning about jQuery) and keep things pretty simple.

Since that first whiff of JavaScript programming, I’ve been honing my JavaScript by learning more libraries and plugins and improving my programming skills. I am no expert. Not yet.

I’ve learned effective techniques somewhat randomly because I am not surrounded by JavaScript experts who teach me their craft. Combing through the Internet for advice and inspiration is haphazard and compounded by the fact that our notion of good programming practices evolves over time as languages and tools and libraries grow and evolve.

But now, after more time and experience, I can appreciate several coding practices that contribute to maintainable JavaScript. Such as:

Modules. At first, the coding technique for defining a module just seemed confusing. It is. But modularity, which helps to define and separate code “centers,” is really important. Not only does it strengthen a “center” by making it more defined (and encapsulated), it makes it more easily integrated with other code.
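For instance, here is a small sketch of the module pattern I eventually became comfortable with: an immediately invoked function expression that keeps its state private and exposes only what callers need (the font-catalog example is invented):

```javascript
// A small module: internal state and helpers stay private;
// only the functions returned at the end are visible to the rest of the code.
var fontCatalog = (function () {
  var fonts = {};                          // private, scoped to the module

  function normalizeName(name) {
    return name.trim().toLowerCase();
  }

  return {
    register: function (name, definition) {
      fonts[normalizeName(name)] = definition;
    },
    lookup: function (name) {
      return fonts[normalizeName(name)];
    }
  };
})();

fontCatalog.register('Helvetica', { family: 'sans-serif' });
```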

Being aware of variable scope and limiting it.

Not constantly searching for and mucking with DOM objects on every event. Initially I was content if my jQuery searches were “optimized”. Now I am thinking about how to avoid DOM references by caching appropriate state in my own variables.

Not blindly nesting anonymous callbacks, but defining functions and then using them.
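Here is a sketch of those last two habits together, using jQuery (the selectors and handler are made up): cache the DOM lookups once, and give the callback a name instead of nesting an anonymous function.

```javascript
// Cache jQuery lookups once instead of re-querying the DOM on every event.
var $statusMessage = $('#status-message');
var $orderForm = $('#order-form');

// A named handler is easier to read, reuse, and unbind than a nested anonymous callback.
function onOrderSubmitted(event) {
  event.preventDefault();
  $statusMessage.text('Submitting your order…');
  // send the form data here, then update $statusMessage with the result
}

$orderForm.on('submit', onOrderSubmitted);
```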

These techniques contribute to better-defined untangled code centers. But I want to caution you: don’t blindly follow coding best practices without knowing about and buying into the rationale behind them. Arguably your code might be better if you do. But you won’t learn how to exercise judgment until you know more about why you are doing what you are doing. Understanding how to write code that has strong, lively centers takes time, feedback, and the right kind of experience.

When I first started programming in JavaScript I could not have appreciated these techniques. I needed to gain more experience before I could see their value. With time writing more code, looking at more good and bad code, discussions with others, and reflection, I have gotten better at JavaScript. I’m not sure what steps I could leave out to shorten this process. It certainly is easier to learn how to write lively code if you work with others who care deeply about the code they write and who willingly point out and explain the good bits to you when you are ready to absorb them. If you are fortunate to have wise souls around you, take advantage of their wisdom…then put in the time you need to become better.

Distinguishing between testing and checking

At Agile 2013 Matt Heusser presented a history of how agile testing ideas have evolved in “Twelve Years of Agile Testing: And What Do We Do Now?” The most intellectually challenging idea I came away from Matt’s talk was the notion that testing and checking are different. I’m still trying to wrap my head around this distinction.

Disclosure: I’m not a testing insider. However, along with effective design and architecture practices, pragmatic testing is a passion of mine. I have presented talks at Agile with my colleague Joe Yoder on pragmatic test driven design and quality scenarios.

Like most, I suspect, I have a hard time teasing out a meaningful distinction between checking and testing. When I looked up definitions for testing and checking there was significant overlap. Consider these two definitions:

Testing: the means by which the presence, quality, or genuineness of anything is determined.

Testing: a particular process or method for trying or assessing.

And these for checking:

Checking: to investigate or verify as to correctness.

Checking: to make an inquiry into, search through, etc.

Using the first definition for testing, I can say, “By testing I determine what my software does.” For example, a test can determine the amount of interest calculated for a late payment or the number of transactions that are processed in an hour. Using the second meaning of testing, I can say that, “I perform unit testing by following the test first cycle of classic TDD” or that, “I write my test code to verify my class’ behavior after I’ve completed a first cut implementation that compiles.” Both are particular testing processes or methods.

I can say, “I check that my software correctly behaves according to some standard or specification” (first meaning). I can also perform a check (using the second definition) by writing code that measures how many transactions can be performed within a time period.

I can check my software by performing manual procedures and observing results.

I can check my software by writing test code and creating an automated test suite.
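For example, a minimal automated check might look like this (calculateLateFee is a hypothetical function under test, verified with Node’s built-in assert module against a pre-decided expected answer):

```javascript
const assert = require('assert');

// A check: compare observed behavior against an expected, pre-decided answer.
// calculateLateFee is a hypothetical function under test.
function checkLateFeeForTenDaysOverdue(calculateLateFee) {
  const fee = calculateLateFee({ amountDue: 200.00, daysLate: 10 });
  assert.strictEqual(fee, 10.00, 'expected a 5% late fee on a 10-day overdue payment');
}
```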

I might want to assess how my software works without necessarily verifying its correctness. When tests (or evaluations) are compared against a standard of expected behavior they also are checks. Testing is in some sense a larger concept or category that encompasses checking.

Confused by all this word play? I hope not.

Humans (and speakers of any native language) explore the dimensions and extent of categories by observing and learning from concrete examples. One thing that distinguishes a native speaker from a non-native speaker is that she knows the difference between similar categories, and uses the appropriate concept in context. To non-native speakers the edges and boundaries of categories seem arbitrary and unfathomable (meanings aren’t found by merely reading dictionary definitions).

I’ve been reading about categories and their nuances in Douglas Hofstadter and Emmanuel Sander’s Surfaces and Essences. (Just yesterday I read about the subtle difference between the phrases “letting the cat out of the bag” and “spilling the beans.”)

So what’s the big deal about making a distinction between testing and checking?

Matt pointed us to Michael Bolton’s blog entry, Testing vs. Checking. Along with James Bach, Michael has nudged the testing world to distinguish between automated “checks” that verify expected behaviors versus “testing” activities that require human guided investigation and intellect and aren’t automatable.

In James Bach’s blog post, Testing and Checking Refined, they make these distinctions:

“Testing is the process of evaluating a product by learning about it through experimentation, which includes to some degree: questioning, study, modeling, observation and inference.
(A test is an instance of testing.)

Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.
(A check is an instance of checking.)”

My first reaction was to throw up my hands and shout “Enough!” My reaction was that of a non-native speaker trying to understand a foreign idiom! But then I calmed down, let go of my urge to precisely know James and Michael’s meanings, accept some ambiguity, and looked for deeper insight.

When Michael explained,

“Checking is something that we do with the motivation of confirming existing beliefs” while, “Testing is something that we do with the motivation of finding new information.”

it suddenly became more clear. We might be doing what appears to be the same activity (writing code to probe our software), but if our intentions are different, we could either be checking or testing.

Why is this important?

The first time I write test code and execute it I learn something new (I also might confirm my expectations). When I repeatedly run that test code as part of a test suite, I am checking that my software continues to work as expected. I’m not really learning anything new. Still, it can be valuable to keep performing those checks. Especially when the code base is rapidly changing.

But I only need to execute checks repeatedly on code that has the potential to break. If my code is stable (and unchanging), perhaps I should question the value of (and false confidence gained by) repeatedly executing the same tired old automated tests. Maybe I should write new tests to probe even more corners of my software.

And if tests frequently break (even though the software is still working), perhaps I need to readjust my checks. I’m betting I’ll find test code that verifies details that should be hidden/aren’t really essential to my software’s behavior. Writing good checks that don’t break so easily makes it easier to change my software design. And that enables me to evolve my software with greater ease.
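A contrived illustration of the difference: the first check below reaches into implementation details and will break when the internals change, even if the behavior doesn’t; the second verifies observable behavior and survives a redesign (the cart object is hypothetical).

```javascript
const assert = require('assert');

// Brittle check: verifies internal details (a private item map and its key format).
// It fails if the cart is reimplemented, even though the behavior is unchanged.
function brittleCheck(cart) {
  cart.add({ sku: 'SKU-42', price: 25.00 }, 2);
  assert.strictEqual(cart._items['SKU-42'].qty, 2);   // pokes at internal state
}

// Behavior-focused check: verifies what callers actually rely on.
function behaviorCheck(cart) {
  cart.add({ sku: 'SKU-42', price: 25.00 }, 2);
  assert.strictEqual(cart.total(), 50.00);
}
```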

When test code becomes stale, it is precisely because it isn’t buying any new information. It might even be holding me back.

I have a long way to go to become a fluent native testing speaker. And I wish that James and Michael could have chosen different phrases to describe these two categories of “testing” (perhaps exploration and verification?).

But they didn’t.
Fair enough.