Growing Your Personal Design Heuristics Toolkit

Billy Vaughn Koen’s Discussion of the Method has had a profound influence on my thoughts about design. This book inspired me to explore the role that software patterns play alongside other design heuristics and led me to muse about design uncertainty. It also inspired me to experiment with simple ways to capture design heuristics and to spend time heuristics hunting with other designers.

In this blog post I’ll introduce you to Billy Vaughn Koen’s ideas about problem solving and heuristics. Billy defines a heuristic as

“anything that provides a plausible aid or direction in the solution of a problem but is in the final analysis unjustified, incapable of justification, and potentially fallible.”

There are no guarantees. Heuristics can fail. The tenacity that defines an engineer is that when one heuristic fails, she steps back, regroups, and finds a different heuristic to try next. Another important thing to consider is that there are always competing heuristics to choose from. There simply isn’t a single right way to solve any complex design problem. Based on our assessment of the situation, our judgment, and the fit of the heuristics we know to the problem at hand, we decide what to do next. And when designers disagree, it may be because they are focusing on different aspects of the problem, or because they have different collections of cherished heuristics in their toolkits (whether or not they can articulate them).

Courtesy of xkcd.com https://xkcd.com/309/

Billy describes three different types of heuristics:

    1. Heuristics we use to solve a specific problem;
    2. Heuristics that guide our use of other heuristics (meta-heuristics, if you will); and
    3. Heuristics that determine our attitude and behavior towards design or the world and the way we work.

Let’s take a closer look at each type.

Heuristics we apply to solve a specific problem. In his book, Billy gives several examples of general engineering heuristics. Heuristic: Always be prepared to give an answer (and when you need to give an answer, use the best approach you know for figuring it out, given the time constraints and resources you have at hand). As an example, imagine the approach you might take to determine the number of Ping-Pong balls that would fill up a room if you had to give an answer in two seconds. What if you had two minutes, an hour, or a week to come up with an answer? Would your approaches differ?
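
If I had a couple of minutes and a JavaScript console, my back-of-the-envelope answer might look like the sketch below. Every number in it is an assumption made up for illustration: the room size, the ball diameter, and how densely spheres pack.

```javascript
// Rough estimate: how many Ping-Pong balls fill a room?
// All numbers are illustrative assumptions, not measurements.
const room = { width: 5, depth: 4, height: 3 };           // meters
const roomVolume = room.width * room.depth * room.height; // 60 m^3

const ballDiameter = 0.04; // 40 mm, regulation size
const ballVolume = (4 / 3) * Math.PI * (ballDiameter / 2) ** 3;

// Randomly packed spheres fill roughly 64% of the available space.
const packingDensity = 0.64;

const estimate = Math.round((roomVolume * packingDensity) / ballVolume);
console.log(`About ${estimate.toLocaleString()} balls`); // on the order of a million
```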

Heuristic: Use feedback to stabilize your design. Billy Vaughn Koen is a Professor Emeritus of Nuclear Physics as well as a philosopher of engineering. He taught people who designed nuclear reactors. I’m glad he taught them about the importance of feedback loops to verify design assumptions. And that is one reason why frequent delivery of software, retrospectives, and reviews are valued activities in agile development.

And oh yes, one last heuristic: Always give yourself a chance to retreat. Backing out of a half-baked or unworkable design is important if you don’t want to paint yourself into a corner.

As software designers we have many heuristics in our toolkit. If we know about design patterns, we use them. If we’ve built high-performance systems, we may have a wealth of heuristics for tuning cache performance. If we are familiar with Domain-Driven Design concepts, our preferred way to structure a complex system is by identifying bounded contexts. For those unfamiliar with Domain-Driven Design, a bounded context is roughly equivalent to a subsystem or subdomain. The distinguishing characteristic of a bounded context is that within it there is a single, consistent, unambiguous meaning for business concepts and events. And we may visualize event flows and relationships between bounded contexts by drawing a context map.
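
To make that more concrete, here is a small, hypothetical sketch (invented names, in JavaScript) of what “a single, unambiguous meaning within a boundary” can look like: the word “order” legitimately means different things in a sales context and a shipping context, and each model stays consistent inside its own boundary.

```javascript
// In the Sales context, an order is a customer's intent to buy.
class SalesOrder {
  constructor(customerId, lineItems) {
    this.customerId = customerId;
    this.lineItems = lineItems; // [{ sku, quantity, quotedPrice }]
  }
  total() {
    return this.lineItems.reduce(
      (sum, li) => sum + li.quantity * li.quotedPrice, 0);
  }
}

// In the Shipping context, an "order" is just something to deliver;
// quotes and prices have no meaning here.
class Shipment {
  constructor(orderId, address, parcels) {
    this.orderId = orderId; // reference back to the Sales context
    this.address = address;
    this.parcels = parcels;
  }
}
```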

Heuristics that guide our use of other heuristics. These heuristics tell us what to try next. The best examples I know of these heuristics are in Object-Oriented Reengineering Patterns. Each chapter in this book is a small pattern language, describing a rough sequence of actions for solving a particular system-reengineering goal. For example, chapter 3 is a pattern language for making “first contact” with a legacy system. Based on what you have just learned, the language spells out options for what to explore next.

First contact design pattern language

Heuristics that determine our attitude and behavior. Agile software designers value frequent feedback. That’s the premise behind frequently deploying working software in order to have its functionality exercised by actual customers. And one heuristic for reducing the risk of downtime when frequently deploying software is to use a blue/green deployment environment.
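
For those who haven’t run across the technique, here is a minimal sketch of the blue/green idea (invented names, stand-in code rather than real infrastructure): two identical environments, and a router that flips traffic to the freshly deployed one and can flip right back, which is also a nice example of giving yourself a chance to retreat.

```javascript
// Minimal illustration of the blue/green idea (not production code).
const environments = {
  blue:  { url: 'https://blue.example.com',  version: '1.4.2' }, // currently live
  green: { url: 'https://green.example.com', version: '1.5.0' }, // newly deployed
};

let live = 'blue';

// Stand-in for a real router or load balancer.
function routeRequest(path) {
  console.log(`forwarding ${path} to ${environments[live].url}`);
}

// Once the idle environment passes its smoke tests, flip all traffic to it.
// Rolling back is the same one-line operation in the other direction.
function promote(candidate) {
  live = candidate;
}

routeRequest('/checkout'); // served by blue
promote('green');          // cutover
routeRequest('/checkout'); // served by green; promote('blue') retreats
```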

While we may have learned design patterns (or other published patterns, for that matter) by reading about them, most nitty-gritty design heuristics come from on-the-job experience. We learn a lot by living with the myriad design choices we make (and revisit) as our system’s functionality grows. We learn even more as we support a system over its lifetime. Our heuristics are shaped not only by the kinds of software systems we have designed, built, and maintained but also by the design culture at our places of work.

How do we improve as designers? Through designing and building software. By reflecting on that experience. And by polishing and refining our heuristics and sharing our cherished heuristics with each other.

What’s going on here?

Question: What is more exasperating than reading design documentation that doesn’t synch up with the code?
One answer: Writing such useless design documentation.
Another answer: Writing documents that don’t stand a chance of being aligned with the code.
My answer: Reading design documentation that is at the wrong level of abstraction or detail to help me do the task at hand.

Courtesy xkcd https://xkcd.com/license.html


A few years ago I worked with someone who, when asked for a high-level overview of some complex bits of code we were going to refactor, produced a detailed Visio diagram that went on for 40+ pages! In that document were statements lifted directly from his code. The intent was to represent the control logic and branching structure of some very complicated algorithms (he copied conditional statements into decision diamonds and code that performed steps of the algorithm directly into action blocks). To him, the exercise of creating this documentation had really clarified his design. I was amazed he went to such effort. To me, the control structure was evident from simply reading the code. Moreover, what I was looking for, and obviously didn’t communicate very well, was some discussion at a higher level, explaining what algorithm variations there were and how code and data were structured to handle the myriad typical and special cases.

I wanted to be oriented to the code and data and parts that were configurable and parameterized.

Later, I learned from one of his colleagues that “Bob” always explained things in great detail and wasn’t very good at “boiling things down.” If you wanted to ask “Bob” a question about the code, you’d better plan on spending at least half an hour with him.

Another time a colleague created a sequence diagram explaining how the data consumed by a nightly cron job was created. What was missing was any description of what that cron job did. Maybe it was so obvious to him that he didn’t think it was important. But I lacked the context to get the full picture. So I had to dig into the cron job script to understand what was really going on.

When I want to get oriented to a large body of code, I like to see a depiction of how it is structured and organized and the major responsibilities of each significant part. Not the package or file structure (that will be useful, eventually), but how it is organized into significant modules, functions, classes and/or call structures. I want to know what the major parts of the system are and why they are there and relationships between them. And at a high level, how they interact. And then I want to understand key aspects of the system dynamics. Just by staring at code I’ll never fathom all the ins and outs of its execution model (e.g. what threads or processes there are, what important information is consumed, produced, or passed around).

Simon Brown gave a talk at XP2016 on The Art of Visualizing Software Architecture. Simon travels the world giving advice, training, and conference talks about how to do this. He has also built a tool called Structurizr which:

“blend[s] together the best parts of the various approaches and allow software developers to easily create software architecture diagrams that remain up to date when the code changes. Structurizr allows you to take a hybrid “extract and supplement” approach. It provides you with a way to create software architecture models using code. This means you can extract information from your code using static analysis and reflection, supplementing the model where information isn’t readily available. The resulting diagrams can be visualized and shared via the web.”

I had the opportunity to spend an hour with Simon, getting a personalized demo of Structurizr and discussing the ins and outs of how his tool works, how it has been used, and some aspirations for its future.

I was initially surprised by how you specify what your “model elements and relationships” are: you write code specifications for this. It seemed to me that this could have been more easily accomplished by writing some parseable external description or using a diagramming tool that let me link to the relevant code.

However, I know why Simon made this choice: he deeply believes that code should be the reference point for all structural architecture information. So he thinks it a natural extension that developers write a bit more code to specify the important elements of the models they want to see and how to depict them. To Simon, it makes no sense to just “add an element” to a diagram unless it has some attachment to actual code. For example, you may define a Hibernate specification to represent data access to a repository; that can then be specified as a data source. With external descriptions that are not based on code, you have a greater chance of the description becoming disconnected from what the code actually does. So be it. This is where my values diverge a bit from Simon’s. I favor richer or higher-level documentation that isn’t code-based if it tells me something I need to know to do a task (or something I shouldn’t do), or gives me insights that I wouldn’t find otherwise. I also like the freedom to embellish my diagrams with extra annotations, colors, highlights, etc., so I can focus viewers’ attention. And consequently, I like to create a key that describes the elements of any diagram so casual readers can understand my notation, too.

After processing these model descriptions, Structurizr spits out JSON descriptions that represent aspects of a C4 model: Context, Container, Component, and Class diagrams. These are then submitted to a service, which creates various model visualizations that can link directly to parts of your source code. There’s also a rudimentary ability to add bits of text describing your architecture and architecture requirements. I’m glad that, recognizing the limits of what code can tell about architecture, the tool provides developers an easy way to say more about the architecture in text.

In our conversation, Simon emphasized that one important benefit of linking architecture documentation to code is that it never gets out of synch. The code for your architecture documentation can be versioned right along with your production code.

Fair enough. Useful even. But how helpful is the documentation that is generated in understanding what’s going on in your system?

I think this is one way (certainly not the only way) to provide an orientation to the overall structure of your system and the major relationships between its parts. But there are definite limits to what models generated from static analysis of code can describe. Such models don’t tell me the responsibilities of a component. I have to glean that from supplementary text (and I didn’t see an easy way to attach such text to the model descriptions) or abstract it by reading code. And they certainly won’t explain any dynamic system behavior or system qualities.

While I find Structurizr’s direct connection to the code intriguing, that connection is also its biggest limitation. Structurizr is intended for developers who want to create architecture documentation that stays in synch with their code and that can be shared via websites or wikis or embedded in other documents.

Structurizr has succeeded with this fairly modest goal: assist developers who don’t ordinarily generate any useful architecture diagrams whatsoever to do something.

But this isn’t enough. As designers or architects, only you can answer my larger questions about why your code is structured the way it is, and what important aspects of the system and its runtime execution are intentional and crucial to understand and preserve. No diagram generated by a tool will ever replace your insights and wisdom. No amount of code comments can convey all that, either. I need to hear (and read) and see more of this kind of stuff from you.

Being Agile About Documenting and Communicating Architecture


Software architects and developers often need to defend, critique, define, or explain many different aspects of the design of a complex system. Yet agile teams favor direct communication over documentation. Do we still need to document our designs?

Of course we do.

We won’t always be around to directly communicate our design to all current and future stakeholders. Personally, I’ve never found working code (or tests) to be the best expression of a software design. Tests express expectations about observable system behavior (not about the design choices we made in implementing that behavior). And the code doesn’t capture what we were thinking at the time we wrote that code or the ideas we considered and discarded as we got that code fully functioning. Neither tests nor code capture all our constraints and working assumptions or our hopes and aspirations for that code.

So what kind of design documentation should we create, how much should we create, for whom, and when? And what is “good enough” documentation?

Eoin Woods gave a talk at XP 2016 titled “Capturing Design (When You Really Have To)” that got me to revisit my own beliefs on the topic and to think about the current state of agile practices for documenting architecture.

One takeaway from Eoin’s talk is to consider the primary purpose of any design description: is it primarily to communicate immediately or to be a long-term record? If your primary goal is to communicate on the fly, then Eoin claims that your documentation should be short-lived, tailored to your audience, throwaway, and informal. On the other hand, design descriptions that serve as records are likely to be long-lived, preserve information, be maintainable and organized, and be more formal (or well-defined).

Since we are charged with delivering value through working software, it is often hard to justify any effort perceived as “slowing down” to describe our systems as we build them. But if we’re building software that is expected to live long (and prosper), it makes sense to invest in documenting some aspects of that system—if nothing more than to serve as breadcrumbs useful to those working on the system in the future, or to our future selves.

So what can we do to keep design descriptions useful, relevant, unambiguous, and up to date? Eoin argues that to be palatable to agile projects, design documentation should be minimal, useful, and significant. It should explain what is important about the design and why it is important; what design decisions we made (and when); and what the major system pieces, their responsibilities, and their key interactions are. Because of my Responsibility-Driven Design values and roots, I like that he considers system elements and their responsibilities to be the minimal useful description of a system. But to me this is just a starting point for getting an initial sense of what a system is and does and does not do. There is certainly room for more description, and more detail, when warranted.

And that gets us back to pragmatics. Creating a design description isn’t the first thing developers think of doing (not everyone is a visual thinker or a writer). I know I’m atypical because, early in my engineering career, I enjoyed spending three weeks writing a document on how my universal linker worked, how to extend it, and what its limitations were. I was nearly as happy producing that document as I was designing and implementing that linker. It pleased me that sustaining engineering found the document useful years after I had left for another job.

So for the rest of you who don’t find it natural to create documentation, here’s some advice from Eoin:

  • Do it sprint-by-sprint (or little by little). Don’t do it all at once.
  • Be aware of tradeoffs between fidelity and maintainability. The more precise a description is, the harder it will be to keep up to date.
  • Know the precision needed by your document’s users. If they need details, they need details. The more details, the more effort required to keep them up to date.
  • Consider linking design descriptions to your code (more on that in another blog post).
  • Create a “commons” where design descriptions are accessible and shared.
  • Focus on the “gaps”—describe things that are poorly understood.
  • Always ask what’s good enough. Don’t settle for less when more is needed, or more when less is needed.

To this list I would add: if design descriptions are important to your company and your product or project, make it known. Explain why design documentation is important, respond to questions and challenges of that commitment, and then give people the support they need to create these kinds of descriptions. Let them perform experiments and build consensus around what is needed.

Be creative and incremental. One company I know made short video recordings of designers and architects giving brief talks about why things worked the way they did. They were really short—five minutes or less. Another team created lightweight architecture documentation as they enhanced and made architectural improvements to the 300+ working applications they had to support. For them it was essential that there be more than just the code, as knowledge about these systems was getting lost and decaying over time. Rather than throw up their hands and give up, they created just enough design documentation, using simple templates, and only as new initiatives were started.

Find a willing documenter. Sometimes a new person (new to the system and to the company) is a good person to pair with an old hand to create a high-level description of the system as part of “getting their feet wet.” But don’t just stick them with the documentation. Have them write code and tests, too. From the start.

Beware of Dogma. No. Be aware of dogma

Dogma has several different meanings. I’m going to purposefully split hairs in this post, because I don’t want to attach negative connotations to dogma in a knee jerk fashion. I want to be more thoughtful about my choice of words and my reactions to them.

Here are four meanings for dogma:

“1. an official system of principles or tenets concerning faith, morals, behavior, etc., as of a church.
2. a specific tenet or doctrine authoritatively laid down.
3. prescribed doctrine proclaimed as unquestionably true by a particular group.
4. a settled or established opinion, belief, or principle”
from dictionary.com

At first, these subtle differences in meaning annoyed me. But I wanted to push through that to see what I could learn about dogma. So here goes…

An official set of principles or tenets concerning faith, morals and behavior.
As a software professional, do I have an “official” set of principles and tenets that I believe in?

I have a set of guiding principles and practices for how I work, think about design, write code and tests that I’ve built up over 20+ years of practice. They have become part of how I prefer to operate. I’ve changed and refined them over time, discarding some practices, fine tuning others.

The guiding principles I follow weren’t handed down by authorities. I discovered them working alongside smart people and interacting with thoughtful designers who cared deeply about how they built and implemented software. I wanted to understand how productive people thought and worked, and try to incorporate what I saw as good practices and beliefs into my own beliefs and ways of working.

In the process, I co-wrote two object design books that shared a way of thinking about objects that I still find effective and powerful. Maybe writing books made me an authority. But I have also become a seeker of new and better ways of working. Over the years I have “blended” into my personal set of practices and beliefs about design some powerful ideas of others. This process of incorporating those ways of thinking and problem solving feels highly integrative to me, rather than just “accepting” them as unchallenged beliefs or tenets. I have to sort through them, adjust them, and then make them part of who I am and what I do. I am not one to blindly accept dogma.

The 3rd definition of dogma has negative connotations: a prescribed doctrine proclaimed as unquestionably true by a particular group.

Hm. I don’t hold many things about software design as being unquestionably true. I find it disconcerting when groups and factions form around the latest truth or discovery. For example, some fervent agile developers I know unquestionably believe that test-first development is the only way to design software. (I’m more of a test-frequent designer by nature). Those who refuse to acknowledge that there are other effective pathways to producing well-written, well-designed, maintainable code are trying to push a dogma of the 3rd meaning.

I find myself questioning any software doctrine that is held as being “universally” true. How presumptuous! There are so many different ways to solve problems and build great software.

I try to keep an open mind. My most strongly held beliefs are the ones I should challenge from time to time. To do that, I have to push myself out of my comfort zone. For example, I have discovered a few things by letting go of several strongly held beliefs and performing some interesting experiments: How much code that checks expected behaviors do I really need to keep around to prevent the software from regressing? How many tests does an organization really need to keep? How many comments do I need in my code? How much of my code should check for well-formed arguments? Is it better to fail fast or fail last? What’s the effect on my code of putting in all those checks? What’s the effect of leaving them out?

Not all dogma is handed down from on high or authoritatively laid down…nor is it necessarily bad to hold a common set of beliefs and opinions (the fourth definition of dogma). I’ve been in dysfunctional groups where we couldn’t agree on anything. It was extremely stressful and unproductive.

If as a group we establish and hold a common set of beliefs and practices, then we can just get on with our jobs without all the friction of jockeying over who is right and what the right way to do things is.

But, here’s the rub… if you accept a certain amount of dogma (and I’m not saying what kind of dogma that might be… if you are an agile software developer I am sure you hold certain beliefs about testing, task estimation, collaboration, specification, keeping your code clean, whatever), be wary of becoming complacent. Dogma needs to be challenged and re-examined from time to time. But don’t toss your current dogma aside on a whim, either. Old beliefs can get stale, but they may still be valid. We need to try out new ideas, but not simply discard older beliefs because shiny new ones are there to distract us.

Why Process Matters

I’ve been working on a talk for Smalltalks 2014 about Discovering Alexander’s Properties in Your Code and Life.

I don’t want it to be an esoteric review of Alexander’s properties.

That won’t satisfy my audience or me.

I want to impart information about how Alexander’s physical properties might translate to properties of our software code as well as illustrate poignant personal examples in the physical world.

But equally important, I want to impress upon my audience that process is vital to making lively things (software and physical things alike). In his The Process of Creating Life (Nature of Order, Book 2), Alexander states:

“Processes play a more fundamental role in determining the life or death of the building than does the ‘design’.”

Traditionally, building architects hand off their designs as a set of formal drawings for others to build from. Does this remind you of waterfall software development? There isn’t anything inherently wrong with constructing formal architectural drawings… but they never end up accurately reflecting what was built. Due to errors in design, situational decisions based on discoveries made during construction, better construction techniques, changing requirements, and limitations in tools or materials, a building is never constructed exactly as the architect drew it up.

Builders know that. Good ones exercise their judgment as they make on the spot tactical re-design decisions. Architects who are deeply involved in the building process know that.

Alexander is rather unhappy with how buildings are typically created and suggests that any “living” process (whether it be for building design, software, or any other complex endeavor) incorporate the following ten characteristics.

He challenges us software makers to do better, too:

“The way forward in the next decades, towards programs with highly adapted human performance, will be through programs which are generated through unfolding, in some fashion comparable to what I have described for buildings.”

As software designers and implementers we know that nothing is ever built exactly as initially conceived. Not even close. Over the past decade or so we have made significant strides in our processes and tools, making us more effective at adaptively and incrementally building software. My thoughts on some ways we have tackled these characteristics are interspersed below.

Characteristics of Living Processes

1. Step-by-step adaptive. Small increments with opportunity for feedback and correction.
Incremental delivery, retrospectives, stakeholder reviews.
Repetitive, incremental design cycles:
design a little – implement – refactor, rework, refine – design…
Design/test cycles: write specifications of behavior, write some code that correctly works according to the specification, test and adapt…
Tests and production code equally valued.

2. Whatever the greater whole is, that is always the main focus of attention and the driving force.
Working deployable software, minimally marketable features.

3. The entire process is governed and guided by the formation of living centers (that help each other).
Code with defined boundaries, separate responsibilities, and planned-for interconnections.

4. Steps take place in a specific sequence to control the unfolding.
We have a rhythm to our work, whether it is test-first or test-frequent development, conversations with customers to define behavioral “specifications”, or other specific actions. To control unfolding we need to understand what we need to build, build it, then refine as we go. And we have tools that let us manage and incrementally build and record our changes.

5. Parts created must become locally unique.
Build the next thing so it fits with and expands the wholeness of what we are building. Consider our options. Refactor and rework our design. Make functions/classes/code cohesive. Bust up things that are too big into smaller elements. Revise.

6. The formation of generic centers is guided by patterns.
We have in mind a high-level software architecture that guides our design and implementation.

7. Congruent with feeling and governed by feeling.
Instead of just making a test pass, see if what you just wrote feels right (or if it feels like an ugly hack). Reflect on how and what we are building. Don’t be merely satisfied with making your code work. How do you feel about what you’ve just built? How do those using your software react to it? How do those who have to maintain and live with your code feel about it?

8. For buildings, the formation of structure is guided by the emergence of an aperiodic grid, which brings coherent geometric order.
Software is structured, too… we’ve got to be aware of how we are structuring our code.

9. Oriented by a form language that provides concrete methods of implementing adapted structure through simple combinatory rules.
We use accepted “schemas” to create coherent software systems. We have software architecture styles, framework support, and even pattern languages emerging…

10. Oriented by the simplicity transformation, and pruned steadily.
We can consistently refactor and rework our code with the goal of simplifying it, in order to enable building more functionality. We rebuild to create sustainable software structures. Even when we come back to old working code and see how to simplify it, we can rework it, taking into consideration what we’ve learned in the meantime.

Yet, let’s not be complacent. Agile or Lean or Clean Code or Scrum practices don’t address every process characteristic Alexander mentions. I am not sure that all these characteristics are important for building lively software. Alexander is not a builder of software systems, although he spent a lot of time talking with some pioneers and leaders of the software patterns movement.

Some of Alexander’s process ideas sound expensive and time consuming. Do we always need to reflect on how we feel about what we code? Sometimes we need to build quickly, not painstakingly: prove something’s worth first, then refine it. Our main thought may be simply making it work, not how it makes us or others feel. So how do we add liveliness to this quickly fashioned software? What’s a good process for that? Michael Feathers wrote about Working Effectively with Legacy Code, but there is a lot more to consider. Maybe that quickly fashioned software has tests, maybe it doesn’t; maybe some parts have a reasonable structure, and maybe other parts should be tossed.

We often build disposable and hopefully short-lived software. Problems crop up when that code gets rudely hacked to extend its capabilities and live past its expiration date.

There are most likely different processes for creating lively software, based on where you start, where you think you are headed, and how lively it needs to be (not everything needs to be fashioned with such care).

People are continually building new and better tools and libraries. There is a rich and growing ecosystem of innovative open source software. Process matters. I think we have a lot still to learn about building lively software. It is a heady time to be building complex software systems.

Making Strong, Lively Centers

Making things with lively, cohesive centers (whether software, buildings, landscapes, educational experiences, or artfully designed bento boxes) involves hard work, practice, skill, reflection, and the development of a discriminating eye.

One great example of hard work over a long period of time was this bonsai boat tree I saw in Kyoto. This tree is over 600 years old!

Can you imagine the effort and attention the bonsai gardeners spent over the centuries to create, grow, and maintain this beautiful shape with its many centers?

I wish I could sit with great software designers and architects, soak up their wisdom, and then effortlessly incorporate that wisdom into my own code. I would love to write lively code without breaking a sweat. But that hasn’t been my experience.

My first Smalltalk code wasn’t very good. I didn’t immediately get the shift from procedural thinking, where I had to worry about controlling every aspect of the call chain, to that flowing object-oriented style where learning how to delegate responsibility was key.

To understand how to make my Smalltalk code lively (because of stronger centers) took practice and experimentation, reflection, and more practice. And letting go of preconceived notions that no longer fit.

As I program in yet another programming language, I can’t avoid bringing along techniques I learned earlier. Some fit. Some do not. (I keep re-framing my notions of how to implement a good design.) And I keep adding useful programming techniques to my toolkit.

Techniques for constructing well-designed code are programming-language specific, even though the underlying good design principles seem universal.

It took a while for me to realize that to become a better Smalltalk programmer I had to let go of my incessant urge to understand and control every little detail (I had to do that as an 8086 assembly language programmer, my prior language). Trust in polymorphism. Delegate. Don’t try to do too much in any one method. Don’t pass in too many arguments. Let objects take responsibility for their actions.

Even as I learned to let go of details, I still made dumb mistakes.

Initially I didn’t understand the difference between elegant and overly clever code (I liked Smalltalk blocks—er, closures). I didn’t realize the overhead of lots of closures that held on to context. I thought it was clever that my font management code held blocks that could read fonts from the file system (embedding references to external files in them for goodness sakes).

Seasoned Smalltalkers don’t make these mistakes. See this wiki page for a short discussion of Smalltalk and Closures and this Stack Overflow posting.

Was I tone deaf when it came to using blocks? I don’t think so. I just wasn’t paying attention to the right details. And I wasn’t looking in the right places for inspiration or guidance.

Instead of performing my own experiments, I should have been studying and emulating good examples, such as the Smalltalk collection hierarchy’s use of closures. There, code blocks are used elegantly to execute differential behavior. The Smalltalk collection hierarchy is one of the most beautiful sets of classes I’ve ever seen.
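
Smalltalk’s collection protocol passes blocks to messages like select:, detect:ifNone:, and inject:into:, letting the collection own the iteration while the block supplies only the behavior that varies. A rough JavaScript analog (arrays standing in for collections, font data invented) shows the same differential-behavior idea, without closures that hold on to heavyweight context:

```javascript
const fonts = [
  { name: 'Helvetica', size: 12 },
  { name: 'Courier',   size: 10 },
  { name: 'Times',     size: 12 },
];

// The collection supplies the iteration; the closure supplies only
// what varies (Smalltalk: fonts select: [:f | f size = 12]).
const twelvePoint = fonts.filter(f => f.size === 12);

// Smalltalk: fonts detect: [:f | f name = 'Courier'] ifNone: [nil]
const courier = fonts.find(f => f.name === 'Courier') ?? null;

// Smalltalk: fonts inject: 0 into: [:sum :f | sum + f size]
const totalSize = fonts.reduce((sum, f) => sum + f.size, 0);
```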

Fortunately, I had people around me who took the time to rewrite my code and explain to me why they did what they did. Consequently, I learned to write simpler, less clever, less resource intensive, more maintainable Smalltalk code.

Recently I have been programming in JavaScript. I was motivated to develop JavaScript code to front-end the Java reference app we developed and use in our Enterprise Application Design course. For that initial programming exercise I took the stance that I’d use pretty much “stock” JavaScript libraries (hence my learning about jQuery) and keep things pretty simple.

Since that first whiff of JavaScript programming, I’ve been honing my JavaScript by learning more libraries and plugins and improving my programming skills. I am no expert. Not yet.

I’ve learned effective techniques somewhat randomly because I am not surrounded by JavaScript experts who teach me their craft. Combing through the Internet for advice and inspiration is haphazard and compounded by the fact that our notion of good programming practices evolves over time as languages and tools and libraries grow and evolve.

But now, after more time and experience, I can appreciate several coding practices that contribute to maintainable JavaScript. Such as:

Modules. At first, the coding technique to define a module just seemed confusing. It is. But modularity, which helps to define and separate code “centers,” is really important. Not only does it strengthen a center by making it more defined (and encapsulated), it makes it more easily integrated with other code.

Being aware of variable scope and limiting it.

Not constantly searching for and mucking with DOM objects on every event. Initially I was content if my jQuery searches were “optimized”. Now I think about how to avoid DOM references by caching appropriate state in my own variables.

Not blindly nesting anonymous callbacks, but defining functions and then using them.
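
Here’s a small sketch that pulls those techniques together: a module with tightly limited variable scope, a DOM reference searched for once and cached, and a named handler instead of a nested anonymous callback. The widget and its element ids are invented for illustration, and the sketch assumes jQuery is loaded.

```javascript
// A hypothetical counter widget, written with the practices above.
const counterWidget = (function () {
  // Scope is limited to the module; nothing leaks into globals.
  let count = 0;

  // Search the DOM once and cache the result, rather than
  // re-querying $('#counter-display') on every click.
  const $display = $('#counter-display');

  // A named function instead of an anonymous callback nested inline.
  function handleIncrementClick() {
    count += 1;
    $display.text(count);
  }

  $('#increment-button').on('click', handleIncrementClick);

  // Expose only what other code needs; everything else stays private.
  return { current: () => count };
})();
```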

These techniques contribute to better-defined untangled code centers. But I want to caution you: don’t blindly follow coding best practices without knowing about and buying into the rationale behind them. Arguably your code might be better if you do. But you won’t learn how to exercise judgment until you know more about why you are doing what you are doing. Understanding how to write code that has strong, lively centers takes time, feedback, and the right kind of experience.

When I first started programming in JavaScript I could not have appreciated these techniques. I needed to gain more experience before I could see their value. With time writing more code, looking at more good and bad code, discussions with others, and reflection, I have gotten better at JavaScript. I’m not sure what steps I could leave out to shorten this process. It certainly is easier to learn how to write lively code if you work with others who care deeply about the code they write and who willingly point out and explain the good bits to you when you are ready to absorb them. If you are fortunate to have wise souls around you, take advantage of their wisdom…then put in the time you need to become better.

Distinguishing between testing and checking

At Agile 2013 Matt Heusser presented a history of how agile testing ideas have evolved in “Twelve Years of Agile Testing: And What Do We Do Now?” The most intellectually challenging idea I took away from Matt’s talk was the notion that testing and checking are different. I’m still trying to wrap my head around this distinction.

Disclosure: I’m not a testing insider. However, along with effective design and architecture practices, pragmatic testing is a passion of mine. I have presented talks at Agile with my colleague Joe Yoder on pragmatic test driven design and quality scenarios.

Like most people, I suspect, I have a hard time teasing out a meaningful distinction between checking and testing. When I looked up definitions of testing and checking, there was significant overlap. Consider these two definitions of testing:

Testing: the means by which the presence, quality, or genuineness of anything is determined.

Testing: a particular process or method for trying or assessing.

And these for checking:

Checking: to investigate or verify as to correctness.

Checking: to make an inquiry into, search through, etc.

Using the first definition for testing, I can say, “By testing I determine what my software does.” For example, a test can determine the amount of interest calculated for a late payment or the number of transactions that are processed in an hour. Using the second meaning of testing, I can say that, “I perform unit testing by following the test first cycle of classic TDD” or that, “I write my test code to verify my class’ behavior after I’ve completed a first cut implementation that compiles.” Both are particular testing processes or methods.

I can say, “I check that my software correctly behaves according to some standard or specification” (first meaning). I can also perform a check (using the second definition) by writing code that measures how many transactions can be performed within a time period.
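
That second kind of check might look like this minimal sketch (an invented, simulated transaction and an arbitrary one-second window and threshold):

```javascript
// Measure how many (simulated) transactions complete in one second.
function processTransaction() {
  let total = 0;
  for (let i = 0; i < 10000; i++) total += i; // stand-in for real work
  return total;
}

const windowMs = 1000;
const start = Date.now();
let count = 0;
while (Date.now() - start < windowMs) {
  processTransaction();
  count++;
}
console.log(`${count} transactions in ${windowMs} ms`);

// The check: compare the measurement against an expected standard.
const meetsStandard = count >= 50; // illustrative threshold
console.log(meetsStandard ? 'check passed' : 'check failed');
```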

I can check my software by performing manual procedures and observing results.

I can check my software by writing test code and creating an automated test suite.

I might want to assess how my software works without necessarily verifying its correctness. When tests (or evaluations) are compared against a standard of expected behavior they also are checks. Testing is in some sense a larger concept or category that encompasses checking.

Confused by all this word play? I hope not.

Humans (and speakers of any native language) explore the dimensions and extent of categories by observing and learning from concrete examples. One thing that distinguishes a native speaker from a non-native speaker is that she knows the difference between similar categories, and uses the appropriate concept in context. To non-native speakers the edges and boundaries of categories seem arbitrary and unfathomable (meanings aren’t found by merely reading dictionary definitions).

I’ve been reading about categories and their nuances in Douglas Hofstadter and Emmanuel Sander’s Surfaces and Essences. (Just yesterday I read about the subtle difference between the phrases “Letting the cat out of the bag” and “Spilling the beans.”)

So what’s the big deal about making a distinction between testing and checking?

Matt pointed us to Michael Bolton’s blog entry, Testing vs. Checking. Along with James Bach, Michael has nudged the testing world to distinguish between automated “checks” that verify expected behaviors and “testing” activities that require human-guided investigation and intellect and aren’t automatable.

In James Bach’s blog post, Testing and Checking Refined, they make these distinctions:

“Testing is the process of evaluating a product by learning about it through experimentation, which includes to some degree: questioning, study, modeling, observation and inference.
(A test is an instance of testing.)

Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.
(A check is an instance of checking.)”

My first reaction was to throw up my hands and shout “Enough!” Mine was the reaction of a non-native speaker trying to understand a foreign idiom! But then I calmed down, let go of my urge to pin down James and Michael’s precise meanings, accepted some ambiguity, and looked for deeper insight.

When Michael explained,

“Checking is something that we do with the motivation of confirming existing beliefs” while, “Testing is something that we do with the motivation of finding new information.”

it suddenly became clearer. We might be doing what appears to be the same activity (writing code to probe our software), but if our intentions differ, we could be either checking or testing.

Why is this important?

The first time I write test code and execute it I learn something new (I also might confirm my expectations). When I repeatedly run that test code as part of a test suite, I am checking that my software continues to work as expected. I’m not really learning anything new. Still, it can be valuable to keep performing those checks. Especially when the code base is rapidly changing.

But I only need to repeatedly execute checks on code that has the potential to break. If my code is stable (and unchanging), perhaps I should question the value of (and the false confidence gained by) repeatedly executing the same tired old automated tests. Maybe I should instead write new tests that probe even more corners of my software.

And if tests frequently break (even though the software is still working), perhaps I need to readjust my checks. I’m betting I’ll find test code that verifies details that should be hidden/aren’t really essential to my software’s behavior. Writing good checks that don’t break so easily makes it easier to change my software design. And that enables me to evolve my software with greater ease.
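
One way to see the difference: a check pinned to incidental implementation details breaks on harmless refactorings, while a check written against essential, observable behavior survives them. A sketch, using Jest-style test/expect functions and an invented Account class:

```javascript
// Brittle check: pinned to internal details of the implementation.
test('late fee (brittle)', () => {
  const account = new Account(1000);
  account.applyLateFee();
  // Reaches into private state; renaming or restructuring the ledger
  // breaks this check even though the behavior is unchanged.
  expect(account._ledger[0].amountCents).toBe(2500);
});

// Sturdier check: states only the essential, observable behavior.
test('late fee (behavioral)', () => {
  const account = new Account(1000);
  account.applyLateFee();
  expect(account.balance()).toBe(1025); // 2.5% late fee on $1,000
});
```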

When test code becomes stale, it is precisely because it isn’t buying any new information. It might even be holding me back.

I have a long way to go to become a fluent native speaker of testing. And I wish that James and Michael had chosen different phrases to describe these two categories of “testing” (perhaps exploration and verification?).

But they didn’t.
Fair enough.

Why Domain Modeling?

One barrier to considering rich domain model architectures is a misconception about the value or purpose of a domain model. To some, creating a domain model seems a throwback to earlier days where design and modeling were perceived to be discrete, lengthy, and mostly unproductive activities.

When object technology was young, several notable authors made a strong distinction between object-oriented analysis and object-oriented design and programming. Ostensibly, during object-oriented analysis you analyzed a task that you wanted to automate and developed an underlying conceptual (object) model of that domain. You produced a set of task descriptions and a model of objects that included representations of domain concepts and showed how these objects interacted to accomplish some work. But you couldn’t directly implement these objects. During object-oriented design you refined this analysis model to consider implementation and technology constraints. Only then, after finishing design, would you write your program. The implication was that any model you produced during analysis or design needed extensive manipulation and refinement before you could write your program.

But even in those early days, many of us blurred the lines between object-oriented analysis, design, and programming. In practice, how we worked was often quite different from what the popular literature of the time suggested. We might analyze the problem, quickly sketch out some design ideas, and then implement them. We might use CRC cards to model our objects (cards which we would then discard). There weren’t distinct gaps between analyzing the problem and designing and implementing a solution. Sometimes different people did analysis while others did design and programming; but many times a developer did all these activities. Sometimes we created permanent representations of some models (in addition to our code). It depended on the situation and the need.

These days, I rarely see anyone produce detailed object analysis or design models. In fact, design and modeling have gotten somewhat of a bad rap. Good object design is deemed too difficult for “average” programmers, and there isn’t time apart from coding to think about the domain and come up with any models.

The most common object models I see are created for one of two purposes: small conceptual models constructed to gain an understanding of significant new functionality, or informal design sketches intended to provide a quick overview to newcomers or non-code-reading folks who need to “know more” about the software. A lack of modeling (unless you consider code or tests to be models—I don’t) is prevalent whether or not the team follows agile practices. The most common models I do see are very detailed E-R models that are more implementation specifications than models. They don’t leave out any details, making it hard to find the important bits.

Understanding and describing a domain and creating any model of it in any form takes a back seat to most development activities.

But if your software is complex, rapidly changing, and strategic and you aren’t doing any domain modeling, you may be missing out on something really important. If your software is complex enough, you can greatly benefit from domain modeling and thoughtfully doing some Domain-Driven Design activities.

For example, Domain-Driven Design’s strategic design is a conscientious effort to create a common understanding between business visionaries, domain experts, and developers. Initial high-level domain discussions lead to understanding what is central to the problem (the core domain) and the relationships between all the important parts (sub-domains) it interacts with. Gaining such consensus helps you focus your best design efforts and structure (or restructure) your software to enable it to sustainably grow and evolve.

But it doesn’t stop there. If you buy into domain modeling, you also commit to developing a deep shared understanding of the problem domain along with your code. Your mission isn’t to just deliver working functionality, but to embed domain knowledge in your working solution. Your code will have objects that represent domain concepts. You’ll be picky about how you name classes and methods so they accurately reflect the language of the domain. You will have ongoing discussions with domain experts and jointly discuss and refine your understanding of the domain. Along the way you may sketch and revise domain models. You’ll strive to identify, preserve, strengthen and make explicit the connections between the business problem and your code. When you refactor your design as you gain deeper understanding, you won’t forget to reflect the domain in your code. Your domain model lives and evolves along with the code.
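
As a small, hypothetical illustration (an invented insurance fragment, not from any real codebase): domain-named classes and methods keep the code speaking the experts’ language, where generic names would bury it.

```javascript
// Generic naming hides the domain: what does "process" even mean here?
// policyManager.process(record, 2);

// Domain-driven naming speaks the experts' language instead.
class Policy {
  constructor(coverage) {
    this.coverage = coverage;
    this.riders = [];
  }

  // "Attach a rider" is how insurance experts actually talk.
  attachRider(rider) {
    this.riders.push(rider);
  }

  annualPremium() {
    return this.riders.reduce(
      (sum, r) => sum + r.premium, this.coverage.basePremium);
  }
}

const policy = new Policy({ basePremium: 1200 });
policy.attachRider({ name: 'Flood', premium: 150 });
console.log(policy.annualPremium()); // 1350
```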

As a consequence, there isn’t that big disconnect between what you code and what the business talks about. And that can be a powerful force for even closer collaborations between developers and domain experts.

Domain-Driven Design Applied

Vaughn Vernon’s new book, Implementing Domain-Driven Design, is full of advice for those who want to apply design principles and patterns to implementing rich domain model applications.

Check out my interview with Vaughn for InformIT. We discuss why he wrote this book and some important new Domain-Driven Design patterns and their architectural impact.

There is much to learn about good design of complex software from this book. In future blog posts I plan to unpack some of the meatier domain-driven design concepts and reflect on how best to apply them to designing enterprise applications.

How far should you look ahead?

I’m a consultant. So an easy answer would be, “It depends.”

My real answer is more nuanced: only look ahead far enough to see some significant issue and a plausible way to resolve it. If you want to get better, practice looking ahead in the context of your work and its established rhythms and natural cycles.

If your organization doesn’t value planning and you want to look ahead, start by taking small steps. The reason to look ahead is to check for some pre-work or preparation that is likely to save time or money.

If you are a tech lead, try looking ahead in the context of release planning. See what architecture risks you spot (or new design requirements) as new features are scoped. Take notes on what you are concerned about. Spend some time speculating on what you should be doing architecturally to support the next release. Have a conversation with a trusted colleague about your concerns. Identify concrete follow-on activities specifically targeted at addressing your biggest concerns.

If you are an agile developer, you may not want to look further ahead than the current sprint. If you aren’t already doing so, try some iteration modeling for a couple of hours at the beginning of an iteration. Grab a handful of related story cards for a new feature and sketch out some design ideas. Draw sequence diagrams on a white board. Jot down class or component responsibilities on CRC cards. Noodle around. Give yourself the luxury of exploring with your co-workers how things might work. Don’t expect a final design, just a plausible one that will be fleshed out and revised as you implement those stories.

If your organization tends to look too far ahead, take actions to personally shift your focus away from endless planning. Winston Churchill said, “It is always wise to look ahead, but difficult to look further than you can see.” Looking too far ahead is also a big waste of time. Remedy this time suck by limiting the scope of what you plan for. Find opportunities to build and measure things. Learn to perform small experiments, make small decisions, communicate your results, and move things along.

Colleagues and co-workers I’d classify as over-eager forward-looking designers typically build up comfortable infrastructure they think will be needed as a matter of course before they settle in to the programming task at hand. A good indicator that they’ve gone overboard is how others react to their work. One unhappy forward-looking architect remarked recently, “Not many people appreciate what I am trying to accomplish.”

You may not be aware that you look too far ahead. One indicator: you view most programming tasks as an opportunity to build or try out a cool new framework or create your own special DSL. (If you are part of an infrastructure team, then, fine, that is your job. Don’t embellish your creations. And make sure others can actually use your stuff.) If you are an application developer, nudge yourself towards refactoring and finding abstractions through concrete scenarios. Hold yourself back from writing speculative code.

I look to my colleague, Joe Yoder, as someone who takes a balanced approach when creating infrastructure or suggesting novel architectures. Joe likes to build flexible, adaptable systems, in particular Adaptive Object-Model architectures. He has years of experience doing this. He’s enthusiastic when opportunities come up to build these kinds of systems, but knows that this architecture style isn’t a good fit for every rapidly changing system. And he doesn’t build every part of such a system this way; only the parts that require extensive, ongoing customizations.

How far do you look ahead? Do you think you’ve found a good balance?