Reconciling New Design Approaches with What You Already Know

Last week at the deliver:Agile conference in Nashville I attended a talk by Autumn Crossman explaining the benefits of functional programming to us old-timey object-oriented designers. I also attended the session led by Declan Whelan and Sean Button on “Overcoming dys-functional programming. Leverage & transcend years of OO know-how with FP.”

The implication in both talks was that although objects have strengths, they are often abused and not powerful enough for some of today’s problems. And that now is an opportune time for us OO designers to make some changes to our preferred ways of working.

Yet I find myself asking: when should I step away from what I’ve been doing and know how to do well and step into a totally new design approach?

No doubt, functional programming is becoming more popular. But objects aren’t going away, either.

Pure functional solutions offer real benefits for certain design problems. They don’t have side effects. You make stream-processing steps easily composable by designing little, single-purpose functions that operate over immutable data. You are still transforming data; it just isn’t being mutated in place. In OO terms, you aren’t changing the internal state of objects, you are creating new objects with different internal state. By using map-reduce you can avoid loop/iteration/end-condition programming errors (letting powerful functions handle those details). No need to define loop variables and counters. This is already familiar to Smalltalk programmers via the do:, collect:, select:, and inject:into: methods that operate on collections (Ruby has its equivalents, too). And by operating on immutable data, multi-threading and parallelization get easier.
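To make that concrete, here’s a minimal Python sketch (Python being the multi-paradigm language I come back to later). The orders data and the 25.0 threshold are invented purely for illustration; filter and reduce play roughly the roles that Smalltalk’s select: and inject:into: do.

```python
from functools import reduce

orders = (
    {"id": 1, "total": 40.0},
    {"id": 2, "total": 15.0},
    {"id": 3, "total": 90.0},
)

# Loop style: we manage the accumulator and the end condition ourselves.
large_total = 0.0
for order in orders:
    if order["total"] >= 25.0:
        large_total += order["total"]

# Functional style: little, single-purpose functions composed over data that is
# never modified in place. filter and reduce are Python cousins of Smalltalk's
# select: and inject:into:.
def is_large(order):
    return order["total"] >= 25.0

def add_total(running_total, order):
    return running_total + order["total"]

large_total_fp = reduce(add_total, filter(is_large, orders), 0.0)

assert large_total == large_total_fp == 130.0
```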

I get that.

But I can create immutable data using OO technology, too. Ever hear of the Value Object pattern? Long ago I learned to create designs that included both stateful and immutable objects. And to understand when it is appropriate to do so. I discovered and tweaked my heuristics for when it made sense to stream over immutable data and when to modify data in place. But in complex systems (or when you are new to libraries) it can be difficult to suss out what others are doing (or in the case of libraries, what they are forcing you to do).
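Here, for instance, is a minimal sketch of the Value Object idea in modern Python: immutable, compared by value, and “changed” only by creating a new object with different internal state. The Money class is my own toy example, not lifted from any particular codebase.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)          # frozen=True makes instances immutable
class Money:
    amount_cents: int            # integer cents to avoid floating-point surprises
    currency: str

    def add(self, other: "Money") -> "Money":
        if other.currency != self.currency:
            raise ValueError("cannot add different currencies")
        # No mutation in place: return a new value with different internal state.
        return replace(self, amount_cents=self.amount_cents + other.amount_cents)

price = Money(1999, "USD")
total = price.add(Money(500, "USD"))
assert total == Money(2499, "USD")   # equality by value, not by identity
# price.amount_cents = 0             # would raise dataclasses.FrozenInstanceError
```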

But that’s not the point, really. The point is, once you understand how to use any technique, as you gain proficiency, you learn when and where to exploit it.

But is pure functional programming really, finally the panacea we’ve all been looking for? Or is it just another powerful tool in our toolkit? How powerful is it? Where is it best applied?

I’m still working through my answers to these questions. My answers will most likely differ from yours (after all, your design context and experience are different). That’s OK.

Whenever we encounter new approaches we need to reconcile them with our current preferred ways of designing. We may find ourselves going against the grain of popular trends or going with the flow. Whatever. We shouldn’t be afraid of trying something new.

Yet we also shouldn’t too easily discount and discard approaches that have worked in the past (and that still work under many conditions). Or, worse yet, we shouldn’t feel anxious that the expertise we’ve acquired is dated or that our expertise can’t be transferred to new technologies and design approaches. We can learn. We can adapt. And, yet, we don’t have to throw out everything we know in order to become proficient in other design approaches. But we do have to have an open mind.

We also shouldn’t be seduced by promises of “silver bullets.” Be aware that evangelists, enthusiasts, and entrepreneurs frequently oversell the utility of technologies. To get us to adopt something new, they often encourage us to discard what has worked for us in the past.

While I like some aspects of functional programming, I see the value in multi-paradigm programming languages. I’m not a purist. Recently I’ve written some machine learning algorithms in Python for some Coursera courses I’ve taken. During that exercise, I rediscovered that powerful libraries make up for the shortcomings and quirks of any programming language. I still think Python has its fair share of quirks.

And while some consider Python to support functional programming, it isn’t a pure functional language. It takes that middle ground, as one Stack Overflow writer observes:

“And it should be noted that the “FP-ness” of a language isn’t binary, it’s on a continuum. Python doesn’t have built in support for efficient manipulation of immutable structures as far as I know. That’s one large knock against it, as immutability can be considered a strong aspect of FP. It also doesn’t support tail-call optimization, which can be a problem when dealing with recursive solutions. As you mentioned though, it does have first-class functions, and built-in support for some idioms, such as map/comprehensions/generator expressions, reduce and lazy processing of data.”

Python’s a multi-paradigm language with incredible support for matrix operations and a wealth of open-source machine learning libraries.

I haven’t had an opportunity to dial up the knobs and solve larger design problems in a pure functional style. I hope to do so soon. My current thinking is that a pure functional style of programming works well for streaming over large volumes of data. I’m not sure how it helps support quirky, ever-changing business rules and lots of behavioral variations based on system state. Reconciling my “go to” design approaches with new ways of working takes some mental lifting and initial discomfort. But when I do take the time to learn new design approaches, I have no doubt that I’ll find some new heuristics, polish existing ones, and learn more about design in the process.

What we say versus what we do

I’ve been hunting design heuristics for a couple of years. I’ve had conversations with designers in order to draw out their “go to” heuristics. I’ve joined design and programming sessions with experienced designers and captured on-the-fly what we were doing. My goal is to learn ways to effectively find heuristics in the wild, distill them, and then share them broadly.

But lately, I’ve been thinking about how to deal with this puzzle: What people say they do isn’t what they really do.

Let me give you an example. I joined the Cucumber folks last summer for several remote mobbing sessions. One heuristic they shared with me was this:

Heuristic: the person who has the most to learn (or knows the least about how to solve the problem) should take on the role of driver.

In “classic” mob programming as initially described, the person who is the driver and has his or her hands on the keyboard follows guidance of navigators—other mobbers who ostensibly guide the driver on what to do in order to make progress.

“In this “Driver/Navigator” pattern, the Navigator is doing the thinking about the direction we want to go, and then verbally describes and discusses the next steps of what the code must do. The Driver is translating the spoken English into code. In other words, all code written goes from the brain and mouth of the Navigator through the ears and hands of the Driver into the computer.”

What I observed the Cucumber mob doing was somewhat different. Sometimes the driver had an initial design idea and was keen to try it out. In this case, they often actively navigated and drove at the same time. Occasionally others would comment and offer advice. But mostly they just watched the design and implementation unfold. Sometimes that eager driver asked the others, “Should we try this now?” But instead of waiting an uncomfortable length of time for them to chime in, the driver often continued on without any discussion. And I don’t think that driver was asking a rhetorical question. They wanted feedback if someone had any.

At other times the driver would stop to collect their thoughts and force a discussion. In this case the driver became uncomfortable when they didn’t get enough feedback. And sometimes they took themselves out of the driver’s role, asking someone else to fill in. In short, while I observed that the driver was often in control of the wheel (and forward progress), they didn’t overly dominate. Drivers rotated. Everyone got their turn. But how these switches happened was very dynamic.

In all fairness, the mob programming website did touch on drivers and their participation in discussions:

“The main work is Navigators “thinking , describing, discussing, and steering” what we are designing/developing. The coding done by the Driver is simply the mechanics of getting actual code into the computer. The Driver is also often involved in the discussions, but her main job is to translate the ideas into code.”

While the main job of the driver may be “mechanics,” the small, fast-moving Cucumber team didn’t insist that getting the code into the computer be the driver’s main function. Now mind you, I suspect being remote affected their style of communication. They also knew each other well and knew each other’s common design approaches and preferences.

So why did the Cucumber mob behave this way? Did they believe one way but consciously act in another way? Did they intentionally lie about their heuristics? Or were they deceiving themselves? Are people wired to explain what they do through some kind of distortion field? How often do people believe one thing (and hold it up as an ideal) but then choose alternative heuristics? If so, is this OK?

I’m not sure the team was aware that their ways of driving/navigating deviated from the conventional driver/navigator roles until I shared my observations with them. I suspect that when they first started mobbing they were more rigorous about following the “rules” for these roles. Over time they found their own ways of working. And so the heuristics they collectively use to decide what to do, what design approach to try next, and how they interact with each other are much more fluid and nuanced than the simple descriptions of drivers and navigators on the mob programming website. They don’t exactly go “by the book.” And I suspect their heuristics for how they work together are still evolving.

So how should I as a heuristics hunter reconcile my simple goal of distilling essential heuristics with the messy realities I find on the ground?

Should I plunge into a concerted effort to sort out and formulate more nuanced heuristics? The short answer is, yes. While I want to find and record both general and more particular heuristics, I’m not inclined to want to sort them out into tidy, neat categories. After all, as Billy Vaughn Koen says, there is more than one way to solve any design problem and more than one heuristic that can work. By recording these nuances, I hope to get richer insights into the different conditions and cases and situations that lead to choosing them.

This still leaves me with one nagging question: How can I reconcile what people say they do and believe with what they actually do? My (current) approach is that as I distill heuristics I also describe the context where I find them. Should it bother me that designers don’t do as they say they do all the time? Probably not. After all, we’re wonderfully creative problem solvers. And there are always options.

Nothing ever goes exactly by the book

In Designing Object-Oriented Software, we used the design for an ATM (Automated Teller Machine) to illustrate how to design object-oriented software. We worked through a simplistic design for an ATM and produced a CRC (Class-Responsibility-Collaborator) Card design.

Several years after our book was published I received an email from an instructor at a company. Although he liked our book, he felt he was missing something important about Responsibility-Driven Design. No matter how hard he tried, his students never exactly reproduced our design! And worse yet, in his opinion, all their designs were different.

I was astonished that he expected to teach principles, practices, and design thinking and then magically his students would produce identical designs. Our minds don’t work alike, so why should we produce identical designs?

A couple of years ago I received a gift of a Blue Apron subscription. For those who are not familiar with Blue Apron, you select the meals you want for a week, then along with the recipes they ship you a box of ingredients. All you add is your own salt and olive oil, cooking utensils, stove, and voila—you make a tasty dish.

The first meal I cooked was Za’atar-Roasted Broccoli Salad. The picture of the dish on the recipe card looked like this:
[Image: the Blue Apron recipe card for Za’atar-Roasted Broccoli Salad]
And here’s a photo of the dish as I prepared it:
[Photo: the dish as I prepared it]
Not bad! But I didn’t blindly follow the recipe. I used my brain, my eyes, and my cooking sense, as well as past experiences, when making that dish.

If you look closely at the recipe card below you’ll see that the instructions are not equally precise. While the eggs should be boiled for exactly 9 minutes, there’s more tolerance in the time to roast the broccoli. Just exactly how much is a drizzle? And there are so many points during the recipe when you are asked to add salt! Over time (after following several recipes’ instructions too closely) I learned that if you added salt every time they advised you to, the meals would be too salty. Blue Apron has since adjusted their recipes to not include so many salt instructions.
[Image: the recipe card’s step-by-step instructions]

There is no substitute for learning from direct experience and observation. A beginning cook who expects to follow instructions exactly and get a well-made dish is bound to be frustrated.

And people who expect to follow someone else’s software design heuristics and end up with an optimal solution to their specific design problem will invariably be disappointed.

Sure, you may initially learn some design heuristics from a book or by reading code or from someone who is more senior or Stack Overflow or wherever. But never expect things to go exactly “by the book.” There are many details that experts never share that you’ll have to fill in. And don’t get frustrated when experts change their minds or design approaches or disagree (they do all the time). But don’t be silent, either, if you are puzzled or curious. Ask them about their thinking. They may have encountered an edge case or an exception to some “general” heuristic. Or there may have been more to the design problem than they had initially thought. By having that conversation you just might learn something from each other.

Writing, Remembering, and Sharing Design Heuristics

I’ve been experimenting with simple ways to record software design heuristics. I want to find ways to vividly bring to mind the design heuristics I use or learn about from others. Ideally, any written heuristic should be shorter than a software pattern and somewhat more detailed and informative than a single pithy phrase.

There are three reasons it is important to physically write heuristics instead of just talking about them or clipping online sources into a note-taking app:

1. Writing helps us remember the important bits
For a good overview of the benefits of note-taking, read this Lifehack article by Dustin Wax. If nothing else this should inspire you to jot down and/or draw by hand the key bits of the next talk or lecture you listen to. What we write we remember.

2. Writing in longhand helps us process our experiences
Research shows that the act of writing in longhand is quite different from typing. This Medical Daily article by Lizette Borreli summarizes research into the memory-boosting benefits of writing longhand:

“…writing by hand allows the brain to receive feedback from a person’s motor actions, and this specific feedback is different than those received when touching and typing on a keyboard… Overall, it seems those who type their notes may potentially be at risk for ‘mindless processing.’ The old fashioned note taking method of pen and paper boosts memory and the ability to understand concepts and facts.”

3. Written heuristics can be shared
Once you’ve written some tried and true heuristics in your own words, you can share them with others. Even better, use these heuristics to stimulate a deeper discussion into the design heuristic space you are exploring. Don’t shy away from discussions about nuances, counter-examples, edge cases, and competing heuristics (alternative ways to tackle a specific design problem). You may uncover a wealth of new heuristics to ponder and open up your mind to new ways of solving familiar problems.

If the person you are conversing with doesn’t know much about the topic area of your heuristics, the questions they ask will likely lead you to clarify your thinking (or at least improve your explanations). They won’t get what you are saying if you speak too much in shorthand. If they know more about the topic, you may have a lively conversation about heuristics and lessons they’ve learned over time. They may validate your heuristics as well as give you some new ideas.

A Simple Idea: QHE Cards
I’ve been playing around with using index cards as a means to quickly capture a heuristic. I structure a heuristic in three parts: a question, the answer (which can then be polished into a couple-sentence heuristic), and an example or two to help me remember. I call them QHE (Question-Heuristic-Example) or “Q-Hee” cards, for lack of a better name. Using index cards to record design heuristics is inspired by CRC (Class-Responsibility-Collaborator) cards, invented by Ward Cunningham and Kent Beck and popularized in my first design book. I like index cards.
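If it helps to picture the three parts, here is a tiny sketch of that structure as a plain record (the field names are mine; an index card works just as well):

```python
from dataclasses import dataclass

@dataclass
class QHECard:
    question: str    # the design question that prompted the heuristic
    heuristic: str   # the answer, polished into a sentence or two
    example: str     # a concrete example to jog my memory later

card = QHECard(
    question="How many events should a single business process generate?",
    heuristic="If downstream processes react differently, generate different events.",
    example="A rental car return generates 'car returned' and 'mileage recorded'.",
)
```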

Here are two QHE cards describing heuristics found in a conversation with Mathias Verraes about designing event records:
[Image: two annotated QHE cards from that conversation]

QHE cards are simple to write. I don’t worry about precision or formality. With a little bit of effort I can turn a question and answer into a short heuristic statement. For example, here’s my first cut at a heuristic for what information to include in an event record:

Heuristic: Only include information necessary to “replay” an event and achieve the same results. Don’t include sensitive information or extra information simply because you think it might be useful.
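Here’s a hedged sketch of what that heuristic might look like in code. The event name and fields are mine, chosen only to illustrate the replay-only rule:

```python
from dataclasses import dataclass
from datetime import datetime

# Enough information to replay the event and achieve the same result...
@dataclass(frozen=True)
class CarReturned:
    rental_id: str
    returned_at: datetime

# ...and deliberately NOT fields like these:
#   driver_license_number: str   # sensitive; replaying the event doesn't need it
#   customer_home_address: str   # sensitive
#   marketing_segment: str       # a "might be useful someday" extra
```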

An example or two to jog my memory is essential. And just like CRC cards, they can be too terse for others to understand (or to remember, if I wait too long). To make a heuristic memorable, I need to actively integrate that heuristic into my design heuristic gestalt. This is especially true if it is about a design topic I am not that familiar with.

To create a stronger memory and deeper understanding I can write more about the heuristic, sketch out a more detailed example, write some code, and/or draw a diagram…whatever it takes to go a bit deeper.

Heuristic gists are especially useful when you want to share heuristics with others. They may need a bit more context to understand what you mean. I like to write them in a form similar to pattern gists. Here’s an example of a pattern gist from Fearless Change Patterns by Mary Lynn Manns and Linda Rising:
[Image: a pattern gist from Fearless Change]

And here’s how I rewrote the QHE card for deciding when to generate different events from a process as a gist:

Multiple Events for a Single Process
You need to balance passing along information needed by downstream processes in a single business event with creating multiple event records, each designed to convey specific information needed by a specific downstream process.

Summary of Problem
How do you know how many events to generate for a single business process?

Summary of Solution
If different processes downstream react differently, generate different events. For example, handling a “rental car return” request might generate two events and event records: “car returned” and “mileage recorded.” Even though the mileage is recorded at the time a car is returned, mileage could be recorded at any other time as well. It is a cleaner design to generate two events, rather than cram information into a single, overloaded “car returned” event.
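A minimal sketch of that split in code (the event names and fields are mine, invented only to make the two-event shape concrete):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class CarReturned:
    rental_id: str
    returned_at: datetime

@dataclass(frozen=True)
class MileageRecorded:
    car_id: str
    odometer_miles: int
    recorded_at: datetime

def handle_rental_return(rental_id: str, car_id: str, odometer_miles: int, now: datetime):
    # One business request, two distinct events: downstream processes that care about
    # returns and those that care about mileage each get exactly the record they need,
    # instead of one overloaded "car returned" event.
    return [
        CarReturned(rental_id=rental_id, returned_at=now),
        MileageRecorded(car_id=car_id, odometer_miles=odometer_miles, recorded_at=now),
    ]
```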

The act of writing a bit more detail in a gist often leads to further questions which lead to even more design heuristics. For example, these questions quickly come to mind: When is it OK to pass along extra information in an event? When is it not OK? What kinds of information should never be passed along in an event record? What happens if a new business process needs slightly different information?

And that’s the point. There are a myriad of details to work out for any real design. And many more heuristics for solving design problems and learning to live with their consequences. By writing down your heuristics, you’ll remember what you were thinking at the time. Note to future self: That effort just might be worth it!

What do typical design heuristics look like?

In my previous post I introduced the topic of design heuristics. Billy Vaughn Koen, in Discussion of The Method: Conducting the Engineer’s Approach to Problem Solving, defines a heuristic as, “anything that provides a plausible aid or direction in the solution of a problem but is in the final analysis unjustified, incapable of justification, and potentially fallible.”

Common phrases that mean roughly the same thing as heuristic in English include “rule of thumb” or “practical method” or “useful shortcut” or “approximation.” I don’t think approximation or shortcut convey the essence of what makes a heuristic handy or helpful. Wikipedia defines heuristic as, “any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals.”

Heuristics describe practical actions or attitudes we take in order to make design progress. Billy gives several examples of general engineering heuristics in his book:
[Image: examples of Billy Vaughn Koen’s general engineering heuristics]
Reading these heuristics you might think, “Ah, heuristics are merely simple statements of what to do.”

Although heuristics can be boiled down into pithy advice, e.g. “Do this…” or “Don’t do that…,” I find that there usually is a lot more behind any simple phrase. Ask any design expert about how to tackle a specific design problem and she’ll add caveats and constraints and assumptions. Her heuristics might sound more like, “Do this when…and… unless … and here’s how I might …”. Or, “Try this first…and then…until…”.

Which leads me to observe that written design patterns–specifically software design and architecture patterns–are a handy form for sharing meaty, complex, nuanced design heuristics with others.

Consider this summary of the pattern Do a Mock Installation, from Object-Oriented Reengineering Patterns, which illustrates some of that extra useful stuff that can be explained by a pattern.

  • Pattern: Do a Mock Installation
  • Intent: Check whether you have the necessary artifacts available by installing the system and recompiling the code.
  • Problem: How can you be sure that you will be able to (re)build the system?
  • Difficulties:
    • The system is new to you, so you do not know which files you need.
    • The system may depend on libraries, frameworks, and patches, and you’re uncertain you have the right versions available.
    • The system is large and complex, and the exact configuration under which the system is supposed to run is unclear.
    • Maintainers may answer these questions, or you may find answers in documentation, but you still must verify whether this information is complete.
  • Solving this is feasible because:
    • You have access to the source code and the necessary build tools
    • You have the ability to reinstall the system in an environment that is similar to that of the running system
    • Maybe the system includes some kind of self-test
  • Solution: Try to install and build the system in a clean environment taking a limited amount of time (at most one day).
  • After the build prepare a report:
    • Version number of libraries, frameworks and patches used
    • Dependencies between the infrastructure
    • Problems you encountered and how you tried to solve them
    • Suggestions for improvement
    • Assessment of the situation including possible solutions/workarounds (if failed)
  • Tradeoffs:
    • Pros:
      • It is an essential prerequisite
      • Demands precision
      • Increases your credibility
    • Cons:
      • Tedious activity
      • No certainty
    • Difficulties:
      • Easy to get carried away

When written well, patterns include just the right amount of extra advice. Advice that a competent designer has gained through experience and reflection. Patterns can even warn us about potential roadblocks or advise us on what to try next if the pattern’s solution isn’t a good fit for our specific problem.

But in addition to those larger-grained design problems typical of patterns, there are many smaller design decisions and gnarly design details we have to deal with. And as experienced designers, we too, have heuristics for addressing them rattling around in our brains. Wouldn’t it be great if we could find easy ways (easier than writing patterns) to record and share our heuristics with each other?

Here’s a photo of some heuristic snippets captured in a one-day workshop I held at DDD Europe:
[Photo: heuristic snippets captured at the DDD Europe workshop]
I’ve been experimenting with simple ways to record, share, and communicate design heuristics that require far less effort to create than full-fledged patterns yet convey more than can be shared on a sticky note. Somewhere between full-fledged written patterns and pithy phrases is a sweet spot I’m striving for. That’s the topic of my next post.

Growing Your Personal Design Heuristics Toolkit

Billy Vaughn Koen’s Discussion of The Method has had a profound influence on my thoughts about design. This book has inspired me to explore the role that software patterns play alongside other design heuristics and led me to muse about design uncertainty. I’ve been inspired to experiment with simple ways to capture design heuristics and I’ve spent time heuristics hunting with other designers.

In this blog post I’ll introduce you to Billy Vaughn Koen’s ideas about problem solving and heuristics. Billy defines a heuristic as

“anything that provides a plausible aid or direction in the solution of a problem but is in the final analysis unjustified, incapable of justification, and potentially fallible.”

There are no guarantees. Heuristics can fail. The tenacity that defines an engineer shows when she steps back, regroups, and finds a different heuristic to try next. Another important thing to consider is that there are always competing heuristics to choose from. There simply isn’t a single right way to solve any complex design problem. Based on our assessment of the situation, our judgment, and the fit of the heuristics we know to the problem at hand, we decide what to do next. And when designers disagree, it may be because they are focusing on different aspects of the problem, or because they have a different collection of cherished heuristics in their toolkit (whether or not they can articulate them).

[Comic courtesy of xkcd.com: https://xkcd.com/309/]

Billy describes three different types of heuristics:

    1. Heuristics we use to solve a specific problem;
    2. Heuristics that guide our use of other heuristics (meta-heuristics, if you will); and
    3. Heuristics that determine our attitude and behavior towards design or the world and the way we work.

Let’s take a closer look at each type.

Heuristics we apply to solve a problem. In his book, Billy gives several examples of general engineering heuristics. Heuristic: Always be prepared to give an answer (and when you need to give an answer, use the best approach you know for figuring it out, given the time constraints and resources you have at hand). As an example, imagine the approach you might take for determining the number of Ping-Pong balls that fill up a room if you had to give an answer in 2 seconds. What if you had 2 minutes, an hour, or a week to come up with an answer? Would your approaches differ?

Heuristic: Use feedback to stabilize your design. Billy Vaughn Koen is a Professor Emeritus of Nuclear Physics as well as a philosopher of engineering. He taught people who designed nuclear reactors. I’m glad he taught them about the importance of feedback loops to verify design assumptions. And that is one reason why frequent delivery of software, retrospectives, and reviews are valued activities in agile development.

And oh yes, one last heuristic: Always give yourself a chance to retreat. Backing out of a half-baked or unworkable design is important if you don’t want to paint yourself into a corner.

As software designers we have many heuristics in our toolkit. If we know about design patterns, we use them. If we’ve built high-performance systems, we may have a wealth of heuristics for tuning cache performance. If we are familiar with Domain-Driven Design concepts, our preferred way to structure a complex system is by identifying bounded contexts. For those unfamiliar with Domain-Driven Design, a bounded context might be considered roughly equivalent to a subsystem or subdomain. The distinguishing characteristic of a bounded context is that within it there is a consistent, single, unambiguous meaning for business concepts and events. And we may visualize event flows and relationships between bounded contexts by drawing a context map.
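A toy sketch in Python (the split, names, and fields are all mine): the same business word, “customer,” carries a different but internally consistent meaning in each context, with a small translation function marking the boundary a context map would make visible.

```python
from dataclasses import dataclass

# Sales context: here a "customer" is someone with a credit limit.
@dataclass(frozen=True)
class SalesCustomer:
    customer_id: str
    name: str
    credit_limit_cents: int

# Shipping context: here a "customer" is just somewhere to deliver to.
@dataclass(frozen=True)
class ShippingCustomer:
    customer_id: str
    delivery_address: str

# Translation at the boundary between the two contexts.
def to_shipping(customer: SalesCustomer, delivery_address: str) -> ShippingCustomer:
    return ShippingCustomer(customer_id=customer.customer_id,
                            delivery_address=delivery_address)
```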

Heuristics that guide our use of other heuristics. These heuristics tell us what to try next. The best examples I know of these heuristics are in Object-Oriented Reengineering Patterns. Each chapter in this book is a small pattern language, describing a rough sequence of actions for solving a particular system-reengineering goal. For example, chapter 3 is a pattern language for making “first contact” with a legacy system. Based on what you have just learned, the language spells out options for what to explore next.

[Image: the First Contact design pattern language]

Heuristics that determine our attitude and behavior. Agile software designers value frequent feedback. That’s the premise behind frequently deploying working software in order to have its functionality exercised by actual customers. And one heuristic for reducing the risk of downtime when frequently deploying software is to use a blue/green deployment environment.
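Here is a hedged illustration of that heuristic, with plain Python standing in for whatever deployment tooling you actually use; the URLs and helper functions are hypothetical. Deploy to the idle environment, flip traffic only once it proves healthy, and keep the old environment around as an easy retreat.

```python
import urllib.request

ENVIRONMENTS = {
    "blue": "https://blue.example.internal",    # currently serving production traffic
    "green": "https://green.example.internal",  # idle; receives the new release
}

def deploy_new_release_to(base_url: str) -> None:
    print(f"deploying new release to {base_url}")       # stand-in for your pipeline

def point_load_balancer_at(base_url: str) -> None:
    print(f"routing production traffic to {base_url}")  # stand-in for the router update

def healthy(base_url: str) -> bool:
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def release(live: str, idle: str) -> str:
    deploy_new_release_to(ENVIRONMENTS[idle])
    if healthy(ENVIRONMENTS[idle]):
        point_load_balancer_at(ENVIRONMENTS[idle])
        return idle   # the idle environment is now live
    return live       # unhealthy release: traffic never moved, an easy retreat
```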

While we may have learned design patterns (or other published patterns, for that matter) by reading about them, most nitty-gritty design heuristics come from on-the-job experience. We learn a lot by living with the myriad design choices we make (and revisit) as our system’s functionality grows. We learn even more as we support a system over its lifetime. Our heuristics are shaped not only by the kinds of software systems we have designed, built, and maintained but also by the design culture at our places of work.

How do we improve as designers? Through designing and building software. By reflecting on that experience. And by polishing and refining our heuristics and sharing our cherished heuristics with each other.

What’s going on here?

Question: What is more exasperating than reading design documentation that doesn’t synch up with the code?
One answer: Writing such useless design documentation.
Another answer: Writing documents that don’t stand a chance of being aligned with the code.
My answer: Reading design documentation that is at the wrong level of abstraction or detail to help me do the task at hand.

[Comic courtesy of xkcd: https://xkcd.com/license.html]


A few years ago I worked with someone who, when asked for a high-level overview of some complex bits of code we were going to refactor, produced a detailed Visio diagram that went on for 40+ pages! In that document were statements directly lifted from his code. The intent was to represent the control logic and branching structure of some very complicated algorithms (he copied conditional statements into decision diamonds and copied code that performed steps of the algorithm directly into action blocks). To him, the exercise of creating this documentation had really clarified his design. I was amazed he went to such an effort. To me, the control structure was evident by simply reading the code. Moreover, what I was looking for, and obviously didn’t communicate very well, was some discussion at a higher level, explaining what algorithm variations there were and how code and data were structured to handle the myriad typical and special cases.

I wanted to be oriented to the code and data and parts that were configurable and parameterized.

Later, I learned from one of his colleagues that “Bob” always explained things in great detail and wasn’t very good at “boiling things down.” If you wanted to ask “Bob” a question about the code you’d better plan on at least spending half an hour with him.

Another time a colleague created a sequence diagram explaining how the data used by a nightly cron job was created. What was missing was any description of what that cron job did. Maybe it was so obvious to him that he didn’t think it was important. But I lacked the context to get the full picture. So I had to dig into the cron job script to understand what was really going on.

When I want to get oriented to a large body of code, I like to see a depiction of how it is structured and organized and the major responsibilities of each significant part. Not the package or file structure (that will be useful, eventually), but how it is organized into significant modules, functions, classes, and/or call structures. I want to know what the major parts of the system are, why they are there, and the relationships between them. And at a high level, how they interact. And then I want to understand key aspects of the system dynamics. Just by staring at code I’ll never fathom all the ins and outs of its execution model (e.g. what threads or processes there are, what important information is consumed, produced, or passed around).

Simon Brown gave a talk at XP2016 on The Art of Visualizing Software Architecture. Simon travels the world giving advice, training, and conference talks about how to do this. He has also built a tool called Structurizr which:

“ blend[s] together the best parts of the various approaches and allow software developers to easily create software architecture diagrams that remain up to date when the code changes. Structurizr allows you to take a hybrid “extract and supplement” approach. It provides you with a way to create software architecture models using code. This means you can extract information from your code using static analysis and reflection, supplementing the model where information isn’t readily available. The resulting diagrams can be visualized and shared via the web.”

I had the opportunity to spend an hour with Simon getting a personalized demo of Structurizr, and discussing the ins and outs of how his tool works, how it has been used, and some aspirations for the future.

I was initially surprised at how you specify what your “model elements and relationships” are: you write code specifications for this. To me, it seemed that this could have been more easily accomplished by writing some parseable external description or by using a diagramming tool that enabled me to link to the relevant code.

However, I know why Simon made this choice: he deeply believes that code should be the reference point for all structural architecture information. So he thinks it a natural extension that developers write a bit more code to specify the important elements of the models they want to see and how to depict them. To Simon, it doesn’t make sense to just “add an element” to a diagram unless there is some attachment to actual code. For example, you may define a Hibernate specification to represent data access to a repository. Then that can be specified as a data source. With external descriptions that are not based on code, there is a greater chance of them becoming disconnected from what the code actually does. So be it. This is where my values diverge a bit from Simon’s. I favor richer or higher-level documentation that isn’t code-based if it tells me something I need to know to do a task (or something I shouldn’t do) or that gives me insights that I wouldn’t find otherwise. I also like the freedom to embellish my diagrams with extra annotations, colors, highlights, and so on, so I can focus viewers’ attention. And consequently, I also like to create a key that describes the elements of any diagram so casual readers can understand my notation, too.

After processing these model descriptions, Structurizr spits out JSON descriptions that represent aspects of a C4 model: Context, Containers, Components, and Class diagrams. These are then submitted to a service, which creates various model visualizations that can directly link to parts of your source code. There’s also a rudimentary ability to add additional bits of information to textually describe your architecture and architecture requirements. I’m glad that, recognizing the limits of what code can tell about architecture, the tool provides an easy way for developers to tell more about the architecture in text.
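To make the “model as code” workflow concrete without pretending to reproduce Structurizr’s actual API, here is a hypothetical Python sketch in the same spirit: declare elements and relationships in code, then emit a JSON description that a visualization service could turn into C4-style diagrams.

```python
import json

# Hypothetical model-as-code sketch; NOT Structurizr's real API.
model = {
    "elements": [
        {"id": "web", "type": "Container", "name": "Web Application",
         "description": "Serves the UI and handles user requests"},
        {"id": "db", "type": "Container", "name": "Database",
         "description": "Stores orders and customer data"},
    ],
    "relationships": [
        {"source": "web", "destination": "db",
         "description": "Reads from and writes to", "technology": "SQL"},
    ],
}

print(json.dumps(model, indent=2))   # the description a diagramming service could consume
```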

In our conversation, Simon emphasized that one important benefit of linking architecture document to code is that it never gets out of synch. The code for your architecture documentation can be versioned, right along with your production code.

Fair enough. Useful even. But how helpful is the documentation that is generated in understanding what’s going on in your system?

I think this is one way (certainly not the only way, however) to provide an orientation into the overall structure of your system and major relationships between its parts. But there are definite limits to what you can describe by models generated from static analysis of code. Such models don’t tell me the responsibilities of a component. I have to glean that from supplementary text (which I didn’t see an easy way to attach to the model descriptions) or to abstract that by reading code. And they certainly won’t explain any dynamic system behavior or system qualities.

While I find Structurizr’s direct connection to the code intriguing, it is also its biggest limitation. Structurizr is intended to be used by developers who want to create some architecture documentation, which stays in synch with their code and that also can be shared via websites or wikis or embedding it into other documents.

Structurizr has succeeded with this fairly modest goal: assist developers who don’t ordinarily generate any useful architecture diagrams whatsoever to do something.

But this isn’t enough. As designers or architects, only you can tell me answers to my larger questions about why your code is structured the way it is, and what are the important bits about the system and runtime execution that are intentional and crucial to understand and preserve. No diagram generated by a tool will ever replace your insights and wisdom. No amount of code comments can convey all that either. I need to hear (and read) and see more of this kind of stuff from you.

Being Agile About Documenting and Communicating Architecture


Software architects and developers often need to defend, critique, define, or explain many different aspects of the design of a complex system. Yet agile teams favor direct communication over documentation. Do we still need to document our designs?

Of course we do.

We won’t always be around to directly communicate our design to all current and future stakeholders. Personally, I’ve never found working code (or tests) to be the best expression of a software design. Tests express expectations about observable system behavior (not about the design choices we made in implementing that behavior). And the code doesn’t capture what we were thinking at the time we wrote that code or the ideas we considered and discarded as we got that code fully functioning. Neither tests nor code capture all our constraints and working assumptions or our hopes and aspirations for that code.

So what kind of design documentation should we create and how much documentation should we create, for whom, and when? And what is “good” enough documentation?
Eoin Woods gave a talk at XP 2016 titled “Capturing Design (When You Really Have To)” that got me to revisit my own beliefs on the topic and to think about the current state of agile practices for documenting architecture.

One takeaway from Eoin’s talk is to consider the primary purpose of any design description: is it primarily to immediately communicate or to be a long-term record? If your primary goal is to communicate on the fly, then Eoin claims that your documentation should be short-lived, tailored to your audience, throwaway, and informal. On the other hand, design descriptions as records are likely to be long-lived, preserve information, be maintainable and organized, and be more formal (or well-defined).

Since we are charged to deliver value with our working software, it is often hard to pay attention to anything perceived as “slowing down” to describe our systems as we build them. But if we’re building software that is expected to live long (and prosper), it makes sense to invest in documenting some aspects of that system—if nothing more than to serve as breadcrumbs useful to those working on the system in the future, or to our future selves.

So what can we do to keep design descriptions useful, relevant, unambiguous, and up-to-date? Eoin argues that to be palatable to agile projects, design documentation should be minimal, useful, and significant. It should explain what is important about the design and why it is important, what design decisions we made (and when), and what the major system pieces are, along with their responsibilities and key interactions. Because of my Responsibility-Driven Design values and roots, I like that he considers system elements and their responsibilities to be the minimally useful description of a system. But to me this is just a starting point for getting an initial sense of what a system is and does and does not do. There certainly is room for more description, and more detail, when warranted.

And that gets us back to pragmatics. A design description isn’t the first thing that developers think of creating (not everyone is a visual thinker or a writer). I know I’m atypical because, early in my engineering career, I enjoyed spending three weeks writing a document on how my universal linker worked, how to extend it, and its limitations. I was nearly as happy producing that document as I was designing and implementing that linker. It pleased me that sustaining engineering found that document useful years after I had left for another job.

So for the rest of you who don’t find it natural to create documentation, here’s some advice from Eoin:

  • Do it sprint-by-sprint (or little by little). Don’t do it all at once.
  • Be aware of the tradeoff between fidelity and maintainability. The more precise a description is, the harder it will be to keep up to date.
  • Know the precision needed by your document’s users. If they need details, they need details. The more details, the more effort required to keep them up to date.
  • Consider linking design descriptions to your code (more on that in another blog post)
  • Create a “commons” where design descriptions are accessible and shared
  • Focus on the “gaps”—describe things that are poorly understood
  • Always ask what’s good enough. Don’t settle for less when more is needed or more when less is needed.

To this list I would add: if design descriptions are important to your company and your product or project, make it known. Explain why design documentation is important, respond to questions and challenges of that commitment, and then give people the support they need to create these kinds of descriptions. Let them perform experiments and build consensus around what is needed.

Be creative and incremental. One company I know made short video recordings of designers and architects giving short talks about why things worked the way they did. They were really short—five minutes or less. Another team created lightweight architecture documentation as they enhanced and made architectural improvements to the 300+ working applications they had to support. It was essential for them that there be more than just the code as the knowledge about these systems was getting lost and decaying over time. Rather than throw up their hands and give up, they just created enough design documentation using simple templates and only as new initiatives were started.

Find a willing documenter. Sometimes a new person (who is new to the system and to the company) is a good person to pair with another old hand to create a high-level description of the system as part of “getting their feet wet.” But don’t just stick them with the documentation. Have them write code and tests, too. From the start.

It’s Not That Simple: The Interplay Between Fast and Slow Thinking

Our system 2 thinking observes and constantly monitors our actions and thoughts (unless it is tired or compromised). It helps regulate our behavior, enabling us to:

  • Control impulses and stop doing something that we consider inappropriate (such as eating that second piece of pie) or do something, even if we don’t like it (such as not rising to the bait in a political discussion);
  • Manage our energy, emotions, attention and behavior in socially acceptable ways to achieve our goals;
  • Stay calm, focused, and alert; and
  • Deal with daily stresses such as noise, fatigue, challenging situations or tasks, or distractions.

System 2 thinking doesn’t merely look over your shoulder to “correct” illogical system 1 thinking. The interplay between these two systems is much more complex. The two systems have deep connections and influence each other.

Emotions generated by fast thinking affect logical system 2 thinking. And when fast thinking doesn’t find meaningful associations that fit the current context, it calls upon the logical system 2 for help interpreting what is going on.

I had an opportunity to experience this interplay sitting in a bar talking with Linda Rising at an agile conference. A loud angry voice coming from the lobby abruptly interrupted our conversation. Initially, I was frightened. Then annoyed. Then irritated. (My emotional system 1 thinking kicked in). The yelling didn’t stop. Did someone need help? Why was the yelling man so upset? (Again, my system 1 was trying to connect yelling with some reason to be yelling). But I couldn’t understand what the yelling was about. And then, seemingly out of the blue (system 2 thinking coming to the rescue) I remembered that there was also a conference in the hotel on Tourette’s syndrome. That loud angry voice now made sense. It was the voice of someone with Tourette’s.

It is too simple to say that our brain is either emotional and associative (fast thinking) or logical (slow thinking).

We don’t get to choose which form of thinking we use.

The two systems interact…and that’s where it gets really interesting. Being aware of how my brain works hasn’t stopped me from being swayed by my emotions. But it has made me more aware of thinking dynamics. I may not be able to stop my knee-jerk reactions, but I am aware now that my delayed reactions and thoughts might also be helpful, if I give them enough time to surface.

On Thinking

Daniel Kahneman, in Thinking, Fast and Slow, introduces two “systems” of thinking: fast, or system 1, and slow, or system 2. We don’t actually have two different parts to our brains—but we do have two distinct types of thinking mechanisms, each with its respective strengths and drawbacks.

Fast thinking happens automatically with little or no effort or sense of voluntary control. Quick, what’s your first reaction to this photo?

My immediate visceral reaction: “That dog is ugly! It looks crazed, angry…. scary.”

The dog is Peanut, winner of the 2014 Ugliest Dog contest. His wild hair, bulging eyes, and protruding teeth are at odds with his sweet personality. Holly Chandler of Greenville, North Carolina, Peanut’s owner, says he was seriously burned as a puppy, resulting in bald patches all over his body. Peanut is healthy now and nothing like my first impression.

In a nutshell, fast thinking is automatic (we do not have to work at it), impulsive (we can’t easily control it) and emotional (once I’ve told you a little about Peanut you probably feel sorry and sad and more positive towards Peanut, despite your initial reaction). Fast thinking draws on our amazing abilities to quickly form associations between related things. When it works well, we easily come to conclusions, draw out inferences, and confidently decide without breaking a sweat.

It’s sweet when your associative memory clicks along in high gear providing ready answers. But fast thinking can be subtly and easily biased. The variety of cognitive biases we have is astonishing.

In contrast, slow or system 2 thinking requires energy. We get tired when we do a lot of slow thinking. We use slow thinking to reason logically, compute, or, surprisingly, pay attention to someone speaking to us in a crowded, noisy room. Both writing code and writing prose are slow thinking tasks for me. Reasoning about the cause of a bug is too. The fast cycles of strict TDD help some with breaking up the necessary slow thinking into more manageable chunks. For me, deciding what the test should be and how to implement my design ideas are almost always slow thinking tasks. Writing code may or may not be, depending on how familiar I am with the programming language, tool set, and design context.

It is too simplistic to say that our brains are wired to be either emotional or logical. We don’t always get to choose which form of thinking we use. Even more interesting is how the two systems interact. Our system 2 constantly monitors our system 1 thoughts, and kicks in when it notices an inconsistency, saying, “Hmm…that doesn’t seem right,” perhaps leading to some conscious system 2 thinking and problem solving.

But as this two-star Amazon reviewer observes, Kahneman’s book doesn’t contain easy-to-digest pop psychology advice for how to be a better thinking being:

“Peace to all…The idea of system 1 and 2 for the brain was the twist. (System 1 is impulsive autopilot, system 2 reflective and analysis oriented)
Other than that, the book is filled with too many experiments that i felt didnt add practical value to me. I was hoping for practical solutions to help make those two systems work together in harmony…the book did NOT deliver that which was disappointing.”

Kahneman won’t fill you with sound-bite-sized advice on how to sharpen your thinking. So what can you do now that you are aware of how you think? Are there ways to counteract the bad things about these systems and amplify the good things? I think so.

Since reading Kahneman, I’ve observed how my environment affects my thinking and how my thinking demands shift from task to task. Sometimes fast thinking trips me up because my assumptions made it difficult for me to really see things as they were. I keep working to counteract my own cognitive biases and those of the people I work with. I’ve become more aware when others don’t react as I expect or aren’t making “logical” decisions. I’ve become more understanding and less impatient.

I am experimenting with ways to support and enhance system 2 thinking in others and myself. I find that I need to carve out a space where I can work without distraction and have enough time to get deeply engaged in a challenging system 2 type task. I am easily distracted by noise or visual stimulation. I also find the Pomodoro technique doesn’t help me get more work done. I don’t have a problem with focus. My problem is knowing when to take a break. And I don’t easily get back on task after a 5 minute break.

Your experience no doubt varies. Our brains don’t work the same way. Let’s continue exploring how our daily work practices enhance our thinking and where we need to take extra care and pay more attention.