Splitting a Domain Across Multiple Bounded Contexts

How designing for business opportunities and the rate of change may give you better contexts.





I have started collaborating with Mathias Verraes and writing on the topic of Bounded Contexts and strategic design. This blog post is the first in what I hope will be an ongoing discussion about design choices and their impacts on sustainable software systems. –Rebecca

Imagine a wholesaler of parts for agricultural machines. They’ve built a B2B webshop that resellers and machine servicing companies use to order. In their Ubiquitous Language, an Order represents this automated process. It enables customers to pick products, apply the right discounts, and push the order to Shipment.

Our wholesaler merges with a competitor: They’re an older player with a solid customer base and a huge catalog. They also have an ordering system, but it’s much more traditional: customers call, and an account manager enters the order, applies an arbitrary discount, and pushes it to Shipment.


The merged company still has a single Sales Subdomain, but it now has two Sales Bounded Contexts. They both have concepts like Order and Discount in their models, and these concepts have fundamentally the same meaning. The employees from both wholesalers agree on what an order or a discount is. But they have different processes for using them, they enter different information in the forms, and there are different business rules.

In the technical sense that difference is expressed in the object design, the methods, the workflows, the logic for applying discounts, the database representation, and some of the language. It runs deeper though: for a software designer to be productive in either Bounded Context, they’d have to understand the many distinctions between the two models. Bounded Contexts represent Understandability Boundaries.
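
To make that concrete, here is a minimal sketch (all names are hypothetical) of how each Context might model the same Order concept differently:

```python
# Hypothetical sketch: the same Order concept, modeled separately per Context.
from dataclasses import dataclass, field

# Webshop Sales Context: self-service orders, rule-based discounts.
@dataclass
class WebshopOrder:
    customer_id: str
    product_ids: list[str] = field(default_factory=list)

    def discount_rate(self) -> float:
        # A volume discount the system applies automatically.
        return 0.10 if len(self.product_ids) >= 50 else 0.0

# Phone Sales Context: an account manager enters the order and
# applies an arbitrary, negotiated discount.
@dataclass
class PhoneOrder:
    customer_id: str
    account_manager: str
    discount_rate: float = 0.0  # set per call, no fixed rule
```

Neither model is wrong; each fits its own Context’s process and rules.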

In a perfectly designed system, our ideal Bounded Contexts would usually align nicely with the boundaries of the subdomains. In reality though, Bounded Contexts follow the contours of the evolution of the system. Systems evolve along with the needs and opportunities of organisations. And unfortunately, needs and opportunities don’t often arise in ways that match our design sensibilities. We’re uncomfortable with code that could be made more consistent, if only we had the time to do so. We want to unify concepts and craft clean abstractions, because we think that is what we should do to create a well-designed system. But that might not be the better option.

Deliberate Design Choices

The example above trivially shows that a single subdomain may be represented by multiple Bounded Contexts.

Bounded Contexts may or may not align with app or service boundaries. Similarly, they may or may not align with domain boundaries. Domains live in the problem space. They are how an organisation perceives its areas of activity and expertise. Bounded Contexts are part of the solution space; they are deliberate design choices. As a systems designer, you choose these boundaries to manage the understandability of the system, by using different models to solve different aspects of the domain.

You might argue that in the wholesaler merger, the designers didn’t have a choice. It’s true that the engineers didn’t choose to merge the companies. And there will always be external triggers that put constraints on our designs. At this point however, the systems designers can make a case for:

  • merging the two Sales Contexts,
  • migrating one to the other,
  • building a new Sales Context to replace both,
  • postponing this effort,
  • or not doing anything and keeping the existing two Contexts.

These are design choices, even if ultimately the CEO picks from those options (because of the expected ROI for example). Perhaps, after considering the trade-offs, keeping the two Sales Contexts side by side is the best strategic design choice for now, as it allows the merged company to focus on new opportunities. After all, the existing systems do serve their purpose. The takeaway here is that having two Bounded Contexts for a single Subdomain can be a perfectly valid choice.

Twenty Commodities Traders

When there is no external trigger (as in the wholesaler merger), would you ever choose to split a single domain over multiple Contexts, deliberately?

One of us was brought in to consult for a small commodities trader. They were focused on a few specialty commodities, and consisted of 20 traders, some operational support roles, and around 10 developers. The lead engineer gave a tour of the system, which consisted of 20 Bounded Contexts for the single Trading Domain. There was one Context for each trader.

This seemed odd, and our instinct was to identify similarities, abstract them, and make them reusable. The developers were doing none of that. At best they were copy-pasting chunks of each other’s code. The lead engineer’s main concern was “Are we doing the optimal design for our situation?” They were worried they were in a big mess.

Were they?

Every trader had their own representation of a trade. There were even multiple representations of money throughout the company. The traders’ algorithms were different, although many were doing somewhat similar things. Each trader had a different dashboard. The developers used the same third party libraries, but when they shared their own code between each other, they made no attempt at unifying it. Instead, they copied code, and modified it over time as they saw fit. A lot of the work involved mathematical algorithms, more than typical business oriented IT.

It turned out that every trader had unique needs. They needed to move fast: they experimented with different algorithms, projections, and ways of looking at the market. The developers were serving the traders, and worked in close collaboration with them, constantly turning their ideas into new code. The traders were highly valued; they were the prima donnas in a high-stress, highly competitive environment. You couldn’t be a slow coder or a math slacker if you wanted to be part of this. There were no (Jira) tickets, no feature backlogs. It was the ultimate fast feedback loop between domain experts and programmers.

Things were changing very rapidly, every day. Finding the right abstractions would have taken a lot of coordination and would have slowed development down drastically. An attempt at unifying the code would have destroyed the company.

This design was not full of technical debt. It also wasn’t a legacy problem, where the design had accidentally grown this way over the years. The code was working. This lack of unifying abstractions was a deliberate design choice, fit for purpose, even if it seems like a radical choice at first. And all the developers and traders were happy with it.

These weren’t merely separate programs with a lot of repeated code either. This was a single domain, split over 20 Bounded Contexts, each with their own domain model, their own Ubiquitous Language, and their own rate of change. Coordinating the language and concepts in the models would have increased the friction in this high-speed environment. By deliberately choosing to design individual Contexts, they eliminated that friction.

Trade-offs

There are consequences of this design choice: when a developer wanted help with a problem, they had to bring the other developer up to speed. Each developer, when working in another Bounded Context, expected that they’d have to make a context switch. After all, their concepts were different, even though they shared similar terminology. Context switching has a cost, which you’ve probably experienced if you’ve worked on different projects throughout a day. But here, because the Contexts were clearly well-bounded, this didn’t cause many problems. And sometimes, by explaining a problem to another developer with a similar background (but a different Bounded Context), solutions became obvious.

Multiple Bounded Contexts in Ordinary IT Systems

The trading system is an extreme example, and you won’t come across many environments where a single Subdomain with 20 Bounded Contexts would make sense. But there are many common situations where you should consider splitting a domain. If in your company, the rules about pricing for individual and corporate customers are different, perhaps efforts to unify these rules in a single domain model will cost more than it is worth. Or in a payroll system, where the rules and processes for salaried and hourly employees are different, you might be better off splitting this domain.

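As a sketch of what that payroll split might look like (names and rules invented for illustration), each Context keeps only the model and rules it needs:

```python
# Hypothetical payroll split: two Contexts, two models, because the
# rules differ and change at different rates.
from dataclasses import dataclass

# Salaried Payroll Context
@dataclass
class SalariedEmployee:
    employee_id: str
    annual_salary: float

    def monthly_pay(self) -> float:
        return self.annual_salary / 12

# Hourly Payroll Context: overtime rules live only here.
@dataclass
class HourlyEmployee:
    employee_id: str
    hourly_rate: float

    def weekly_pay(self, hours: float) -> float:
        overtime = max(0.0, hours - 40.0)
        regular = hours - overtime
        return regular * self.hourly_rate + overtime * self.hourly_rate * 1.5
```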

Conclusion

The question is not: Can I unify this? Of course you can. But should you? Perhaps not. The right Context boundaries follow the contours of the business. Different areas change at different times and at different speeds. And over time, what appears similar may diverge in surprising and unexpectedly productive ways (if given the opportunity). Squeezing concepts into a single model is constraining. You’re complicating your model by making it serve two distinct business needs, and taking on a continuing coordination cost. It’s a hidden dependency debt.

There are two heuristics we can derive here:

  1. Bounded Contexts shouldn’t serve the designer’s sensibilities and need for perfection, but enable business opportunities.
  2. The Rate of Change Heuristic: Consider organizing Bounded Contexts so they manage related concepts that change at the same pace.

Written by Mathias Verraes @mathiasverraes and Rebecca Wirfs-Brock @rebeccawb.

Design (Un)Certainty and Decision Trees

Billy Vaughn Koen, in Discussion of the Method: Conducting the Engineer’s Approach to Problem Solving, says, “the absolute value of a heuristic is not established by conflict, but depends upon its usefulness in a specific context.”

Heuristics often compete and conflict with each other. Frequently I use examples I find in presentations or blog posts to illustrate this point.

For example, I took this photo of a presentation by Michiel Overeem at DDD Europe where he surveyed various approaches people employed to update event stores and their event schemas.

Five different alternatives for updating event stores


Event stores are often used (though not exclusively) in event-sourced architectures, where instead of storing data in traditional databases, changes to application state are stored as a sequence of events in event stores. For an introduction to event-sourced architectures, see Martin Fowler’s article. An event store is composed of a collection of uniquely identified event streams, which contain collections of events. Events are typed, uniquely identified, and contain a set of attribute/value pairs.
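
A minimal sketch of those structures (illustrative names, not any particular product’s API):

```python
# Illustrative sketch of the structures just described: a store holds
# uniquely identified streams; streams hold typed, uniquely identified
# events carrying attribute/value pairs.
import uuid
from dataclasses import dataclass, field

@dataclass
class Event:
    event_type: str
    attributes: dict
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))

@dataclass
class EventStream:
    stream_id: str
    events: list = field(default_factory=list)

    def append(self, event: Event) -> None:
        self.events.append(event)

class EventStore:
    def __init__(self) -> None:
        self._streams: dict = {}

    def stream(self, stream_id: str) -> EventStream:
        # Create the stream on first use.
        return self._streams.setdefault(stream_id, EventStream(stream_id))
```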

Michiel found five different ways designers addressed schema updates, each with different tradeoffs and constraints. The numbers across the top of the slide indicate the number of times each approach was used across the 24 people surveyed. Several used more than one approach (the counts at the top of the slide add up to more than 24). Simply because you successfully updated your event store using one approach doesn’t mean you must do it the same way the next time. It is up to us as designers to sort things out and decide what to do next based on the current context. The nature of the schema change, the amount of data to be updated, the sensitivity of that data, and the amount of traffic that runs through apps that use the data all play into deciding which approach or approaches to take.

The “Weak schema” approach allows for additional data to be passed in an event record. The “upcaster” approach transforms incoming event data into the new format. The “copy transform” approach makes a copy and then updates that copy. Michiel found that these were the most common. “Versioned events” and “In-Place” updates were infrequently applied.
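
As an illustration, here is a hypothetical upcaster (the event and attribute names are invented). Stored events keep the old format; readers always receive the new one:

```python
# Hypothetical upcaster: transform an old-format event as it is read,
# leaving the stored representation untouched.
def upcast_order_placed_v1_to_v2(event: dict) -> dict:
    upcasted = dict(event)  # work on a copy
    # v2 renamed "customer" to "customer_id" and added a default channel.
    upcasted["customer_id"] = upcasted.pop("customer")
    upcasted.setdefault("channel", "web")
    return upcasted

old_event = {"customer": "c-42", "total": 99.50}
print(upcast_order_placed_v1_to_v2(old_event))
# {'total': 99.5, 'customer_id': 'c-42', 'channel': 'web'}
```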

I always want to know more about what drives designers to choose a particular heuristic over another. So I was happy to read the research paper Michiel and colleagues wrote for SANER 2017 (the International Conference on Software Analysis, Evolution and Reengineering), appropriately titled The Dark Side of Event Sourcing: Managing Data Conversion. Their paper, which gives a fuller treatment of options for updating event stores and their data schemas, can be found here. In the paper they characterized updates as being either basic (simple) or complex. Updates could be made to events, event streams, or the event store itself.

Basic event updates included adding or deleting an attribute, or changing the name or value of an attribute. Complex event updates included merging or splitting attributes.

Basic stream update operations were adding, deleting, or renaming events. Complex stream updates involved merging or splitting events, or moving attributes between events.

Basic store updates were adding, deleting or renaming a stream. Complex store updates were merging and splitting streams or moving an event from one stream to another.

An example of a basic event update might be deleting a discountCode attribute. An example of a complex event update might be splitting an address attribute into various subparts (street, number, etc.).
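
A hypothetical copy-transform for that complex update (attribute names invented): each old event is copied into a new store with the address split into subparts:

```python
# Hypothetical copy-transform step: copy each event, splitting the
# single "address" attribute into subparts in the transformed copy.
def split_address(event: dict) -> dict:
    transformed = {k: v for k, v in event.items() if k != "address"}
    street, number = event["address"].rsplit(" ", 1)
    transformed["street"] = street
    transformed["number"] = number
    return transformed

old_store = [{"event_type": "CustomerRegistered", "address": "Main Street 7"}]
new_store = [split_address(e) for e in old_store]
print(new_store)
# [{'event_type': 'CustomerRegistered', 'street': 'Main Street', 'number': '7'}]
```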

Importantly, their paper offered a decision framework for selecting an appropriate update strategy which walks through a set of decisions leading to approaches for updating the data, the applications that use that data, or both. I’ve recreated the decision tree they used to summarize their framework below:

Decision Tree for Upgrading an Event Store System from The Dark Side of Event Sourcing: Managing Data Conversion


The authors also tested out their decision framework with three experts. Those experts noted that in their experiences they found complex schema updates to be rare. Hmm, does that mean that half of the decision tree isn’t very useful?

They also offered that instead of updating event schemas they could use compensating techniques to accomplish similar results. For example, they might write code to create a new projection used by queries which contained an event attribute split into subparts—no need to update the event itself. Should the left half of their decision tree be expanded to include other compensating techniques (such as rewriting projection code logic)? Perhaps.
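
A sketch of that compensating technique (event and field names invented): the stored events keep their single address attribute, and only the read-side projection splits it:

```python
# Hypothetical compensating projection: derive the split fields on the
# read side instead of rewriting stored events.
def project_customer_view(events: list) -> dict:
    view = {}
    for event in events:
        if event["event_type"] == "CustomerRegistered":
            street, number = event["address"].rsplit(" ", 1)
            view["street"] = street
            view["number"] = number
    return view

events = [{"event_type": "CustomerRegistered", "address": "Main Street 7"}]
print(project_customer_view(events))  # {'street': 'Main Street', 'number': '7'}
```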

Also, the experts preferred copy or in-place transformations instead of approaches that involved writing and maintaining lots of conversion code. I wonder: did they do this even in the case of “basic” updates? If so, what led them to make this decision?

Decision trees are a good way to capture design options. But they can also be deceptive about their coverage of the problem/solution space. A decision table is considered balanced or complete if it includes every possible combination of input variables. We programmers are used to writing precise, balanced decision structures in our code based on a complete set of parameters. But the “variables” that go into deciding which event schema update approach to take aren’t so well-defined.

To me, initially this decision framework seemed comprehensive. But on further reflection, I believe it could benefit from reflecting the heuristics used by more designers and maintainers of production event-sourced systems. While this framework was a good first cut, the decision tree for schema updates needs more details (and rework) to capture more factors that go into schema update design. I don’t know of an easy way to capture these heuristics except through interviews, conversations, and observing what people actually do (rather than what they say they prefer).

For example, none of the experts preferred event record versioning, and yet, if it were done carefully, maybe event versions would require less maintenance. Is there ever a good time for “converting” an old event version to a newer one? And, if you need to delete an attribute on an event because it is no longer available or relevant, what are the implications of deprecating, rather than deleting it? Should it always be the event consumers’ responsibility to infer values for missing (deleted) attributes? Or is it ever advantageous to create and pass along default values for deleted attributes?

This led me to consider the myriad heuristics that go into making any data schema change (not just for event records): Under what conditions is it better to do a quick, cheap, easy-to-implement update to a schema—for example, packing on additional attributes and having a policy of never deleting any?

People in the database world have written much about gritty schema update approaches. While event stores are a relatively new technology, heuristics for data migration are not. But in order to benefit from these insights, we need to reach back in time and recover and repurpose those heuristics from older technologies for event stores. Those who build production event-sourced architectures have likely done this, or else they’ve had to learn some practical data schema evolution heuristics through experience (and trial and error). As Billy Vaughn Koen states, “If this context changes, the heuristic may become uninteresting and disappear from view awaiting, perhaps, a change of fortune in the future.” He wasn’t talking about software architecture and design heuristics per se, but he could have been.

Reconciling New Design Approaches with What You Already Know

Last week at the deliver:Agile conference in Nashville I attended a talk by Autumn Crossman explaining the benefits of functional programming to us old timey object-oriented designers. I also attended the session led by Declan Whelan and Sean Button, on “Overcoming dys-functional programming. Leverage & transcend years of OO know-how with FP.”

The implication in both talks was that although objects have strengths, they are often abused and not powerful enough for some of today’s problems. And that now is an opportune time for us OO designers to make some changes to our preferred ways of working.

Yet I find myself asking: when should I step away from what I’ve been doing and know how to do well and step into a totally new design approach?

No doubt, functional programming is becoming more popular. But objects aren’t going away, either.

There are some benefits of pure functional solutions to certain design problems. Pure functional programming solutions don’t have side effects. You make stream-processing steps easily composable by designing little, single-purpose functions operating over immutable data. You are still transforming data; it just isn’t being mutated in place. In OO terms, you aren’t changing the internal state of objects, you are creating new objects with different internal state. By using map-reduce you can avoid loop/iteration/end-condition programming errors (letting powerful functions handle those details). No need to define variables and counters. This is already familiar to Smalltalk programmers via do:, collect:, select:, and inject:into: methods, which operate on collections (Ruby has its equivalents, too). And by operating on immutable data, multi-threading and parallelization get easier.
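
A small Python sketch of that style, using only the standard library:

```python
# Little single-purpose steps composed over immutable data; nothing is
# mutated in place, each step yields new data.
from functools import reduce

prices = (19.99, 5.00, 42.50, 3.25)                  # an immutable tuple

discounted = tuple(p * 0.9 for p in prices)          # map/collect: new tuple
large = tuple(p for p in discounted if p > 4.0)      # filter/select
total = reduce(lambda acc, p: acc + p, large, 0.0)   # reduce/inject:into:

print(total)  # no counters, no loop end conditions to get wrong
```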

I get that.

But I can create immutable data using OO technology, too. Ever hear of the Value Object pattern? Long ago I learned to create designs that included both stateful and immutable objects, and to understand when each is appropriate. I discovered and tweaked my heuristics for when it made sense to stream over immutable data and when to modify data in place. But in complex systems (or when you are new to libraries) it can be difficult to suss out what others are doing (or, in the case of libraries, what they are forcing you to do).
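
For instance, a minimal Value Object sketch in Python (a simplified Money type, invented for illustration):

```python
# A minimal Value Object: immutable, and compared by value, not identity.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen makes instances immutable
class Money:
    amount: int   # in minor units (e.g., cents)
    currency: str

    def add(self, other: "Money") -> "Money":
        if self.currency != other.currency:
            raise ValueError("currency mismatch")
        return Money(self.amount + other.amount, self.currency)  # new object

assert Money(500, "USD").add(Money(250, "USD")) == Money(750, "USD")
```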

But that’s not the point, really. The point is, once you understand how to use any technique, as you gain proficiency, you learn when and where to exploit it.

But is pure functional programming really, finally the panacea we’ve all been looking for? Or is it just another powerful tool in our toolkit? How powerful is it? Where is it best applied?

I’m still working through my answers to these questions. My answers will most likely differ from yours (after all, your design context and experience is different). That’s OK.

Whenever we encounter new approaches we need to reconcile them with our current preferred ways of designing. We may find ourselves going against the grain of popular trends or going with the flow. Whatever. We shouldn’t be afraid of trying something new.

Yet we also shouldn’t too easily discount and discard approaches that have worked in the past (and that still work under many conditions). Or, worse yet, we shouldn’t feel anxious that the expertise we’ve acquired is dated or that our expertise can’t be transferred to new technologies and design approaches. We can learn. We can adapt. And, yet, we don’t have to throw out everything we know in order to become proficient in other design approaches. But we do have to have an open mind.

We also shouldn’t be seduced by promises of “silver bullets.” Be aware that evangelists, enthusiasts, and entrepreneurs frequently oversell the utility of technologies. To get us to adopt something new, they often encourage us to discard what has worked for us in the past.

While I like some aspects of functional programming, I see the value in multi-paradigm programming languages. I’m not a purist. Recently I’ve written some machine learning algorithms in Python for some Coursera courses I’ve taken. During that exercise, I rediscovered that powerful libraries make up for the shortcomings and quirks of any programming language. I still think Python has its fair share of quirks.

And while some consider Python to support functional programming, it isn’t a pure functional language. It takes that middle ground, as one Stack Overflow writer observes:

“And it should be noted that the “FP-ness” of a language isn’t binary, it’s on a continuum. Python doesn’t have built in support for efficient manipulation of immutable structures as far as I know. That’s one large knock against it, as immutability can be considered a strong aspect of FP. It also doesn’t support tail-call optimization, which can be a problem when dealing with recursive solutions. As you mentioned though, it does have first-class functions, and built-in support for some idioms, such as map/comprehensions/generator expressions, reduce and lazy processing of data.”

Python’s a multi-paradigm language with incredible support for matrix operations and a wealth of open source machine learning libraries.

I haven’t had an opportunity to dial up the knobs and solve larger design problems in a pure functional style. I hope to do so soon. My current thinking is that a pure functional style of programming works well for streaming over large volumes of data. I’m not sure how it helps support quirky, ever-changing business rules and lots of behavioral variations based on system state. Reconciling my “go to” design approaches with new ways of working takes some mental lifting and initial discomfort. But when I do take the time to try new design approaches, I have no doubt that I’ll find some new heuristics, polish existing ones, and learn more about design in the process.

What we say versus what we do

I’ve been hunting design heuristics for a couple of years. I’ve had conversations with designers in order to draw out their “go to” heuristics. I’ve joined design and programming sessions with experienced designers and captured on-the-fly what we were doing. My goal is to learn ways to effectively find heuristics in the wild, distill them, and then share them broadly.

But lately, I’ve been thinking about how to deal with this puzzle: What people say they do isn’t what they really do.

Let me give you an example. I joined the Cucumber folks last summer for several remote mobbing sessions. One heuristic they shared with me was this:

Heuristic: the person who has the most to learn (or knows the least about how to solve the problem) should take on the role of driver.

In “classic” mob programming as initially described, the person who is the driver and has his or her hands on the keyboard follows the guidance of navigators—other mobbers who ostensibly guide the driver on what to do in order to make progress.

“In this “Driver/Navigator” pattern, the Navigator is doing the thinking about the direction we want to go, and then verbally describes and discusses the next steps of what the code must do. The Driver is translating the spoken English into code. In other words, all code written goes from the brain and mouth of the Navigator through the ears and hands of the Driver into the computer.”

What I observed the Cucumber mob doing was somewhat different. Sometimes the driver had an initial design idea and was keen to try it out. In this case, they often actively navigated and drove at the same time. Occasionally others would comment and offer advice. But mostly they just watched the design and implementation unfold. Sometimes that eager driver asked the others, “Should we try this now?” But instead of waiting an uncomfortable length of time for them to chime in, the driver often continued on without any discussion. And I don’t think that driver was asking a rhetorical question. They wanted feedback if someone had any.

At other times the driver would stop to collect their thoughts and force a discussion. In this case the driver became uncomfortable when they didn’t get enough feedback. And sometimes they took themselves out of the driver’s role, asking someone else to fill in. In short, while I observed that the driver was often in control of the wheel (and forward progress), at the same time, they didn’t overly dominate. Drivers rotated. Everyone got their turn. But how these switches happened was very dynamic.

In all fairness, the mob programming website did touch on drivers and their participation in discussions:

“The main work is Navigators “thinking, describing, discussing, and steering” what we are designing/developing. The coding done by the Driver is simply the mechanics of getting actual code into the computer. The Driver is also often involved in the discussions, but her main job is to translate the ideas into code.”

While the main job of the driver may be “mechanics,” the small fast-moving Cucumber team didn’t insist that getting the code into the computer be the driver’s main function. Now mind you, I suspect being remote affected their style of communication. They also knew each other well and knew each other’s common design approaches and preferences.

So why did the Cucumber mob behave this way? Did they believe one way but consciously act in another way? Did they intentionally lie about their heuristics? Or were they deceiving themselves? Are people wired to explain what they do through some kind of distortion field? How often do people believe one thing (and hold it up as an ideal) but then choose alternative heuristics? If so, is this OK?

I’m not sure the team was aware that their ways of driving/navigating deviated from the conventional driver/navigator roles until I shared my observations with them. I suspect that when they first started mobbing they were more rigorous about following the “rules” for these roles. Over time they found their own ways of working. And so the heuristics they collectively use to decide what to do, what design approach to try next, and how they interact with each other are much more fluid and nuanced than the simple descriptions of drivers and navigators on the mob programming website. They don’t exactly go “by the book.” And I suspect their heuristics for how they work together are still evolving.

So how should I as a heuristics hunter reconcile my simple goal of distilling essential heuristics with the messy realities I find on the ground?

Should I plunge into a concerted effort to sort out and formulate more nuanced heuristics? The short answer is, yes. While I want to find and record both general and more particular heuristics, I’m not inclined to want to sort them out into tidy, neat categories. After all, as Billy Vaughn Koen says, there is more than one way to solve any design problem and more than one heuristic that can work. By recording these nuances, I hope to get richer insights into the different conditions and cases and situations that lead to choosing them.

This still leaves me with one nagging question: How can I reconcile what people say they do and believe with what they actually do? My (current) approach is that as I distill heuristics I also describe the context where I find them. Should it bother me that designers don’t do as they say they do all the time? Probably not. After all, we’re wonderfully creative problem solvers. And there are always options.

Nothing ever goes exactly by the book

In Designing Object-Oriented Software, we used the design for an ATM (Automated Teller Machine) to illustrate how to design object-oriented software. We worked through a simplistic design for an ATM and produced a CRC (Class-Responsibility-Collaborator) Card design.

Several years after our book was published I received an email from an instructor at a company. Although he liked our book, he felt he was missing something important about Responsibility-Driven Design. No matter how hard he tried, his students never exactly reproduced our design! And worse yet, in his opinion, all their designs were different.

I was astonished that he expected to teach principles, practices, and design thinking and then magically his students would produce identical designs. Our minds don’t work alike, so why should we produce identical designs?

A couple of years ago I received a gift of a Blue Apron subscription. For those who are not familiar with Blue Apron, you select the meals you want for a week, then along with the recipes they ship you a box of ingredients. All you add is your own salt and olive oil, cooking utensils, stove, and voila—you make a tasty dish.

The first meal I cooked was Za’atar-Roasted Broccoli Salad. The picture of the dish on the recipe card looked like this:
Recipe
And here’s a photo of the dish as I prepared it:
The dish I cooked
Not bad! But I didn’t blindly follow the recipe. I used my brain, my eyes, and my cooking sense, as well as past experiences, when making that dish.

If you look closely at the recipe card below, you’ll see that the instructions are not equally precise. While the eggs should be boiled for exactly 9 minutes, there’s more tolerance in the time to roast the broccoli. Just exactly how much is a drizzle? And there are so many points during the recipe when you are asked to add salt! Over time (after following several recipes’ instructions too closely) I learned that if you added salt every time they advised you to, the meals would be too salty. Blue Apron has since adjusted their recipes to not include so many salt instructions.
Instructions

There is no substitute for learning from direct experience and observation. A beginning cook who expects to follow instructions exactly and get a well-made dish is bound to be frustrated.

And people who expect to follow someone else’s software design heuristics and end up with an optimal solution to their specific design problem will invariably be disappointed.

Sure, you may initially learn some design heuristics from a book or by reading code or from someone who is more senior or stack overflow or wherever. But never expect things to go exactly “by the book.” There are many details that experts never share that you’ll have to fill in. And don’t get frustrated when experts change their minds or design approaches or disagree (they do all the time). But don’t be silent, either, if you are puzzled or curious. Ask them about their thinking. They may have encountered an edge case or an exception to some “general” heuristic. Or there may have been more to the design problem than they had initially thought. By having that conversation you just might learn something from each other.

Writing, Remembering, and Sharing Design Heuristics

I’ve been experimenting with simple ways to record software design heuristics. I want to find ways to vividly bring to mind the design heuristics I use or learn about from others. Ideally, any written heuristic should be shorter than a software pattern and somewhat more detailed and informative than a single pithy phrase.

There are three reasons it is important to physically write heuristics instead of just talking about them or clipping online sources into a note taking app:

1. Writing helps us remember the important bits
For a good overview of the benefits of notetaking, read this lifehack article by Dustin Wax. If nothing else this should inspire you to jot down and/or draw by hand the key bits of the next talk or lecture you listen to. What we write we remember.

2. Writing in longhand helps us process our experiences
Research shows that the act of writing in longhand is quite different than typing. This Medical Daily article by Lizette Borreli summarizes research into the memory boosting benefits of writing longhand:

“…writing by hand allows the brain to receive feedback from a person’s motor actions, and this specific feedback is different than those received when touching and typing on a keyboard… Overall, it seems those who type their notes may potentially be at risk for ‘mindless processing.’ The old fashioned note taking method of pen and paper boosts memory and the ability to understand concepts and facts.”

3. Written heuristics can be shared
Once you’ve written some tried and true heuristics in your own words, you can share them with others. Even better, use these heuristics to stimulate a deeper discussion into the design heuristic space you are exploring. Don’t shy away from discussions about nuances, counter-examples, edge cases, and competing heuristics (alternative ways to tackle a specific design problem). You may uncover a wealth of new heuristics to ponder and open up your mind to new ways of solving familiar problems.

If the person you are conversing with doesn’t know much about the topic area of your heuristics, the questions they ask will likely lead you to clarify your thinking (or at least improve your explanations). They won’t get what you are saying if you speak too much in shorthand. If they know more about the topic, you may have a lively conversation about heuristics and lessons they’ve learned over time. They may validate your heuristics as well as give you some new ideas.

A Simple Idea: QHE Cards
I’ve been playing around with using index cards as a means to quickly capture a heuristic. I structure a heuristic in three parts: a question, the answer (which can then be polished into a couple-sentence heuristic), and an example or two to help me remember. I call them QHE (Question-Heuristic-Example) or “Q-Hee” cards, for lack of a better name. Using index cards to record design heuristics is inspired by CRC (Class-Responsibility-Collaborators) cards, invented by Ward Cunningham and Kent Beck and popularized in my first design book. I like index cards.

Here are two QHE cards describing heuristics found in a conversation with Mathias Verraes about designing event records:
Annotated QHE cards

QHE cards are simple to write. I don’t worry about precision or formality. With a little bit of effort I can turn a question and answer into a short heuristic statement. For example, here’s my first cut at a heuristic for what information to include in an event record:

Heuristic: Only include information necessary to “replay” an event and achieve the same results. Don’t include sensitive information or extra information simply because you think it might be useful.

An example or two to jog my memory is essential. And just like CRC cards, they can be too terse for others to understand (or to remember, if I wait too long). To make a heuristic memorable, I need to actively integrate that heuristic into my design heuristic gestalt. This is especially true if it is about a design topic I am not that familiar with.

To create a stronger memory and deeper understanding I can write more about the heuristic, sketch out a more detailed example, write some code, and/or draw a diagram…whatever it takes to go a bit deeper.

Heuristic gists are especially useful when you want to share heuristics with others. They may need a bit more context to understand what you mean. I like to write them in a form similar to pattern gists. Here’s an example of a pattern gist from Fearless Change Patterns by Mary Lynn Manns and Linda Rising:
A pattern gist

And here’s how I rewrote the QHE card for deciding when to generate different events from a process as a gist:

Multiple Events for a Single Process
You need to balance passing along information needed by downstream processes in a single business event with creating multiple event records, each designed to convey specific information needed by a specific downstream process.

Summary of Problem
How do you know how many events to generate for a single business process?

Summary of Solution
If different processes downstream react differently, generate different events. For example, handling a “rental car return” request might generate two events and event records: “car returned” and “mileage recorded.” Even though the mileage is recorded at the time a car is returned, mileage could be recorded at any other time as well. It is a cleaner design to generate two events, rather than cram information into a single, overloaded “car returned” event.
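
A sketch of that solution in Python (event shapes invented for illustration):

```python
# One business process, two event records: downstream processes can
# react to "car returned" and "mileage recorded" independently.
from datetime import datetime, timezone

def handle_rental_return(car_id: str, mileage: int) -> list:
    now = datetime.now(timezone.utc).isoformat()
    return [
        {"event_type": "CarReturned", "car_id": car_id, "at": now},
        {"event_type": "MileageRecorded", "car_id": car_id,
         "mileage": mileage, "at": now},
    ]

for event in handle_rental_return("car-17", 48210):
    print(event["event_type"])  # CarReturned, then MileageRecorded
```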

The act of writing a bit more detail in a gist often leads to further questions which lead to even more design heuristics. For example, these questions quickly come to mind: When is it OK to pass along extra information in an event? When is it not OK? What kinds of information should never be passed along in an event record? What happens if a new business process needs slightly different information?

And that’s the point. There are a myriad of details to work out for any real design. And many more heuristics for solving design problems and learning to live with their consequences. By writing down your heuristics, you’ll remember what you were thinking at the time. Note to future self: That effort just might be worth it!

What do typical design heuristics look like?

In my previous post I introduced the topic of design heuristics. Billy Vaughn Koen, in Discussion of the Method: Conducting the Engineer’s Approach to Problem Solving, defines a heuristic as, “anything that provides a plausible aid or direction in the solution of a problem but is in the final analysis unjustified, incapable of justification, and potentially fallible.”

Common phrases that mean roughly the same thing as heuristic in English include “rule of thumb,” “practical method,” “useful shortcut,” or “approximation.” I don’t think approximation or shortcut convey the essence of what makes a heuristic handy or helpful. Wikipedia defines heuristic as, “any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals.”

Heuristics describe practical actions or attitudes we take in order to make design progress. Billy gives several examples of general engineering heuristics in his book:
Billy’s general engineering heuristics
Reading these heuristics you might think, “Ah, heuristics are merely simple statements of what to do.”

Although heuristics can be boiled down into pithy advice, e.g. “Do this…” or “Don’t do that…,” I find that there usually is a lot more behind any simple phrase. Ask any design expert about how to tackle a specific design problem and she’ll add caveats and constraints and assumptions. Her heuristics might sound more like, “Do this when…and… unless … and here’s how I might …”. Or, “Try this first…and then…until…”.

Which leads me to observe that written design patterns–specifically software design and architecture patterns–are a handy form for sharing meaty, complex, nuanced design heuristics with others.
Patterns are handy

Consider this summary of the pattern, Do a Mock Installation, from Object-Oriented Re-engineering Patterns that illustrates some of that extra useful stuff that can be explained by a pattern.

  • Pattern: Do a Mock Installation
  • Intent: Check whether you have the necessary artifacts available by installing the system and recompiling the code.
  • Problem: How can you be sure that you will be able to (re)build the system?
  • Difficulties:
    • The system is new to you, so you do not know which files you need.
    • The system may depend on libraries, frameworks, and patches, and you’re uncertain you have the right versions available.
    • The system is large and complex, and the exact configuration under which the system is supposed to run is unclear.
    • Maintainers may answer these questions, or you may find answers in documentation, but you still must verify whether this information is complete.
  • Solving this is feasible because:
    • You have access to the source code and the necessary build tools
    • You have the ability to reinstall the system in an environment that is similar to that of the running system
    • Maybe the system includes some kind of self-test
  • Solution: Try to install and build the system in a clean environment taking a limited amount of time (at most one day).
  • After the build, prepare a report:
    • Version numbers of libraries, frameworks, and patches used
    • Dependencies between the infrastructure
    • Problems you encountered and how you tried to solve them
    • Suggestions for improvement
    • Assessment of the situation, including possible solutions/workarounds (if the build failed)
  • Tradeoffs:
    • Pros:
      • It is an essential prerequisite
      • Demands precision
      • Increases your credibility
    • Cons:
      • Tedious activity
      • No certainty
    • Difficulties:
      • Easy to get carried away

When written well, patterns include just the right amount of extra advice. Advice that a competent designer has gained through experience and reflections. Patterns can even warn us about potential roadblocks or advise us on what to try next if the pattern’s solution isn’t a good fit for our specific problem.

But in addition to those larger-grained design problems typical of patterns, there are many smaller design decisions and gnarly design details we have to deal with. And as experienced designers, we too, have heuristics for addressing them rattling around in our brains. Wouldn’t it be great if we could find easy ways (easier than writing patterns) to record and share our heuristics with each other?

Here’s a photo of some heuristic snippets captured in a one-day workshop I held at DDD Europe:
Example heuristics
I’ve been experimenting with simple ways to record, share, and communicate design heuristics that require far less effort to create than full-fledged patterns yet convey more than can be shared on a sticky note. Somewhere between full-fledged written patterns and pithy phrases is a sweet spot I’m striving for. That’s the topic of my next post.

Growing Your Personal Design Heuristics Toolkit

Billy Vaughn Koen’s Discussion of the Method has had a profound influence on my thoughts about design. This book has inspired me to explore the role that software patterns play alongside other design heuristics and led me to muse about design uncertainty. I’ve been inspired to experiment with simple ways to capture design heuristics and I’ve spent time heuristics hunting with other designers.

In this blog post I’ll introduce you to Billy Vaughn Koen’s ideas about problem solving and heuristics. Billy defines a heuristic as

“anything that provides a plausible aid or direction in the solution of a problem but is in the final analysis unjustified, incapable of justification, and potentially fallible.”

There are no guarantees. Heuristics can fail. The tenacity that defines an engineer is when she steps back, regroups, and finds a different heuristic to try next. Another important thing to consider is that there are always competing heuristics to choose from. There simply isn’t a single right way to solve any complex design problem. Based on our assessment of the situation, our judgment, and the fit of the heuristics we know to the problem at hand, we decide what to do next. And when designers disagree, it may be because they are focusing on different aspects of the problem, or that they have a different collection of cherished heuristics in their toolkit (whether or not they can articulate them).

Courtesy of xkcd.com https://xkcd.com/309/

Billy describes three different types of heuristics:

    1. Heuristics we use to solve a specific problem;
    2. Heuristics that guide our use of other heuristics (meta-heuristics, if you will); and
    3. Heuristics that determine our attitude and behavior towards design or the world and the way we work.

Let’s take a closer look at each type.

Heuristics we apply to solve a problem. For example, in his book, Billy gives several examples of general engineering heuristics: Heuristic: Always be prepared to give an answer (and when you need to give an answer, use the best approach you know for figuring out that answer based on the time constraints and resources you have at hand). As an example, imagine the approach you might take for determining the number of Ping-Pong balls that fill up a room if you had to give an answer in 2 seconds. What if you had 2 minutes, an hour, or a week to come up with an answer? Would your approaches differ?

Heuristic: Use feedback to stabilize your design. Billy Vaughn Koen is a Professor Emeritus of Nuclear Physics as well as a philosopher of engineering. He taught people who designed nuclear reactors. I’m glad he taught them about the importance of feedback loops to verify design assumptions. And that is one reason why frequent delivery of software, retrospectives, and reviews are valued activities in agile development.

And oh yes, one last heuristic: Always give yourself a chance to retreat. Backing out of a half-baked or unworkable design is important if you don’t want to paint yourself into a corner.

As software designers we have many heuristics in our toolkit. If we know about design patterns, we use them. If we’ve built high-performance systems, we may have a wealth of heuristics for tuning cache performance. If we are familiar with Domain Driven Design concepts, our preferred way to structure a complex system is by identifying bounded contexts. For those unfamiliar with Domain Driven Design, a bounded context might be considered to be roughly equivalent to a subsystem or subdomain. The distinguishing characteristic of a bounded context is that within it there is a consistent, single, unambiguous meaning for business concepts and events. And we may visualize event flows and relationships between bounded contexts by drawing a context map.

Heuristics that guide our use of other heuristics. These heuristics tell us what to try next. The best examples I know of these heuristics are in Object-Oriented Reengineering Patterns. Each chapter in this book is a small pattern language, describing a rough sequence of actions for solving a particular system-reengineering goal. For example, chapter 3 is a pattern language for making “first contact” with a legacy system. Based on what you have just learned, the language spells out options for what to explore next.

First contact design pattern language

Heuristics that determine our attitude and behavior. Agile software designers value frequent feedback. That’s the premise behind frequently deploying working software in order to have its functionality exercised by actual customers. And one heuristic for reducing the risk of downtime when frequently deploying software is to use a blue/green deployment environment.

While we may have learned design patterns (or other published patterns, for that matter) by reading about them, most nitty-gritty design heuristics come from on-the-job experience. We learn a lot by living with the myriad design choices we make (and revisit) as our system’s functionality grows. We learn even more as we support a system over its lifetime. Our heuristics are shaped not only by the kinds of software systems we have designed, built, and maintained but also by the design culture at our places of work.

How do we improve as designers? Through designing and building software. By reflecting on that experience. And by polishing and refining our heuristics and sharing our cherished heuristics with each other.

What’s going on here?

Question: What is more exasperating than reading design documentation that doesn’t synch up with the code?
One answer: Writing such useless design documentation.
Another answer: Writing documents that don’t stand a chance of being aligned with the code.
My answer: Reading design documentation that is at the wrong level of abstraction or detail to help me do the task at hand.

Courtesy xkcd https://xkcd.com/license.html


A few years ago I worked with someone, who, when asked for a high level overview of some complex bits of code we were going to refactor, produced a detailed Visio diagram that went on for 40+ pages! In that document were statements directly lifted from his code. The intent was to represent the control logic and branching structure of some very complicated algorithms (he copied conditional statements into decision diamonds and copied code that performed steps of the algorithm directly into action blocks). To him, the exercise of creating this documentation had really clarified his design. I was amazed he went to such an effort. To me, the control structure was evident by simply reading the code. Moreover, what I was looking for, and obviously didn’t communicate very well, was that I would appreciate some discussion at a higher level, explaining what algorithm variations there were and how code and data were structured in order to handle myriad typical and special cases.

I wanted to be oriented to the code and data and parts that were configurable and parameterized.

Later, I learned from one of his colleagues that “Bob” always explained things in great detail and wasn’t very good at “boiling things down.” If you wanted to ask “Bob” a question about the code you’d better plan on at least spending half an hour with him.

Another time a colleague created a sequence diagram explaining how the data used by a nightly cron job was created. What was missing was any description of what that cron job did. Maybe it was so obvious to him that he didn’t think it was important. But I lacked the context to get the full picture. So I had to dig into the cron job script to understand what was really going on.

When I want to get oriented to a large body of code, I like to see a depiction of how it is structured and organized and the major responsibilities of each significant part. Not the package or file structure (that will be useful, eventually), but how it is organized into significant modules, functions, classes and/or call structures. I want to know what the major parts of the system are and why they are there and relationships between them. And at a high level, how they interact. And then I want to understand key aspects of the system dynamics. Just by staring at code I’ll never fathom all the ins and outs of its execution model (e.g. what threads or processes there are, what important information is consumed, produced, or passed around).

Simon Brown gave a talk at XP2016 on The Art of Visualizing Software Architecture. Simon travels the world giving advice and training and talks at conferences about how to do this. He has also built a tool called Structurizr, which:

“blend[s] together the best parts of the various approaches and allow software developers to easily create software architecture diagrams that remain up to date when the code changes. Structurizr allows you to take a hybrid “extract and supplement” approach. It provides you with a way to create software architecture models using code. This means you can extract information from your code using static analysis and reflection, supplementing the model where information isn’t readily available. The resulting diagrams can be visualized and shared via the web.”

I had the opportunity to spend an hour with Simon getting a personalized demo of Structurizr, and discussing the ins and outs of how his tool works, how it has been used, and some aspirations for the future.

I was initially surprised by how you specify what your “model elements and relationships” are: you write code specifications for this. To me, it seemed that this could more easily be accomplished by writing some parseable external description or using a diagramming tool that enabled me to link to the relevant code.

However, I know why Simon made this choice: he deeply believes that code should be the reference point for all structural architecture information. So he thinks it a natural extension that developers write a bit more code to specify the important elements of the models they want to see and how to depict them. To Simon, it doesn’t make sense to just “add an element” to a diagram unless there is some attachment to actual code. For example, you may define a Hibernate specification to represent data access to a repository. Then that can be specified as a data source. With external descriptions that are not based on code, there is a greater chance of the description becoming disconnected from what the code actually does. So be it. This is where my values diverge a bit from Simon’s. I favor richer or higher-level documentation that isn’t code based if it tells me something I need to know to do a task (or something I shouldn’t do) or that gives me insights that I wouldn’t find otherwise. I also like the freedom to embellish my diagrams with extra annotations, colors, highlights, etc., so I can focus viewers’ attention. And consequently, I also like to create a key that describes the elements of any diagram so casual readers can understand my notation, too.

After processing these model descriptions, Structurizr spits out JSON descriptions that represent aspects of a C4 model: Context, Containers, Components, and Class diagrams. These are then submitted to a service, which creates various model visualizations that can directly link to parts of your source code. There’s also a rudimentary ability to add additional bits of information to textually describe your architecture and architecture requirements. I’m glad that, recognizing the limits of what code can tell about architecture, the tool provides an easy entry way for developers to tell more about the architecture in text.
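
To make the “model as code” idea concrete, here is a deliberately simplified sketch; it is not Structurizr’s actual API, just an illustration of specifying elements and relationships in code and emitting JSON for a rendering service:

```python
# NOT Structurizr's actual API: a toy "model as code" description that
# emits JSON, illustrating the workflow described above.
import json

model = {"elements": [], "relationships": []}

def element(name: str, description: str) -> str:
    model["elements"].append({"name": name, "description": description})
    return name

def relationship(source: str, destination: str, description: str) -> None:
    model["relationships"].append(
        {"source": source, "destination": destination,
         "description": description})

webshop = element("Webshop", "B2B ordering front end")
orders = element("Order Service", "Applies discounts and pushes orders to Shipment")
relationship(webshop, orders, "Places orders via HTTPS")

print(json.dumps(model, indent=2))  # hand off to a visualization service
```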

In our conversation, Simon emphasized that one important benefit of linking architecture documentation to code is that it never gets out of synch. The code for your architecture documentation can be versioned, right along with your production code.

Fair enough. Useful even. But how helpful is the documentation that is generated in understanding what’s going on in your system?

I think this is one way (certainly not the only way, however) to provide an orientation into the overall structure of your system and major relationships between its parts. But there are definite limits to what you can describe by models generated from static analysis of code. Such models don’t tell me the responsibilities of a component. I have to glean that from supplementary text (which I didn’t see an easy way to attach to the model descriptions) or to abstract that by reading code. And they certainly won’t explain any dynamic system behavior or system qualities.

While I find Structurizr’s direct connection to the code intriguing, it is also its biggest limitation. Structurizr is intended to be used by developers who want to create some architecture documentation, which stays in synch with their code and that also can be shared via websites or wikis or embedding it into other documents.

Structurizr has succeeded with this fairly modest goal: assisting developers who don’t ordinarily generate any useful architecture diagrams whatsoever to do something.

But this isn’t enough. As designers or architects, only you can tell me answers to my larger questions about why your code is structured the way it is, and what are the important bits about the system and runtime execution that are intentional and crucial to understand and preserve. No diagram generated by a tool will ever replace your insights and wisdom. No amount of code comments can convey all that either. I need to hear (and read) and see more of this kind of stuff from you.

Being Agile About Documenting and Communicating Architecture


Software architects and developers often need to defend, critique, define, or explain many different aspects of the design of a complex system. Yet agile teams favor direct communication over documentation. Do we still need to document our designs?

Of course we do.

We won’t always be around to directly communicate our design to all current and future stakeholders. Personally, I’ve never found working code (or tests) to be the best expression of a software design. Tests express expectations about observable system behavior (not about the design choices we made in implementing that behavior). And the code doesn’t capture what we were thinking at the time we wrote that code or the ideas we considered and discarded as we got that code fully functioning. Neither tests nor code capture all our constraints and working assumptions or our hopes and aspirations for that code.

So what kind of design documentation should we create and how much documentation should we create, for whom, and when? And what is “good” enough documentation?

Eoin Woods gave a talk at XP 2016 titled, “Capturing Design (When You Really Have To)” that got me to revisit my own beliefs on the topic and to think about the state of the current agile practices on documenting architecture.

One take away from Eoin’s talk is to consider the primary purpose of any design description: is it primarily to immediately communicate or to be a long-term record? If your primary goal is to communicate on the fly, then Eoin claims that your documentation should be short lived, tailored to your audience, throwaway, and informal. On the other hand, design descriptions as records are likely to be long-lived, preserve information, be maintainable and organized, and more formal (or well-defined).

Since we are charged to deliver value with our working software, it is often hard to pay attention to any perceived efforts at “slowing down” to describe our systems as we build them. But if we’re building software that is expected to live long (and prosper), it makes sense to invest in documenting some aspects of that system—if nothing more than to serve as breadcrumbs useful to those working on the system in the future, or to our future selves.

So what can we do to keep design descriptions useful, relevant, unambiguous, and up-to-date? Eoin argues that to be palatable to agile projects, design documentation should be minimal, useful, and significant. It should explain what is important about the design and why it is important; what design decisions we made (and when), and what the major system pieces, their responsibilities, and key interactions are. Because of my Responsibility-Driven Design values and roots, I like that he considers system elements and their responsibilities to be the minimally useful description of a system. But to me this is just a starting point for getting an initial sense of what a system is and does and does not do. There certainly is room for more description, and more details, when warranted.

And that gets us back to pragmatics. A design description isn’t the first thing that developers think of doing (not everyone is a visual thinker or a writer). I know I’m atypical because, early in my engineering career, I enjoyed spending 3 weeks writing a document on how my universal linker worked, how to extend it, and its limitations. I was nearly as happy producing that document as I was designing and implementing that linker. It pleased me that sustaining engineering found that document useful years after I had left for another job.

So for the rest of you who don’t find it natural to create documentation, here’s some advice from Eoin:

  • Do it sprint-by-sprint (or little by little). Don’t do it all at once.
  • Be aware of tradeoffs between fidelity and maintainability. The more precise it is, the harder descriptions will be to keep up to date.
  • Know the precision needed by your document’s users. If they need details, they need details. The more details, the more effort required to keep them up to date.
  • Consider linking design descriptions to your code (more on that in another blog post)
  • Create a “commons” where design descriptions are accessible and shared
  • Focus on the “gaps”—describe things that are poorly understood
  • Always ask what’s good enough. Don’t settle for less when more is needed or more when less is needed.

To this list I would add: if design descriptions are important to your company and your product or project, make it known. Explain why design documentation is important, respond to questions and challenges of that commitment, and then give people the support they need to create these kinds of descriptions. Let them perform experiments and build consensus around what is needed.

Be creative and incremental. One company I know made short video recordings of designers and architects giving short talks about why things worked the way they did. They were really short—five minutes or less. Another team created lightweight architecture documentation as they enhanced and made architectural improvements to the 300+ working applications they had to support. It was essential for them that there be more than just the code as the knowledge about these systems was getting lost and decaying over time. Rather than throw up their hands and give up, they just created enough design documentation using simple templates and only as new initiatives were started.

Find a willing documenter. Sometimes a new person (who is new to the system and to the company) is a good person to pair with another old hand to create a high-level description of the system as part of “getting their feet wet.” But don’t just stick them with the documentation. Have them write code and tests, too. From the start.