Too Much Salt?

Practiced speakers and writers know that good examples rarely tell the whole story. Instead they shape their narratives to make the big ideas stand out. Stories are bent ever so slightly, plot details are pared down, leaving space for emphasis and audience impact.

I wouldn’t go so far as to say we invent fiction, but rather that we simplify our stories to make them compelling. Too many details and our audience would tune us out. And when we repeatedly tell these stories, we come to believe we’ve pared down the narrative to its essence. We’ve nailed it!

But what happens when you encounter information that sheds new light on such a story? What if the story you’ve told no longer rings quite true?

The past few years I’ve explored Billy Vaughn Koen’s definition of heuristics as they relate to software design and architecture. I’ve written blog posts and essays, presented talks, keynotes, and workshops about heuristics (for a gentle introduction to different kinds of heuristics see Growing Your Personal Design Heuristics Toolkit).

Along the way I’ve encouraged people to discover, distill, and own their personal heuristics. I advise them not to take every bit of advice they find about software design as authoritative. Instead, they should question whether that advice actually applies to their specific context, and bring the heuristics they’ve accrued through experience to bear on the problem at hand.

I start most heuristics presentations with a story about my experience cooking my very first Blue Apron recipe for Za’atar Roasted Broccoli Salad (for details see Nothing Ever Goes Exactly by the Book). I jokingly point out all the places where the recipe suggests adding salt. I then postulate that if I blindly followed Blue Apron’s instructions without applying any judgment, the dish would be way too salty.

Instead of following the recipe, I explained how I used my past experience to “modify” the instructions to fit my understanding of what makes for a tasty dish. In short, I ignored many of the places where the recipe suggested adding salt.

My heuristic for this situation was to ignore salting advice that seemed excessive and to add salt only to taste, at the end. Following that heuristic, I most likely made a much blander dish that, while it looked great, undoubtedly lacked flavor.

But… achieving a tasty dish wasn’t the point of my original story!

Instead, it was to encourage using personal judgment and heuristics based on past experiences. I wanted to emphasize that we each have experiences and insights that we can and should draw on in many situations. Simply trusting and blindly following “experts” or “recipes” because they are published or credentialed can lead us astray—or to cooking inedible dishes. We should value and treasure our experiences and draw upon the heuristics we’ve accrued through those experiences.

Ta-da! Point made! Perhaps…

A week ago as I was waiting for surgery to repair my broken nose (that’s another story for another time) I started reading How to Taste, by Becky Selengut. At the time I was detached, slightly impatient, and resigned to just being there in the moment. The doctor was late and I had time to kill.

The introductory chapter starts: “Telling you to ‘season to taste’ does nothing to teach you how to taste—and that is precisely the lofty goal of this book. Once you know the most common culprits when your dish is out of whack, you’ll save tons of time spinning your wheels grabbing for random solutions. You’ll start thinking like a chef. Some people are born knowing how to do this—they are few and far between and most likely have more Michelin stars than you or I; the rest of us need to be taught. I’ve got your back.”

Now that grabbed my attention!

Unless I’m superhuman (I’m not), I can’t rely on instinct alone to become a better cook, one who knows when, and how much, seasoning or salt to add.

My experiences cooking have certainly been ad hoc. And the heuristic I applied when salting that Blue Apron dish came from who knows where. I never learned why I was doing what I was doing when I followed a recipe or ignored parts of it. Instead, I learned a few shortcuts and substitutions, largely by combing the internet. And while my technique may have improved over time, I haven’t developed the ability to craft a dish with nuanced flavors, let alone improvise one.

Becky suggests reading her book “…start[ing] at the beginning, as I intend to build upon the concepts one puzzle piece at a time.” Each chapter presents fundamental facts, reinforces them with a recipe that highlights the chapter’s important points, and then suggests Experiment Time activities intended to develop the reader’s palate.

Aha, again!

A good way to learn how to exercise judgment is to perform structured experiments after you’ve learned a bit of theory and why things—in this case, the chemistry of cooking—work the way they do.

I quickly read through the chapter on Salt and learned: salt is a flavorant; it brings out the flavor of other ingredients. Salting early and often can improve taste dramatically. For example, adding salt to onions as they sauté speeds up the cooking process and causes them to sweat out water. And when you only season a soup at the end, no matter how much salt you add, the flavors of unsalted ingredients (potatoes, for example) fall flat. You end up over-salting the stock while still having tasteless, bland potatoes. Salt needs to be added at the right time, often at several steps in the cooking process, to have the desired result. And to my surprise, different kinds of salt (iodized, kosher, flaky, fine-grained sea salt) each have their own flavoring properties and call for different ratios in recipes.

This brought to mind a whole new way of thinking about my Blue Apron cooking experience. Blue Apron didn’t have bad recipes, but their recipes didn’t make me a better cook either. That’s because most recipes focus on the how, not the why. Their pretty little pictures and step-by-step instructions did nothing to help me understand what makes a dish tasty.

And that’s a problem if I want to get better at cooking tasty dishes and not simply at following recipes.

I’m afraid that way too much of the information we absorb, whether about cooking or agile practices or software development, is presented as step-by-step lists of instructions, without any explanation of why a step makes sense or what the consequences are of not doing it exactly as instructed.

Consequently, we learn a bunch of procedures, or simply cut and paste them. We follow instructions because somebody says this is what we should do. Over time we may build up a playbook of those procedures, but our understanding of why they work isn’t very deep or rich or adaptable.

If we want to truly gain proficiency in cooking (or software design or programming or running or gardening or basket weaving), instruction that emphasizes the why along with the how is what we need.

Teach me some facts that ground what I’m about to do in a bit of knowledge. Spark my curiosity. Inspire me. And then give me tasks that let me tinker and practice applying that knowledge. Only then will my actions become integrated with that knowledge, allowing me to build up adaptable heuristics that I can use in novel situations.

In hindsight, I now believe that the story I told about applying my personal heuristics and knowledge to a problem was OK. It’s reasonable to be a healthy skeptic when you’re attempting a new task and someone says, “Just do as I say. Trust me.” Distilling your own heuristics from previous experiences and applying them in familiar situations is also good. And writing them down helps bring them into your awareness.

But in addition, I now think it is equally important to seek the why behind what you are doing, and to loosen your grip on those simpler narratives you’ve held dear. They are not the whole story, and they may be holding you back. Be open to new information that may reshape your stories and enhance your heuristic toolkit.

Perhaps one day, with enough knowledge and practice, I’ll be able to create a flavor profile for a dish instead of merely following the recipe.

Draw a Tree

I often use a short icebreaker to introduce design storytelling in talks and classes. I hand out an index card and ask people to draw a tree in 60 seconds; I’ve adapted this from Thiagi’s 99-second Draw a Tree exercise. I ask attendees to draw a tree, any old tree, and to be sure to autograph it, as there will be a prize. At the conclusion of the exercise I pick someone at random to receive a small gift or book.

I have collected hundreds of drawings; some are very beautiful. On rare occasions I get drawings of bamboo.

Invariably, one nerd who wants to win the prize and show off his computer geekiness draws a directed graph. After all, he doesn’t know the criteria I’ll use to choose a winner (there are none; it’s a random drawing).

But most draw physical trees.

I get canonical tree shapes: mainly deciduous trees, with and without leaves, and a few conifers.


After I’ve collected the drawings, I ask how many drew roots, and if so, why? If not, why not? As Thiagi observes, most people do not include roots, though some draw roots or hints of root structures.

When asked why they didn’t draw any roots, the answer is invariably, “Because I drew what I can see. No need to show what’s below ground.” Those who included roots answer, “Because trees have roots.” Some software folks are very detailed and want to show everything. I’ve even received trees with their parts labeled.

And there is my hook into the art of design storytelling. Whether you should include roots depends on your audience and your goal in telling the story. There’s no “right” answer. Depending upon what your audience already knows and what you want to focus on, it is perfectly OK to leave out certain details.

The art of effectively drawing or describing any aspect of a software design is knowing what to leave out. It’s just as important to know what to omit as it is to know what to include. There’s always more detail. Effective design storytelling leaves out inessential details so that the important stuff gets emphasized.


Challenges When Communicating Designs

Tuesday evening I gave a talk about the challenges software developers face when communicating design ideas. I started by making the connection between telling others about designs and storytelling. Effective designers need to tell good stories. And the tone and means by which we communicate design ideas should vary depending on the reasons we have for telling a particular story, and our audience’s background and expectations. Perhaps we need to educate newcomers or explain our design to get constructive feedback. Maybe we want to convince others to take some specific action or “buy in” to a change. Regardless of motive, we need to communicate about our designs in compelling and engaging ways.

At the beginning of my talk I asked attendees to write down their most challenging communication problem. I figured it was a fair exchange: I’d get direct feedback about their problems, and two lucky attendees would walk away with a book. Looking over their feedback, I’d categorize the challenges as:

Communicating to others who are not like me
“Communicating across domains (UI design to SW) or cultures (US to India).”
“The hardest communication was when I had to present a design to a group that does not specialize in my area.”
“As an embedded software engineer, the rest of my team are hardware engineers so they have neither the training in software methods nor the software mindset.”
“Communicating technical design to non-technical people.”
“Communicating to a non technical customer.”

Getting others to appreciate the important bits
“One of the most challenging aspects of communicating a design is educating the receiver of the design on design paradigms. This is especially true when the person is not familiar with or comfortable with object oriented design/analysis.”
“As a developer working in an agile environment, I often receive partially-conceived designs, sometimes as little as a single Photoshop mock-up. It’s easy to spot shortcomings, but difficult to communicate them. I sometimes end up implementing a feature just to illustrate its problems.”
“I sometimes have trouble getting others to understand why a simple solution is insufficient when the other person has very limited time to understand the problem.”
“To clearly point out the subtleties and nuances of the most critical or pivotal aspects of the design–what’s really important.”

Gaining common understanding
“Vocabulary/definitions”
“Definitions [that] are not the same for the same term.”
“Just getting to a mutual understanding of the idea has been an issue for me.”

Storytelling mechanics
“Communicating at the right level. What can we assume, what needs to be explicit.”
“Knowing what to put down.”
“Keeping the explanation simple. Explaining only the parts which are needed.”
“Pulling your imagination into paper.”

Most designers could tell far more about their designs than they should. We could also benefit from practicing telling coherent stories and making sure the important parts get emphasized. Over the past year or two I’ve been working on effective design communication, so if you have insights on how to communicate design ideas effectively, or a design communication challenge you’d like to share, I’d love to hear from you.

I also want to announce that I’ve put together a new one-day course, The Art of Telling Your Design Story. I’ll be teaching it publicly at OGI’s Center for Professional Development in Beaverton, Oregon, on November 30th. The day before, I’m offering another new course, Practical UML (if I called it Impractical UML, would anyone sign up?). Design stories don’t always need formal UML notations (in fact, one of the challenges is communicating subtle ideas to non-technical folks). But I’ve seen UML so abused that I want to give developers some straight talk on how to communicate effectively using UML at different levels of detail (and how to show nuanced design ideas effectively).

Deconstructing Frankenstein


One of my favorite things to do in any architecture or design course I teach is to discuss AntiPatterns: design ideas hatched with good intentions that prove problematic over time. We’ve all seen examples of software done badly. The purpose of an AntiPattern is to document a bad solution to a common problem, explain how people slide into it, and suggest ways to remedy it. The point isn’t so much to say “don’t do this” as it is to say, “You probably already have this problem; you just might not have a name for it. Here is how you might’ve gotten there, here is what you might do to prevent it happening in the future, and here are some things you might do to fix up your design.”

A Boat Anchor is a piece of software or hardware that serves no useful purpose on the current project. Often the Boat Anchor is a costly acquisition, which makes the purchase even more ironic. A Lava Flow occurs when unused blobs of code hang around in a system. It is characterized by lava-like “flows” of previous developmental versions strewn about the code landscape, now hardened into a basalt-like, immovable, generally useless mass of code (perhaps commented out, perhaps not) that no one can remember much, if anything, about.
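To make the Lava Flow concrete, here’s a tiny, purely hypothetical sketch; every name, number, and comment is invented for illustration:

```java
public class RateCalculator {

    // v1 of the calculation, "temporarily" kept when v2 shipped.
    // Nobody remembers whether anything still depends on it.
    // private double computeRateV1(double base) {
    //     return base * 1.15;
    // }

    // Flag left over from an abandoned experiment; never read anywhere.
    public static boolean USE_LEGACY_ROUNDING = false;

    public double computeRate(double base) {
        // Why 1.17? The developer who knew left years ago.
        return base * 1.17;
    }
}
```

None of the hardened bits do anything useful anymore, yet deleting any of them feels risky, which is exactly why they stay put.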

Last week, students in my class were incredibly inventive; they weren’t content to limit their discussion to the examples of AntiPatterns I mentioned. The new AntiPattern names they coined are so good I want to share some of them. The first was the Frankenstein AntiPattern. It came about because too many cooks were watching the pot. Everyone wanted to contribute, so they did, just not in any organized fashion. As requirements kept rolling in, people kept adding functionality in a disjointed, haphazard way. Oh, you want two eyes? OK. And a body. That sounds good. You didn’t say you need toes. Hm. OK, we’ll bolt some on. How many do you need? Where should they go? Everybody contributed, design coherence wasn’t a goal, and the implementation just kept growing as the requirements rolled in.

One might say that good, disciplined designers should’ve detected this emerging problem and prevented it from happening. Well, projects get hectic and sometimes things slide. But what’s nice about this story is that it has a happy ending. Frankenstein was banished after a diligent designer, who couldn’t with a clear conscience keep bolting on stuff and making it work, said “Enough!” and worked to untangle the mess. He employed several strategies. One was refusing to take code directly from marketing, instead accepting requirements and functionality as pseudo code rather than patches.

Another AntiPattern that was brought up was Rocky Road. It’s similar to the Lava Flow, but it adds overloaded, cobbled-together, reinterpreted data fields into the mix. Not only is there dead code to stub your toes on, but there’s complicated data with overloaded, tangled encodings, too. The intentions were good: keep using the same schema, but add more functionality to the software and keep encoding data in ever more complex ways, because heaven knows the data can’t be redesigned. But over time this system became extremely difficult to work with. The code was complex because it had to decode the data and vary its functionality based on complex interpretations of it, and the data fields grew ever more entangled in support of new functionality. What’s great about this AntiPattern name is that “Rocky Road” is an ice cream flavor as well as a travel hazard. What starts out as a sweet, quick fix can over time turn into an unnavigable development landscape. I’ve seen this situation at other companies I’ve worked at, and there is no quick fix. Someone, or several people, has to take the time to analyze the code and the data implications and to propose modest, “safe,” agreed-upon modifications. These repairs don’t usually smooth out all the bumps, but they keep the system from becoming totally unworkable. Usually there has to be a compelling reason to make deep and significant changes (think Y2K or migration to a new database technology).
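Here’s a small, invented illustration of the kind of overloaded field that gives Rocky Road its bumps; the encoding is hypothetical, not from the student’s actual system:

```java
// The schema could never change, so "status" quietly accreted meanings:
// "A3-X" means active (A), sales region 3, export-restricted (-X).
public class Account {
    private final String status; // three unrelated facts in one field

    public Account(String status) {
        this.status = status;
    }

    public boolean isActive() {
        return status.startsWith("A");
    }

    public int regionCode() {
        // The second character secretly encodes the sales region.
        return Character.getNumericValue(status.charAt(1));
    }

    public boolean isExportRestricted() {
        // The "-X" suffix was bolted on for a later export feature.
        return status.endsWith("-X");
    }
}
```

Every new feature adds another secret to the encoding, and every reader has to puzzle it out all over again.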

Sometimes discussing AntiPatterns can be depressing. Especially when people work in places where painful examples are in abundance, and little opportunity or incentive exists to improve things. I like hearing stories where people have been able to repair design problems and improve how systems function. Even better when these efforts are supported and encouraged by informed management. If you have any AntiPattern remediation successes, I’d love to hear about them.

Good enough domain models

Eric Evans talked about Domain-Driven Design at our Portland SPIN meeting Wednesday. Eric’s thesis is that unless you capture the “ubiquitous language” that people use to talk about the functions of the business and create a domain model representing those concepts as objects, you are developing software at the wrong level of detail. Instead of talking about Shipping Routes, Legs, and Itineraries, you’ll be talking about “creating rows in the stop table” for each port along a shipping route. Why create Itinerary and Leg objects when you can get by stuffing a database table with “stop” records? Because it makes other parts of the system easier to program. Re-routing cargo gets easier if you can remove all the Legs after a particular destination and splice one Itinerary onto another. The lack of a domain model can severely limit the effectiveness of software (and make it hard to maintain systems and add new functionality).
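To see why, here’s a minimal sketch of that re-routing operation. The names and signatures are mine, not Eric’s actual code, but they follow his shipping example:

```java
import java.util.ArrayList;
import java.util.List;

// A Leg carries cargo from one port to another.
record Leg(String loadPort, String unloadPort) {}

class Itinerary {
    private final List<Leg> legs = new ArrayList<>();

    Itinerary(List<Leg> legs) {
        this.legs.addAll(legs);
    }

    // Re-routing as a domain operation: keep every leg up to and
    // including the one that unloads at the given port, then splice
    // on the legs of the replacement itinerary.
    Itinerary rerouteAfter(String port, Itinerary replacement) {
        List<Leg> kept = new ArrayList<>();
        for (Leg leg : legs) {
            kept.add(leg);
            if (leg.unloadPort().equals(port)) {
                break;
            }
        }
        kept.addAll(replacement.legs);
        return new Itinerary(kept);
    }
}
```

Compare that with expressing the same re-route as deletes and inserts against a stop table: the domain operation reads like the business conversation it came from.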

Creating a domain model is more complicated than just capturing the language people use to talk about system functionality and creating software objects with appropriate names. In complex software, development teams often work on different sub-problems. Each subsystem may need its own model. Meanwhile, subsystems and teams still need to define appropriate ways to communicate with each other. In addition to ubiquitous languages, you need to define the appropriate common languages for inter-system/inter-team communication. Nothing is ever easy!
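As a sketch of what such a common language might look like at one of those seams (the types on both sides are invented), a small translator can keep one subsystem’s model from leaking into another’s:

```java
// Each subsystem keeps its own model; the translator embodies the
// agreed-upon common language at the boundary between them.
record BookingCargo(String bookingId, String origin, String destination) {}
record RoutingQuery(String fromPort, String toPort) {}

class RoutingTranslator {
    RoutingQuery toRoutingQuery(BookingCargo cargo) {
        return new RoutingQuery(cargo.origin(), cargo.destination());
    }
}
```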

Eric’s masterful talk motivated me to ponder why developers so often end up with muddy models instead of ones that more clearly incorporate domain concepts. Eric says that domain modeling isn’t about finding a perfect model, only one that is “good enough” to support the hardest problem well. Why don’t more development teams end up with “good enough” models? I suspect there are many reasons. What constitutes “good enough” can be so subjective that people don’t want to get bogged down. They give up and take the easiest paths, not the simplest ones. For lack of a good object-to-relational mapping tool, some developers settle for database tables as “good enough” approximations of classes. And their models get compromised.

There are many reasons why software falls short of capturing the “ubiquitous language” in a domain model. I suspect a big one is that it isn’t always obvious that a model is needed. If your code shuffles data back and forth from the UI to a database with few edits, why create a model? Only when there is significant behavior and computation does a model pay off. Now, if only we could all agree on what “significant” means. If you have ideas about what constitutes significant enough behavior to warrant a model, I’d like to hear your thoughts.