Beware of Dogma. No. Be aware of dogma

Dogma has several different meanings. I’m going to purposefully split hairs in this post, because I don’t want to attach negative connotations to dogma in a knee jerk fashion. I want to be more thoughtful about my choice of words and my reactions to them.

Here are four meanings for dogma:

“1. an official system of principles or tenets concerning faith, morals, behavior, etc., as of a church.
2. a specific tenet or doctrine authoritatively laid down.
3. prescribed doctrine proclaimed as unquestionably true by a particular group.
4. a settled or established opinion, belief, or principle”

At first, these subtle differences in meanings annoyed me. But I wanted to push through that to see what I can learn about dogma. So here goes…

An official set of principles or tenets concerning faith, morals and behavior.
As a software professional, do I have an “official” set of principles and tenets that I believe in?

I have a set of guiding principles and practices for how I work, think about design, write code and tests that I’ve built up over 20+ years of practice. They have become part of how I prefer to operate. I’ve changed and refined them over time, discarding some practices, fine tuning others.

The guiding principles I follow weren’t handed down by authorities. I discovered them working alongside smart people and interacting with thoughtful designers who cared deeply about how they built and implemented software. I wanted to understand how productive people thought and worked, and try to incorporate what I saw as good practices and beliefs into my own beliefs and ways of working.

In the process, I co-wrote two object design books that shared a way of thinking about objects that I still find effective and powerful. Maybe writing books made me an authority. But I have also become a seeker of new and better ways of working. Over the years I have “blended” some powerful ideas of others into my personal set of practices and beliefs about design. To me, this process of incorporating others’ ways of thinking and problem solving feels highly integrative rather than simply “accepting” them as unchallenged beliefs or tenets. I have to sort through them, adjust them, and then make them part of who I am and what I do. I am not one to blindly accept dogma.

The 3rd definition of dogma has negative connotations: a prescribed doctrine proclaimed as unquestionably true by a particular group.

Hm. I don’t hold many things about software design as being unquestionably true. I find it disconcerting when groups and factions form around the latest truth or discovery. For example, some fervent agile developers I know unquestionably believe that test-first development is the only way to design software. (I’m more of a test-frequent designer by nature). Those who refuse to acknowledge that there are other effective pathways to producing well-written, well-designed, maintainable code are trying to push a dogma of the 3rd meaning.

I find myself questioning any software doctrine that is held as being “universally” true. How presumptuous! There are so many different ways to solve problems and build great software.

I try to keep an open mind. My most strongly held beliefs are ones I should challenge from time to time. To do that, I have to push myself out of my comfort zone. For example, I have discovered a few things by letting go of several strongly held beliefs and performing some interesting experiments: How much code that checks expected behaviors do I really need to keep around to prevent software from regressing? How many tests does any organization really need to keep? How many comments do I need in my code? How much of my code should check for well-formed arguments? Is it better to fail fast or fail last? What’s the effect on my code of putting in all those checks? What’s the effect of leaving them out?

Not all dogma is handed down from on high or authoritatively laid down…nor is it necessarily bad to hold a common set of beliefs and opinions (the fourth definition of dogma). I’ve been in dysfunctional groups where we couldn’t agree on anything. It was extremely stressful and unproductive.

If as a group we establish and hold a common set of beliefs and practices, then we can get on with our jobs without all the friction of jockeying over who is right and what the right way to do things is.

But, here’s the rub…if you accept a certain amount of dogma (and I’m not saying what kind of dogma that might be…if you are an agile software developer I am sure you hold certain beliefs on testing, task estimation, collaboration, specification, keeping your code clean, whatever…) be wary of becoming complacent. Dogma needs to be challenged and re-examined from time to time. But don’t toss your current dogma aside on a whim, either. Old beliefs can get stale. But they may still be valid. We need to try out new ideas. But not simply discard older beliefs because shiny new ones are there to distract us.

Why Process Matters

I’ve been working on a talk for Smalltalks 2014 about Discovering Alexander’s Properties in Your Code and Life.

I don’t want it to be an esoteric review of Alexander’s properties.

That won’t satisfy my audience or me.

I want to impart information about how Alexander’s physical properties might translate to properties of our software code as well as illustrate poignant personal examples in the physical world.

But equally important, I want to impress upon my audience that process is vital to making lively things (software and physical things alike). In The Process of Creating Life (The Nature of Order, Book 2), Alexander states,

“Processes play a more fundamental role in determining the life or death of the building than does the ‘design’.”

Traditionally, building architects hand off their designs as a set of formal drawings for others to build. Does this remind you of waterfall software development? There isn’t anything inherently wrong with constructing formal architectural drawings…but they never end up accurately reflecting what was built. Due to errors in design, situational decisions based on new discoveries made as things are built, better construction techniques, changing requirements, and limitations in tools or materials, a building is never constructed exactly as an architect draws it up.

Builders know that. Good ones exercise their judgment as they make on-the-spot tactical redesign decisions. Architects who are deeply involved in the building process know that, too.

Alexander is rather unhappy with how buildings are typically created and suggests that any “living” process (whether it be for building design or software or any other complex process) incorporate the following ten characteristics.

He challenges us software makers to do better, too:

“The way forward in the next decades, towards programs with highly adapted human performance, will be through programs which are generated through unfolding, in some fashion comparable to what I have described for buildings.”

As software designers and implementers we know that nothing is ever built exactly as initially conceived. Not even close. Over the past decade or so we have made significant strides in our processes and our tools, strides that enable us to be more effective at adaptively and incrementally building software. My thoughts on some ways we have tackled these characteristics are interspersed in italics, below.

Characteristics of Living Processes

1. Step-by-step adaptive. Small increments with opportunity for feedback and correction.
Incremental delivery, retrospectives, stakeholder reviews
Repetitive incremental design cycles:
Design a little – implement – refactor/rework/refine – design some more…
Design/test cycles: Write specifications of behavior, write some code that correctly works according to the specification, test and adapt…
Tests and production code equally valued

2. Whatever the greater whole is, it is always the main focus of attention and the driving force.
Working deployable software, minimally-marketable features

3. The entire process is governed and guided by the formation of living centers (that help each other)
Code with defined boundaries, separate responsibilities, and planned for interconnections

4. Steps take place in a specific sequence to control the unfolding.
We have a rhythm to our work. Whether it is test-first or test-frequent development, conversations with customers to define behavioral “specifications”, or other specific actions. In order to control unfolding we need to understand what we need to build, build it, then refine as we go. And we have tools that let us manage and incrementally build and record our changes.

5. Parts created must become locally unique.
Build the next thing so it fits with and expands the wholeness of what we are building. Consider our options. Refactor and rework our design. Make functions/classes/code cohesive. Bust up things that are too big into smaller elements. Revise.

6. The formation of generic centers is guided by patterns.
We have in mind a high-level software architecture that guides our design and implementation.

7. Congruent with feeling and governed by feeling.
Instead of just making a test pass, see if what you just wrote feels right (or if it feels like an ugly hack). Reflect on how and what we are building. Don’t be merely satisfied with making your code work. How do you feel about what you’ve just built? How do those using your software react to it? How do those who have to maintain and live with your code feel about it?

8. For buildings, the formation of structure is guided by the emergence of an aperiodic grid, which brings coherent geometric order
Software is structured, too…we’ve got to be aware of how we are structuring our code.

9. Oriented by a form language that provides concrete methods of implementing adapted structure through simple combinatory rules
We use accepted “schemas” to create coherent software systems. We have software architecture styles, framework support, and even pattern languages emerging…

10. Oriented by the simplicity transformation, and is pruned steadily
We can consistently refactor and rework our code with the goal of simplifying in order to enable building more functionality. We rebuild to create sustainable software structures. Even if we come back to some old working code and see how to simplify it, we can rework it taking into consideration what we’ve learned in the meantime.

Yet, let’s not be complacent. Agile or Lean or Clean Code or Scrum practices don’t address every process characteristic Alexander mentions. I am not sure that all these characteristics are important for building lively software. Alexander is not a builder of software systems, although he spent a lot of time talking with some pioneers and leaders of the software patterns movement.

Some process ideas of Alexander’s sound expensive and time consuming. Do we always need to reflect on how we feel about what we code? Sometimes we need to build quickly, not painstakingly. We need to prove the software’s worth, and then refine it. Our main thought may be simply making it work, not how it makes us or others feel. So how do we add liveliness to this quickly fashioned software? What’s a good process for that? Michael Feathers wrote Working Effectively with Legacy Code, but there is a lot more to consider. Maybe that quickly fashioned software has tests, maybe it doesn’t; maybe some parts have a reasonable structure, and maybe other parts should be tossed.

We often build disposable and hopefully short-lived software. Problems crop up when that code gets rudely hacked to extend its capabilities and live past its expiration date.

There are most likely different processes for creating lively software, based on where you start, where you think you are headed, and how lively it needs to be (not everything needs to be fashioned with such care).

People are continually building new and better tools and libraries. There is a rich and growing ecosystem of innovative open source software. Process matters. I think we have a lot still to learn about building lively software. It is a heady time to be building complex software systems.

When in Rome…

I attended my first XP conference in Rome in May. As they say, “when in Rome, do as the Romans do.” The actual quote attributed to St. Ambrose is, “si fueris Romae, Romano vivito more; si fueris alibi, vivito sicut ibi,” or “if you should be in Rome, live in the Roman manner; if you should be elsewhere, live as they do there.”

As Italians do, I enjoyed good food, good company, and great wine.

I gave a workshop on Understanding Design Complexity (using commonality-variability analysis) and a tutorial on Agile Architecture Values and Practices.

I also sampled research AND non-research sessions in equal measure. Unlike other agile conferences I’ve attended, research is a prominent part of this conference. I listened to several research presentations and volunteered to be a commenteer for one research paper.

The XP 2014 paper acceptance rate was somewhat selective, with over 50% of the submissions rejected. Research topics were wide-ranging, including a case study on UX Design, a survey of user story size and estimation accuracy, another on agile development practices, a case study on visualizing testing, another on agile and lean values, and one comparing scripted with exploratory testing. Short papers touched on agile organizational transformations, Randori Coding Dojos, and how expertise is located on agile projects. In addition, four experience reports were published (in contrast, 27 experience reports will be published and presented this year at the Agile Conference).

If the research papers I sampled are an indicator, PhD students seem to be busy doing empirical studies on agile practices, processes, and values. If you go to the Open University’s website you’ll see these topics listed under their Empirical Studies PhD program: the emergence of Agile software development, the role of physicality and co-location in agile software development, and XP and end-user development.

Agile software development is being studied and data is being collected. The paper I commenteered, “Why We Need a Granularity Concept for User Stories” by Olga Liskin and her colleagues, reported on results gleaned from surveying developers (who self-identified as agile developers) working on both private and open source projects on GitHub. I had three short minutes after the presentation to carry on a dialog with Olga about their findings. Fortunately we also had more time to discuss her work over lunch.

This paper raised as many questions in my mind as it answered. On small projects (10 or fewer people), 55% of the respondents said they did not estimate their stories on a daily basis. Is this an affirmation of the No Estimates movement, or just how people work on certain kinds of projects? Open source projects are quite different from product development. (Not all of the GitHub projects were open source ones, but still…). Depending on your project, you may simply work off a backlog, not necessarily doing any estimates to forecast how much you can accomplish. Several developers I know only break down their work into identifiable tasks. Effort estimates (as long as they know how to do the work) aren’t that important. They’re done when they are done. And if you build shippable, workable software each sprint, well, you always have something potentially useful to deliver.

Here’s just a sampling of what I’d like to know: For those who estimated, how did story size correlate to estimation accuracy? And what happens when stories are split? Are estimates for split stories more accurate? And just how important is estimation accuracy to those who make estimates? Was it just something they did to get a rough idea, was it “required” of them, or did they effectively use estimates to plan and make forecasts? Did people who estimate improve their accuracy over time? It seems that if you are learning a new technology, once you’ve spun up, estimates should be more accurate. But how long does it take to get up to speed on your estimates?

Not surprisingly, the authors found that the smaller the story size and the better known the technology, the more accurate the estimates were. Research results often confirm the obvious (but it is still nice to have some empirical evidence to back up our intuitions).

In their conclusions the authors don’t recommend a “best story size”. I’m happy they didn’t; they didn’t have enough evidence. Conventional wisdom says stories should fit (comfortably) into sprints. That makes me want to know more about the accuracy of larger stories. It seems reasonable that the more work involved, the more possibility you’ll miss something that may influence the accuracy of your estimate. And you also may fudge on your estimates just to make sure a story fits inside a sprint (because you don’t want to split it). But estimates are just estimates. They shouldn’t be expected to be 100% accurate. How do people behave differently when estimation accuracy is rewarded (or worse yet, punished)? You learn to recalibrate your efforts when a task is harder or easier than expected.

The authors cautioned that developers’ views about story size and estimates need to be balanced by others’ concerns. Too many stories can burden a product owner. Developers in dysfunctional organizations might pad estimates just so they have some slack (does anyone knowingly pad estimates? I’d like to hear from you).

So is it better to bundle up small, related stories and estimate them as a single unit? Maybe. Back in the days when I had to estimate work, I didn’t like tasks being too small. If they were, my manager would look at them more closely (I’m not sure why). I remember once telling a manager: you can ask what we will be doing every day and how long each task will take, or we can guarantee that we will deliver all the features above the cut line in the next two weeks. But you can’t have both daily accuracy and predictability. She backed off on knowing exactly what we were doing every day (as long as we weren’t stressed out). This was long before the Agile software development movement, but it still seems relevant. Our small team worked off a prioritized list of features. It didn’t matter who did what task; whenever a task was finished, the next one was picked up. And we finished our work on schedule…because we wanted to make our commitment.

Here’s one parting thought about empirical studies. I’m very wary of biases that work their way into them. Those who answered the agile estimation survey might be very different from those who did not. Self-reporting of anything we do (whether it be software estimation, the amount of food we eat, or how much we exercise) is notoriously inaccurate. We underestimate our weight, overestimate our capabilities, and don’t remember accurately. More accurate evidence is obtained through field studies where people are observed working. I wish more empirical software researchers could have opportunities to work directly with agile teams and spend significant time getting to know them and how they work (in addition to just asking them questions).

Making Strong, Lively Centers

Making things with lively, cohesive centers (whether software, buildings, landscapes, educational experiences, or artfully designed bento boxes) involves hard work, practice, skill, reflection, and the development of a discriminating eye.

One great example of hard work over a long period of time was this bonsai boat tree I saw in Kyoto. This tree is over 600 years old!

Can you imagine the effort and attention the bonsai gardeners spent over the centuries to create, grow, and maintain this beautiful shape with its many centers?

I wish I could sit with great software designers and architects, soak up their wisdom, and then effortlessly incorporate that wisdom into my own code. I would love to write lively code without breaking a sweat. But that hasn’t been my experience.

My first Smalltalk code wasn’t very good. I didn’t immediately get the shift from procedural thinking, where I had to worry about controlling every aspect of the call chain, to that flowing object-oriented style where learning how to delegate responsibility was key.

To understand how to make my Smalltalk code lively (because of stronger centers) took practice and experimentation, reflection, and more practice. And letting go of preconceived notions that no longer fit.

As I program in yet another programming language, I can’t avoid bringing along techniques I learned earlier. Some fit. Some do not. (I keep re-framing my notions of how to implement a good design.) And I keep adding useful programming techniques to my toolkit.

Techniques for constructing well-designed code are programming-language specific, even though underlying good design principles seem universal.

It took a while for me to realize that to become a better Smalltalk programmer I had to let go of my incessant urge to understand and control every little detail (I had to do that as an 8086 assembly language programmer, my prior language). Trust in polymorphism. Delegate. Don’t try to do too much in any one method. Don’t pass in too many arguments. Let objects take responsibility for their actions.
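Those lessons can be sketched outside Smalltalk, too. Here is a small, hypothetical JavaScript example (the shape names and functions are mine, invented for illustration) of letting objects take responsibility for their actions instead of controlling every detail from the caller:

```javascript
// Each shape answers for its own area; the caller trusts polymorphism
// and delegates, with no type-checking switch statement in sight.
const circle = (radius) => ({ area: () => Math.PI * radius * radius });
const square = (side) => ({ area: () => side * side });

// The caller's only job: ask each object to do its part, then combine.
function totalArea(shapes) {
  return shapes.reduce((sum, shape) => sum + shape.area(), 0);
}

const total = totalArea([square(2), square(3)]);  // 4 + 9 = 13
```

The same spirit as the Smalltalk advice: don’t do too much in any one method, and let each object carry its own weight.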

Even as I learned to let go of details, I still made dumb mistakes.

Initially I didn’t understand the difference between elegant and overly clever code (I liked Smalltalk blocks—er, closures). I didn’t realize the overhead of lots of closures that held on to context. I thought it was clever that my font management code held blocks that could read fonts from the file system (embedding references to external files in them, for goodness’ sake).

Seasoned Smalltalkers don’t make these mistakes. See this wiki page for a short discussion of Smalltalk and Closures and this Stack Overflow posting.

Was I tone deaf when it came to using blocks? I don’t think so. I just wasn’t paying attention to the right details. And I wasn’t looking in the right places for inspiration or guidance.

Instead of performing my own experiments, ideally I should’ve been studying and emulating good examples, such as the Smalltalk collection hierarchy’s use of closures. There, code blocks are used elegantly to execute differential behavior. The Smalltalk collection hierarchy is one of the most beautiful sets of classes I’ve ever seen.

Fortunately, I had people around me who took the time to rewrite my code and explain to me why they did what they did. Consequently, I learned to write simpler, less clever, less resource intensive, more maintainable Smalltalk code.

Recently I have been programming in JavaScript. I was motivated to develop JavaScript code as a front end to the Java reference app we developed and use in our Enterprise Application Design course. For that initial programming exercise I took the stance that I’d use pretty much “stock” JavaScript libraries (hence my learning about jQuery) and keep things pretty simple.

Since that first whiff of JavaScript programming, I’ve been honing my JavaScript by learning more libraries and plugins and improving my programming skills. I am no expert. Not yet.

I’ve learned effective techniques somewhat randomly because I am not surrounded by JavaScript experts who teach me their craft. Combing through the Internet for advice and inspiration is haphazard and compounded by the fact that our notion of good programming practices evolves over time as languages and tools and libraries grow and evolve.

But now, after more time and experience, I can appreciate several coding practices that contribute to maintainable JavaScript. Such as:

Modules. At first, the coding technique to define a module just seemed confusing. It is. But modularity, which helps to define and separate code “centers” is really important. Not only does it strengthen a “center” by making it more defined (and encapsulated), it makes it more easily integrated with other code.
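A minimal sketch of what I mean (names are mine, for illustration): an immediately invoked function creates a private scope, and only what it returns is public.

```javascript
// The module pattern: an immediately invoked function expression (IIFE)
// creates a private scope; only the returned object is the module's surface.
const counter = (function () {
  let count = 0;                          // private state, unreachable outside
  function increment() { count += 1; return count; }
  function current() { return count; }
  return { increment: increment, current: current };  // the public interface
})();

counter.increment();
counter.increment();
// counter.current() is 2, but the count variable itself is encapsulated
```

The encapsulated `count` is exactly the strengthened, well-defined “center”: other code can integrate with the module only through its small public surface.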

Being aware of variable scope and limiting it.
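The classic trap here (sketched in modern, ES6-era JavaScript) is that `var` is function-scoped while `let` is block-scoped, which changes what loop callbacks capture:

```javascript
// With `var`, every callback closes over the same single `i`:
const withVar = [];
for (var i = 0; i < 3; i++) {
  withVar.push(function () { return i; });
}
// each function now returns 3, the loop's final value

// With `let`, each iteration gets its own binding of `j`:
const withLet = [];
for (let j = 0; j < 3; j++) {
  withLet.push(function () { return j; });
}
// these functions return 0, 1, 2 as you'd expect
```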

Not constantly searching and mucking with DOM objects on every event. Initially I was content if my jQuery searches were “optimized”. Now I am thinking about how to avoid DOM references by caching appropriate state in my own variables.
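Since a browser DOM isn’t available outside a page, this sketch uses a hypothetical `lookup` function as a stand-in for a jQuery-style query such as `$('#count')`; the point is that the query happens once, at setup, rather than inside every event handler:

```javascript
// Query once at construction time and cache the result; the event
// handler then touches only cached state, never searching the DOM.
function makeCounter(lookup) {
  const display = lookup('#count');   // looked up once and cached
  let count = 0;
  return {
    onClick() {                       // no per-event DOM search
      count += 1;
      display.text = String(count);
    }
  };
}

// A fake element stands in for a real DOM node in this sketch:
const fakeElement = { text: '0' };
const clickCounter = makeCounter(() => fakeElement);
clickCounter.onClick();
clickCounter.onClick();
// fakeElement.text is now "2"
```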

Not blindly nesting anonymous callbacks, but defining functions and then using them.
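A sketch of that last practice, with hypothetical data-loading helpers (synchronous here just to keep the example short): each step becomes a named function that can be read, tested, and reused, instead of a pyramid of nested anonymous callbacks.

```javascript
// Invented stand-ins for real data-loading calls:
function loadUser(id, callback) { callback({ id: id, name: 'Ada' }); }
function loadOrders(user, callback) { callback([{ total: 5 }, { total: 7 }]); }

// Named functions make each step visible and individually reusable.
function sumOrders(orders) {
  return orders.reduce(function (sum, order) { return sum + order.total; }, 0);
}
function handleUser(user, done) {
  loadOrders(user, function (orders) { done(sumOrders(orders)); });
}

let result;
function showTotal(total) { result = total; }
loadUser(1, function (user) { handleUser(user, showTotal); });
// result is now 12
```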

These techniques contribute to better-defined untangled code centers. But I want to caution you: don’t blindly follow coding best practices without knowing about and buying into the rationale behind them. Arguably your code might be better if you do. But you won’t learn how to exercise judgment until you know more about why you are doing what you are doing. Understanding how to write code that has strong, lively centers takes time, feedback, and the right kind of experience.

When I first started programming in JavaScript I could not have appreciated these techniques. I needed to gain more experience before I could see their value. With time writing more code, looking at more good and bad code, discussions with others, and reflection, I have gotten better at JavaScript. I’m not sure what steps I could leave out to shorten this process. It certainly is easier to learn how to write lively code if you work with others who care deeply about the code they write and who willingly point out and explain the good bits to you when you are ready to absorb them. If you are fortunate to have wise souls around you, take advantage of their wisdom…then put in the time you need to become better.

What Makes for Lively Centers?

In this blog I dig a bit deeper into what makes good, lively centers.

Let me introduce another property of lively centers: alternating repetition. Consider this photo I took of blooming plum trees in Kyoto.

The photo doesn’t do the scene justice. The flowering trees went on and on and on.

And on.

Forking off the thick trunks were ever thinner arching mossy green and dark branches covered with blossoms. Those blossoms seemed to float between branches forming a sea of pink. I could get lost in those trees.

Looking out over that landscape I felt peaceful, relaxed, and calm.

Earlier, walking the streets of Kyoto I snapped this photo (imagining the sign was inviting me to get with it, “chill out”, be calm and come inside to purchase whatever they were offering).

That sign made me laugh. It highlights the contrast between strong centers that reach out and grab me and the rather flat affect of everyday, more mundane centers. The sign made me curious, but not enough to go inside the shop.

Good alternating repetition doesn’t mean the same thing over and over again. It involves smaller sub-patterns of repeating structures. In preparing for our workshop on Alexander’s properties, I looked for an example of alternating repetition in my personal life. I jog. So it was easy to find alternating repetition in my running routine (Joe Yoder found it in his dancing).

I jog several times a week. I don’t do the same routine everyday. Once a week, typically on Thursdays, I do tempo training with my running coach. She makes me run harder than I’d like to normally do for either a specific distance or time, then has me run easily for a bit to recover. I repeat this hard run-easy jog recovery cycle 3 or 4 times a session. Other days I do my normal easy 3+ mile runs (outside when the weather permits) through town. On the weekends I do a longer run of an hour or more at a comfortable pace. I repeat this cycle each week, with variations due to the running season (winter is slower/less running than summertime) or whether I am recovering from an injury or getting back to running after traveling or recovering from a race.

Another property of strong centers is local symmetry. The photo of these shrines (again, taken in Kyoto) illustrates this.

The shapes of the rooflines, windows, and pedestals are similar, not identical. Slight variations make them more interesting.

Here is the welcoming Port wine and strawberry arrangement that my husband and I found in our Douro valley hotel room in Portugal.

Symmetrical. But the berries are closer together on the right-hand plate. The napkin folds differ. Perfect symmetry is less pleasing (at least to me) than near symmetry. Alexander claims that a hand-hewn quality strengthens centers (he calls this property “roughness”).

When I discover a strong system of centers I get an emotional kick. And there it is. You discover Alexander’s properties when you engage with the things in your life and form personal connections (rather than letting the scene just float by). Finding Alexander’s properties involves a bit of luck, developing a discriminating eye, and being on the alert for positive connections between what you are experiencing and/or making.

Making strong, lively centers is another matter altogether. Yet how hard can it be? Well…that is a topic for another blog post or two.

Discovering Lively Centers

Two weeks ago, Joe Yoder and I conducted a workshop on Discovering Alexander’s Properties in Your Life at AsianPLoP, the patterns conference held in Tokyo.

I’m still reeling from the many feelings that were stirred up as I prepared for this workshop. Inspired by the beauty we found in Kyoto, I included several photographs I took of that very beautiful place. Each property was illustrated with an image that resonated strongly with us (whether taken in Kyoto or not, each photo had a strong personal connection).

Before I tell more about the workshop, I want to give a gentle introduction to Christopher Alexander’s ideas on properties of things that have life. Fundamental to Alexander’s ideas is the notion of “centers” arranged in space. According to Alexander, things that have life exhibit one or more of fifteen essential properties, which include, among other things, strong centers and boundaries.

Alexander’s notion of a “center” is simple to grasp—it is a coherent entity that exists in space. Individual centers are important (and they exist at different levels of scale), but more profound is how centers are arranged in space to form a more integral whole. Alexander writes,

“The system of these centers plays a vital role in determining what happens in the world. The system as a whole—that is to say, its pattern— is the thing which we generally think of when we speak about something as a whole. Although the system of centers is fluid, and changes from time to time as the configuration and arrangement and conditions all change. Still, at any given moment, these centers form a definite pattern. This pattern of all the centers appearing in a given part of space—constitutes the wholeness of that part of space. It is this structure, which is responsible for its degree of life.”

Here’s a photo I took in Hawaii for a talk I gave several years ago on the Nature of Order at another patterns conference. It illustrates the notion of a strong center:

I like this photo because it shows how the centers of individual orchid flowers are accentuated and strengthened by the brown spots and the five petals that form a surrounding star shape. Not only is there a “center” to each flower (the stamen surrounding the pistil); there are several “centers” that surround that innermost center.

And here is the photo we showed at our AsianPLoP workshop to illustrate strong centers found on the roofline of an Imperial Palace building in Kyoto:

I leave it to you to find all the centers in this photo. The center cap on the top of the roofline accentuates the gold flower underneath. Underneath that is another circular center. Below that a symmetrical scroll. And there are centers (gold flowers) arranged along the roofline. Centers, when arranged in a pleasing fashion, complement and strengthen each other.

Centers are strengthened by boundaries that surround, enclose, separate, and connect them. Here’s a photo I took in Yellowstone Park of a crusty boundary at the edge of a bubbling hot spring:

The boundary between the hot spring and the surrounding land is fluid and ever changing (evidenced by the salty stains left by evaporation at the water’s edge).

The wood slats wrapped around this tree at the Imperial Palace in Kyoto protect it from the wooden brace and form a boundary between the tree and the support:

After explaining and illustrating Alexander’s fifteen properties, we asked attendees to form groups to brainstorm and discuss Alexandrian properties they found in their own lives. One group focused on Alexandrian properties in the Tokyo metro and railway system; another on the properties of bento boxes; and a third on properties in education and learning. I was surprised by the diversity (and by how profound some of the examples were, even though at first blush they seemed straightforward and simple).
But that is the topic of my next blog post.

To close this post I want to share two photos that whimsically illustrate “life” my camera eye unexpectedly caught in Kyoto. This first photo is obvious:

The second takes a little bit of searching to find the “owl-like” creature:

Is Kyoto a magical place? I think so. It was amazing to discover human-like or animal-like images in photos of trees. I had no idea those shapes were there until I looked at my photographs. My eye must have been unconsciously drawn to them (truly, I didn’t see them until I looked at the photos). Even more startling to me is the liveliness of inanimate things—whether a hand-crafted software module or a carefully placed garden pathway—which is more subtle and no less profound. When we find strong centers surrounded by other strong centers in designed things, there is a pleasing sense of discovery and wonder.

Can’t I Just Be Reasonable?

“Don’t it always seem to go
That you don’t know what you’ve got
Till it’s gone” –Joni Mitchell, Big Yellow Taxi

My husband “loaned” my unused iPad to my father-in-law. I hadn’t used it in a year, and he thought it might expand his dad’s horizons and bring the Internet to him for the first time. But my father-in-law didn’t use the iPad either. Upon finding it stashed in a drawer with its power drained (after a couple of months), I demanded it back.

I proceeded to load it with some New Yorkers to read on a trip.

It was great…for a very short while.

But this past week I read a physical New Yorker instead of its electronic cousin. There was something extremely satisfying about shuffling and folding its pages. Sure, on my iPad I can listen to poets read their poems and enjoy the extra photos. But the video clips? Not that interesting.

My iPad remains underutilized. And I am feeling a bit guilty about asking for it back. Why did I want it back? Why did I react so strongly to “losing” my unused iPad?

Daniel Kahneman, in Thinking, Fast and Slow, gives insights into how we react to perceived gains and losses. We respond to a loss far more strongly than we do to an equivalent gain. Take something away and we’ll pine for it beyond its actual value. And we are driven more strongly to avoid losses than to achieve gains. No, that isn’t rational. But it’s how we are wired.

Sigh. So chalk up my reaction to my loaned iPad to petty possessiveness and an ingrained reaction to perceived loss.

Even more distressing, Kahneman points out that we take on extra risks when faced with a loss. We continue to press on in spite of mounting losses. Losing gamblers keep gambling. Homeowners are reluctant to sell a house that is underwater in value and move on. And additional time and resources get allocated to late, troubled software projects with little or no hope for success. It’s easier than deciding to pull the plug.

Not surprisingly, our aversion to loss increases as the stakes increase. But not dramatically. Only when things get really, really bad do we finally pull back and stop taking avoidable risks. And to top it off, loaded, emotional words heighten our perception of risk, conjuring up scary imaginary risks that we then don’t react to rationally.

So knowing these things, how can I become a better decision-maker? Right now, I don’t see any easy fixes. Awareness is a first positive step. When I feel a pang of loss I’m going to try to dig deeper to see whether I need to shift my perspective (which might be hard to do in the heat of the moment, but nonetheless…). Especially when I suddenly become aware of a loss. Knowing about loaded, emotional words, I’m going to be sensitive to any emotional “negative talk” that could distort my perceptions of actual risks.

Still, I’m searching for more concrete actions to take that can help me react more rationally to perceived losses. Is this a hopeless cause? I’m interested in your thoughts.

Distinguishing between testing and checking

At Agile 2013 Matt Heusser presented a history of how agile testing ideas have evolved in “Twelve Years of Agile Testing: And What Do We Do Now?” The most intellectually challenging idea I came away from Matt’s talk was the notion that testing and checking are different. I’m still trying to wrap my head around this distinction.

Disclosure: I’m not a testing insider. However, along with effective design and architecture practices, pragmatic testing is a passion of mine. I have presented talks at Agile with my colleague Joe Yoder on pragmatic test driven design and quality scenarios.

Like most, I suspect, I have a hard time teasing out a meaningful distinction between checking and testing. When I looked up definitions for testing and checking there was significant overlap. Consider these two definitions:

Testing: the means by which the presence, quality, or genuineness of anything is determined.

Testing: a particular process or method for trying or assessing.

And these for checking:

Checking: to investigate or verify as to correctness.

Checking: to make an inquiry into, search through, etc.

Using the first definition for testing, I can say, “By testing I determine what my software does.” For example, a test can determine the amount of interest calculated for a late payment or the number of transactions that are processed in an hour. Using the second meaning of testing, I can say that, “I perform unit testing by following the test first cycle of classic TDD” or that, “I write my test code to verify my class’ behavior after I’ve completed a first cut implementation that compiles.” Both are particular testing processes or methods.

I can say, “I check that my software behaves correctly according to some standard or specification” (first meaning). I can also perform a check (using the second definition) by writing code that measures how many transactions can be performed within a time period.

I can check my software by performing manual procedures and observing results.

I can check my software by writing test code and creating an automated test suite.
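For instance, an automated check can be as simple as a few assertions compared against a known standard. This is only a sketch: the late_fee function and its 2% daily rate are hypothetical, invented for illustration.

```python
# A minimal sketch of an automated "check": code that verifies behavior
# against an expected standard. The late_fee function and its 2% daily
# rate are made up for illustration.

def late_fee(balance, days_late, daily_rate=0.02):
    """Compute simple interest owed on a late payment."""
    if days_late <= 0:
        return 0.0
    return round(balance * daily_rate * days_late, 2)

def check_late_fee():
    # Each assertion compares a specific observation against the
    # standard I expect -- verification of correctness.
    assert late_fee(100.0, 0) == 0.0    # on time: no fee
    assert late_fee(100.0, 1) == 2.0    # one day late at 2%
    assert late_fee(100.0, 10) == 20.0  # ten days late at 2%

check_late_fee()
```

The first time I run these assertions I may learn something; rerun in a suite, they simply confirm that the behavior hasn’t changed.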

I might want to assess how my software works without necessarily verifying its correctness. When tests (or evaluations) are compared against a standard of expected behavior they also are checks. Testing is in some sense a larger concept or category that encompasses checking.

Confused by all this word play? I hope not.

Humans (native speakers of any language) explore the dimensions and extent of categories by observing and learning from concrete examples. One thing that distinguishes a native speaker from a non-native speaker is that she knows the difference between similar categories and uses the appropriate concept in context. To non-native speakers, the edges and boundaries of categories seem arbitrary and unfathomable (meanings aren’t found by merely reading dictionary definitions).

I’ve been reading about categories and their nuances in Douglas Hofstadter and Emmanuel Sander’s Surfaces and Essences. (Just yesterday I read about the subtle difference between the phrases “Letting the cat out of the bag” and “Spilling the beans.”)

So what’s the big deal about making a distinction between testing and checking?

Matt pointed us to Michael Bolton’s blog entry, Testing vs. Checking. Along with James Bach, Michael has nudged the testing world to distinguish between automated “checks” that verify expected behaviors versus “testing” activities that require human guided investigation and intellect and aren’t automatable.

In James Bach’s blog post, Testing and Checking Refined, they make these distinctions:

“Testing is the process of evaluating a product by learning about it through experimentation, which includes to some degree: questioning, study, modeling, observation and inference.
(A test is an instance of testing.)

Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.
(A check is an instance of checking.)”

My first reaction was to throw up my hands and shout “Enough!” It was the reaction of a non-native speaker trying to understand a foreign idiom. But then I calmed down, let go of my urge to know James and Michael’s meanings precisely, accepted some ambiguity, and looked for deeper insight.

When Michael explained,

“Checking is something that we do with the motivation of confirming existing beliefs” while, “Testing is something that we do with the motivation of finding new information.”

it suddenly became clearer. We might be doing what appears to be the same activity (writing code to probe our software), but depending on our intentions, we could be either checking or testing.

Why is this important?

The first time I write test code and execute it I learn something new (I also might confirm my expectations). When I repeatedly run that test code as part of a test suite, I am checking that my software continues to work as expected. I’m not really learning anything new. Still, it can be valuable to keep performing those checks. Especially when the code base is rapidly changing.

But I only need to execute checks repeatedly on code that has the potential to break. If my code is stable (and unchanging), perhaps I should question the value of (and the false confidence gained by) repeatedly executing the same tired old automated tests. Maybe I should write new tests to probe even more corners of my software.

And if tests frequently break (even though the software still works), perhaps I need to readjust my checks. I’m betting I’ll find test code that verifies details that should be hidden or aren’t really essential to my software’s behavior. Writing good checks that don’t break so easily makes it easier to change my software design. And that enables me to evolve my software with greater ease.
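To make that concrete, here is a sketch of a brittle check versus a sturdier one. The ShoppingCart class is hypothetical, invented just for illustration.

```python
# Hypothetical example: two ways to check a shopping cart's total.
# The ShoppingCart class is invented for illustration.

class ShoppingCart:
    def __init__(self):
        self._items = []  # internal detail: a list of (name, price) pairs

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

cart = ShoppingCart()
cart.add("book", 12.50)
cart.add("pen", 1.25)

# Brittle check: it peeks at the internal list, so it breaks if the cart
# switches to, say, a dict keyed by item name -- even though totals
# would still come out right.
assert cart._items == [("book", 12.50), ("pen", 1.25)]

# Sturdier check: it verifies only the observable behavior the rest of
# the software relies on, leaving the design underneath free to change.
assert cart.total() == 13.75
```

The first assertion couples the check to a detail that should stay hidden; the second verifies essential behavior, which is the kind of check that survives design changes.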

When test code becomes stale, it is precisely because it isn’t buying any new information. It might even be holding me back.

I have a long way to go to become a fluent native testing speaker. And I wish that James and Michael could have chosen different phrases to describe these two categories of “testing” (perhaps exploration and verification?).

But they didn’t.
Fair enough.

Architecture at Agile 2013

What a busy, intense week Agile 2013 was! It was a great opportunity to connect with old friends and meet folks who share common interests and energy. I also had a lot of fun spreading the word/exchanging ideas about two things I’m passionate about: software architecture and quality.

At the conference I presented “Why we need architecture (and architects) on Large-Scale Agile Projects”. I’ve presented this talk a few times. This time I added “Large Scale” to the title and submitted it to the enterprise agile track. I wanted to expose the audience to several ideas: that there are both small team/project architecture practices and larger project/program architecture practices that can work together and complement each other; what it means to be an architecture steward; some practices (like Landing Zones, Architecture Spikes, and Bounded Experiments/prototyping); and options for making architecture-related tasks visible.

I spoke with several enthusiastic architects after my talk and throughout the week. They shared how they were developing their architecture. They also asked whether I thought what they were doing made sense. In general, it did. But I want to be clear: One size doesn’t fit all. Sometimes, depending on risks and the business you are in, you need to invest effort in experimenting/noodling/prototyping before committing to certain architectural decisions. Sometimes, it is absolutely a waste of time. It depends on what you perceive as risky/unknown and how you want to deal with it. The key to being successful is to do what works for you and your organization.

Nonetheless, in my talk when I spoke about some decisions that are too important to wait until the last moment, someone interrupted to say that I had gotten it wrong: “It isn’t the last possible moment, but the last responsible moment”. I know that. Yet I’ve seen and heard too many stories about irresponsible technical decision-making at the last possible moment instead of the last responsible moment. People confuse the two. And they use agile epithets to justify their bad behaviors. Surprise, surprise. The “last responsible moment” can be misinterpreted by some to mean, “I don’t want to decide yet (which may be irresponsible)”. People rarely make good decisions when they are panicked, overworked, stressed out, exhausted or time-crunched.

Check out my blog posts on the Last Responsible Moment and decision-making under stress if you want to join in on that conversation.

But I digress. Back to architecture.

I was happy to see two architecture talks on the Development and Software Craftsmanship track. I attended Simon Brown’s “Simple Sketches for Diagramming your Software Architecture” and also had the pleasure of hanging out with Simon to chat about architecture and sketching. Simon presented information on how to draw views of a system’s structure that are relevant to developers, not too fussy or formal, yet convey vital information. This isn’t hardcore technical stuff, but it is surprising how many rough sketches are confusing and not at all helpful. Simon regularly teaches developers to draw practical, informative architecture sketches. He collects sample sketches from students before and after they receive his sketching advice. Their improvement is remarkable. If you want to learn more, go to Simon’s website.

I shared with Simon the sketching exercises in my Agile Architecture and Developing and Communicating Software Architecture workshops…and pointed him to two books I’ve drawn on for inspiration: Nancy Duarte’s slide:ology and Matthew Frederick’s 101 Things I Learned in Architecture School. It’s all about becoming better communicators.

Scott Ambler talked about Continuous Architecture & Emergent Design. I was happy to see that he, too, advocated architecture spikes and envisioning (and proving the architecture with evidence/code). In his abstract he states: “Disciplined agile teams will perform architecture envisioning at the beginning of a project to ensure that they get going in the right direction. They will prove the architecture with working code early in construction so that they know their strategy is viable, evolving appropriately based on their learnings. They will explore new or complex technologies with small architecture spikes. They will explore the details of the architecture on a just-in-time (JIT) basis throughout construction, doing JIT modeling as they go and ideally taking a test-driven-development (TDD) approach at the design level.”

There are way too many concurrent sessions and too few hours in the day to get to all the talks I’d have liked to attend. I just wished I’d been able to attend Rachel Laycock and Tom Sulston’s talk on the DevOps track, “Architecture and Collaboration: Cornerstones of Continuous Delivery”…but instead I enjoyed Claire Moss’ “Big Visible Testing” experience report. Choices. Decisions.

If you’d like to continue the conversation about architecture on agile projects, I’d love to hear from you.

Architecture Patterns for the Rest of Us

Recently I have been reading and enjoying Robert Hanmer’s new book, Pattern-Oriented Software Architecture for Dummies. Disclaimer: In the past I have shied away from the For Dummies series, except when I wanted to learn about something clearly out of the realm of my expertise. Even then, just the notion of reading a book for dummies has stopped me from several For Dummies purchases. Good grief! I didn’t know what I have been missing.

This book is not theoretical or dry. The prose is a pleasure to read. And it goes into way more depth than you might expect. Rather than simplifying the Pattern-Oriented Software Architecture book that is also on my bookshelf, I’d say it complements it with clear explanations of the benefits and liabilities of each pattern, step-by-step guides to implementing each architectural pattern, and more. As an extra bonus, the first two parts of the book contain some of the clearest writing I’ve seen about what patterns are and how they can be used or discovered.

I wish Bob Hanmer would write more patterns books for the For Dummies series. He knows his subject. He has good, solid examples. And he doesn’t insult your intelligence (in contrast, I find that the Head First books are definitely not my cup of tea…I don’t want to play games or have patterns trivialized). Bob has an easy, engaging style of writing. The graphics and illustrations are compelling. In fact, I reproduced a couple of the graphics about finding patterns, with credit to Bob of course, in a lecture I gave last week to my enterprise application design students.

This is a good book. If you’ve wondered about software architecture patterns and styles, read this book. Buy it. And tell your software buddies about it, too.