What we say versus what we do

I’ve been hunting design heuristics for a couple of years. I’ve had conversations with designers in order to draw out their “go to” heuristics. I’ve joined design and programming sessions with experienced designers and captured on-the-fly what we were doing. My goal is to learn ways to effectively find heuristics in the wild, distill them, and then share them broadly.

But lately, I’ve been thinking about how to deal with this puzzle: What people say they do isn’t what they really do.

Let me give you an example. I joined the Cucumber folks last summer for several remote mobbing sessions. One heuristic they shared with me was this:

Heuristic: the person who has the most to learn (or knows the least about how to solve the problem) should take on the role of driver.

In “classic” mob programming as initially described, the person who is the driver and has his or her hands on the keyboard follows the guidance of navigators—other mobbers who ostensibly guide the driver on what to do in order to make progress.

“In this “Driver/Navigator” pattern, the Navigator is doing the thinking about the direction we want to go, and then verbally describes and discusses the next steps of what the code must do. The Driver is translating the spoken English into code. In other words, all code written goes from the brain and mouth of the Navigator through the ears and hands of the Driver into the computer.”

What I observed the Cucumber mob doing was somewhat different. Sometimes the driver had an initial design idea and was keen to try it out. In this case, they often actively navigated and drove at the same time. Occasionally others would comment and offer advice. But mostly they just watched the design and implementation unfold. Sometimes that eager driver asked the others, “Should we try this now?” But instead of waiting an uncomfortable length of time for them to chime in, the driver often continued on without any discussion. And I don’t think the driver was asking a rhetorical question. They wanted feedback if anyone had any.

At other times the driver would stop to collect their thoughts and force a discussion. In this case the driver became uncomfortable when they didn’t get enough feedback. And sometimes they took themselves out of the driver’s role, asking someone else to fill in. In short, while I observed that the driver was often in control of the wheel (and forward progress), at the same time they didn’t overly dominate. Drivers rotated. Everyone got their turn. But how these switches happened was very dynamic.

In all fairness, the mob programming website did touch on drivers and their participation in discussions:

“The main work is Navigators “thinking, describing, discussing, and steering” what we are designing/developing. The coding done by the Driver is simply the mechanics of getting actual code into the computer. The Driver is also often involved in the discussions, but her main job is to translate the ideas into code.”

While the main job of the driver may be “mechanics,” the small, fast-moving Cucumber team didn’t insist that getting the code into the computer be the driver’s main function. Now mind you, I suspect being remote affected their style of communication. They also knew each other well and knew each other’s common design approaches and preferences.

So why did the Cucumber mob behave this way? Did they believe one way but consciously act in another way? Did they intentionally lie about their heuristics? Or were they deceiving themselves? Are people wired to explain what they do through some kind of distortion field? How often do people believe one thing (and hold it up as an ideal) but then choose alternative heuristics? If so, is this OK?

I’m not sure the team was aware that their ways of driving/navigating deviated from the conventional driver/navigator roles until I shared my observations with them. I suspect that when they first started mobbing they were more rigorous about following the “rules” for these roles. Over time they found their own ways of working. And so the heuristics they collectively use to decide what to do, what design approach to try next, and how they interact with each other are much more fluid and nuanced than the simple descriptions of drivers and navigators on the mob programming website. They don’t exactly go “by the book.” And I suspect their heuristics for how they work together are still evolving.

So how should I as a heuristics hunter reconcile my simple goal of distilling essential heuristics with the messy realities I find on the ground?

Should I plunge into a concerted effort to sort out and formulate more nuanced heuristics? The short answer is, yes. While I want to find and record both general and more particular heuristics, I’m not inclined to want to sort them out into tidy, neat categories. After all, as Billy Vaughn Koen says, there is more than one way to solve any design problem and more than one heuristic that can work. By recording these nuances, I hope to get richer insights into the different conditions and cases and situations that lead to choosing them.

This still leaves me with one nagging question: How can I reconcile what people say they do and believe with what they actually do? My (current) approach is that as I distill heuristics I also describe the context where I find them. Should it bother me that designers don’t do as they say they do all the time? Probably not. After all, we’re wonderfully creative problem solvers. And there are always options.

What’s going on here?

Question: What is more exasperating than reading design documentation that doesn’t synch up with the code?
One answer: Writing such useless design documentation.
Another answer: Writing documents that don’t stand a chance of being aligned with the code.
My answer: Reading design documentation that is at the wrong level of abstraction or detail to help me do the task at hand.

(Comic courtesy of xkcd: https://xkcd.com/license.html)


A few years ago I worked with someone who, when asked for a high-level overview of some complex bits of code we were going to refactor, produced a detailed Visio diagram that went on for 40+ pages! In that document were statements directly lifted from his code. The intent was to represent the control logic and branching structure of some very complicated algorithms (he copied conditional statements into decision diamonds and copied code that performed steps of the algorithm directly into action blocks). To him, the exercise of creating this documentation had really clarified his design. I was amazed he went to such an effort. To me, the control structure was evident from simply reading the code. Moreover, what I was looking for, and obviously didn’t communicate very well, was some discussion at a higher level, explaining what algorithm variations there were and how code and data were structured to handle the myriad typical and special cases.

I wanted to be oriented to the code and data and parts that were configurable and parameterized.

Later, I learned from one of his colleagues that “Bob” always explained things in great detail and wasn’t very good at “boiling things down.” If you wanted to ask “Bob” a question about the code, you’d better plan on spending at least half an hour with him.

Another time a colleague created a sequence diagram explaining how the data used by a nightly cron job was created. What was missing was any description of what that cron job did. Maybe it was so obvious to him that he didn’t think it was important. But I lacked the context to get the full picture. So I had to dig into the cron job script to understand what was really going on.

When I want to get oriented to a large body of code, I like to see a depiction of how it is structured and organized and the major responsibilities of each significant part. Not the package or file structure (that will be useful, eventually), but how it is organized into significant modules, functions, classes, and/or call structures. I want to know what the major parts of the system are, why they are there, and the relationships between them. And, at a high level, how they interact. And then I want to understand key aspects of the system dynamics. Just by staring at code I’ll never fathom all the ins and outs of its execution model (e.g. what threads or processes there are, what important information is consumed, produced, or passed around).

Simon Brown gave a talk at XP2016 on The Art of Visualizing Software Architecture. Simon travels the world giving advice, training, and conference talks about how to do this. He has also built a tool called Structurizr which:

“blend[s] together the best parts of the various approaches and allow software developers to easily create software architecture diagrams that remain up to date when the code changes. Structurizr allows you to take a hybrid “extract and supplement” approach. It provides you with a way to create software architecture models using code. This means you can extract information from your code using static analysis and reflection, supplementing the model where information isn’t readily available. The resulting diagrams can be visualized and shared via the web.”

I had the opportunity to spend an hour with Simon getting a personalized demo of Structurizr and discussing the ins and outs of how his tool works, how it has been used, and some aspirations for the future.

I was initially surprised by how you specify what your “model elements and relationships” are: you write code specifications for this. To me, it seemed that this could have been more easily accomplished by writing some parseable external description or using a diagramming tool that let me link to the relevant code.
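To give a flavor of what that code looks like, here is a minimal sketch in the style of the Structurizr for Java client. The “Payments” workspace, its elements, and all the descriptions are invented for illustration, and the API details are based on the library’s published examples rather than on Simon’s demo, so treat them as approximate:

```java
import com.structurizr.Workspace;
import com.structurizr.model.Model;
import com.structurizr.model.Person;
import com.structurizr.model.SoftwareSystem;
import com.structurizr.view.SystemContextView;
import com.structurizr.view.ViewSet;

public class PaymentsArchitecture {
    public static void main(String[] args) {
        // A workspace holds the model and its views (hypothetical names throughout).
        Workspace workspace = new Workspace("Payments", "System context model for the payments system");
        Model model = workspace.getModel();

        // Declare the model elements and the relationships between them.
        Person customer = model.addPerson("Customer", "Submits payments.");
        SoftwareSystem payments = model.addSoftwareSystem("Payment System", "Processes and settles payments.");
        customer.uses(payments, "Submits payments using");

        // Define a C4 system context view over those elements.
        ViewSet views = workspace.getViews();
        SystemContextView contextView = views.createSystemContextView(
                payments, "payments-context", "The system context for payments.");
        contextView.addAllElements();

        // From here the workspace can be serialized to JSON and uploaded
        // to the Structurizr service for rendering.
    }
}
```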

However, I know why Simon made this choice: he deeply believes that code should be the reference point for all structural architecture information. So he thinks it a natural extension that developers write a bit more code to specify the important elements of the models they want to see and how to depict them. To Simon, it doesn’t make sense to just “add an element” to a diagram unless there is some attachment to actual code. For example, you may define a Hibernate specification to represent data access to a repository. Then that can be specified as a data source. With external descriptions that are not based on code, you have a greater chance of them becoming disconnected from what the code actually does. So be it. This is where my values diverge a bit from Simon’s. I favor richer or higher-level documentation that isn’t code-based if it tells me something I need to know to do a task (or something I shouldn’t do) or gives me insights I wouldn’t find otherwise. I also like the freedom to embellish my diagrams with extra annotations, colors, highlights, and so on, so I can focus viewers’ attention. And consequently, I like to create a key that describes the elements of any diagram so casual readers can understand my notation, too.

After processing these model descriptions, Structurizr spits out JSON descriptions that represent aspects of a C4 model: Context, Containers, Components, and Class diagrams. These are then submitted to a service, which creates various model visualizations that can directly link to parts of your source code. There’s also a rudimentary ability to add bits of textual information describing your architecture and architecture requirements. I’m glad that, recognizing the limits of what code can tell about architecture, the tool provides an easy way for developers to tell more about the architecture in text.

In our conversation, Simon emphasized that one important benefit of linking architecture documentation to code is that it never gets out of synch. The code for your architecture documentation can be versioned right along with your production code.

Fair enough. Useful even. But how helpful is the documentation that is generated in understanding what’s going on in your system?

I think this is one way (certainly not the only way, however) to provide an orientation to the overall structure of your system and the major relationships between its parts. But there are definite limits to what you can describe with models generated from static analysis of code. Such models don’t tell me the responsibilities of a component. I have to glean that from supplementary text (which I didn’t see an easy way to attach to the model descriptions) or abstract it by reading code. And they certainly won’t explain any dynamic system behavior or system qualities.

While I find Structurizr’s direct connection to the code intriguing, it is also its biggest limitation. Structurizr is intended for developers who want to create architecture documentation that stays in synch with their code and that can be shared via websites or wikis or embedded in other documents.

Structurizr has succeeded at this fairly modest goal: assisting developers who don’t ordinarily generate any useful architecture diagrams whatsoever to do something.

But this isn’t enough. As designers or architects, only you can answer my larger questions about why your code is structured the way it is, and which important aspects of the system and its runtime execution are intentional and crucial to understand and preserve. No diagram generated by a tool will ever replace your insights and wisdom. No amount of code comments can convey all that, either. I need to hear (and read) and see more of this kind of stuff from you.

Being Agile About Documenting and Communicating Architecture


Software architects and developers often need to defend, critique, define, or explain many different aspects of the design of a complex system. Yet agile teams favor direct communication over documentation. Do we still need to document our designs?

Of course we do.

We won’t always be around to directly communicate our design to all current and future stakeholders. Personally, I’ve never found working code (or tests) to be the best expression of a software design. Tests express expectations about observable system behavior (not about the design choices we made in implementing that behavior). And the code doesn’t capture what we were thinking at the time we wrote that code or the ideas we considered and discarded as we got that code fully functioning. Neither tests nor code capture all our constraints and working assumptions or our hopes and aspirations for that code.

So what kind of design documentation should we create, how much, for whom, and when? And what is “good” enough documentation?

Eoin Woods gave a talk at XP 2016 titled “Capturing Design (When You Really Have To)” that got me to revisit my own beliefs on the topic and to think about the state of current agile practices for documenting architecture.

One takeaway from Eoin’s talk is to consider the primary purpose of any design description: is it primarily to communicate right now, or to serve as a long-term record? If your primary goal is to communicate on the fly, then Eoin claims your documentation should be short-lived, tailored to your audience, throwaway, and informal. On the other hand, design descriptions intended as records are likely to be long-lived, preserve information, be maintainable and organized, and be more formal (or well-defined).

Since we are charged with delivering value through working software, it is often hard to justify anything perceived as “slowing down” to describe our systems as we build them. But if we’re building software that is expected to live long (and prosper), it makes sense to invest in documenting some aspects of that system—if for nothing more than to serve as breadcrumbs for those working on the system in the future, or for our future selves.

So what can we do to keep design descriptions useful, relevant, unambiguous, and up to date? Eoin argues that to be palatable to agile projects, design documentation should be minimal, useful, and significant. It should explain what is important about the design and why; what design decisions we made (and when); and what the major system pieces are, along with their responsibilities and key interactions. Because of my Responsibility-Driven Design values and roots, I like that he considers system elements and their responsibilities to be minimally useful descriptions of a system. But to me this is just a starting point for getting an initial sense of what a system is and does and does not do. There is certainly room for more description, and more detail, when warranted.

And that gets us back to pragmatics. A design description isn’t the first thing developers think of producing (not everyone is a visual thinker or a writer). I know I’m atypical because, early in my engineering career, I enjoyed spending three weeks writing a document on how my universal linker worked, how to extend it, and its limitations. I was nearly as happy producing that document as I was designing and implementing that linker. It pleased me that sustaining engineering found the document useful years after I had left for another job.

So for the rest of you who don’t find it natural to create documentation, here’s some advice from Eoin:

  • Do it sprint-by-sprint (or little by little). Don’t do it all at once.
  • Be aware of the tradeoffs between fidelity and maintainability. The more precise a description is, the harder it will be to keep up to date.
  • Know the precision needed by your document’s users. If they need details, they need details. The more details, the more effort required to keep them up to date.
  • Consider linking design descriptions to your code (more on that in another blog post).
  • Create a “commons” where design descriptions are accessible and shared.
  • Focus on the “gaps”—describe things that are poorly understood.
  • Always ask what’s good enough. Don’t settle for less when more is needed, or more when less is needed.

To this list I would add: if design descriptions are important to your company and your product or project, make it known. Explain why design documentation is important, respond to questions and challenges of that commitment, and then give people the support they need to create these kinds of descriptions. Let them perform experiments and build consensus around what is needed.

Be creative and incremental. One company I know made short video recordings of designers and architects giving short talks about why things worked the way they did. They were really short—five minutes or less. Another team created lightweight architecture documentation as they enhanced and made architectural improvements to the 300+ working applications they had to support. It was essential for them that there be more than just the code, as the knowledge about these systems was getting lost and decaying over time. Rather than throw up their hands and give up, they created just enough design documentation, using simple templates, and only as new initiatives were started.

Find a willing documenter. Sometimes a new person (who is new to the system and to the company) is a good person to pair with another old hand to create a high-level description of the system as part of “getting their feet wet.” But don’t just stick them with the documentation. Have them write code and tests, too. From the start.

On Thinking

Daniel Kahneman, in Thinking, Fast and Slow, introduces two “systems” of thinking: fast (system 1) and slow (system 2). We don’t actually have two different parts to our brains—but we do have two distinct types of thinking mechanisms, each with its respective strengths and drawbacks.

Fast thinking happens automatically with little or no effort or sense of voluntary control. Quick, what’s your first reaction to this photo?

My immediate visceral reaction: “That dog is ugly! It looks crazed, angry…. scary.”

The dog is Peanut, winner of the 2014 World’s Ugliest Dog contest. His wild hair, bulging eyes, and protruding teeth are at odds with his sweet personality. Holly Chandler of Greenville, North Carolina, Peanut’s owner, says he was seriously burned as a puppy, resulting in bald patches all over his body. Peanut is healthy now and nothing like my first impression.

In a nutshell, fast thinking is automatic (we do not have to work at it), impulsive (we can’t easily control it) and emotional (once I’ve told you a little about Peanut you probably feel sorry and sad and more positive towards Peanut, despite your initial reaction). Fast thinking draws on our amazing abilities to quickly form associations between related things. When it works well, we easily come to conclusions, draw out inferences, and confidently decide without breaking a sweat.

It’s sweet when your associative memory clicks along in high gear providing ready answers. But fast thinking can be subtly and easily biased. The variety of cognitive biases we have is astonishing.

In contrast, slow or system 2 type thinking requires energy. We get tired when we do a lot of slow thinking. We use slow thinking to reason logically, compute, or, surprisingly, when paying attention to someone speaking to us in a crowded noisy room. Both writing code and writing prose are slow thinking tasks for me. Reasoning about the cause of a bug is too. The fast cycles of strict TDD help some with breaking up the necessary slow thinking into more manageable chunks. For me, deciding what the test should be and how to implement my design ideas are almost always slow thinking tasks. Writing code may or may not be, depending on how familiar I am with the programming language, tool set and design context.

It is too simplistic to say that our brains are wired to be either emotional or logical. We don’t always get to choose which form of thinking we use. Even more interesting is how the two systems interact. Our system 2 constantly monitors our system 1 thoughts, and kicks in when it notices an inconsistency, saying, “Hmm…that doesn’t seem right,” perhaps leading to some conscious system 2 thinking and problem solving.

But as this two-star Amazon reviewer observes, Kahneman’s book doesn’t contain easy-to-digest pop psychology advice for how to be a better thinking being:

“Peace to all…The idea of system 1 and 2 for the brain was the twist. (System 1 is impulsive autopilot, system 2 reflective and analysis oriented)
Other than that, the book is filled with too many experiments that i felt didnt add practical value to me. I was hoping for practical solutions to help make those two systems work together in harmony…the book did NOT deliver that which was disappointing.”

Kahneman won’t fill you with sound-bite-sized advice on how to sharpen your thinking. So what can you do now that you are aware of how you think? Are there ways to counteract the bad things about these systems and amplify the good? I think so.

Since reading Kahneman, I’ve observed how my environment affects my thinking and how my thinking demands shift from task to task. Sometimes fast thinking trips me up: my assumptions make it difficult for me to see things as they really are. I keep working to counteract my own cognitive biases and those of the people I work with. I’ve become more aware when others don’t react as I expect or aren’t making “logical” decisions. I’ve become more understanding and less impatient.

I am experimenting with ways to support and enhance system 2 thinking in others and myself. I find that I need to carve out a space where I can work without distraction and have enough time to get deeply engaged in a challenging system 2 type task. I am easily distracted by noise or visual stimulation. I also find the Pomodoro technique doesn’t help me get more work done. I don’t have a problem with focus. My problem is knowing when to take a break. And I don’t easily get back on task after a 5 minute break.

Your experience no doubt varies. Our brains don’t work the same way. Let’s continue exploring how our daily work practices enhance our thinking and where we need to take extra care and pay more attention.

Digging In

As it transitions from summer to fall here in Oregon, clouds are hanging around in the morning. It’s harder for me to get up early because sunrise is closing in on 7 a.m. I get up at the crack of dawn; it’s dawn that is dragging its feet. With cooling weather and shorter days I tangibly feel the lively, easy time of the year slipping away.

I have to remind myself to put in extra effort and energy to maintain the status quo: keep up my jogging schedule, get to work early enough on the hard stuff so that I can get done what I want to before my energy is spent… I’ve got to dig in and fight the urge to curl up in a ball and hibernate for the winter.

Agile teams have to fight inertia, and dig in to overcome challenges and to sustain their progress, too.

Agile experience report authors, if not prompted, tend to forget those little details, too. When basking in their current success, it is easy to overlook the seemingly unimportant actions that, over time, contributed to that success.

That’s why I think any experience report worth its salt needs to be shepherded. A shepherd works with the author to draw out their insights. A shepherd’s job is to help the author find their voice, dig into their experiences, and find those insights. Being a shepherd is not like being an editor or a reviewer (I’ve played all these roles over the years). A shepherd, like a reporter, wants to get to the bottom of the experience—to find the things that worked and the things that didn’t. They want the reader to understand the struggle. Most shepherds I know find it deeply rewarding to draw out little tidbits from the author and help them weave a more nuanced narrative.

For example, I know that sustaining improvements requires concerted effort and constant attention.

So when Patkós Csaba proposed to write about how his team at Syneto achieved one bug a month, I was thrilled to be his shepherd. I wanted to get to the bottom line: how did they really get there? What did it take?

I was happy when he didn’t gloss over how his manager gradually and steadily introduced new practices into the team. In my role of shepherd I asked him to make the timeline more clear—hey, it really took months before they got to that rate of one bug a month. I also recognized that it was important for him to tell about how staying in that “zone” proved elusive. Indeed, he did have a story to share about how things slipped when they stopped using a physical board and went to an online tool. The team stopped monitoring so closely what was going on and what was blocked. (Was it really too difficult for team members to open up the online tool and check progress? No, but they weren’t in the habit. Out of sight proved to be out of mind.)

To fix this, they installed a large screen that visibly displayed their work in progress in the online tool. Once in sight again, progress (or lack thereof) was visible to the whole team. And things improved again.

As their team composition changed, they found that unstated good practices weren’t known by new team members. They had to be explicitly introduced. The team had to dig in and write down some things that previously were implicitly understood.

When I met Patkós at Agile 2015, he graciously shared with me that his team had recently regressed to a few bugs a month. Now he is one of the old timers on his team. He feels deeply responsible for his team’s continued progress. Indeed they continue on their journey.

So when you read about any experience, I urge you to appreciate the effort it takes to get to that “smooth, steady, productive, exciting state.” And more important, the effort it takes to sustain it.

It is a precious and special time when you are in the flow, when work goes smoothly, when everything is clicking and it feels easy. That’s what people like to hear. It is exciting and cool to have achieved something special.

But I am inspired even more when I read about the effort, energy, and persistence it takes to keep things working smoothly. I like it when I read experience reports where people reflect, readjust what they’re doing, perform experiments (sometimes unsuccessful ones), and work to get back on track. It’s all part of digging in and getting the job done.

Early Performance Testing at Agile 2015

I enjoyed Eric Proegler’s session on Performance Testing in Agile Contexts at Agile 2015. To get my technical fix, I split time between the development & software craftsmanship track and the testing & quality track. I wish these weren’t two separate tracks. I’d like to see more testers attending dev sessions and more devs at testing & quality sessions. Quality is a whole team effort; it helps if we can share our perspectives and expertise about what it takes to design, implement, test, and deploy quality software.

Eric telling us about Early Performance Testing at Agile 2015


Last year Eric wrote a paper on early performance testing, which he presented at the 2014 Pacific Northwest Software Quality Conference. It covers most if not all of the points in his Agile 2015 talk in more depth. In it he challenges us to do performance testing earlier, when software is in a state of flux and there’s an opportunity to notice performance glitches and address them while we still remember what’s recently changed in our software and its environment:

“Most people still see performance testing as a single experiment, run against a completely assembled, code-frozen, production-resourced system, with the “accuracy” of simulation and environment considered critical to the value of the data the test provides. But what can we do to provide actionable and timely information about performance and reliability when the software is not complete, when the system is not yet assembled, or when the software will be deployed in more than one environment?”

Early performance testing helps detect critical performance issues before they become show-stoppers. Simple, to-the-point performance tests, run frequently, exercise common pathways through the software. Eric calls these “calibration tests”:

“The concept I’d like to introduce here is Calibration – between builds, between environments, and between changed configurations. It is more important to track improvement and degradation as the project proceeds, and less important to attempt a specific projection of production experience.”

A calibration test should be simple to write and inexpensive to run (it doesn’t require near-production realism and fidelity). Running calibration tests regularly gives you information about how recent software changes have affected performance. Unless you are regularly measuring performance, you won’t notice when it degrades. What makes a good calibration test? Something that exercises common pathways. Two common calibration tests for a web-facing transactional system might measure the time it takes for session login or the time to authorize the most common user business transaction.
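To make the idea concrete, here is what a minimal login calibration test might look like. This is my own sketch, not Eric’s code; the staging URL, credentials, sample count, and the 500 ms budget are all invented placeholders that you would calibrate against your own baseline measurements:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class LoginCalibrationTest {
    // Hypothetical endpoint and budget: calibrate against your own baseline numbers.
    private static final URI LOGIN_URI = URI.create("https://staging.example.com/api/login");
    private static final Duration BUDGET = Duration.ofMillis(500);
    private static final int SAMPLES = 10;

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(LOGIN_URI)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"user\":\"calibration\",\"password\":\"test\"}"))
                .build();

        // Time several login round trips and average them to smooth out noise.
        long totalNanos = 0;
        for (int i = 0; i < SAMPLES; i++) {
            long start = System.nanoTime();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            totalNanos += System.nanoTime() - start;
            if (response.statusCode() != 200) {
                throw new AssertionError("Login failed with status " + response.statusCode());
            }
        }

        Duration average = Duration.ofNanos(totalNanos / SAMPLES);
        System.out.printf("Average login time: %d ms (budget: %d ms)%n",
                average.toMillis(), BUDGET.toMillis());

        // Fail loudly when this build degrades relative to the agreed budget.
        if (average.compareTo(BUDGET) > 0) {
            throw new AssertionError("Login calibration exceeded budget: " + average.toMillis() + " ms");
        }
    }
}
```

Run as part of a regular build, a check like this won’t project production behavior, but it will flag when a change makes the common pathway measurably slower than the last build.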

For the past several years I’ve been on a crusade to raise awareness about system qualities and to introduce agile teams to simple practices and techniques for specifying, measuring, monitoring, and delivering on non-functional requirements (or system qualities). This year I introduced a new workshop, Being Agile About System Qualities. I’ve blogged and spoken at conferences about simple, powerful techniques: agile quality scenarios, adding quality-related acceptance criteria to user stories, and specifying and monitoring quality characteristics using agile landing zones, among others. I’m in the middle of writing a pattern collection about Shifting from Quality Assurance to Agile Quality with longtime collaborator Joe Yoder.

Calibration tests are another important technique I’m going to add to my toolkit and to our collection of quality-related patterns.

Intentionally incrementally delivering on system qualities requires awareness, application of practical tactical techniques, and ongoing attention. I’m always on the lookout for more practical techniques for measuring, testing, checking, specifying, and delivering on system qualities on agile projects.

Shift Left: Testing Earlier in Development

One candidate for best experience report at XP 2015 was “High Level Test Driven Development – Shift Left” by Kristian Bjerek-Gustuen, Emil Wiik Larsen, Tor Stålhane, and Torgeir Dingsøyr. Their report describes testing strategies and tactics used on a large-scale IT development project in Norway. These authors called it “shifting left” because they wanted to move testing to as early as possible in development.

Their project was complex along several dimensions.

Coordinated work among independent teams: Several vendors were simultaneously developing and delivering systems that communicated with each other via service interfaces. Vendors did detailed design and development, including unit, integration, and sprint system testing of their deliverables. This work had to be integrated by the company, which performed acceptance testing after every sprint, system integration testing that ran in parallel with the vendors’ sprint system tests, as well as user acceptance testing.

Significant coding and testing effort: Over 100,000 hours of testing and development went into the release.

Time pressure and project criticality: The delivery date was fixed. The system was an extension of an existing system for processing payments. It had to be delivered on time and with high quality.

The duration of the project they described was roughly 40 weeks, if I read the report correctly: 12 three-week sprints followed by 6 weeks of system testing. Faced with a time crunch, an existing group of 3 maintenance scrum teams was scaled up to 11 teams over four months. The customer receiving the software system didn’t have full-time resources to dedicate to the scrum teams, which were composed of these roles: scrum master, developer, functional architect, technical architect, and QA/tester. My guess is that the customer was called upon as needed to provide clarifications, offer feedback at sprint demos, and do all the necessary user acceptance and integration testing.

To ensure that each team had efficient and sufficient testing and quality assurance, a dedicated QA person/tester was assigned to each team. This reminds me of Stephanie Savoia’s experience report at Marchex where they embedded testers into their dev teams.

In the case of this Shift Left report, the dedicated QA person (I prefer the term quality advocate) seemed very busy and vital: they ensured the quality of design documentation; decided whether the implementation was testable; prepared test data; and worked with devs and the scrum master to ensure high-quality code. They also made sure test activities were performed as early as possible and that the customer provided necessary clarifications to the dev team.

Bridge, go-between, quality advocate, tester extraordinaire!

During sprint demos, in addition to demonstrating new functionality, teams also provided information on how testing was performed and any issues they had encountered in testing the implemented functionality.

There is more in the report about how they managed testing and dependencies between teams, identified and tracked high-risk module changes and defects to aid test and development planning, and introduced exploratory testing using interdisciplinary teams. But I digress.

Back to those busy QA advocates. Not surprisingly, the experience reporters mentioned in passing that the dedicated QA advocate became one of the most central resources for the teams.

It seems that they were deeply appreciated by the whole team. That’s important. Were they ever overwhelmed by their responsibilities? Were they overworked?

No experience report ever answers all the questions I have; I’d love to sit down and talk with the reporters and ask them more. Even so, I still find them thought-provoking.

One pattern Joe Yoder and I have written about in our Shifting From QA to AQ pattern collection is called Pair With A Quality Advocate. If you purposefully pair up devs or other folks with a QA advocate, their expertise can “rub off” on less skilled/experienced testers and developers. Steadily (and sometimes stealthily), a quality mindset gets infused into the entire team. You still need quality advocates, but everyone takes on more responsibility for quality. And that’s a good thing.

Future Commitment

In early April, I spent a fun weekend at Pace University with Allen Wirfs-Brock and Mary Lynn Manns speaking with students of Pace University’s Doctor of Professional Studies in Computing program.

Mary Lynn Manns

It was a good opportunity to reconnect with Mary Lynn and to hear about new patterns in her latest book, More Fearless Change: Strategies for Making Your Ideas Happen.

Mary Lynn and Linda Rising have been working on these patterns for over a dozen years. They aren’t finished (curating long-lived patterns is an ongoing task). They continue to collect and write patterns for those who want to initiate, inspire, and sustain change in organizations they are part of or in their personal or professional life.

There isn’t one magic thing to do to institute change. Mary Lynn and Linda advise,

“You are working with humans, often in complex organizations, so results are rarely straightforward and the emergent behavior might be totally unexpected. Therefore, upfront detailed planning is rarely effective. Instead, take one small step toward your goal and see what happens. You will inevitably encounter missteps and failures along the way….uneven progress can be discouraging but may also teach you about the idea, about the organization, and, most of all, about yourself.”

Significant change often includes performing an ongoing series of experiments.

One new pattern (or strategy) in their book is Future Commitment. This one has hooked me many times.

Instead of asking a busy person for immediate help, ask them to do something you need later, and then wait for them to commit. Don’t worry about trying to get them to agree right away. Be patient. Keep in touch. Provide them with regular information that encourages them to become more interested in your change initiative. Don’t be a pest.

Once they agree to help, solidify their commitment by recording the date and sending it in writing (along with gentle reminders so they stay aware of their commitment). Reminders can be annoying, so include in each reminder an update of what is happening and how their contribution will fit in. Have an alternate lined up in case that busy person can’t commit. There’s more to this strategy and the psychology behind how people commit (along with 14 other new strategies) in their book.

Like Mary Lynn and Linda, I don’t view change patterns as evil or manipulative. They are simply tools that, once you are aware of them, you can use to engage people and get them to help you make changes.

And so here’s a future commitment if you have an urge to write about your agile experiences: We need experience reports for Agile 2016. Submit a proposal to the Agile Experience Report program. Do it soon. Within the next 30 days.

You don’t have to start writing now, but if you send in a proposal to me, it will grab my attention. Even better, if you are attending Agile 2015, we can also talk over your proposal. We ask that you write up a proposal so you get in the practice of collecting and communicating your thoughts in written form.

You could wait and submit your experience report idea via the Agile conference submission system in November. If you do, your chance of it being selected is limited. The format of the submission system makes it difficult to include details or have an ongoing conversation to sharpen your ideas. This year we had maybe 100 submissions from which we selected 20.

If you wait to submit your proposal via the conference system, it will have to really stand out from the crowd to grab our attention.

Instead, if you submit your proposal to the Agile Experience Report program, it will be carefully read as soon as we receive it. If selected, we will work with you throughout the coming year. You don’t have to start writing immediately. You will get help from a shepherd as you write. As soon as you finish, your work will be published on the Agile Alliance website. You will also be invited to present your report at a future Agile conference.

I’m ready to help you write about your experience if you are ready to make a future commitment to writing. It all starts with your proposal.

Mob Programming: The Unruly Experience

This year 11 experience reports were presented at XP 2015. As co-chair of Experience Reports, along with Ken Power, one of my last tasks was to select a best experience paper. That was hard. We had several good experiences to choose from (I’ll report on more of them in future blog posts).

Ken Power and I independently read and reviewed each paper. At the conference we compared notes and agreed on 3 papers we would consider for this award. We also reviewed the authors’ presentations before making our final determination.

The best experience paper award went to Alexander Wilson for his report, Mob Programming – What Works, What Doesn’t. One thing that made this paper stand out was its balanced view. Alex is clear about things you need to watch out for if and when you try mobbing—don’t be lulled into groupthink, and be aware of overly dominant personalities.

Alex Wilson telling us about Mob Programming at XP 2015


He also gave us a close look into his company’s mob programming experience.

Mob programming is where a team of programmers swarm on a problem. They work together instead of pairing or individually writing code. One person is at the keyboard, the rest of the team helps navigate. They pay attention and guide the person at the keyboard. Team members take turns at the keyboard. When mobbing, there are more eyes on the code and more minds focused on the problem you are trying to solve.

Folks at Unruly were inspired to try mobbing after Woody Zuill’s talk at JavaOne in late 2014. Woody also spoke about Mob Programming at Agile 2014. You can read Woody’s experience report here.

Developers at Unruly were seasoned XPers accustomed to pair programming and test-driven development. They are responsible for the entire lifecycle of their product, from research to operational support. They are in constant touch with problems and ongoing sustainability of code they write.

They decided to form a mob to work on performance enhancements to their existing product. This involved refactoring and reworking critical code. If they made mistakes, it could result in lost data and have serious financial impacts. Mobbing for them wasn’t just a casual experiment.

After mobbing one day per week for a couple of months they found, in general, that they were pretty happy with their results. After 5 months they concluded that they don’t prefer mob programming for all user stories. They do find mobbing beneficial for complex work (where there is the potential for errors) over complicated work (where the solution is known, but is merely time-consuming). They also find that tasks that are dull or repetitive are likely to cause their mob to dissolve. They only mob one day a week, unlike Woody’s team, who mob every day.

Unruly settled on a rhythm of periodic mobbing that worked for them. That’s what I like about Alex’s report. He tells us: here is what worked, here is where we tripped up, here is how we adapted our practices, and here is what we’re doing now.

Teams in learning organizations perform ongoing experiments. While they settle on a core set of practices, they also try to build upon them. They keep innovating, improving, and reflecting. And how they work continues to evolve.

What does karaoke have to do with being agile?

Last week I attended XP 2015 in Helsinki, Finland. This is my second time there…I hope to go next year, too. It is a unique blend of researchers and students along with people from big and small companies, from all over the world, doing all kinds and flavors of agile development. 37 countries were represented. We had 11 experience reports (Ken Power and I were co-chairs of the experience reports track; more about them in another post).

Lots of learning. We had fun, too.

One of the highlights of the conference banquet was Presentation Karaoke. Six brave souls gave 3-minute impromptu speeches. Every 30 seconds a new, unseen slide appeared.

Llewellyn Falco’s PowerPoint karaoke at XP 2015


Here’s a video of parts of a presentation by Llewellyn Falco.

Avraham Poupko won the competition by popular vote. Pardon the shaky camera…I was laughing and not holding my cell phone very steady ….

So what’s the connection between PowerPoint karaoke and agile development? Each speaker ostensibly spoke on a conference theme. Mostly, they had fun. High-performing agile teams, like these speakers, know how to roll with the punches. Sure, there’s a theme, a plan, a backlog of work items. But sometimes requirements change and you need to adapt on the spot. Rather than throw up your hands or panic, you need to make things work. It is impossible to go with the flow if you are in a panic. Like improv, agility demands that you accept change. Unanticipated things happen. You accept them and you adjust.

Some speakers made connections between slides. Avraham spotted what he thought were mangoes on the head of someone on one slide and incorporated them throughout his talk. Giovanni summarized all the key points at the end (it is a wonder he remembered them…but then again, he wasn’t in a panic). Llewellyn was a master of dramatic pauses before finding something to say. Their presentations were great.

Next time you are faced with unexpected change, take some cues from these improvisers: pause, gather yourself, take stock of the situation and then figure out what you need to do. Don’t panic. You’ll get through it. And maybe with a more relaxed attitude you’ll even have some fun.