Why Process Matters

I’ve been working on a talk for Smalltalks 2014 about Discovering Alexander’s Properties in Your Code and Life.

I don’t want it to be an esoteric review of Alexander’s properties.

That won’t satisfy my audience or me.

I want to impart how Alexander’s physical properties might translate to properties of our software code, and to illustrate this with poignant personal examples from the physical world.

But equally important, I want to impress upon my audience that process is vital to making lively things (software and physical things). In The Process of Creating Life: The Nature of Order, Book 2, Alexander states:

“Processes play a more fundamental role in determining the life or death of the building than does the ‘design’.”

Traditionally, building architects hand off their designs as a set of formal drawings for others to build. Does this remind you of waterfall software development? There isn’t anything inherently wrong with constructing formal architectural drawings…but they never end up accurately reflecting what was built. Due to errors in design, situational decisions based on new discoveries made as things are built, better construction techniques, changing requirements, and limitations in tools or materials, a building is never constructed exactly as an architect draws it up.

Builders know that. Good ones exercise their judgment as they make on-the-spot tactical re-design decisions. Architects who are deeply involved in the building process know that, too.

Alexander is rather unhappy with how buildings are typically created and suggests that any “living” process (whether it be for building design or software or any other complex process) incorporate the following ten characteristics.

He challenges us software makers to do better, too:

“The way forward in the next decades, towards programs with highly adapted human performance, will be through programs which are generated through unfolding, in some fashion comparable to what I have described for buildings.”

As software designers and implementers we know that nothing is ever built exactly as initially conceived. Not even close. Over the past decade or so we have made significant strides in our processes and tools that enable us to be more effective at adaptively and incrementally building software. My thoughts on some ways we have tackled these characteristics are interspersed in italics, below.

Characteristics of Living Processes

1. Step-by-step adaptive. Small increments with opportunity for feedback and correction.
Incremental delivery, retrospectives, stakeholder reviews
Repetitive incremental design cycles:
Design a little–implement–refactor/rework/refine–design…
Design/test cycles: Write specifications of behavior, write some code that works correctly according to the specification, test and adapt… (a minimal sketch of this cycle appears after this list)
Tests and production code equally valued

2. Whatever the greater whole is, it is always the main focus of attention and the driving force.
Working deployable software, minimally-marketable features

3. The entire process is governed and guided by the formation of living centers (that help each other).
Code with defined boundaries, separate responsibilities, and planned for interconnections

4. Steps take place in a specific sequence to control the unfolding.
We have a rhythm to our work. Whether it is test-first or test-frequent development, conversations with customers to define behavioral “specifications”, or other specific actions. In order to control unfolding we need to understand what we need to build, build it, then refine as we go. And we have tools that let us manage and incrementally build and record our changes.

5. Parts created must become locally unique.
Build the next thing so it fits with and expands the wholeness of what we are building. Consider our options. Refactor and rework our design. Make functions/classes/code cohesive. Bust up things that are too big into smaller elements. Revise.

6. The formation of generic centers is guided by patterns.
We have in mind a high-level software architecture that guides our design and implementation.

7. Congruent with feeling and governed by feeling.
Instead of just making a test pass, see if what you just wrote feels right (or if it feels like an ugly hack). Reflect on how and what we are building. Don’t be merely satisfied with making your code work. How do you feel about what you’ve just built? How do those using your software react to it? How do those who have to maintain and live with your code feel about it?

8. For buildings, the formation of structure is guided by the emergence of an aperiodic grid, which brings coherent geometric order.
Software is structured, too…we’ve got to be aware of how we are structuring our code.

9. Oriented by a form language that provides concrete methods of implementing adapted structure through simple combinatory rules.
We use accepted “schemas” to create coherent software systems. We have software architecture styles, framework support, and even pattern languages emerging…

10. Oriented by the simplicity transformation, and is pruned steadily.
We can consistently refactor and rework our code with the goal of simplifying in order to enable building more functionality. We rebuild to create sustainable software structures. Even if we come back to some old working code and see how to simplify it, we can rework it taking into consideration what we’ve learned in the meantime.
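
To ground the design/test cycle from characteristic 1, here is a minimal test-first sketch in Python. The Order class, its behavior, and the prices-in-cents convention are all hypothetical, invented purely for illustration:

```python
import unittest

# Step 1: write a small specification of behavior as a test (it fails
# at first, because Order doesn't satisfy it yet).
class OrderTotalSpec(unittest.TestCase):
    def test_total_sums_price_times_quantity(self):
        order = Order()
        order.add_line_item("widget", price=350, quantity=2)  # cents
        self.assertEqual(order.total(), 700)

# Step 2: write just enough code to work correctly according to the
# specification.
class Order:
    def __init__(self):
        self._line_items = []

    def add_line_item(self, name, price, quantity):
        self._line_items.append((name, price, quantity))

    def total(self):
        return sum(price * qty for _, price, qty in self._line_items)

# Step 3: run the tests, refactor, and repeat with the next small
# behavior: feedback and correction at every step.
if __name__ == "__main__":
    unittest.main()
```

Tests and production code sit side by side and are equally valued; each pass through the cycle adds one small, verified increment.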

Yet, let’s not be complacent. Agile or Lean or Clean Code or Scrum practices don’t address every process characteristic Alexander mentions. I am not sure that all these characteristics are important for building lively software. Alexander is not a builder of software systems, although he spent a lot of time talking with some pioneers and leaders of the software patterns movement.

Some process ideas of Alexander sound expensive and time consuming. Do we always need to reflect on how we feel about what we code? Sometimes we need to build quickly, not painstakingly. We need to prove its worth, and then refine our software. Our main thought may be simply making it work, not how it makes us or others feel. So how do we add liveliness to this quickly fashioned software? What’s a good process for that? Michael Feathers wrote about Working Effectively with Legacy Code, but there is a lot more to consider. Maybe that quickly fashioned software has tests, maybe it doesn’t; maybe some parts have a reasonable structure, and maybe other parts should be tossed.

We often build disposable and hopefully short-lived software. Problems crop up when that code gets rudely hacked to extend its capabilities and live past its expiration date.

There are most likely different processes for creating lively software, based on where you start, where you think you are headed, and how lively it needs to be (not everything needs to be fashioned with such care).

People are continually building new and better tools and libraries. There is a rich and growing ecosystem of innovative open source software. Process matters. I think we have a lot still to learn about building lively software. It is a heady time to be building complex software systems.

When in Rome…

I attended my first XP conference in Rome in May. As they say, “when in Rome, do as the Romans do.” The actual quote attributed to St. Ambrose is, “si fueris Romae, Romano vivito more; si fueris alibi, vivito sicut ibi,” or “if you should be in Rome, live in the Roman manner; if you should be elsewhere, live as they do there.”

As Italians do, I enjoyed good food, good company, and great wine.

I gave a workshop on Understanding Design Complexity (using commonality-variability analysis) and a tutorial on Agile Architecture Values and Practices.

I also sampled research AND non-research sessions in equal measure. Unlike other agile conferences I’ve attended, research is a prominent part of this conference. I listened to several research presentations and volunteered to be a commenteer for one research paper.

The XP 2014 paper acceptance rate was somewhat selective, with over 50% of the submissions rejected. Research topics were wide-ranging, including a case study on UX Design, a survey of user story size and estimation accuracy, another on agile development practices, a case study on visualizing testing, another on agile and lean values, and one comparing scripted with exploratory testing. Short papers touched on agile organizational transformations, Randori Coding Dojos, and how expertise is located on agile projects. In addition, four experience reports were published (in contrast, 27 experience reports will be published and presented this year at the Agile Conference).

If the research papers I sampled are an indicator, PhD students seem to be busy doing empirical studies on agile practices, processes, and values. If you go to the Open University’s website you’ll see these topics listed under their Empirical Studies PhD program: the emergence of agile software development, the role of physicality and co-location in agile software development, and XP and end-user development.

Agile software development is being studied and data is being collected. The paper I commenteered, “Why We Need a Granularity Concept for User Stories” by Olga Liskin and her colleagues, reported on results gleaned from surveying developers (who self-identified as agile developers) working on both private and open source projects on GitHub. I had three short minutes after the presentation to carry on a dialog with Olga about their findings. Fortunately we also had more time to discuss her work over lunch.

This paper raised as many questions in my mind as it answered. On small projects (10 people or fewer), 55% of the respondents said they did not estimate their stories on a daily basis. Is this an affirmation of the No Estimates movement, or just how people work on certain kinds of projects? Open source projects are quite different from product development. (Not all of the GitHub projects were open source ones, but still…). Depending on your project, you may simply work off a backlog, not necessarily doing any estimates to forecast how much you can accomplish. Several developers I know only break down their work into identifiable tasks. Effort estimates (as long as they know how to do the work) aren’t that important. They’re done when they are done. And if you build shippable, workable software each sprint, well, you always have something potentially useful to deliver.

Here’s just a sampling of what I’d like to know: For those who estimated, how did story size correlate with estimation accuracy? And what happens when stories are split? Are estimates for split stories more accurate? And just how important is estimation accuracy to those who make estimates? Was it just something they did to get a rough idea, was it “required” of them, or did they effectively use estimates to plan and make forecasts? Did people who estimate improve their accuracy over time? It seems that if you are learning how to use a new technology, once you’ve spun up, estimates should be more accurate. But how long does it take to get up to speed on your estimates?

Not surprisingly, the authors found that the smaller the story size and the better known the technology, the more accurate the estimates were. Research results often confirm the obvious (but it is still nice to have some empirical evidence to back up our intuitions).

In their conclusions the authors don’t recommend a “best story size”. I’m happy they didn’t. They didn’t have enough evidence. Conventional wisdom says stories should fit into sprints (comfortably). That makes me want to know more about the accuracy of larger stories. It seems reasonable that the more work involved, the more possibility you’ll miss something that may influence the accuracy of your estimate. And you also may fudge your estimates just to make sure a story fits inside a sprint (because you don’t want to split it). But estimates are just estimates. They shouldn’t be expected to be 100% accurate. How do people behave differently when estimation accuracy is rewarded (or, worse yet, punished)? You learn to recalibrate your efforts when a task is harder or easier than expected.

The authors cautioned that developers’ views about story size and estimates need to be balanced by others’ concerns. Too many stories can burden a product owner. Developers in dysfunctional organizations might pad estimates just so they have some slack (does anyone knowingly pad estimates? I’d like to hear from you).

So is it better to bundle up small, related stories and estimate them as a single unit? Maybe. Back in the days when I had to estimate work, I didn’t like tasks being too small. If they were, my manager would look at them more closely (I’m not sure why). I remember once telling a manager: you can ask what we will be doing every day and how long each task will take, or we will guarantee that we will deliver all the features above the cut line in the next two weeks. But you can’t have both daily accuracy and predictability. She backed off on knowing exactly what we were doing every day (as long as we weren’t stressed out). This was long before the Agile software development movement, but it still seems relevant. Our small team worked off a prioritized list of features. It didn’t matter who did what task; whenever a task was finished, the next one was picked up. And we finished our work on schedule…because we wanted to make our commitment.

Here’s one parting thought about empirical studies. I’m very wary of biases that work their way into them. Those who answered the agile estimation survey might be very different from those who did not. Self-reporting of anything we do (whether it be software estimation, the amount of food we eat, or how much we exercise) is notoriously inaccurate. We underestimate our weight, overestimate our capabilities, and don’t remember accurately. More accurate evidence is obtained through field studies where people are observed working. I wish more empirical software researchers could have opportunities to work directly with agile teams and spend significant time getting to know them and how they work (in addition to just asking them questions).

Distinguishing between testing and checking

At Agile 2013 Matt Heusser presented a history of how agile testing ideas have evolved in “Twelve Years of Agile Testing: And What Do We Do Now?” The most intellectually challenging idea I came away with from Matt’s talk was the notion that testing and checking are different. I’m still trying to wrap my head around this distinction.

Disclosure: I’m not a testing insider. However, along with effective design and architecture practices, pragmatic testing is a passion of mine. I have presented talks at Agile with my colleague Joe Yoder on pragmatic test-driven design and quality scenarios.

Like most, I suspect, I have a hard time teasing out a meaningful distinction between checking and testing. When I looked up definitions for testing and checking there was significant overlap. Consider these two definitions:

Testing: the means by which the presence, quality, or genuineness of anything is determined.

Testing: a particular process or method for trying or assessing.

And these for checking:

Checking: to investigate or verify as to correctness.

Checking: to make an inquiry into, search through, etc.

Using the first definition for testing, I can say, “By testing I determine what my software does.” For example, a test can determine the amount of interest calculated for a late payment or the number of transactions that are processed in an hour. Using the second meaning of testing, I can say, “I perform unit testing by following the test-first cycle of classic TDD,” or, “I write my test code to verify my class’ behavior after I’ve completed a first-cut implementation that compiles.” Both are particular testing processes or methods.

I can say, “I check that my software correctly behaves according to some standard or specification” (first meaning). I can also perform a check (using the second definition) by writing code that measures how many transactions can be performed within a time period.

I can check my software by performing manual procedures and observing results.

I can check my software by writing test code and creating an automated test suite.
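
To make this concrete, here is what an automated check of the earlier late-payment interest example might look like in Python. The 2% monthly rate and all names are hypothetical, chosen only for illustration:

```python
# A hypothetical simple-interest rule: 2% of the outstanding balance
# per month overdue (amounts in cents to avoid float rounding).
def late_payment_interest(balance_cents: int, months_overdue: int) -> int:
    return round(balance_cents * 0.02 * months_overdue)

# The check itself: an exact-equality assertion applied to a specific
# observation of the product's behavior.
def test_three_months_late_on_a_1000_dollar_balance():
    assert late_payment_interest(100_000, 3) == 6_000
```

Run under a test runner such as pytest, this verifies correctness against a specific expected value by machine rather than by hand.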

I might want to assess how my software works without necessarily verifying its correctness. When tests (or evaluations) are compared against a standard of expected behavior they also are checks. Testing is in some sense a larger concept or category that encompasses checking.

Confused by all this word play? I hope not.

Humans (and speakers of any native language) explore the dimensions and extent of categories by observing and learning from concrete examples. One thing that distinguishes a native speaker from a non-native speaker is that she knows the difference between similar categories, and uses the appropriate concept in context. To non-native speakers the edges and boundaries of categories seem arbitrary and unfathomable (meanings aren’t found by merely reading dictionary definitions).

I’ve been reading about categories and their nuances in Douglas Hofstadter and Emmanuel Sander’s Surfaces and Essences. (Just yesterday I read about the subtle difference between the phrases “letting the cat out of the bag” and “spilling the beans.”)

So what’s the big deal about making a distinction between testing and checking?

Matt pointed us to Michael Bolton’s blog entry, Testing vs. Checking. Along with James Bach, Michael has nudged the testing world to distinguish between automated “checks” that verify expected behaviors and “testing” activities that require human-guided investigation and intellect and aren’t automatable.

In James Bach’s blog post, Testing and Checking Refined, they make these distinctions:

“Testing is the process of evaluating a product by learning about it through experimentation, which includes to some degree: questioning, study, modeling, observation and inference.
(A test is an instance of testing.)

Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.
(A check is an instance of checking.)”

My first reaction was to throw up my hands and shout “Enough!” My reaction was that of a non-native speaker trying to understand a foreign idiom! But then I calmed down, let go of my urge to precisely know James and Michael’s meanings, accepted some ambiguity, and looked for deeper insight.

When Michael explained,

“Checking is something that we do with the motivation of confirming existing beliefs” while, “Testing is something that we do with the motivation of finding new information.”

it suddenly became more clear. We might be doing what appears to be the same activity (writing code to probe our software), but if our intentions are different, we could either be checking or testing.

Why is this important?

The first time I write test code and execute it I learn something new (I also might confirm my expectations). When I repeatedly run that test code as part of a test suite, I am checking that my software continues to work as expected. I’m not really learning anything new. Still, it can be valuable to keep performing those checks. Especially when the code base is rapidly changing.

But I only need to execute checks repeatedly on code that has the potential to break. If my code is stable (and unchanging), perhaps I should question the value of (and false confidence gained by) repeatedly executing the same tired old automated tests. Maybe I should write new tests to probe even more corners of my software.

And if tests frequently break (even though the software is still working), perhaps I need to readjust my checks. I’m betting I’ll find test code that verifies details that should be hidden or aren’t really essential to my software’s behavior. Writing good checks that don’t break so easily makes it easier to change my software design. And that enables me to evolve my software with greater ease.
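
Here is a minimal sketch of that difference in Python, with hypothetical names. The first check verifies a hidden representation detail and breaks under harmless refactoring; the second verifies observable behavior and leaves the design free to evolve:

```python
class Order:
    """A hypothetical class under test (prices in cents)."""
    def __init__(self):
        self._line_items = []  # internal detail, free to change

    def add_line_item(self, name, price, quantity):
        self._line_items.append((name, price, quantity))

    def total(self):
        return sum(price * qty for _, price, qty in self._line_items)

# Brittle: this check verifies the tuple layout of a private list, so
# it breaks the moment the representation changes, even though the
# software still works.
def test_brittle_check():
    order = Order()
    order.add_line_item("widget", 350, 2)
    assert order._line_items[0][1] == 350

# Robust: this check verifies only observable behavior, so the design
# underneath can be reworked without touching the test.
def test_behavioral_check():
    order = Order()
    order.add_line_item("widget", 350, 2)
    assert order.total() == 700
```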

When test code becomes stale, it is precisely because it isn’t buying any new information. It might even be holding me back.

I have a long way to go to become a fluent native testing speaker. And I wish that James and Michael could have chosen different phrases to describe these two categories of “testing” (perhaps exploration and verification?).

But they didn’t.
Fair enough.

Architecture at Agile 2013

What a busy, intense week Agile 2013 was! It was a great opportunity to connect with old friends and meet folks who share common interests and energy. I also had a lot of fun spreading the word/exchanging ideas about two things I’m passionate about: software architecture and quality.

At the conference I presented “Why we need architecture (and architects) on Large-Scale Agile Projects”. I’ve presented this talk a few times. This time I added “Large-Scale” to the title and submitted it to the enterprise agile track. I wanted to expose the audience to several ideas: that there are both small team/project architecture practices and larger project/program architecture practices that can work together and complement each other; what it means to be an architecture steward; and some practices (like Landing Zones, Architecture Spikes, and Bounded Experiments/prototyping) and options for making architecture-related tasks visible.

I spoke with several enthusiastic architects after my talk and throughout the week. They shared how they were developing their architecture. They also asked whether I thought what they were doing made sense. In general, it did. But I want to be clear: one size doesn’t fit all. Sometimes, depending on risks and the business you are in, you need to invest effort in experimenting/noodling/prototyping before committing to certain architectural decisions. Sometimes, it is absolutely a waste of time. It depends on what you perceive as risky/unknown and how you want to deal with it. The key to being successful is to do what works for you and your organization.

Nonetheless, in my talk when I spoke about some decisions that are too important to wait until the last moment, someone interrupted to say that I had gotten it wrong: “It isn’t the last possible moment, but the last responsible moment”. I know that. Yet I’ve seen and heard too many stories about irresponsible technical decision-making at the last possible moment instead of the last responsible moment. People confuse the two. And they use agile epithets to justify their bad behaviors. Surprise, surprise. The “last responsible moment” can be misinterpreted by some to mean, “I don’t want to decide yet (which may be irresponsible)”. People rarely make good decisions when they are panicked, overworked, stressed out, exhausted or time-crunched.

Check out my blog posts on the Last Responsible Moment and decision-making under stress if you want to join in on that conversation.

But I digress. Back to architecture.

I was happy to see two architecture talks on the Development and Software Craftsmanship track. I attended Simon Brown’s “Simple Sketches for Diagramming your Software Architecture” and also had the pleasure of hanging out with Simon to chat about architecture and sketching. Simon presented information on how to draw views of a system’s structure that are relevant to developers, not too fussy or formal, yet convey vital information. This isn’t hardcore technical stuff, but it is surprising how many rough sketches are confusing and not at all helpful. Simon regularly teaches developers to draw practical, informative architecture sketches. He collects sample sketches from students before and after they receive his sketching advice. Their improvement is remarkable. If you want to learn more, go to Simon’s website, CodingTheArchitecture.com.

I shared with Simon the sketching exercises in my Agile Architecture and Developing and Communicating Software Architecture workshops…and pointed him to two books I’ve drawn on for drawing inspiration: Nancy Duarte’s slide:ology and Matthew Frederick’s 101 Things I Learned in Architecture School. It’s all about becoming better communicators.

Scott Ambler talked about Continuous Architecture & Emergent Design. I was happy to see that he, too, advocated architecture spikes and envisioning (and proving the architecture with evidence/code). In his abstract he states: “Disciplined agile teams will perform architecture envisioning at the beginning of a project to ensure that they get going in the right direction. They will prove the architecture with working code early in construction so that they know their strategy is viable, evolving appropriately based on their learnings. They will explore new or complex technologies with small architecture spikes. They will explore the details of the architecture on a just-in-time (JIT) basis throughout construction, doing JIT modeling as they go and ideally taking a test-driven-development (TDD) approach at the design level.”

There are way too many concurrent sessions and too few hours in the day to get to all the talks I’d have liked to attend. I just wished I’d been able to attend Rachel Laycock and Tom Sulston’s talk on the DevOps track, “Architecture and Collaboration: Cornerstones of Continuous Delivery”…but instead I enjoyed Claire Moss’ “Big Visible Testing” experience report. Choices. Decisions.

If you’d like to continue the conversation about architecture on agile projects, I’d love to hear from you.

Evangelizing New (Software Architecture) Ideas and Practices

Last December I spoke at the YOW Conferences in Australia on Why We Need Architects (and Architecture) on Agile Projects.

I wanted to convince agile developers that emergent design doesn’t guarantee good software architecture. And often, you need to pay extra attention to architecture, especially if you are working on a large project.

There can be many reasons for paying attention to architecture: Meaningful progress may be blocked by architectural flaws. Some intricate, tricky technical stuff may need to be worked out before you can implement functionality that relies on it. Critical components outside your control may need to be integrated. Achieving performance targets may be difficult, and you need to explore what you can do before choosing among expensive or hard-to-reverse alternatives. Many other developers may depend on some new technical bit working well (whether it be infrastructure, that particular NoSQL database, or that unproven framework). Some design conventions need to be established before a large number of developers start whacking away at gobs of user stories.

I characterized the role of an agile architect as being hands-on: a steward of sustainable development, and someone who balances technical concerns with other perspectives. One difference between large agile and small agile projects is that you often need to do more significant wayfinding and architectural risk mitigation on larger projects.

I hoped to inspire agile developers to care more about sustainable architecture and to consider picking up some more architecture practices.

Unfortunately, my talk sorely disappointed a thoughtful architect who faces an entirely different dilemma: he needs to convince non-agile architects to adopt agile architectural practices. And my talk didn’t give him any arguments that would persuade them.

My first reaction to his rant was to want to shout: Give up! It is impossible to convince anyone to adopt a new way of working that conflicts with his or her deeply held values.

But then again, how do new ways of working ever take hold in an organization? By having some buzz around them. By being brought in (naively or conscientiously) by leaders and instigators who know how to build organizational support for new ideas. By being new and sexy instead of dull and boring. By demonstrating results. By capturing people’s imagination or assuaging their fears. Surreptitiously, quietly replacing older practices when reasons for doing them are no longer remembered. When the old guard dies out or gives up.

Attitudes rarely change through compelling discussions or persuasive argumentation. I look to Mary Lynn Manns and Linda Rising’s Fearless Change: Patterns for Introducing New Ideas for inspiration.

I take stock of how much energy I want to invest in changing attitudes and how much investment people have in the way they are doing things now. I don’t think of myself as a professional change agent. Yet, as a consultant I am often brought in when organizations (not necessarily every person, mind you) want to do things differently.

People generally aren’t receptive to new ideas or practices or technologies when they feel threatened, dismissed, disrespected, underappreciated, or misunderstood. I am successful at introducing new techniques when they are presented as ways to reduce or manage risks, increase productivity or reliability, improve performance, or address whatever hot button the people I am introducing these new ways of working to care about. Labeling techniques as “agile” or “lean” may create a buzz in those who are already receptive. But the reaction can be almost allergic in those who are not. The last thing I want to do is foster divisiveness. Labels do that.

If I get people comfortable taking just that next small step, that is often enough for them to carry on and make even more significant changes. Changing practices takes patience and persistence. At the end of the day I can only convince, demonstrate, and empathize; I cannot compel people to make changes.

How far should you look ahead?

I’m a consultant. So an easy answer would be, “It depends.”

My real answer is more nuanced: only look ahead far enough to see some significant issue and a plausible way to resolve it. If you want to get better, practice looking ahead in the context of your work and its established rhythms and natural cycles.

If your organization doesn’t value planning and you want to look ahead, start by taking small steps. The reason to look ahead is to check for some pre-work or preparation that is likely to save time or money.

If you are a tech lead, try looking ahead in the context of release planning. See what architecture risks you spot (or new design requirements) as new features are scoped. Take notes on what you are concerned about. Spend some time speculating on what you should be doing architecturally to support the next release. Have a conversation with a trusted colleague about your concerns. Identify concrete follow-on activities specifically targeted at addressing your biggest concerns.

If you are an agile developer, you may not want to look further ahead than the current sprint. If you aren’t already doing so, try some iteration modeling for a couple of hours at the beginning of an iteration. Grab a handful of related story cards for a new feature and sketch out some design ideas. Draw sequence diagrams on a white board. Jot down class or component responsibilities on CRC cards. Noodle around. Give yourself the luxury of exploring with your co-workers how things might work. Don’t expect a final design, just a plausible one that will be fleshed out and revised as you implement those stories.

If your organization tends to look too far ahead, take actions to personally shift your focus away from endless planning. Winston Churchill said, “It is always wise to look ahead, but difficult to look further than you can see.” Looking too far ahead is also a big waste of time. Remedy this time suck by limiting the scope of what you plan for. Find opportunities to build and measure things. Learn to perform small experiments, make small decisions, communicate your results, and move things along.

Colleagues and co-workers I’d classify as over-eager forward-looking designers typically build up comfortable infrastructure they think will be needed as a matter of course before they settle in to the programming task at hand. A good indicator that they’ve gone overboard is how others react to their work. One unhappy forward-looking architect remarked recently, “Not many people appreciate what I am trying to accomplish.”

You may not be aware that you look too far ahead. One indicator: you view most programming tasks as an opportunity to build or try out a cool new framework or create your own special DSL. (If you are part of an infrastructure team, then, fine, that is your job. Don’t embellish your creations. And make sure others can actually use your stuff.) If you are an application developer, nudge yourself towards refactoring and finding abstractions through concrete scenarios. Hold yourself back from writing speculative code.

I look to my colleague, Joe Yoder, as someone who takes a balanced approach when creating infrastructure or suggesting novel architectures. Joe likes to build flexible, adaptable systems, in particular Adaptive Object-Model architectures. He has years of experience doing this. He’s enthusiastic when opportunities come up to build these kinds of systems, but knows that this architecture style isn’t a good fit for every rapidly changing system. And he doesn’t build every part of such a system this way; only the parts that require extensive, ongoing customizations.

How far do you look ahead? Do you think you’ve found a good balance?

Agile Architecture Myth #5: Never Look Ahead

Stop it! Stop anticipating! Get coding. Get that next user story working. Don’t. Ever. Anticipate.

This seems to be a common agile mantra these days. Since we allegedly aren’t good at anticipating how code will evolve, well frack it, let’s just stop all speculation.

Instead, just do your level best to code for what you know right now. Let the design emerge. Keep it clean. Period.

But is that the best we can do?

Underlying these beliefs is the assumption that you will get burned if you look ahead because you are bad at anticipating change or the need for design flexibility. Since you never get it right, why ever bother looking ahead?

Well, who says we are bad at looking ahead? Is it really true?

The fact is we all make design mistakes, whether we look ahead or not. But that doesn’t mean looking ahead is the cause of more mistakes. (Sometimes it even prevents mistakes).

I believe that you can and should look ahead. And that most developers, given half a chance, are pretty good at incorporating past experiences and making anticipatory design choices. We might even say that an “experienced developer” is one who has the ability to use past experiences to help make good forward-looking decisions.

When you refactor code to reduce dependencies you are making an on-the-spot judgment call about what part of your code is more stable and what is more likely to change. You make these decisions based on concrete evidence. You are only speculating a little bit (things keep changing here, so I’ll try this).

Experienced designers and developers look ahead at a larger scale. And the longer they’ve been at it, the more they incorporate past experiences into daily looking just a little bit ahead. You learn to anticipate where the design wrinkles will be.

If you’re less experienced, this seems like magic. How can you ever know when you’ve made an appropriate anticipatory design decision when the payoff only comes sometime in the future? Well, you don’t ever know for certain.

But you get better at looking ahead by practicing it in a safe, supportive development environment. One where you are encouraged to look around and look ahead. You get to see what more senior people do. And, if you are lucky, when you ask them why they are doing something they take the time to explain their reasoning.

Looking ahead isn’t antithetical to refactoring. In fact, it might even help you avoid some rework. Over time, we learn to make even more reasonable judgment calls based on experience, wisdom, and gut instinct. (And sometimes people think that we aren’t thinking ahead because we can fluidly make on-the-spot design changes.)

If you never practice looking ahead, well, you might just be doomed to refactoring your way to a good design. For me, that simply doesn’t cut it. And it can be too painful and inefficient for large, complex projects.

Recently, a thoughtful agile coach and architect told me:

“When I start a project, I try to figure out what is the minimum architecture we need to do for that context. I start from a few key things:
What are the requirements in terms of performance, scalability, reliability, security, localization, functionality etc.?

What deployment structure best fits these requirements?

What platforms and frameworks facilitate the development of the application?

What don’t we know how to do at this point? How can we find out?”

Asking these questions helps him “define a basic but good architecture. One that ensures the right [architectural] features, minimizes time/costs and reduces risks.” That’s just enough upfront speculation to establish a baseline architecture.

But being ever thoughtful, he also asked, “I am wondering however: Should I do more? In what circumstances?”

Under certain circumstances it may be appropriate to do even more upfront design work.

Agile heresy?

I think not.

When we tackle really big unknowns it makes sense to explore more before making costly investments. Experimentation backed up by evidence-based decision making makes our designs better. Looking ahead can save us time.

But still, too many are deceived into believing in those vividly scary look-ahead failure stories. Then they overreact, pulling back from any speculative design.

My advice: Don’t overlook the many times when you have successfully looked ahead and saved time and effort. Not looking ahead is symptomatic of inexperience.

Agile Architecture Myth #4: Because You Are Agile You Can Change Your System Fast!

Agile designers embrace change. But that doesn’t mean change is always easy. Some things are harder to change than others. So it is good to know how to explain this to impatient product stakeholders, program managers, or product owners when they ask you to handle a new requirement that to them appears to be easy but isn’t.

Joe Yoder and Brian Foote, of Big Ball of Mud fame, provide insights into ways systems can change without too much friction. They drew inspiration from Stewart Brand’s How Buildings Learn. Brand explains that buildings are made of components organized into shearing layers. He identifies six layers: the site, the structure, the skin, the services, the space plan, and the physical stuff in the building.

Each shearing layer has its own value and speed of change, or pace. According to Brand, buildings are able to adapt because faster-changing layers (e.g., the services and the space plan) are purposefully designed so as not to be obstructed by slower-changing layers. If you design your building well, it is fairly easy to change the plumbing. Much easier than revising the foundation. And it is even easier to rearrange the furniture. Sometimes designers go to extra effort to make a component easier to change. For example, most conference centers are designed so that sliding panels form walls, allowing inside space to be quickly modified.

Brand’s ideas shouldn’t be surprising to software developers who follow good design practices that enable us to adapt our software: keep systems modular, remove unnecessary dependencies between components, and hide implementation details behind stable interfaces.

Foote and Yoder’s advice for avoiding tangled, hard-to-change software is to, “Factor your system so that artifacts that change at similar rates are together.” They also present a chart of typical layers in a software system and their rates of change.
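
One way to follow that advice in code is to put a stable interface between a slow-changing layer and its faster-changing implementations. Here is a minimal Python sketch; the payment-gateway domain and all of the names are hypothetical:

```python
from abc import ABC, abstractmethod

# Slower-changing layer: a stable interface that the rest of the
# system depends on (roughly analogous to Brand's structure).
class PaymentGateway(ABC):
    @abstractmethod
    def charge(self, account_id: str, amount_cents: int) -> bool:
        ...

# Faster-changing layer: concrete providers come and go (more like
# Brand's services or space plan) without disturbing callers.
class AcmeGateway(PaymentGateway):
    def charge(self, account_id: str, amount_cents: int) -> bool:
        # provider-specific API calls would live here
        return True

class FakeGateway(PaymentGateway):
    """A stand-in for tests; it changes at yet another pace."""
    def charge(self, account_id: str, amount_cents: int) -> bool:
        return True

# Application code touches only the stable interface, so swapping a
# fast-changing part never ripples into the slower-changing ones.
def checkout(gateway: PaymentGateway, account_id: str, amount_cents: int) -> None:
    if not gateway.charge(account_id, amount_cents):
        raise RuntimeError("payment declined")
```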

Frequently, we are asked to support new functionality that requires us to make changes deep in our system. We are asked to tinker with the underlying (supposedly slower-changing) layers that the rest of our software relies upon. And often, we do achieve this miraculous feat of engineering, because the interfaces between layers were stable and performed adequately. We got away with tinkering with the foundations without serious disruption. But sometimes we aren’t so lucky. A new requirement might demand significantly more capabilities of our underlying layers. These types of changes require significant architectural rework. And no matter how agile we are, major rework requires more effort.

Because we are agile, we recognize that change is inevitable. But embracing change doesn’t make it easier, just expected. I’d be interested in hearing your thoughts about Foote and Yoder’s shearing layers and ways you’ve found to ease the pain of making significant software changes.

Re-thinking Thinking and Planning

In the tutorial, Hooray We’re Agile Testers! What’s Next?, Janet Gregory apologized a couple of times for saying “upfront thinking” or “planning”. I know Janet wanted to let the audience know that she isn’t a fan of massive test plans or documents written way ahead. But her remarks got me wondering: why in the agile community is it taboo to recommend or admit to doing any upfront thinking or planning?

When you incrementally build production code and tests you do come to a deeper understanding about your software’s capabilities and what your stakeholders want. As a consequence, if you are thoughtful and reactive, it’s natural to adjust and adapt to feedback. But it’s also natural to do some upfront thinking [there, I went and said “upfront thinking”, not just thinking or speculation, and I was cringing ever so slightly as I wrote those words] before expending a lot of time and effort. Sometimes you need to think about and discuss what you should be doing so you don’t waste time doing the wrong things.

As someone who embraces agile values, I expect to readjust my ideas and plans as I learn more. I get it that too much upfront anything results in much wasted effort. But there’s a distinction I’d like to make between too much and enough thinking and preparation.

If you have an agile mindset, you recognize that plans have limits. You let go of any illusion that you’re in control of your destiny simply because you have a plan. You are open to change. But being responsive to change doesn’t obviate the benefits of planning. Especially if your project has to mesh together the work of several teams.

I’m tired of having to apologize for upfront thinking, effort expended in creating a project or product roadmap, defining initial product landing zones, or exploring options. Give thinking a chance. And find the right balance.

Who Defines (or Redefines) Landing Zone Criteria?

Who should be in on discussions that set landing zone criteria? Because most landing zones have architectural implications, someone knowledgeable about the system architecture, in addition to the product owner and other key stakeholders, should have a lot to say in vetting a landing zone.

Someone who has depth, breadth, and vision is an ideal candidate for crafting an initial cut. But even if you are brilliant, I suggest you fine-tune your landing zone with a small, informed group. If you have lots of stakeholders who want to chime in, give each stakeholder group a voice in identifying qualities and values they find particularly relevant. And ask a representative from each stakeholder group to join in on a landing zone discussion. At a landing zone review, expect healthy discussion. Experts are usually highly opinionated as well as passionate.

You might even want to facilitate your discussions.

I find it much more effective to have an informed facilitator guide landing zone discussions than a dispassionate, uninformed professional facilitator. An ideal landing zone meeting facilitator should know about the program or product but need not be the “authority” or definitive “expert”. It’s more important that they know the landscape and are good at gaining consensus and getting the best out of individuals who hold strong opinions. Possibilities: chief business architects, quality leads, the program or product manager, yes, even a software architect.

Sometimes a facilitator needs to step out of that role and offer informed opinions. I find this highly desirable, as long as this shift is made clear: “Hang on, do you mind if I take a stab at explaining what I think are more reasonable targets?”

Minimum, target, and acceptable values should be agreed upon by the group, and it might take some discussion to reach mutual understanding and consensus. For example, someone might initially propose a set of landing zone values based on historical trends and extrapolation. The software architect could push back with values based on prototyping experiments and new benchmark data. The group might end up adjusting targets because that evidence was compelling. Or they might agree on tentative values that need to be firmed up by an expert. Hammering out numbers just to finish the landing zone isn’t the goal. Instead, you want to shape ideas for what you think will make your product a success based on the best evidence you have, backed up by experience and tempered by group wisdom. To do this effectively, people need to come to the discussion with mutual respect, trust, and no hidden agendas.
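
To make the mechanics concrete, a landing zone is essentially a table of criteria, each with its agreed range of values. Here is a minimal sketch in Python; the attributes, the numbers, and the reading of “acceptable” as the agreed middle ground between minimum and target are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One landing zone row: an attribute and its agreed levels."""
    attribute: str
    minimum: str     # the least we could land with
    acceptable: str  # good enough for stakeholders to sign off
    target: str      # where we expect (and hope) to land

# Hypothetical values of the kind a group might hammer out, then
# recalibrate as benchmark and prototyping evidence comes in.
landing_zone = [
    Criterion("Checkout response time", "3 s", "2 s", "1 s"),
    Criterion("Transactions per hour", "50,000", "75,000", "100,000"),
    Criterion("Time to onboard a new customer", "2 days", "1 day", "2 hours"),
]
```

Reviewing a table like this row by row gives each stakeholder group a concrete place to voice concerns and to push back with evidence.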

And if you are agile, recognize that your landing zone can and should be recalibrated once you learn more about what’s possible.