What’s going on here?

Question: What is more exasperating than reading design documentation that doesn’t synch up with the code?
One answer: Writing such useless design documentation.
Another answer: Writing documents that don’t stand a chance of being aligned with the code.
My answer: Reading design documentation that is at the wrong level of abstraction or detail to help me do the task at hand.

Courtesy xkcd https://xkcd.com/license.html


A few years ago I worked with someone who, when asked for a high-level overview of some complex bits of code we were going to refactor, produced a detailed Visio diagram that went on for 40+ pages! In that document were statements lifted directly from his code. The intent was to represent the control logic and branching structure of some very complicated algorithms (he copied conditional statements into decision diamonds and copied code that performed steps of the algorithm directly into action blocks). To him, the exercise of creating this documentation had really clarified his design. I was amazed he went to such effort. To me, the control structure was evident from simply reading the code. Moreover, what I was looking for, and obviously didn’t communicate very well, was some discussion at a higher level: what algorithm variations there were, and how code and data were structured to handle the myriad typical and special cases.

I wanted to be oriented to the code and data and parts that were configurable and parameterized.

Later, I learned from one of his colleagues that “Bob” always explained things in great detail and wasn’t very good at “boiling things down.” If you wanted to ask “Bob” a question about the code you’d better plan on at least spending half an hour with him.

Another time a colleague created a sequence diagram explaining how the data consumed by a nightly cron job was created. What was missing was any description of what that cron job did. Maybe it was so obvious to him that he didn’t think it was important. But I lacked the context to get the full picture. So I had to dig into the cron job script to understand what was really going on.

When I want to get oriented to a large body of code, I like to see a depiction of how it is structured and organized and the major responsibilities of each significant part. Not the package or file structure (that will be useful, eventually), but how it is organized into significant modules, functions, classes and/or call structures. I want to know what the major parts of the system are and why they are there and relationships between them. And at a high level, how they interact. And then I want to understand key aspects of the system dynamics. Just by staring at code I’ll never fathom all the ins and outs of its execution model (e.g. what threads or processes there are, what important information is consumed, produced, or passed around).

Simon Brown gave a talk at XP2016 on The Art of Visualizing Software Architecture. Simon travels the world giving advice, training, and conference talks about how to do this. He has also built a tool called Structurizr which:

“blend[s] together the best parts of the various approaches and allow software developers to easily create software architecture diagrams that remain up to date when the code changes. Structurizr allows you to take a hybrid “extract and supplement” approach. It provides you with a way to create software architecture models using code. This means you can extract information from your code using static analysis and reflection, supplementing the model where information isn’t readily available. The resulting diagrams can be visualized and shared via the web.”

I had the opportunity to spend an hour with Simon getting a personalized demo of Structurizr, discussing the ins and outs of how his tool works, how it has been used, and some aspirations for the future.

I was initially surprised by how you specify what your “model elements and relationships” are: you write code specifications for this. To me, it seemed this could more easily be accomplished by writing some parseable external description or using a diagramming tool that let me link to the relevant code.

However, I know why Simon made this choice: he deeply believes that code should be the reference point for all structural architecture information. So he considers it a natural extension that developers write a bit more code to specify the important elements of the models they want to see and how to depict them. To Simon, it doesn’t make sense to just “add an element” to a diagram unless there is some attachment to actual code. For example, you may define a Hibernate specification to represent data access to a repository; that can then be specified as a data source. With external descriptions that are not based on code, there is a greater chance of them becoming disconnected from what the code actually does. So be it. This is where my values diverge a bit from Simon’s. I favor richer or higher-level documentation that isn’t code based if it tells me something I need to know to do a task (or something I shouldn’t do) or that gives me insights I wouldn’t find otherwise. I also like the freedom to embellish my diagrams with extra annotations, colors, highlights, and so on, so I can focus viewers’ attention. And consequently, I also like to create a key that describes the elements of any diagram so casual readers can understand my notation, too.

After processing these model descriptions, Structurizr spits out JSON descriptions that represent aspects of a C4 model: Context, Container, Component, and Class diagrams. These are then submitted to a service, which creates various model visualizations that can link directly to parts of your source code. There’s also a rudimentary ability to add additional bits of text describing your architecture and architecture requirements. I’m glad that, recognizing the limits of what code can tell about architecture, the tool provides an easy entry point for developers to tell more about the architecture in text.
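To make the “model as code, emitted as JSON” idea concrete, here is a minimal sketch in Python. This is not Structurizr’s actual API (which is library- and language-specific); the class and method names here are entirely illustrative of the approach: describe elements and relationships in ordinary code, then serialize them for a rendering service.

```python
import json

# Illustrative sketch only: define architecture model elements and
# relationships in code, then emit JSON for some downstream visualizer.
# These names (Model, add_element, ...) are hypothetical, not Structurizr's.

class Model:
    def __init__(self, name):
        self.name = name
        self.elements = []
        self.relationships = []

    def add_element(self, kind, name, description):
        element = {"kind": kind, "name": name, "description": description}
        self.elements.append(element)
        return element

    def add_relationship(self, source, destination, description):
        self.relationships.append({
            "source": source["name"],
            "destination": destination["name"],
            "description": description,
        })

    def to_json(self):
        return json.dumps(
            {"name": self.name,
             "elements": self.elements,
             "relationships": self.relationships},
            indent=2)

model = Model("Online Store")
user = model.add_element("person", "Customer", "Buys products")
web = model.add_element("container", "Web Application", "Serves the storefront")
db = model.add_element("container", "Database", "Stores orders and inventory")
model.add_relationship(user, web, "Uses")
model.add_relationship(web, db, "Reads from and writes to")
print(model.to_json())
```

The appeal is easy to see: because the model lives in code, it can be versioned, diffed, and (in Structurizr’s case) partly extracted from the real code base rather than drawn by hand.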

In our conversation, Simon emphasized that one important benefit of linking architecture documentation to code is that it never gets out of synch. The code for your architecture documentation can be versioned right along with your production code.

Fair enough. Useful even. But how helpful is the documentation that is generated in understanding what’s going on in your system?

I think this is one way (certainly not the only way) to provide an orientation into the overall structure of your system and the major relationships between its parts. But there are definite limits to what you can describe with models generated from static analysis of code. Such models don’t tell me the responsibilities of a component. I have to glean that from supplementary text (which I didn’t see an easy way to attach to the model descriptions) or abstract it by reading code. And they certainly won’t explain any dynamic system behavior or system qualities.

While I find Structurizr’s direct connection to the code intriguing, it is also its biggest limitation. Structurizr is intended to be used by developers who want to create architecture documentation that stays in synch with their code and can be shared via websites or wikis or embedded into other documents.

Structurizr has succeeded with this fairly modest goal: assist developers who don’t ordinarily generate any useful architecture diagrams whatsoever to do something.

But this isn’t enough. As designers or architects, only you can tell me answers to my larger questions about why your code is structured the way it is, and what are the important bits about the system and runtime execution that are intentional and crucial to understand and preserve. No diagram generated by a tool will ever replace your insights and wisdom. No amount of code comments can convey all that either. I need to hear (and read) and see more of this kind of stuff from you.

Being Agile About Documenting and Communicating Architecture


Software architects and developers often need to defend, critique, define, or explain many different aspects of the design of a complex system. Yet agile teams favor direct communication over documentation. Do we still need to document our designs?

Of course we do.

We won’t always be around to directly communicate our design to all current and future stakeholders. Personally, I’ve never found working code (or tests) to be the best expression of a software design. Tests express expectations about observable system behavior (not about the design choices we made in implementing that behavior). And the code doesn’t capture what we were thinking at the time we wrote that code or the ideas we considered and discarded as we got that code fully functioning. Neither tests nor code capture all our constraints and working assumptions or our hopes and aspirations for that code.

So what kind of design documentation should we create, how much, for whom, and when? And what is “good” enough documentation?

Eoin Woods gave a talk at XP 2016 titled “Capturing Design (When You Really Have To)” that got me to revisit my own beliefs on the topic and to think about the state of current agile practices on documenting architecture.

One takeaway from Eoin’s talk is to consider the primary purpose of any design description: is it primarily to immediately communicate or to be a long-term record? If your primary goal is to communicate on the fly, then Eoin claims that your documentation should be short lived, tailored to your audience, throwaway, and informal. On the other hand, design descriptions as records are likely to be long-lived, preserve information, be maintainable and organized, and more formal (or well-defined).

Since we are charged to deliver value with our working software, it is often hard to justify any effort perceived as “slowing down” to describe our systems as we build them. But if we’re building software that is expected to live long (and prosper), it makes sense to invest in documenting some aspects of that system—if nothing more than to serve as breadcrumbs useful to those working on the system in the future, or to our future selves.

So what can we do to keep design descriptions useful, relevant, unambiguous, and up-to-date? Eoin argues that to be palatable to agile projects, design documentation should be minimal, useful, and significant. It should explain what is important about the design and why it is important; what design decisions we made (and when); and what the major system pieces are, along with their responsibilities and key interactions. Because of my Responsibility-Driven Design values and roots, I like that he considers system elements and their responsibilities to be a minimally useful description of a system. But to me this is just a starting point to get an initial sense of what a system is and does and does not do. There certainly is room for more description, and more details when warranted.

And that gets us back to pragmatics. A design description isn’t the first thing developers think of doing (not everyone is a visual thinker or a writer). I know I’m atypical because, early in my engineering career, I enjoyed spending three weeks writing a document on how my universal linker worked, how to extend it, and its limitations. I was nearly as happy producing that document as I was designing and implementing that linker. It pleased me that sustaining engineering found that document useful years after I had left for another job.

So for the rest of you who don’t find it natural to create documentation, here’s some advice from Eoin:

  • Do it sprint-by-sprint (or little by little). Don’t do it all at once.
  • Be aware of tradeoffs between fidelity and maintainability. The more precise it is, the harder a description will be to keep up to date.
  • Know the precision needed by your document’s users. If they need details, they need details. The more details, the more effort required to keep them up to date.
  • Consider linking design descriptions to your code (more on that in another blog post).
  • Create a “commons” where design descriptions are accessible and shared.
  • Focus on the “gaps”—describe things that are poorly understood.
  • Always ask what’s good enough. Don’t settle for less when more is needed, or more when less is needed.

To this list I would add: if design descriptions are important to your company and your product or project, make it known. Explain why design documentation is important, respond to questions and challenges of that commitment, and then give people the support they need to create these kinds of descriptions. Let them perform experiments and build consensus around what is needed.

Be creative and incremental. One company I know made short video recordings of designers and architects giving brief talks about why things worked the way they did. They were really short—five minutes or less. Another team created lightweight architecture documentation as they enhanced and made architectural improvements to the 300+ working applications they had to support. It was essential for them that there be more than just the code, as knowledge about these systems was getting lost and decaying over time. Rather than throw up their hands and give up, they created just enough design documentation, using simple templates, and only as new initiatives were started.

Find a willing documenter. Sometimes a new person (who is new to the system and to the company) is a good person to pair with another old hand to create a high-level description of the system as part of “getting their feet wet.” But don’t just stick them with the documentation. Have them write code and tests, too. From the start.

Architecture at Agile 2013

What a busy, intense week Agile 2013 was! It was a great opportunity to connect with old friends and meet folks who share common interests and energy. I also had a lot of fun spreading the word/exchanging ideas about two things I’m passionate about: software architecture and quality.

At the conference I presented “Why we need architecture (and architects) on Large-Scale Agile Projects”. I’ve presented this talk a few times. This time I added “Large Scale” to the title and submitted it to the enterprise agile track. I wanted to expose the audience to several ideas: that there are both small team/project architecture practices and larger project/program architecture practices that can work together and complement each other; what it means to be an architecture steward; some practices (like Landing Zones, Architecture Spikes, and Bounded Experiments/prototyping); and options for making architecture-related tasks visible.

I spoke with several enthusiastic architects after my talk and throughout the week. They shared how they were developing their architecture. They also asked whether I thought what they were doing made sense. In general, it did. But I want to be clear: one size doesn’t fit all. Sometimes, depending on risks and the business you are in, you need to invest effort in experimenting/noodling/prototyping before committing to certain architectural decisions. Sometimes, it is absolutely a waste of time. It depends on what you perceive as risky/unknown and how you want to deal with it. The key to being successful is to do what works for you and your organization.

Nonetheless, in my talk when I spoke about some decisions that are too important to wait until the last moment, someone interrupted to say that I had gotten it wrong: “It isn’t the last possible moment, but the last responsible moment”. I know that. Yet I’ve seen and heard too many stories about irresponsible technical decision-making at the last possible moment instead of the last responsible moment. People confuse the two. And they use agile epithets to justify their bad behaviors. Surprise, surprise. The “last responsible moment” can be misinterpreted by some to mean, “I don’t want to decide yet (which may be irresponsible)”. People rarely make good decisions when they are panicked, overworked, stressed out, exhausted or time-crunched.

Check out my blog posts on the Last Responsible Moment and decision-making under stress if you want to join in on that conversation.

But I digress. Back to architecture.

I was happy to see two architecture talks on the Development and Software Craftsmanship track. I attended Simon Brown’s “Simple Sketches for Diagramming your Software Architecture” and also had the pleasure of hanging out with Simon to chat about architecture and sketching. Simon presented information on how to draw views of a system’s structure that are relevant to developers, not too fussy or formal, yet convey vital information. This isn’t hardcore technical stuff, but it is surprising how many rough sketches are confusing and not at all helpful. Simon regularly teaches developers to draw practical, informative architecture sketches. He collects sample sketches from students before and after they receive his sketching advice. Their improvement is remarkable. If you want to learn more, go to Simon’s website, CodingTheArchitecture.com.

I shared with Simon the sketching exercises in my Agile Architecture and Developing and Communicating Software Architecture workshops…and pointed him to two books I’ve drawn on for inspiration: Nancy Duarte’s slide:ology and Matthew Frederick’s 101 Things I Learned in Architecture School. It’s all about becoming better communicators.

Scott Ambler talked about Continuous Architecture & Emergent Design. I was happy to see that he, too, advocated architecture spikes and envisioning (and proving the architecture with evidence/code). In his abstract he states: “Disciplined agile teams will perform architecture envisioning at the beginning of a project to ensure that they get going in the right direction. They will prove the architecture with working code early in construction so that they know their strategy is viable, evolving appropriately based on their learnings. They will explore new or complex technologies with small architecture spikes. They will explore the details of the architecture on a just-in-time (JIT) basis throughout construction, doing JIT modeling as they go and ideally taking a test-driven-development (TDD) approach at the design level.”

There are way too many concurrent sessions and too few hours in the day to get to all the talks I’d have liked to attend. I just wished I’d been able to attend Rachel Laycock and Tom Sulston’s talk on the DevOps track, “Architecture and Collaboration: Cornerstones of Continuous Delivery”…but instead I enjoyed Claire Moss’ “Big Visible Testing” experience report. Choices. Decisions.

If you’d like to continue the conversation about architecture on agile projects, I’d love to hear from you.

Architecture Patterns for the Rest of Us

Recently I have been reading and enjoying Robert Hanmer’s new book, Pattern-Oriented Software Architecture for Dummies. Disclaimer: in the past I have shied away from For Dummies books, except when I wanted to learn about something clearly outside the realm of my expertise. Even then, just the notion of reading a book “for dummies” has stopped me from several purchases. Good grief! I didn’t know what I have been missing.

This book is not theoretical or dry. The prose is a pleasure to read. And it goes into way more depth than you might expect. Rather than simplifying the Pattern-Oriented Software Architecture book that is also on my bookshelf, I’d say it complements it, with clear explanations of the benefits and liabilities of each pattern, step-by-step guides to implementing each architectural pattern, and more. As an extra bonus, the first two parts of the book contain some of the clearest writing about what patterns are and how they can be used or discovered that I’ve seen.

I wish Bob Hanmer would write more pattern books for the For Dummies series. He knows his subject. He has good, solid examples. And he doesn’t insult your intelligence (in contrast, I find that the Head First books are definitely not my cup of tea…I don’t want to play games or have patterns trivialized). Bob has an easy, engaging style of writing. The graphics and illustrations are compelling. In fact, I reproduced a couple of the graphics about finding patterns, with credit to Bob of course, in a lecture I gave last week to my enterprise application design students.

This is a good book. If you’ve wondered about software architecture patterns and styles, read this book. Buy it. And tell your software buddies about it, too.

Why Domain Modeling?

One barrier to considering rich domain model architectures is a misconception about the value or purpose of a domain model. To some, creating a domain model seems a throwback to earlier days where design and modeling were perceived to be discrete, lengthy, and mostly unproductive activities.

When object technology was young, several notable authors made a strong distinction between object-oriented analysis and object-oriented design and programming. Ostensibly, during object-oriented analysis you analyzed a task that you wanted to automate and developed an underlying conceptual (object) model of that domain. You produced a set of task descriptions and a model of objects that included representations of domain concepts and showed how these objects interacted to accomplish some work. But you couldn’t directly implement these objects. During object-oriented design you refined this analysis model to consider implementation and technology constraints. Only then, after finishing design, would you write your program. The implication was that any model you produced during analysis or design needed extensive manipulation and refinement before you could write your program.

But even in those early days, many of us blurred the lines between object-oriented analysis, design, and programming. In practice, how we worked was often quite different from what the popular literature of the time suggested. We might analyze the problem, quickly sketch out some design ideas, and then implement them. We might use CRC cards to model our objects (which we would then discard). There weren’t distinct gaps between analyzing the problem and designing and implementing a solution. Sometimes different people did analysis while others did design and programming; but many times a developer would do all these activities. Sometimes we created permanent representations of some models (in addition to our code). It depended on the situation and the need.

These days, I rarely see anyone produce detailed object analysis or design models. In fact, design and modeling have gotten somewhat of a bad rap. Good object design is deemed too difficult for “average” programmers, and there isn’t time apart from coding to think about the domain and come up with any models.

The most common object models I see are created for one of two purposes: small conceptual models constructed to gain an understanding of significant new functionality, or informal design sketches intended to provide a quick overview to newcomers or non-code-reading folks who need to “know more” about the software. A lack of modeling (unless you consider code or tests to be models—I don’t) is prevalent whether or not the team is following agile practices. The most common models I do see are very detailed E-R models that are more implementation specifications than models. They don’t leave out any details, making it hard to find the important bits.

Understanding and describing a domain and creating any model of it in any form takes a back seat to most development activities.

But if your software is complex, rapidly changing, and strategic and you aren’t doing any domain modeling, you may be missing out on something really important. If your software is complex enough, you can greatly benefit from domain modeling and thoughtfully doing some Domain-Driven Design activities.

For example, Domain-Driven Design’s strategic design is a conscientious effort to create common understanding between business visionaries, domain experts, and developers. Initial high-level domain discussions lead to understanding what is central to the problem (the core domain) and the relationships between all the important parts (sub-domains) that it interacts with. Gaining such consensus helps you focus your best design efforts and structure (or restructure) your software to enable it to sustainably grow and evolve.

But it doesn’t stop there. If you buy into domain modeling, you also commit to developing a deep shared understanding of the problem domain along with your code. Your mission isn’t to just deliver working functionality, but to embed domain knowledge in your working solution. Your code will have objects that represent domain concepts. You’ll be picky about how you name classes and methods so they accurately reflect the language of the domain. You will have ongoing discussions with domain experts and jointly discuss and refine your understanding of the domain. Along the way you may sketch and revise domain models. You’ll strive to identify, preserve, strengthen and make explicit the connections between the business problem and your code. When you refactor your design as you gain deeper understanding, you won’t forget to reflect the domain in your code. Your domain model lives and evolves along with the code.
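As a small illustration of what “embedding domain knowledge in your working solution” can look like, here is a hypothetical shipping-domain sketch in Python. The names (Shipment, Leg, itinerary) are invented for this example; on a real project they would come out of conversations with domain experts, so the code speaks the language of the business and enforces its rules.

```python
from dataclasses import dataclass, field

# Hypothetical domain sketch: class and method names deliberately mirror
# the (invented) ubiquitous language of a shipping business.

@dataclass(frozen=True)
class Leg:
    origin: str
    destination: str

@dataclass
class Shipment:
    tracking_id: str
    itinerary: list = field(default_factory=list)

    def add_leg(self, leg: Leg):
        # Domain rule, stated in domain terms: each leg must depart
        # from where the previous leg arrived.
        if self.itinerary and self.itinerary[-1].destination != leg.origin:
            raise ValueError("leg does not connect to the itinerary")
        self.itinerary.append(leg)

    def final_destination(self):
        return self.itinerary[-1].destination if self.itinerary else None

shipment = Shipment("TRK-001")
shipment.add_leg(Leg("Portland", "Chicago"))
shipment.add_leg(Leg("Chicago", "Boston"))
print(shipment.final_destination())  # Boston
```

Notice that a domain expert could read `add_leg` aloud and recognize the rule it enforces; that recognizability, kept alive as the code evolves, is the point.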

As a consequence, there isn’t that big disconnect between what you code and what the business talks about. And that can be a powerful force for even closer collaborations between developers and domain experts.

Domain-Driven Design Applied

Vaughn Vernon’s new book, Implementing Domain-Driven Design, is full of advice for those who want to apply design principles and patterns to implementing rich domain model applications.

Check out my interview with Vaughn for InformIT. We discuss why he wrote this book and some important new Domain-Driven Design patterns and their architectural impact.

There is much to learn about good design of complex software from this book. In future blog posts I plan to unpack some of the meatier domain-driven design concepts and reflect on how best to apply them to designing enterprise applications.

Evangelizing New (Software Architecture) Ideas and Practices

Last December I spoke at the YOW Conferences in Australia on Why We Need Architects (and Architecture) on Agile Projects.

I wanted to convince agile developers that emergent design doesn’t guarantee good software architecture. And often, you need to pay extra attention to architecture, especially if you are working on a large project.

There can be many reasons for paying attention to architecture: Meaningful progress may be blocked by architectural flaws. Some intricate, tricky technical stuff may need to be worked out before you can implement functionality that relies on it. Critical components outside your control may need to be integrated. Achieving performance targets may be difficult, and you need to explore what you can do before choosing among expensive or hard-to-reverse alternatives. Many other developers may depend on some new technical bit working well (whether it be infrastructure, that particular NoSQL database, or that unproven framework). Some design conventions need to be established before a large number of developers start whacking away at gobs of user stories.

I characterized the role of an agile architect as being hands-on: a steward of sustainable development, someone who balances technical concerns with other perspectives. One difference between large and small agile projects is that you often need to do more significant wayfinding and architectural risk mitigation on larger projects.

I hoped to inspire agile developers to care more about sustainable architecture and to consider picking up some more architecture practices.

Unfortunately, my talk sorely disappointed a thoughtful architect who faces an entirely different dilemma: he needs to convince non-agile architects to adopt agile architectural practices. And my talk didn’t give him any arguments that would persuade them.

My first reaction to his rant was to want to shout: Give up! It is impossible to convince anyone to adopt a new way of working that conflicts with his or her deeply held values.

But then again, how do new ways of working ever take hold in an organization? By having some buzz around them. By being brought in (naively or conscientiously) by leaders and instigators who know how to build organizational support for new ideas. By being new and sexy instead of dull and boring. By demonstrating results. By capturing people’s imagination or assuaging their fears. Surreptitiously, quietly replacing older practices when reasons for doing them are no longer remembered. When the old guard dies out or gives up.

Attitudes rarely change through compelling discussions or persuasive argumentation. I look to Mary Lynn Manns and Linda Rising’s Fearless Change: Patterns for Introducing New Ideas for inspiration.

I take stock of how much energy I want to invest in changing attitudes and how much investment people have in the way they are doing things now. I don’t think of myself as a professional change agent. Yet, as a consultant, I am often brought in when organizations (not necessarily every person, mind you) want to do things differently.

People generally aren’t receptive to new ideas or practices or technologies when they feel threatened, dismissed, disrespected, underappreciated, or misunderstood. I am successful at introducing new techniques when they are presented as ways to reduce or manage risks, increase productivity or reliability, improve performance, or whatever hot button the people I am exposing these new ways of working to are receptive to. Labeling techniques as “agile” or “lean” may create a buzz among those who are already receptive. But the reaction can be almost allergic in those who are not. The last thing I want to do is foster divisiveness. Labels do that. If I get people comfortable taking just that next small step, that is often enough for them to carry on and make even more significant changes. Changing practices takes patience and persistence. At the end of the day I can only convince, demonstrate, and empathize; I cannot compel people to make changes.

How far should you look ahead?

I’m a consultant. So an easy answer would be, “It depends.”

My real answer is more nuanced: only look ahead far enough to see some significant issue and a plausible way to resolve it. If you want to get better, practice looking ahead in the context of your work and its established rhythms and natural cycles.

If your organization doesn’t value planning and you want to look ahead, start by taking small steps. The reason to look ahead is to check for some pre-work or preparation that is likely to save time or money.

If you are a tech lead, try looking ahead in the context of release planning. See what architecture risks you spot (or new design requirements) as new features are scoped. Take notes on what you are concerned about. Spend some time speculating on what you should be doing architecturally to support the next release. Have a conversation with a trusted colleague about your concerns. Identify concrete follow-on activities specifically targeted at addressing your biggest concerns.

If you are an agile developer, you may not want to look further ahead than the current sprint. If you aren’t already doing so, try some iteration modeling for a couple of hours at the beginning of an iteration. Grab a handful of related story cards for a new feature and sketch out some design ideas. Draw sequence diagrams on a white board. Jot down class or component responsibilities on CRC cards. Noodle around. Give yourself the luxury of exploring with your co-workers how things might work. Don’t expect a final design, just a plausible one that will be fleshed out and revised as you implement those stories.

If your organization tends to look too far ahead, take actions to personally shift your focus away from endless planning. Winston Churchill said, “It is always wise to look ahead, but difficult to look further than you can see.” Looking too far ahead is also a big waste of time. Remedy this time suck by limiting the scope of what you plan for. Find opportunities to build and measure things. Learn to perform small experiments, make small decisions, communicate your results, and move things along.

Colleagues and co-workers I’d classify as over-eager forward-looking designers typically build up comfortable infrastructure they think will be needed as a matter of course before they settle in to the programming task at hand. A good indicator that they’ve gone overboard is how others react to their work. One unhappy forward-looking architect remarked recently, “Not many people appreciate what I am trying to accomplish.”

You may not be aware that you look too far ahead. One indicator: you view most programming tasks as an opportunity to build or try out a cool new framework or create your own special DSL. (If you are part of an infrastructure team, then, fine, that is your job. Don’t embellish your creations. And make sure others can actually use your stuff.) If you are an application developer, nudge yourself towards refactoring and finding abstractions through concrete scenarios. Hold yourself back from writing speculative code.

I look to my colleague, Joe Yoder, as someone who takes a balanced approach when creating infrastructure or suggesting novel architectures. Joe likes to build flexible, adaptable systems, in particular Adaptive Object-Model architectures. He has years of experience doing this. He’s enthusiastic when opportunities come up to build these kinds of systems, but knows that this architecture style isn’t a good fit for every rapidly changing system. And he doesn’t build every part of such a system this way; only the parts that require extensive, ongoing customizations.

How far do you look ahead? Do you think you’ve found a good balance?

Agile Architecture Myth #5: Never Look Ahead

Stop it! Stop anticipating! Get coding. Get that next user story working. Don’t. Ever. Anticipate.

This seems to be a common agile mantra these days. Since we allegedly aren’t good at anticipating how code will evolve, well frack it, let’s just stop all speculation.

Instead, just do your level best to code for what you know right now. Let the design emerge. Keep it clean. Period.

But is that the best we can do?

Underlying these beliefs is the assumption that you will get burned if you look ahead because you are bad at anticipating change or the need for design flexibility. Since you never get it right, why ever bother looking ahead?

Well, who says we are bad at looking ahead? Is it really true?

The fact is we all make design mistakes, whether we look ahead or not. But that doesn’t mean looking ahead is the cause of more mistakes. (Sometimes it even prevents them.)

I believe that you can and should look ahead. And that most developers, given half a chance, are pretty good at incorporating past experiences and making anticipatory design choices. We might even say that an “experienced developer” is one who has the ability to use past experiences to help make good forward-looking decisions.

When you refactor code to reduce dependencies you are making an on-the-spot judgment call about what part of your code is more stable and what is more likely to change. You make these decisions based on concrete evidence. You are only speculating a little bit (things keep changing here, so I’ll try this).
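To make that judgment call concrete, here is a minimal sketch (all names are hypothetical, not from any particular codebase) of this kind of dependency-reducing refactoring: the part judged likely to change, how notifications get delivered, is isolated behind a small interface, while the stable business logic depends only on the abstraction.

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    """The volatile part: delivery channels keep changing
    (email today, chat tomorrow), so hide them behind an interface."""
    @abstractmethod
    def send(self, message: str) -> None: ...

class RecordingNotifier(Notifier):
    """A concrete channel; here it just records messages."""
    def __init__(self) -> None:
        self.sent: list[str] = []

    def send(self, message: str) -> None:
        self.sent.append(message)

def confirm_order(order_id: str, notifier: Notifier) -> str:
    # The stable part: business logic that knows nothing about
    # how notifications are actually delivered.
    text = f"Order {order_id} confirmed"
    notifier.send(text)
    return text
```

The payoff of the speculation is that when the delivery mechanism changes again, only a new `Notifier` subclass is written; `confirm_order` stays untouched.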

Experienced developers look ahead at a larger scale. And the longer they’ve been at it, the more they incorporate past experience into the daily habit of looking just a little bit ahead. You learn to anticipate where the design wrinkles will be.

If you’re less experienced, this seems like magic. How can you ever know when you’ve made an appropriate anticipatory design decision when the payoff only comes sometime in the future? Well, you don’t ever know for certain.

But you get better at looking ahead by practicing it in a safe, supportive development environment. One where you are encouraged to look around and look ahead. You get to see what more senior people do. And, if you are lucky, when you ask them why they are doing something they take the time to explain their reasoning.

Looking ahead isn’t antithetical to refactoring. In fact, it might even help you avoid some rework. Over time, we learn to make even more reasonable judgment calls based on experience, wisdom, and gut instinct. (And sometimes people think that we aren’t thinking ahead because we can fluidly make on-the-spot design changes.)

If you never practice looking ahead, well, you might just be doomed to refactoring your way to a good design. For me, that simply doesn’t cut it. And it can simply be too painful and inefficient for large, complex projects.

Recently, a thoughtful agile coach and architect told me:

“When I start a project, I try to figure out what is the minimum architecture we need to do for that context. I start from a few key things:
What are the requirements in terms of performance, scalability, reliability, security, localization, functionality etc.?

What deployment structure best fits these requirements?

What platforms and frameworks facilitate the development of the application?

What don’t we know how to do at this point? How can we find out?”

Asking these questions helps him, “define a basic but good architecture. One that ensures the right [architectural] features, minimizes time/costs and reduces risks.” That’s just enough upfront speculation to establish a baseline architecture.

But being ever thoughtful, he also asked, “I am wondering however: Should I do more? In what circumstances?”

Under certain circumstances it may be appropriate to do even more upfront design work.

Agile heresy?

I think not.

When we tackle really big unknowns it makes sense to explore more before making costly investments. Experimentation backed up by evidence-based decision making makes our designs better. Looking ahead can save us time.

But still, too many are deceived into believing in those vividly scary look-ahead failure stories. Then they overreact, pulling back from any speculative design.

My advice: Don’t overlook the many times when you have successfully looked ahead and saved time and effort. Not looking ahead is symptomatic of inexperience.

An Architect’s Dilemma: Should I Rework or Exploit Legacy Architecture?

I recently spoke with an architect who has been tuning up a legacy system built out of a patchwork quilt of technologies. As a consequence of its age and the lack of common design approaches, the system is difficult to maintain. Error and event logs are written (many of them, in fact), but they are inconsistent and scattered. It is extremely hard to collect data from the system and troubleshoot it when things go wrong.

The architect has instigated many architectural improvements to this system, but the one that struck me as absolutely brilliant was not insisting that the system be reworked to use a single common logging mechanism. Instead, logs were redirected to a NoSQL database that could then be intelligently queried to troubleshoot problems as they arose.

Rather than dive in and “fix” legacy code to be consistent, this was a “splice and intelligently interpret” solution that had minimal impact on working code. Yet this fairly simple fix made the lives of those troubleshooting the system much easier. No longer did they have to dig through various logs by hand. They could stare and compare a stream of correlated event data.

Early in my career I was often frustrated by discrepancies in systems I worked on. I envisioned a better world where the design conventions were consistently followed. I took pride in cleaning up crufty code. And in the spirit of redesigning for that new, improved world, I’d fix any inconsistencies that were under my control.

At a large scale, my individual clean up efforts would be entirely impractical. Complex software isn’t the byproduct of a single mind. Often, it simply isn’t practical to rework large systems to make things consistent. It is far easier to spot and fix system warts early in their life than later, after myriad cowpaths have been paved and initial good design ideas have become warped and obfuscated. Making significant changes in legacy systems requires skill, tenacity, and courage. But sometimes you can avoid making significant changes if you twist the way you think about the problem.

If your infrastructure causes problems, find ways to fix it. Better yet (and here’s the twist): find ways to avoid or exploit its limitations. Solving a problem by avoiding major rework is just as rewarding as cleaning up cruft. Even if it leaves a poor design intact. Such fixes breathe life into systems that by all measures should have been scrapped long ago. Fashioning fixes that don’t force the core of a fragile architecture to be revised is a real engineering accomplishment. In an ideal world I’d like time to clean up crufty systems and make them better. But not if I can get significant improvement with far less effort. Engineering, after all, is the art of making intelligent tradeoffs.