An Architect’s Dilemma: Should I Rework or Exploit Legacy Architecture?

I recently spoke with an architect who has been tuning up a legacy system built out of a patchwork quilt of technologies. As a consequence of its age and lack of common design approaches, the system is difficult to maintain. Error and event logs are written (in fact, many are), but they are inconsistent and scattered. When things go wrong, it is extremely hard to collect data and troubleshoot the system.

The architect has instigated many architectural improvements to this system, but the one that struck me as absolutely brilliant was his decision not to insist that the system be reworked to use a single common logging mechanism. Instead, logs were redirected to a NoSQL database that could then be intelligently queried to troubleshoot problems as they arose.

Rather than dive in and “fix” legacy code to be consistent, this was a “splice and intelligently interpret” solution that had minimal impact on working code. Yet this fairly simple fix made the lives of those troubleshooting the system much easier. No longer did they have to dig through various logs by hand. They could stare and compare a stream of correlated event data.
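The “splice and intelligently interpret” idea is easy to sketch: each legacy log keeps its own format, and a small adapter normalizes entries into a common document shape in the shared store, which can then be queried by time window. Here is a minimal Python sketch of that idea; the field names and the in-memory list standing in for the NoSQL collection are my own illustration, not details from the actual system:

```python
from datetime import datetime, timedelta

def normalize(source, raw_line, timestamp, level):
    """Adapter: wrap a raw legacy log line in a common document shape."""
    return {"source": source, "ts": timestamp, "level": level, "raw": raw_line}

def correlated_events(store, around, window_minutes=5, level="ERROR"):
    """Pull events from every subsystem near a point in time, oldest first."""
    lo = around - timedelta(minutes=window_minutes)
    hi = around + timedelta(minutes=window_minutes)
    hits = [e for e in store if lo <= e["ts"] <= hi and e["level"] == level]
    return sorted(hits, key=lambda e: e["ts"])

# A toy "collection": three subsystems, each with its own raw log format.
store = [
    normalize("billing", "ERR 500 charge failed", datetime(2012, 1, 5, 10, 2), "ERROR"),
    normalize("auth", "login ok for user 42", datetime(2012, 1, 5, 10, 1), "INFO"),
    normalize("batch", "job 7 aborted", datetime(2012, 1, 5, 10, 4), "ERROR"),
]

for event in correlated_events(store, around=datetime(2012, 1, 5, 10, 3)):
    print(event["source"], event["raw"])
```

None of the subsystems change; only the adapter and query layer are new, which is what keeps the impact on working code minimal.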

Early in my career I was often frustrated by discrepancies in systems I worked on. I envisioned a better world where the design conventions were consistently followed. I took pride in cleaning up crufty code. And in the spirit of redesigning for that new, improved world, I’d fix any inconsistencies that were under my control.

At a large scale, my individual clean-up efforts would be entirely impractical. Complex software isn’t the byproduct of a single mind. Often, it simply isn’t practical to rework large systems to make things consistent. It is far easier to spot and fix system warts early in their life than later, after myriad cowpaths have been paved and initial good design ideas have become warped and obfuscated. Making significant changes in legacy systems requires skill, tenacity, and courage. But sometimes you can avoid making significant changes if you twist the way you think about the problem.

If your infrastructure causes problems, find ways to fix it. Better yet (and here’s the twist): find ways to avoid or exploit its limitations. Solving a problem by avoiding major rework is just as rewarding as cleaning up cruft, even if it leaves a poor design intact. Such fixes breathe life into systems that by all measures should have been scrapped long ago. Fashioning fixes that don’t force the core of a fragile architecture to be revised is a real engineering accomplishment. In an ideal world I’d like time to clean up crufty systems and make them better. But not if I can get significant improvement with far less effort. Engineering, after all, is the art of making intelligent tradeoffs.

Agile Architecture Myths #4 Because you are agile you can change your system fast!

Agile designers embrace change. But that doesn’t mean change is always easy. Some things are harder to change than others. So it is good to know how to explain this to impatient product stakeholders, program managers, or product owners when they ask you to handle a new requirement that to them appears to be easy but isn’t.

Joe Yoder and Brian Foote, of Big Ball of Mud fame, provide insights into ways systems can change without too much friction. They drew inspiration from Stewart Brand’s How Buildings Learn. Brand explains that buildings are made of components organized into shearing layers. He identifies six layers: the site, the structure, the skin, the services, the space plan, and the physical stuff in the building.

Each shearing layer has its own value and speed of change, or pace. According to Brand, buildings are able to adapt because faster changing layers (e.g. the services layers and spaces) are purposefully designed so as not to be obstructed by slower changing layers. If you design your building well, it is fairly easy to change the plumbing. Much easier than revising the foundation. And it is even easier to rearrange the furniture. Sometimes designers go to extra lengths to make a component easier to change. For example, most conference centers are designed with sliding panels that form walls, allowing interior space to be quickly reconfigured.

Brand’s ideas shouldn’t be surprising to software developers who follow good design practices that enable us to adapt our software: keep systems modular, remove unnecessary dependencies between components, and hide implementation details behind stable interfaces.
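As a tiny illustration of that last practice, a slow-changing layer can expose a stable interface while the implementation behind it churns freely. This is a generic, hypothetical sketch; the EventStore name and methods are invented for the example:

```python
from abc import ABC, abstractmethod

class EventStore(ABC):
    """Stable, slow-changing interface: callers depend only on this."""
    @abstractmethod
    def save(self, event: str) -> None: ...
    @abstractmethod
    def recent(self, n: int) -> list: ...

class InMemoryEventStore(EventStore):
    """Fast-changing implementation detail; swappable without touching callers."""
    def __init__(self):
        self._events = []
    def save(self, event: str) -> None:
        self._events.append(event)
    def recent(self, n: int) -> list:
        return self._events[-n:]

def audit_trail(store: EventStore, n: int = 3) -> list:
    # Upper layer written against the interface, not the implementation.
    return store.recent(n)

store = InMemoryEventStore()
for e in ["created", "updated", "shipped", "billed"]:
    store.save(e)
print(audit_trail(store))  # ['updated', 'shipped', 'billed']
```

Replacing InMemoryEventStore with, say, a database-backed store changes a fast layer without disturbing the slower one above it.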

Foote and Yoder’s advice for avoiding tangled, hard-to-change software is to “Factor your system so that artifacts that change at similar rates are together.” They also present a chart of typical layers in a software system and their rates of change.

Frequently, we are asked to support new functionality that requires us to make changes deep in our system. We are asked to tinker with the underlying (supposedly slower changing) layers that the rest of our software relies upon. And often, we do achieve this miraculous feat of engineering because interfaces between layers were stable and performed adequately. We got away with tinkering with the foundations without serious disruption. But sometimes we aren’t so lucky. A new requirement might demand significantly more capabilities of our underlying layers. These types of changes require significant architectural rework. And no matter how agile we are, major rework requires more effort.

Because we are agile, we recognize that change is inevitable. But embracing change doesn’t make it easier, just expected. I’d be interested in hearing your thoughts about Foote and Yoder’s shearing layers and ways you’ve found to ease the pain of making significant software changes.

Who Defines (or Redefines) Landing Zone Criteria?

Who should be in on discussions that set landing zone criteria? Because most landing zone criteria have architectural implications, someone knowledgeable about the system architecture, in addition to the product owner and other key stakeholders, should have a lot to say in vetting a landing zone.

Someone who has depth, breadth, and vision is an ideal candidate for crafting an initial cut. But even if you are brilliant, I suggest you fine-tune your landing zone with a small, informed group. If you have lots of stakeholders who want to chime in, give each stakeholder group a voice in identifying qualities and values they find particularly relevant. And ask a representative from each stakeholder group to join in on a landing zone discussion. At a landing zone review, expect healthy discussion. Experts are usually highly opinionated as well as passionate.

You might even want to facilitate your discussions.

I find it much more effective to have an informed facilitator guide landing zone discussions than a dispassionate, uninformed professional facilitator. An ideal landing zone meeting facilitator should know about the program or product but need not be the “authority” or definitive “expert”. It’s more important that they know the landscape and are good at gaining consensus and getting the best out of individuals who hold strong opinions. Possibilities: chief business architects, quality leads, the program or product manager, and yes, even a software architect.

Sometimes a facilitator needs to step out of that role and offer informed opinions. I find this highly desirable, as long as this shift is made clear: “Hang on, do you mind if I take a stab at explaining what I think are more reasonable targets?”

Minimum, target, and acceptable values should be agreed upon by the group, and it might take some discussion to reach mutual understanding and consensus. For example, someone might initially propose a set of landing zone values based on historical trends and extrapolation. The software architect could push back with values based on prototyping experiments and new benchmark data. The group might end up adjusting targets because that evidence was compelling. Or they might agree on tentative values that need to be firmed up by an expert. Hammering out numbers just to finish the landing zone isn’t the goal. Instead, you want to shape ideas for what you think will make your product a success based on the best evidence you have, backed up by experience and tempered by group wisdom. To do this effectively, people need to come to the discussion with mutual respect, trust, and no hidden agendas.

And if you are agile, recognize that your landing zone can and should be recalibrated once you learn more about what’s possible.

Landing Zone Targets: Precision, Specificity, and Wiggle Room

A landing zone is a set of criteria used to monitor and characterize the “releasability” of a product. Landing zones allow you to take product features and system qualities and trade them off against each other to determine what an acceptable product has to be. Almost always these tradeoffs have architectural implications. If you’ve done something similar in the past, the criteria you should use to define your landing zone may be obvious. But for first-time landing zone builders, I recommend you task someone who knows about the product to take a first cut at establishing landing zone criteria, which are then reviewed and vetted by a small, informed group.

A business architect, product owner, or lead engineer might prepare a “proposed landing zone” of reasonable values for landing zone criteria that are questioned, challenged, and then reviewed by a small group. On one program I was involved with, the chief business architect made this initial cut. He was a former techno geek who knew his technical limits. More important, he had deep business knowledge, product vision, and a keen sense of where to be precise and where there should be a lot of flexibility in the landing zone values.

Some transaction criteria were very precise. Since they were in the business of processing a lot of transactions, they knew their past and knew where they needed to improve (based on projected increases in transaction volumes). For example, the transaction throughput target for a particular business process was based on extrapolations from the existing implementation (taking into account the new architecture and system deployment capabilities). This is a purposefully obfuscated example:


Example Landing Zone Attribute

| Characteristic | Minimum | Target | Outstanding |
| --- | --- | --- | --- |
| Payment processing transactions per day | 3,250,000 | 4,000,000 | 5,500,000 |

Some targets for explicit user tasks were very specific (one had a target of less than 4 hours with no errors, and an outstanding goal of 1 business day). On the other hand, many other landing zone criteria were only generally categorized as requiring either a patch, a new system release, or online update support. The definitions for what was a patch, a release or an online update were nailed down so that there was no ambiguity in what they meant.

For example, a patch was defined as a localized solution that took a month or less to implement and deploy. The goal was eventually to get closer to a week than a month, but they started out modestly. On the other hand, a release required coordination among several teams and an entire system redeployment. An online update was something a user could accomplish via an appropriate tool.

So, for example, the landing zone criteria for reconfiguring a workflow associated with a specific data update stream had minimum and target values of “release” and an outstanding value of “online update”.
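Categorical criteria like these can still be compared mechanically, because the categories are ordered by how easy the change is. A hypothetical sketch using an ordered enum (the relative ordering of patch versus release is my own assumption, not from the program described):

```python
from enum import IntEnum

class ChangeMechanism(IntEnum):
    # Ordered from hardest to easiest change (assumed ordering).
    RELEASE = 1        # multi-team coordination, full system redeployment
    PATCH = 2          # localized fix, roughly a month to deploy
    ONLINE_UPDATE = 3  # something a user accomplishes via a tool

def meets(achieved: ChangeMechanism, required: ChangeMechanism) -> bool:
    """An easier change mechanism than required still satisfies the criterion."""
    return achieved >= required

# The workflow-reconfiguration criterion from the example above:
minimum = target = ChangeMechanism.RELEASE
outstanding = ChangeMechanism.ONLINE_UPDATE

print(meets(ChangeMechanism.PATCH, minimum))        # True
print(meets(ChangeMechanism.RELEASE, outstanding))  # False
```

Nailing down the category definitions, as the team did, is what makes a comparison like this meaningful at all.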

When defining a landing zone for an agile product or program, carefully consider how precise you need to be and how many criteria are in your zone. Less precision allows for more wiggle room. Without enough constraints, however, it’s hard to know what is good enough. The more precise landing zone criteria are, the easier it is to tell whether you are on track to meet them. But if those landing zone criteria are too narrowly defined, there’s a danger of ignoring broader architecture and design concerns in order to focus only on specifically achieving targets.

We live in a world where there needs to be a balance. I’ll write more about who might be best suited to defining and redefining landing zones in another post.

Introducing Landing Zones

On an aircraft carrier, the landing zone describes a small section of deck that a pilot must touch down in to land the plane safely. By analogy, a landing zone for a product describes a range of measurable attributes that your product must deliver to achieve the product vision.

Landing zones are useful for products as well as complex projects and programs. For such complex systems it can be difficult to define “good enough to ship” without considering a lot of different factors and making tradeoffs between them. Recently I have been introducing landing zones as one technique for getting a bigger picture on agile projects and programs. I also find landing zones helpful in identifying architecture and design risks and potential required innovations.

I first learned about landing zones from Erik Simmons. Erik is responsible for creating, implementing, and spreading requirements engineering practices at Intel. Long ago, he and his colleagues taught requirements classes at Oregon Graduate Institute. That’s where I met Erik and learned about landing zones. Since then I’ve helped several clients use them for managing complex programs and projects. They are particularly useful when many requirements need to be considered and it is important not to lose sight of make-or-break requirements.

Online information about product or program landing zones is scarce (the only other reference I found was a brief glossary definition in Tom Gilb’s Competitive Engineering: A Handbook for Systems Engineering, Requirements Engineering, and Software Engineering Using Planguage).

At first glance a landing zone seems nothing more than a glorified table. Each row in the landing zone represents a measurable requirement. Each requirement has a range of acceptable values labeled Minimum, Target, and Outstanding. The goal is to have each requirement within this range at the end of development. Inside the range is the desired value, labeled Target. Minimum, Target, and Outstanding are relative to your budget and timeframe.

Here’s an example of a landing zone for a loan processing system (all the examples I am using are concocted for simplicity’s sake; any relation to landing zones for real projects is coincidental):

Landing Zone for a Loan System

| Attribute | Minimum | Target | Outstanding |
| --- | --- | --- | --- |
| Adding new loan agreement | 2 weeks | 24 hours | 12 hours |
| Add new product | 3 weeks | 2 weeks | 1 week |
| Adjust loan terms | 4 days | 2 days | 1 day |
| Assess loan risk | 1 day | 6 hours | 10 minutes |
| Assign loan servicer | 1 month | 1 week | 1 day |
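One way to mechanize a table like this is to store each row’s three thresholds and classify a measured value against them. The sketch below is my own illustration (durations converted to hours; smaller is better for these attributes):

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    minimum: float      # all values in hours; smaller is better here
    target: float
    outstanding: float

    def zone(self, measured: float) -> str:
        """Classify a measured duration against the three thresholds."""
        if measured <= self.outstanding:
            return "outstanding"
        if measured <= self.target:
            return "target"
        if measured <= self.minimum:
            return "minimum"
        return "missed"

# Two rows of the loan-system landing zone, converted to hours.
add_agreement = Criterion("Adding new loan agreement",
                          minimum=2 * 7 * 24, target=24, outstanding=12)
assess_risk = Criterion("Assess loan risk",
                        minimum=24, target=6, outstanding=10 / 60)

print(add_agreement.zone(20))  # target
print(assess_risk.zone(30))    # missed
```
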

And another for a smart phone (this one was cooked up by comparing competitive benchmark data):

Landing Zone for Smart Phone

| Attribute | Minimum | Target | Outstanding |
| --- | --- | --- | --- |
| Battery life – standby | 300 hours | 320 hours | 420 hours |
| Battery life – in use | 270 minutes | 300 minutes | 380 minutes |
| Footprint in inches | 2.5 x 4.8 x .57 | 2.4 x 4.6 x .4 | 2.31 x 4.5 x .37 |
| Screen size (pixels) | 600 x 400 | 600 x 400 | 960 x 640 |
| Digital camera resolution | 8 MP | 8 MP | 9 MP |
| Weight | 5 oz. | 4.8 oz. | 4.4 oz. |

A landing zone is similar to release criteria, except it allows for tolerances in acceptable values. There isn’t one number you are aiming for; you have a range of values for each product attribute or characteristic you are targeting. This gives you some flexibility in defining what’s “good enough.” You’ll note on the smart phone that the minimum acceptable screen size is exactly the same as the target. This is not uncommon. It just means that there is no margin between your target and what is minimally acceptable. Sometimes the variance between minimum, target and outstanding can be small (this is when you know what your target is and how to achieve it, and are willing to accept only marginally less).

When you have little wiggle room in meeting requirements, you might simply want to define acceptance criteria with hard and fast numbers that simply must be met. I find it helpful to define landing zones for those product attributes that have some degree of flexibility in their outcome.

I like the way landing zones can help bring focus to a lot of complexity. If you are building something really big, you can roll up your product’s success to a few dozen things to monitor (instead of hundreds). In contrast to a list of release criteria, a landing zone also allows you to see a bigger picture and make sense of it: When one attribute is edging below its minimum, what is happening with the others? Are they trending below minimum, too? If so, you have a big problem with achieving your overall product goals. If not, your landing zone allows you to achieve a successful product/system launch even if every requirement isn’t exactly on target.

Most important, landing zones allow you to make tradeoffs in multiple dimensions. The art is in understanding the tolerance for those attributes that define your landing zone, and then selecting reasonable values for minimum, target and outstanding.
If you are defining a landing zone for a new system or product, it may require you to do some research, experimentation, and prototyping to determine appropriate attributes and their values. If you are replacing an existing system, you probably know what capabilities need to be improved (and your minimum values are likely at least as good as the current system you are replacing).

If you have competitors, you will most likely benchmark their products as part of investigating what you can reasonably expect to achieve. For inspiration, look at comparative product reviews, where simple criteria, if met, are simply checked off, while other cells contain explicit values.

Yet what’s good enough? A good landing zone allows for some flexibility in meeting goals without forcing you to accept unreasonable compromises. If all your landing zone attributes fall in the minimum acceptable category, do you still have a viable product? By definition, yes, your product is minimally acceptable. But that doesn’t guarantee its success. It means you have “landed”. You didn’t miss the aircraft carrier, but you are perilously close to the edge. You’d like to be in the target zone for most of the attributes.

In my next post I’ll write about using landing zones on agile projects and how architects can and should be involved in defining, vetting, and recalibrating landing zones.
If you’ve had experiences with landing zones I’d like to hear from you.

Software Architecture Stewardship

On agile teams, architects do more than design and implement the interesting tricky bits. They typically balance a wide range of concerns: short-term goals, overall system design integrity, risks versus efforts, design expediency.

The successful agile architects I know aren’t ivory tower experts.

They take a leadership role in defining how the system is structured, organized and implemented, as well as how it evolves. They make sure there isn’t a hairball of component dependencies. They are hands-on and engaged in day-to-day development work. They actively ensure that the system is designed in a way that will sustain ongoing, incremental development. They are comfortable writing and refactoring code and figuring out how to fix things and improve the architecture. When things are “broken”, they often step in to help.
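That “hairball” concern is even checkable mechanically: given a map of which components depend on which, a depth-first search surfaces any dependency cycle. A small sketch, with an invented component graph:

```python
def find_cycle(deps):
    """Return one dependency cycle as a list of components, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in deps}
    path = []  # the chain of components currently being explored

    def visit(node):
        color[node] = GRAY
        path.append(node)
        for dep in deps.get(node, ()):
            if color.get(dep, WHITE) == GRAY:
                # dep is already on the current path: a cycle.
                return path[path.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE and dep in deps:
                found = visit(dep)
                if found:
                    return found
        color[node] = BLACK
        path.pop()
        return None

    for node in deps:
        if color[node] == WHITE:
            found = visit(node)
            if found:
                return found
    return None

# Hypothetical component graph: billing -> ledger -> reports -> billing.
deps = {"ui": ["billing"], "billing": ["ledger"],
        "ledger": ["reports"], "reports": ["billing"]}
print(find_cycle(deps))  # ['billing', 'ledger', 'reports', 'billing']
```

Tools can automate this kind of check, but the point stands: keeping dependencies untangled is ongoing, active work, not something declared once on a diagram.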

An example of this that stands out for me is the story an architect told of how he helped improve the performance of an underperforming website. It was way too slow, and doomed to be even slower unless they reworked the architecture. Over a period of two weeks he worked with a small team to analyze the performance and then refactor the design. He wasn’t a superhero; he just applied his know-how, working with the team who knew the deep technical bits. He succeeded in turning this design around because he knew they could improve things once they measured performance, found out where the real bottlenecks were, and worked to clean them up (using good design principles and practices). After finding the bottlenecks, they first restructured some of the JavaScript code to eliminate several extraneous trips to the server. Then they cleaned up a couple of service interfaces, cached some data on the service, and eliminated some of the more complex queries. Same functionality; only now the website performed up to snuff. He didn’t view himself as a hero, just as someone who did what was needed to get things back on track. He believed that they could clean things up if he spurred this activity with the right mindset and a fresh pair of eyes; and they did. (Not every attempt at architecture rework has such immediate payoffs… I recognize this, yet still admire his attitude and skill.)
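One of those fixes, caching data on the service so that repeated requests skip an expensive lookup, can be sketched generically. The rate lookup below is an invented stand-in, not the actual website’s code:

```python
import time

calls = {"count": 0}

def fetch_rates(region):
    """Stand-in for an expensive query against a backing service."""
    calls["count"] += 1
    return {"us": 0.05, "eu": 0.04}[region]

def cached(ttl_seconds, clock=time.monotonic):
    """Memoize a function's results for ttl_seconds per argument tuple."""
    def wrap(fn):
        store = {}
        def inner(*args):
            now = clock()
            hit = store.get(args)
            if hit is not None and now - hit[1] < ttl_seconds:
                return hit[0]  # cache hit: no trip to the service
            value = fn(*args)
            store[args] = (value, now)
            return value
        return inner
    return wrap

get_rates = cached(ttl_seconds=60)(fetch_rates)
print(get_rates("us"), get_rates("us"), calls["count"])  # 0.05 0.05 1
```

The second call returns the cached value, so the underlying fetch runs once; a TTL keeps stale data bounded.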

The definition of a steward is “someone who manages another’s property or financial affairs; one who administers anything as the agent of another or others”. I like this definition. To me, stewardship has deep implications about the role an architect ideally plays on a team. It means you pay attention, take care, and are responsible for the creation and upkeep of the software. You are responsible for safekeeping the architecture, yet you don’t own the architecture. Sure, you may work out key design details of the architecture, but you are a team player. The system’s success and sustainable development is more important than your own individual technical contributions. While being a technical leader, you also value teamwork. You don’t expect to be the only one coding or designing the challenging or interesting parts. You do what you can each and every day to sustain the system’s architecture.

Agile Architecture Myths #3 Good Architecture Emerges

The last time I left the cap off the toothpaste, a small blob of toothpaste flowed onto the counter. No planning; it just emerged.

Now I know that emergent software architecture is another thing entirely. We can’t anticipate everything about our software’s architecture before we start building it. It is impossible to get design “right” the first or the second or the third time. That’s why it is so important to iterate. Yet I don’t like to think that good software architecture simply emerges. It’s a bit more complicated than that.

Several synonyms for emerge leave me feeling uncomfortable. I’d rather not have my architecture materialize, loom or just crop up! Emergent behaviors are a reality with complex software. But emergence doesn’t equate to goodness, no matter how hard you wish it did.

Yet I’m not a fan of big upfront theoretical architecture, either. Emergent behavior can be an opportunity. I like the sense of learning and anticipation in these synonyms: become known, become visible, and come into view.

The architecture of complex systems simply has to unfold. We make architecturally relevant decisions based on what we know at any given point in time. And when we don’t know how to proceed we experiment (don’t tell your boss that, call it a design spike instead).

Our architectural ideas should change as we write more code and build more of a system. The many micro-decisions we make along the way should lead us to change our minds and our architecture. We shouldn’t have to live forever with bad choices. That’s the beauty of iterative and agile practices. We can fix and repair things. So this is my way of thinking about how good software architecture comes into being:

Good architecture doesn’t emerge; it evolves.

It’s deceptive to say good architecture emerges. I find that good architecture rarely “emerges.” We aren’t magicians who materialize good architecture. Good architecture involves hard work, reflection, and paying attention to details. Ideas for architectural improvements emerge from coding. And as long as you have the skills, chops, and know-how to make significant changes when they are needed, you can improve your architecture. It takes skill to keep your software’s architecture on a good path. Refactoring and redesign are essential. Only when you have the courage (and permission) to refactor and rework your design will your architecture keep pace with your software.

Agile Architecture Myths #2 Architecture Decisions Should Be Made At the Last Responsible Moment

In Lean Software Development: An Agile Toolkit, Mary and Tom Poppendieck describe “the last responsible moment” for making decisions:

Concurrent development makes it possible to delay commitment until the last responsible moment, that is, the moment at which failing to make a decision eliminates an important alternative.

And Jeff Atwood, in a thought-provoking blog post, argues that “we should resist our natural tendency to prepare too far in advance,” especially in software development. Rather than carry along too many unused tools and excess baggage, Jeff admonishes:

Deciding too late is dangerous, but deciding too early in the rapidly changing world of software development is arguably even more dangerous. Let the principle of Last Responsible Moment be your guide.

And yet, something about the principle of the last responsible moment has always made me feel slightly uneasy. I’ve blogged about a related topic (just barely enough design) before. And be aware that in both my personal and professional life I am not known as someone who plans things far in advance. As a consequence, I rarely use frequent flyer miles because I don’t anticipate vacation plans far enough ahead. I am not known to get to the airport hours ahead of my flight, either. My just-in-time decision-making and actions have been known to make my traveling companions a bit uneasy. They prefer a not-so-tight timeline.

But what about software development? Well, if I find an approach that seems worth pursuing, I’ll typically go for it. I like to knock off decisions that have architecturally relevant impacts early so I can get on to the grunt work. A lot of code follows after certain architectural choices are made and common approaches are agreed upon. Let’s make a rational decision and then vet it, not constantly revisit it, is my ideal.

Yet I know too-early architecture decisions are particularly troublesome as they may have to be undone (or if not, result in less-than-optimal architecture).

So what is it about forcing decision-making to be just-in-time, at the last responsible moment, that bugs me, the notorious non-planner? Well, one thing I’ve observed on complex projects is that it takes time to disseminate decisions. And decisions that initially appear to be localized (and not to impact others who are working in other areas) can and frequently do have ripple effects outside their initially perceived sphere of influence. And sometimes, in the thick of development, it can be hard to consciously make any decisions whatsoever. How I’ve coded up something for one story may inadvertently dictate the preferred style for implementing future stories, even though it turns out to be wrongheaded. The last responsible moment mindset can at times lull me (erroneously) into thinking that I’ll always have time to change my mind if I need to. I’m ever the optimist. Yet in order to work well with others and to produce habitable software, I sometimes need a little more forethought.

And so, I think I operate more effectively if I make decisions at the “most responsible moment” instead of the “last responsible moment”.

I’m not a good enough designer (or maybe I am too much of an optimist) to know when the last responsible moment is. Just having a last-responsible-moment mindset leaves me open to making late decisions. I’m sure this is not what Mary and Tom intended at all.

So I prefer to make decisions when they have positive impacts. Making decisions early that are going to have huge implications isn’t bad or always wasteful. Just be sure they are vetted and revisited if need be. Deferring decisions until you know more is OK, too. Just don’t dawdle or keep changing your mind. And don’t make decisions only to eliminate alternatives; make them to keep others from being delayed or bogged down waiting for you to get your act together. Remember you are collaborating with others. Delaying decisions may put others in a bind.

In short: make decisions when the time is right, which can be hard to figure out sometimes. That’s what makes development challenging. Decisions shouldn’t be forced or delayed, but taken up at the right moment. And to help me find those moments, I prefer the mindset of “the most responsible moment,” not the “last responsible” one.

Agile Architecture Myths #1 Simple Design is Always Better

Over the next few weeks I plan to blog about some agile software architecture and design beliefs that can cause confusion and dissent on agile teams (and angst for project and program managers). Johanna Rothman and I have jointly drawn up a list of misconceptions we’ve encountered as we’ve been working on our new agile architecture workshop. However, I take full responsibility for any ramblings and rants about them on my blog.

The first belief I want to challenge is this: simple designs are always better designs. If you want to quibble, you might say that I am being too strict in my wording. Perhaps I should hedge this claim by stating, “simple design is almost always better”. The corollary: more complex designs are never better. Complex design solutions aren’t as good as simpler solutions because they are (pick one): harder to test, harder to extend, harder to understand, or harder to maintain.

To break down the old bad habits of doing overly speculative design (and wasting a lot of time and effort over-engineering), keep designs simple. Who can argue against simplicity?

I can and will. My problem with an overly narrow “keep it simple” mindset is that it fosters the practice of keeping designs stupidly simple. Never allowing time to explore design alternatives before jumping in and coding something that works can lead to underperforming, bulky code. Never allowing developers to rework code that already works so that it handles more nuances only reinforces ill-formed management decisions to continually push for writing more code at the expense of designing better solutions. What we’ve done with this overemphasis on simplicity is to replace speculation with hasty coding.

Development may appear to go full throttle for a while with this absurdly simple practice, but on more complex projects, eventually the lack of concerted design effort can cause things to falter. Sometimes more complex solutions lead to increased design flexibility and far less code. But you will never know until you try to design and build them.

One of the hardest things for agile developers is to achieve an appropriate balance between programming for today and anticipating tomorrow. The more speculative any solution is, the more chance it has of being impacted by changing requirements. But sometimes, spending time looking at that queue of user stories and other acceptance criteria can lead you to consider more complex, scalable solutions earlier, rather than way too late. And therein lies the challenge: Doing enough design thinking, coding and experimentation at opportune times.

Slicing and Dicing Complex Projects…

In a recent post, Johanna Rothman asked the question, should agile teams Develop by Feature, Develop by Component, or Some Combination? Well, in a nutshell, my answer is, it depends.

I have seen teams try different approaches to this problem. And there is ample experience out there to draw upon. As Julian Sammy points out in his remarks, “My experience is that the feature and component layers have several many to many relationships, depending on the level of detail you’re working at…If you are building a new feature… then you should be able to factor this [into several components.]” Yep, a big feature needs to be broken down into smaller, bite-sized chunks. And sometimes those components can stand in relative isolation from each other. But if you are basing a feature on a common domain model, you may want to develop in a way that feature code interacts with a common domain model. That’s how followers of domain-driven design tackle this problem. So that’s one way to split up the work: one group works on building up a “core” of domain objects that the other components comprising the feature use. And in more complex projects there are even more layers of complexity. The code that implements a feature may need to interact with other systems and pre-existing components and services. And there may be parts of a system that have “upstream” and “downstream” relationships with your stuff. It can all get quite complicated, and hard to keep a clear picture of the relations between things in your head.

So it isn’t always the case that a group of people developing a feature are in control of their own destiny (in fact, they often are relying on components or other systems developed by people who are working far away).

One thing that Bernd, another commenter, points out is that managing the dependencies between components can get quite complicated. My experience matches up with Bernd’s. Even though you may have a timeline with expected deliveries between components, it still can be difficult to manage. No matter how well you specify an interface for what you need, it is always open to interpretation. Even with tests and exemplary code to illustrate what you want, or what you are providing, the devil is in the details. That’s why people encourage developing code that exercises functionality cutting across the systems and components you are trying to integrate. And if those systems or components aren’t ready yet, mocking is one way to get ready to integrate. But still, it isn’t easy.
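For instance, if a pricing service another team owns isn’t ready yet, a mock that honors the agreed interface lets cross-cutting code run today and be pointed at the real service later. All names here are invented for illustration:

```python
from unittest.mock import Mock

def checkout_total(cart, pricing_service):
    """Feature code: depends on a service another team is still building."""
    return sum(pricing_service.price_of(sku) * qty for sku, qty in cart.items())

# The real pricing service isn't ready, so stand in a mock that honors
# the agreed interface (price_of returns a unit price for a SKU).
pricing = Mock()
pricing.price_of.side_effect = {"apple": 2.0, "pear": 3.0}.get

total = checkout_total({"apple": 3, "pear": 1}, pricing)
print(total)  # 9.0
```

The mock only proves the feature code honors the agreed interface; the devil-in-the-details interpretation gaps still surface when the real service arrives.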

So while it is ideal to have control of a feature, on complex projects there will always be dependencies on other things that are not totally under your control. That’s what makes life so interesting, and the role of agile architecture (looking at what lies ahead while keeping yourself firmly planted in today’s realities) so challenging. And if you are working in an agile world, you take every opportunity to test out the difficult bits before you lock them down. That’s both a challenge and an opportunity.