Making Strong, Lively Centers

Making things with lively, cohesive centers (whether software, buildings, landscapes, educational experiences, or artfully designed bento boxes) involves hard work, practice, skill, reflection, and the development of a discriminating eye.

One great example of hard work over a long period of time was this bonsai boat tree I saw in Kyoto. This tree is over 600 years old!

Can you imagine the effort and attention the bonsai gardeners spent over the centuries to create, grow, and maintain this beautiful shape with its many centers?

I wish I could sit with great software designers and architects, soak up their wisdom, and then effortlessly incorporate that wisdom into my own code. I would love to write lively code without breaking a sweat. But that hasn’t been my experience.

My first Smalltalk code wasn’t very good. I didn’t immediately get the shift from procedural thinking, where I had to worry about controlling every aspect of the call chain, to that flowing object-oriented style where learning how to delegate responsibility was key.

Understanding how to make my Smalltalk code lively (because of stronger centers) took practice and experimentation, reflection, and more practice. And it took letting go of preconceived notions that no longer fit.

As I program in yet another programming language, I can’t avoid bringing along techniques I learned earlier. Some fit. Some do not. (I keep re-framing my notions of how to implement a good design.) And I keep adding useful programming techniques to my toolkit.

Techniques for constructing well-designed code are programming-language specific, even though the underlying principles of good design seem universal.

It took a while for me to realize that to become a better Smalltalk programmer I had to let go of my incessant urge to understand and control every little detail (something I had to do in my prior language, 8086 assembly). Trust in polymorphism. Delegate. Don’t try to do too much in any one method. Don’t pass in too many arguments. Let objects take responsibility for their actions.
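
To make that advice a bit more concrete, here is a small sketch (in JavaScript rather than Smalltalk, but the lesson carries over; the shape classes are invented for illustration). Instead of inspecting each object and controlling every detail from the outside, the caller trusts each object to answer for itself:

```javascript
// Hypothetical shapes, just to illustrate the point.
class Circle {
  constructor(radius) { this.radius = radius; }
  area() { return Math.PI * this.radius * this.radius; }
}

class Rectangle {
  constructor(width, height) { this.width = width; this.height = height; }
  area() { return this.width * this.height; }
}

// No type checks, no micromanaging the computation: each object takes
// responsibility for its own answer, and polymorphism picks the right method.
function totalArea(shapes) {
  return shapes.reduce((total, shape) => total + shape.area(), 0);
}

console.log(totalArea([new Circle(1), new Rectangle(2, 3)])); // roughly 9.14
```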

Even as I learned to let go of details, I still made dumb mistakes.

Initially I didn’t understand the difference between elegant and overly clever code (I liked Smalltalk blocks—er, closures). I didn’t realize the overhead of creating lots of closures that held on to context. I thought it was clever that my font management code held blocks that could read fonts from the file system (embedding references to external files in them, for goodness’ sake).

Seasoned Smalltalkers don’t make these mistakes. See this wiki page for a short discussion of Smalltalk and Closures and this Stack Overflow posting.

Was I tone deaf when it came to using blocks? I don’t think so. I just wasn’t paying attention to the right details. And I wasn’t looking in the right places for inspiration or guidance.

Ideally, instead of performing my own experiments, I should have been studying and emulating good examples, such as the Smalltalk collection hierarchy’s use of closures. There, code blocks are used elegantly to execute differential behavior. The Smalltalk collection hierarchy is one of the most beautiful sets of classes I’ve ever seen.
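
To give a rough feel for that elegance, here is a sketch in JavaScript rather than Smalltalk (the payment objects are made up). Passing a closure to a collection method is the same idea as Smalltalk’s select:, collect:, and inject:into:, where the block supplies the differential behavior:

```javascript
// Hypothetical data, standing in for any collection of domain objects.
const payments = [
  { id: 1, daysLate: 0, amount: 100 },
  { id: 2, daysLate: 12, amount: 250 },
  { id: 3, daysLate: 3, amount: 80 },
];

// Like Smalltalk's select: keep only the elements the closure answers true for.
const latePayments = payments.filter((p) => p.daysLate > 0);

// Like collect: transform each element with the closure.
const lateAmounts = latePayments.map((p) => p.amount);

// Like inject:into: fold the collection into a single value.
const totalLate = lateAmounts.reduce((sum, amount) => sum + amount, 0);

console.log(totalLate); // 330
```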

Fortunately, I had people around me who took the time to rewrite my code and explain to me why they did what they did. Consequently, I learned to write simpler, less clever, less resource intensive, more maintainable Smalltalk code.

Recently I have been programming in JavaScript. I was motivated to develop JavaScript code to front-end the client-side Java reference app we developed and use in our Enterprise Application Design course. For that initial programming exercise I took the stance that I’d use pretty much “stock” JavaScript libraries (hence my learning about jQuery) and keep things pretty simple.

Since that first whiff of JavaScript programming, I’ve been honing my JavaScript by learning more libraries and plugins and improving my programming skills. I am no expert. Not yet.

I’ve learned effective techniques somewhat randomly because I am not surrounded by JavaScript experts who teach me their craft. Combing through the Internet for advice and inspiration is haphazard, and it is compounded by the fact that our notion of good programming practices evolves over time as languages, tools, and libraries grow and evolve.

But now, after more time and experience, I can appreciate several coding practices that contribute to maintainable JavaScript. Such as:

Modules. At first, the coding technique to define a module just seemed confusing. It is. But modularity, which helps to define and separate code “centers,” is really important. Not only does it strengthen a “center” by making it more defined (and encapsulated), it also makes it easier to integrate with other code (a small sketch after this list pulls this and the other practices together).

Being aware of variable scope and limiting it.

Not constantly searching and mucking with DOM objects on every event. Initially I was content if my jQuery searches were “optimized”. Now I am thinking about how to avoid repeated DOM lookups by caching appropriate state in my own variables.

Not blindly nesting anonymous callbacks, but defining functions and then using them.
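
Here is the small sketch promised above, pulling these practices together. It is a minimal, hypothetical example (the counter widget and its element ids are invented, and it assumes jQuery is loaded): a module defined with an immediately invoked function expression, locally scoped state, a cached DOM reference, and a named event handler instead of a nested anonymous callback.

```javascript
// A small, self-contained "center": everything it needs lives inside,
// and only a tiny public face is exposed.
var counterWidget = (function ($) {
  // State and helpers are local to the module, not global variables.
  var count = 0;

  // Cache the DOM reference once instead of searching on every event.
  var $display = $('#counter-display'); // hypothetical element id

  // A named handler, defined once and reused, rather than an anonymous
  // callback nested inline at the call site.
  function handleIncrementClick() {
    count += 1;
    render();
  }

  function render() {
    $display.text(count);
  }

  $('#increment-button').on('click', handleIncrementClick);

  // The module's public interface.
  return {
    currentCount: function () { return count; }
  };
})(jQuery);
```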

These techniques contribute to better-defined untangled code centers. But I want to caution you: don’t blindly follow coding best practices without knowing about and buying into the rationale behind them. Arguably your code might be better if you do. But you won’t learn how to exercise judgment until you know more about why you are doing what you are doing. Understanding how to write code that has strong, lively centers takes time, feedback, and the right kind of experience.

When I first started programming in JavaScript I could not have appreciated these techniques. I needed to gain more experience before I could see their value. With time spent writing more code, looking at more good and bad code, discussing with others, and reflecting, I have gotten better at JavaScript. I’m not sure what steps I could leave out to shorten this process. It certainly is easier to learn how to write lively code if you work with others who care deeply about the code they write and who willingly point out and explain the good bits to you when you are ready to absorb them. If you are fortunate enough to have wise souls around you, take advantage of their wisdom…then put in the time you need to become better.

What Makes for Lively Centers?

In this blog I dig a bit deeper into what makes good, lively centers.

Let me introduce another property of lively centers: alternating repetition. Consider this photo I took of blooming plum trees in Kyoto.

The photo doesn’t do the scene justice. The flowering trees went on and on and on.

And on.

Forking off the thick trunks were ever thinner arching mossy green and dark branches covered with blossoms. Those blossoms seemed to float between branches forming a sea of pink. I could get lost in those trees.

Looking out over that landscape I felt peaceful, relaxed, and calm.

Earlier, walking the streets of Kyoto I snapped this photo (imagining the sign was inviting me to get with it, “chill out”, be calm and come inside to purchase whatever they were offering).

That sign made me laugh. It contrasts strong centers that reach out and grab me with the rather flat affect of everyday, more mundane centers. The sign made me curious, but not enough to go inside the shop.

Good alternating repetition doesn’t mean the same thing over and over again. It involves smaller sub-patterns of repeating structures. In preparing for our workshop on Alexander’s properties, I looked for an example of alternating repetition in my personal life. I jog. So it was easy to find alternating repetition in my running routine (Joe Yoder found it in his dancing).

I jog several times a week. I don’t do the same routine every day. Once a week, typically on Thursdays, I do tempo training with my running coach. She makes me run harder than I normally would for either a specific distance or time, then has me run easily for a bit to recover. I repeat this hard-run/easy-jog recovery cycle three or four times a session. Other days I do my normal easy 3+ mile runs (outside when the weather permits) through town. On the weekends I do a longer run of an hour or more at a comfortable pace. I repeat this cycle each week, with variations due to the running season (winter means slower and less running than summertime) or whether I am recovering from an injury, getting back to running after traveling, or recovering from a race.

Another property of strong centers is local symmetry. The photo of these shrines (again, taken in Kyoto) illustrates this.

The shapes of the rooflines, windows, and pedestals are similar, not identical. Slight variations make them more interesting.

Here is the welcoming Port wine and strawberry arrangement that my husband and I found in our Douro valley hotel room in Portugal.

Symmetrical. But berries are closer together on the right hand plate. The napkin folds differ. Perfect symmetry is less pleasing (at least to me) than near symmetry. Alexander claims that a hand-hewn quality strengthens centers (he calls this property “roughness”).

When I discover a strong system of centers I get an emotional kick. And there it is. You discover Alexander’s properties when you engage with the things in your life and form personal connections (rather than letting the scene just float by). Finding Alexander’s properties involves a bit of luck, developing a discriminating eye, and being on the alert for positive connections between what you are experiencing and/or making.

Making strong, lively centers is another matter altogether. Yet how hard can it be? Well…that is a topic for another blog post or two.

Discovering Lively Centers

Two weeks ago, Joe Yoder and I conducted a workshop on Discovering Alexander’s Properties in Your Life at AsianPLoP, the patterns conference held in Tokyo.

I’m still reeling from the many feelings that were stirred up as I prepared for this workshop. Inspired by the beauty we found in Kyoto, I included several photographs I took of that very beautiful place. Each property was illustrated with an image that resonated strongly with us (whether taken in Kyoto or not, each photo had a strong personal connection).

Before I tell more about the workshop, I want to give a gentle introduction to Christopher Alexander’s ideas on properties of things that have life. Fundamental to Alexander’s ideas is the notion of “centers” arranged in space. According to Alexander, things that have life exhibit one or more of fifteen essential properties, which include, among other things, strong centers and boundaries.

Alexander’s notion of a “center” is simple to grasp—it is a coherent entity that exists in space. Individual centers are important (and they exist at different levels of scale), but more profound is how centers are arranged in space to form a more integral whole. Alexander writes,

“The system of these centers plays a vital role in determining what happens in the world. The system as a whole—that is to say, its pattern—is the thing which we generally think of when we speak about something as a whole. Although the system of centers is fluid, and changes from time to time as the configuration and arrangement and conditions all change, still, at any given moment, these centers form a definite pattern. This pattern of all the centers appearing in a given part of space—constitutes the wholeness of that part of space. It is this structure, which is responsible for its degree of life.”

Here’s a photo I took in Hawaii for a talk I gave several years ago on the Nature of Order at another patterns conference. It illustrates the notion of a strong center:

I like this photo because it shows how the center of each orchid flower is accentuated and strengthened by the brown spots and the five petals that form a star shape around it. Not only is there a “center” to each flower (the stamen surrounding the pistil); there are several “centers” that surround that innermost center.

And here is the photo we showed at our Asian PLoP workshop to illustrate strong centers found on the roof line of an Imperial Palace building in Kyoto:

I leave it to you to find all the centers in this photo. The center cap on the top of the roofline accentuates the gold flower underneath. Underneath that is another circular center. Below that a symmetrical scroll. And there are centers (gold flowers) arranged along the roofline. Centers, when arranged in a pleasing fashion, complement and strengthen each other.

Centers are strengthened by boundaries that surround, enclose, separate, and connect them. Here’s a photo I took in Yellowstone Park of a crusty boundary at the edge of a bubbling hot spring:

The boundary between the hot spring and the surrounding land is fluid and ever changing (as witnessed by the salty stains left from evaporation at the water’s edge).

The wood slats wrapped around this tree at the Imperial Palace in Kyoto protect it from the wooden brace and form a boundary between the tree and the support:

After explaining and illustrating Alexander’s fifteen properties, we asked attendees to form groups to brainstorm and discuss Alexandrian properties that they found in their own lives. One group focused on Alexandrian properties they found in the Tokyo metro and railway system; another on the properties of bento boxes; and a third on properties in education and learning. I was surprised by the diversity (and how profound some of the examples were, even though at first blush they seemed straightforward and simple).
But that is the topic of my next blog post.

To close this post I want to share two photos that whimsically illustrate “life” my camera eye unexpectedly caught in Kyoto. This first photo is obvious:

The second takes a little bit of searching to find the “owl-like” creature:

Is Kyoto a magical place? I think so. It was amazing to discover human-like or animal-like images in photos of trees. I had no idea that those shapes were there until I looked at my photographs. My eye must have been unconsciously drawn to those shapes (but truly, I didn’t see them until I looked at the photos). Even more startling to me is the liveliness of inanimate things—whether a hand-crafted software module or a carefully placed garden pathway—a liveliness that is more subtle and also more profound. When we find strong centers surrounded by other strong centers in designed things, there is a pleasing sense of discovery and wonder.

Can’t I Just Be Reasonable?

“Don’t it always seem to go
That you don’t know what you’ve got
Till it’s gone” –Joni Mitchell, Big Yellow Taxi

My husband “loaned” my unused iPad to my father-in-law. I hadn’t used it in a year. He thought it might expand his dad’s horizons and bring the Internet to him for the first time. But my father-in-law didn’t use the iPad either. Upon finding it stashed in a drawer with its power drained (after a couple of months), I demanded it back.

I proceeded to load it with some New Yorkers to read on a trip.

It was great…for a very short while.

But this past week I read a physical New Yorker instead of its electronic cousin. There was something extremely satisfying about shuffling and folding its pages. Sure, I can listen to poets read their poems and enjoy the extra photos on my iPad. But the video clips? Not that interesting.

My iPad remains underutilized. And I am feeling a bit guilty about asking for it back. Why did I want it back? Why did I react so strongly to “losing” my unused iPad?

Daniel Kahneman, in Thinking Fast and Slow, gives insights into how we react to perceived gains and losses. We respond to a loss far more strongly than we do to an equivalent gain. Take something away and we’ll pine for it even more than its perceived value. And we are driven more strongly to avoid losses than to achieve gains. No, that isn’t rational. But it’s how we are wired.

Sigh. So chalk up my reaction to my loaned iPad to petty possessiveness and an ingrained reaction to perceived loss.

Even more distressing, Kahneman points out that we take on extra risks when faced with a loss. We continue to press on in spite of mounting losses. Losing gamblers keep gambling. Homeowners are reluctant to sell a house that is underwater in value and move on. And additional time and resources get allocated to late, troubled software projects with little or no hope for success. It’s easier than deciding to pull the plug.

Not surprisingly, our aversion to loss increases as the stakes increase. But not dramatically. Only when things get really, really bad do we finally pull back and stop taking avoidable risks. And to top that off, loaded, emotional words heighten our perception of risk (conjuring up scary imaginary risks that we then don’t react to rationally).

So knowing these things, how can I become a better decision-maker? Right now, I don’t see any easy fixes. Awareness is a first positive step. When I feel a pang of loss I’m going to try to dig deeper to see whether I need to shift my perspective (which might be hard to do in the heat of the moment, but nonetheless…). Especially when I suddenly become aware of a loss. Knowing about loaded, emotional words, I’m going to be sensitive to any emotional “negative talk” that could distort my perceptions of actual risks.

Still, I’m searching for more concrete actions to take that can help me react more rationally to perceived losses. Is this a hopeless cause? I’m interested in your thoughts.

Distinguishing Between Testing and Checking

At Agile 2013 Matt Heusser presented a history of how agile testing ideas have evolved in “Twelve Years of Agile Testing: And What Do We Do Now?” The most intellectually challenging idea I came away from Matt’s talk was the notion that testing and checking are different. I’m still trying to wrap my head around this distinction.

Disclosure: I’m not a testing insider. However, along with effective design and architecture practices, pragmatic testing is a passion of mine. I have presented talks at Agile with my colleague Joe Yoder on pragmatic test driven design and quality scenarios.

Like most, I suspect, I have a hard time teasing out a meaningful distinction between checking and testing. When I looked up definitions for testing and checking there was significant overlap. Consider these two definitions:

Testing: the means by which the presence, quality, or genuineness of anything is determined.

Testing: a particular process or method for trying or assessing.

And these for checking:

Checking: to investigate or verify as to correctness.

Checking: to make an inquiry into, search through, etc.

Using the first definition for testing, I can say, “By testing I determine what my software does.” For example, a test can determine the amount of interest calculated for a late payment or the number of transactions that are processed in an hour. Using the second meaning of testing, I can say, “I perform unit testing by following the test-first cycle of classic TDD,” or, “I write my test code to verify my class’s behavior after I’ve completed a first-cut implementation that compiles.” Both are particular testing processes or methods.

I can say, “I check that my software correctly behaves according to some standard or specification” (first meaning). I can also perform a check (using the second definition) by writing code that measures how many transactions can be performed within a time period.

I can check my software by performing manual procedures and observing results.

I can check my software by writing test code and creating an automated test suite.
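
For instance, an automated check can be as small as an assertion that compares a specific observation to an expected value. Here is a sketch using Node’s built-in assert module; calculateLateInterest and its rate are invented stand-ins for the late-payment interest example above.

```javascript
const assert = require('assert');

// Hypothetical domain function, standing in for "interest on a late payment."
function calculateLateInterest(balance, daysLate) {
  const dailyRate = 0.0005; // assumed rate, for illustration only
  return balance * dailyRate * daysLate;
}

// The check applies an algorithmic decision rule to a specific observation,
// confirming an existing belief about expected behavior.
assert.strictEqual(calculateLateInterest(1000, 10), 5);
console.log('late-interest check passed');
```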

I might want to assess how my software works without necessarily verifying its correctness. When tests (or evaluations) are compared against a standard of expected behavior they also are checks. Testing is in some sense a larger concept or category that encompasses checking.

Confused by all this word play? I hope not.

Humans (and speakers of any native language) explore the dimensions and extent of categories by observing and learning from concrete examples. One thing that distinguishes a native speaker from a non-native speaker is that she knows the difference between similar categories, and uses the appropriate concept in context. To non-native speakers the edges and boundaries of categories seem arbitrary and unfathomable (meanings aren’t found by merely reading dictionary definitions).

I’ve been reading about categories and their nuances in Douglas Hofstadter and Emmanuel Sander’s Surfaces and Essences. (Just yesterday I read about the subtle difference between the phrases “Letting the cat out of the bag” and “Spilling the beans.”)

So what’s the big deal about making a distinction between testing and checking?

Matt pointed us to Michael Bolton’s blog entry, Testing vs. Checking. Along with James Bach, Michael has nudged the testing world to distinguish between automated “checks” that verify expected behaviors and “testing” activities that require human-guided investigation and intellect and aren’t automatable.

In James Bach’s blog, Testing and Checking Refined, they make these distinctions:

“Testing is the process of evaluating a product by learning about it through experimentation, which includes to some degree: questioning, study, modeling, observation and inference.
(A test is an instance of testing.)

Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.
(A check is an instance of checking.)”

My first reaction was to throw up my hands and shout “Enough!” My reaction was that of a non-native speaker trying to understand a foreign idiom! But then I calmed down, let go of my urge to precisely know James and Michael’s meanings, accepted some ambiguity, and looked for deeper insight.

When Michael explained,

“Checking is something that we do with the motivation of confirming existing beliefs” while, “Testing is something that we do with the motivation of finding new information.”

it suddenly became more clear. We might be doing what appears to be the same activity (writing code to probe our software), but if our intentions are different, we could either be checking or testing.

Why is this important?

The first time I write test code and execute it I learn something new (I also might confirm my expectations). When I repeatedly run that test code as part of a test suite, I am checking that my software continues to work as expected. I’m not really learning anything new. Still, it can be valuable to keep performing those checks. Especially when the code base is rapidly changing.

But I only need to execute checks repeatedly on code that has the potential to break. If my code is stable (and unchanging), perhaps I should question the value of (and false confidence gained by) repeatedly executing the same tired old automated tests. Maybe I should write new tests to probe even more corners of my software.

And if tests frequently break (even though the software is still working), perhaps I need to readjust my checks. I’m betting I’ll find test code that verifies details that should be hidden or aren’t really essential to my software’s behavior. Writing good checks that don’t break so easily makes it easier to change my software design. And that enables me to evolve my software with greater ease.
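
As a sketch of that difference (the shopping cart here is entirely made up), compare checks pinned to observable behavior with one pinned to the internal representation:

```javascript
const assert = require('assert');

// A hypothetical cart whose internal representation is deliberately hidden.
function makeCart() {
  const items = []; // internal detail, free to change later
  return {
    add(sku, qty, price) { items.push({ sku, qty, price }); },
    contains(sku) { return items.some((item) => item.sku === sku); },
    total() { return items.reduce((sum, item) => sum + item.qty * item.price, 0); },
  };
}

const cart = makeCart();
cart.add('A42', 2, 10);

// Sturdy checks: they verify behavior callers actually rely on, so the
// internals can change (say, to a Map keyed by sku) without breaking them.
assert.strictEqual(cart.total(), 20);
assert.ok(cart.contains('A42'));

// A brittle check would poke at the internal array instead, e.g.
//   assert.deepStrictEqual(cart._items, [{ sku: 'A42', qty: 2, price: 10 }]);
// and would fail the moment the representation changed, even though the
// software still worked.
```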

When test code becomes stale, it is precisely because it isn’t buying any new information. It might even be holding me back.

I have a long way to go to become a fluent native testing speaker. And I wish that James and Michael could have chosen different phrases to describe these two categories of “testing” (perhaps exploration and verification?).

But they didn’t.
Fair enough.

Architecture at Agile 2013

What a busy, intense week Agile 2013 was! It was a great opportunity to connect with old friends and meet folks who share common interests and energy. I also had a lot of fun spreading the word/exchanging ideas about two things I’m passionate about: software architecture and quality.

At the conference I presented “Why We Need Architecture (and Architects) on Large-Scale Agile Projects”. I’ve presented this talk a few times. This time I added “Large Scale” to the title and submitted it to the enterprise agile track. I wanted to expose the audience to several ideas: that there are both small team/project architecture practices and larger project/program architectural practices that can work together and complement each other; what it means to be an architecture steward; and some practices (like Landing Zones, Architecture Spikes, and Bounded Experiments/prototyping) and options for making architecture-related tasks visible.

I spoke with several enthusiastic architects after my talk and throughout the week. They shared how they were developing their architecture. They also asked whether I thought what they were doing made sense. In general, it did. But I want to be clear: one size doesn’t fit all. Sometimes, depending on risks and the business you are in, you need to invest effort in experimenting/noodling/prototyping before committing to certain architectural decisions. Sometimes, it is absolutely a waste of time. It depends on what you perceive as risky/unknown and how you want to deal with it. The key to being successful is to do what works for you and your organization.

Nonetheless, in my talk when I spoke about some decisions that are too important to wait until the last moment, someone interrupted to say that I had gotten it wrong: “It isn’t the last possible moment, but the last responsible moment”. I know that. Yet I’ve seen and heard too many stories about irresponsible technical decision-making at the last possible moment instead of the last responsible moment. People confuse the two. And they use agile epithets to justify their bad behaviors. Surprise, surprise. The “last responsible moment” can be misinterpreted by some to mean, “I don’t want to decide yet (which may be irresponsible)”. People rarely make good decisions when they are panicked, overworked, stressed out, exhausted or time-crunched.

Check out my blog posts on the Last Responsible Moment and decision-making under stress if you want to join in on that conversation.

But I digress. Back to architecture.

I was happy to see two architecture talks on the Development and Software Craftsmanship track. I attended Simon Brown’s “Simple Sketches for Diagramming your Software Architecture” and also had the pleasure of hanging out with Simon to chat about architecture and sketching. Simon presented information on how to draw views of a system’s structure that are relevant to developers, not too fussy or formal, yet convey vital information. This isn’t hardcore technical stuff, but it is surprising how many rough sketches are confusing and not at all helpful. Simon regularly teaches developers to draw practical informative architecture sketches. He collects sample sketches from students before and after they receive his sketching advice. Their improvement is remarkable. If you want to learn more, go to Simon’s website, CodingTheArchitecture.com

I shared with Simon the sketching exercises in my Agile Architecture and Developing and Communicating Software Architecture workshops…and pointed him to two books I’ve drawn on for drawing inspiration: Nancy Duarte’s slide:ology and Matthew Frederick’s 101 Things I Learned in Architecture School. It’s all about becoming better communicators.

Scott Ambler talked about Continuous Architecture & Emergent Design. I was happy to see that he, too, advocated architecture spikes and envisioning (and proving the architecture with evidence/code). In his abstract he states: “Disciplined agile teams will perform architecture envisioning at the beginning of a project to ensure that they get going in the right direction. They will prove the architecture with working code early in construction so that they know their strategy is viable, evolving appropriately based on their learnings. They will explore new or complex technologies with small architecture spikes. They will explore the details of the architecture on a just-in-time (JIT) basis throughout construction, doing JIT modeling as they go and ideally taking a test-driven-development (TDD) approach at the design level.”

There are way too many concurrent sessions and too few hours in the day to get to all the talks I’d have liked to attend. I just wished I’d been able to attend Rachel Laycock and Tom Sulston’s talk on the DevOps track, “Architecture and Collaboration: Cornerstones of Continuous Delivery”…but instead I enjoyed Claire Moss’ “Big Visible Testing” experience report. Choices. Decisions.

If you’d like to continue the conversation about architecture on agile projects, I’d love to hear from you.

Architecture Patterns for the Rest of Us

Recently I have been reading and enjoying Robert Hanmer’s new book, Pattern-Oriented Software Architecture for Dummies. Disclaimer: in the past I have shied away from books in the For Dummies series, except when I wanted to learn about something clearly out of the realm of my expertise. Even then, just the notion of reading a book “for dummies” has stopped me from several Dummies book purchases. Good grief! I didn’t know what I’d been missing.

This book is not theoretical or dry. The prose is a pleasure to read. And it goes into way more depth than you might expect. Instead of simplifying the Pattern-Oriented Software Architecture book that is also on my bookshelf, I’d say it adds to and complements it, with clear explanations of the benefits and liabilities of each pattern, step-by-step guides to implementing each architectural pattern, and more. As an extra bonus, the first two parts of the book contain some of the clearest writing about what patterns are and how they can be used or discovered that I’ve seen.

I wish Bob Hanmer would write more pattern books for the Dummies series. He knows his subject. He has good, solid examples. And he doesn’t insult your intelligence (in contrast, I find that the Head First books are definitely not my cup of tea…I don’t want to play games or have patterns trivialized). Bob has an easy, engaging style of writing. The graphics and illustrations are compelling. In fact, I reproduced a couple of the graphics about finding patterns, with credit to Bob of course, in a lecture I gave last week to my enterprise application design students.

This is a good book. If you’ve wondered about software architecture patterns and styles, read this book. Buy it. And tell your software buddies about it, too.

Why Domain Modeling?

One barrier to considering rich domain model architectures is a misconception about the value or purpose of a domain model. To some, creating a domain model seems a throwback to earlier days where design and modeling were perceived to be discrete, lengthy, and mostly unproductive activities.

When object technology was young, several notable authors made a strong distinction between object-oriented analysis and object-oriented design and programming. Ostensibly, during object-oriented analysis you analyzed a task that you wanted to automate and developed an underlying conceptual (object) model of that domain. You produced a set of task descriptions and a model of objects that included representations of domain concepts and showed how these objects interacted to accomplish some work. But you couldn’t directly implement these objects. During object-oriented design you refined this analysis model to consider implementation and technology constraints. Only then, after finishing design, would you write your program. The implication was that any model you produced during analysis or design needed extensive manipulation and refinement before you could write your program.

But even in those early days, many of us blurred the lines between object-oriented analysis, design, and programming. In practice, how we worked was often quite different from what the popular literature of the time suggested. We might analyze the problem, quickly sketch out some design ideas, and then implement them. We might use CRC cards to model our objects (which we would then discard). There weren’t distinct gaps between analyzing the problem and designing and implementing a solution. Sometimes different people did analysis while others did design and programming; but many times a developer would do all these activities. Sometimes, we created permanent representations of some models (in addition to our code). It depended on the situation and the need.

These days, I rarely see anyone produce detailed object analysis or design models. In fact, design and modeling have gotten somewhat of a bad rap. Good object design is deemed too difficult for “average” programmers, and there isn’t time apart from coding to think about the domain and come up with any models.

The most common object models I see are created for one of two purposes: small conceptual models constructed to gain an understanding of significant new functionality, or informal design sketches intended to provide a quick overview to newcomers or non-code-reading folks who need to “know more” about the software. A lack of modeling (unless you consider code or tests to be models—I don’t) is prevalent whether or not the team is following agile practices. The most common models I do see are very detailed E-R models that are more implementation specifications than models. They don’t leave out any details, making it hard to find the important bits.

Understanding and describing a domain and creating any model of it in any form takes a back seat to most development activities.

But if your software is complex, rapidly changing, and strategic and you aren’t doing any domain modeling, you may be missing out on something really important. If your software is complex enough, you can greatly benefit from domain modeling and thoughtfully doing some Domain-Driven Design activities.

For example, Domain-Driven Design’s strategic design is a conscientious effort to create common understanding between business visionaries, domain experts, and developers. Initial high-level domain discussions lead to understanding what is central to the problem (the core domain) and the relationships between all the important parts (sub-domains) that it interacts with. Gaining such consensus helps you focus your best design efforts and structure (or restructure) your software to enable it to sustainably grow and evolve.

But it doesn’t stop there. If you buy into domain modeling, you also commit to developing a deep shared understanding of the problem domain along with your code. Your mission isn’t to just deliver working functionality, but to embed domain knowledge in your working solution. Your code will have objects that represent domain concepts. You’ll be picky about how you name classes and methods so they accurately reflect the language of the domain. You will have ongoing discussions with domain experts and jointly discuss and refine your understanding of the domain. Along the way you may sketch and revise domain models. You’ll strive to identify, preserve, strengthen and make explicit the connections between the business problem and your code. When you refactor your design as you gain deeper understanding, you won’t forget to reflect the domain in your code. Your domain model lives and evolves along with the code.
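
As a tiny, hypothetical sketch of what that can look like in code (the policy, payment, and fee rule below are invented, not drawn from any real system), the names come straight from the language a domain expert might use:

```javascript
// A domain concept, not a generic "record" manipulated from the outside.
class LateFeePolicy {
  constructor(graceDays, dailyRate) {
    this.graceDays = graceDays;
    this.dailyRate = dailyRate;
  }
  feeFor(payment) {
    const chargeableDays = Math.max(0, payment.daysLate - this.graceDays);
    return chargeableDays * this.dailyRate * payment.amount;
  }
}

class Payment {
  constructor(amount, daysLate) {
    this.amount = amount;
    this.daysLate = daysLate;
  }
  isLate() { return this.daysLate > 0; }
}

// The code reads much the way the business talks about the rule:
// "charge a late fee only for days beyond the grace period."
const policy = new LateFeePolicy(5, 0.001);
const payment = new Payment(200, 12);
if (payment.isLate()) {
  console.log(policy.feeFor(payment)); // roughly 1.4 (7 chargeable days)
}
```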

As a consequence, there isn’t that big disconnect between what you code and what the business talks about. And that can be a powerful force for even closer collaborations between developers and domain experts.

Domain-Driven Design Applied

Vaughn Vernon’s new book, Implementing Domain-Driven Design, is full of advice for those who want to apply design principles and patterns to implementing rich domain model applications.

Check out my interview with Vaughn for InformIT. We discuss why he wrote this book and some important new Domain-Driven Design patterns and their architectural impact.

There is much to learn about good design of complex software from this book. In future blog posts I plan to unpack some of the meatier domain-driven design concepts and reflect on how best to apply them to designing enterprise applications.

Evangelizing New Software Architecture Ideas and Practices

Last December I spoke at the YOW Conferences in Australia on Why We Need Architects (and Architecture) on Agile Projects.

I wanted to convince agile developers that emergent design doesn’t guarantee good software architecture. And often you need to pay extra attention to architecture, especially if you are working on a large project.

There can be many reasons for paying attention to architecture: Meaningful progress may be blocked by architectural flaws. Some intricate, tricky technical stuff may need to be worked out before you can implement functionality that relies on it. Critical components outside your control may need to be integrated. Achieving performance targets may be difficult, and you need to explore what you can do before choosing among expensive or hard-to-reverse alternatives. Many other developers may depend on some new technical bit working well (whether it be infrastructure, that particular NoSQL database, or that unproven framework). Some design conventions need to be established before a large number of developers start whacking away at gobs of user stories.

I characterized the role of an agile architect as being hands-on, a steward of sustainable development, and someone who balances technical concerns with other perspectives. One difference between large agile and small agile projects is that you often need to do more significant wayfinding and architectural risk mitigation on larger projects.

I hoped to inspire agile developers to care more about sustainable architecture and to consider picking up some more architecture practices.

Unfortunately, my talk sorely disappointed a thoughtful architect who faces an entirely different dilemma: he needs to convince non-agile architects to adopt agile architectural practices. And my talk didn’t give him any arguments that would persuade them.

My first reaction to his rant was to want to shout: Give up! It is impossible to convince anyone to adopt a new way of working that conflicts with his or her deeply held values.

But then again, how do new ways of working ever take hold in an organization? By having some buzz around them. By being brought in (naively or conscientiously) by leaders and instigators who know how to build organizational support for new ideas. By being new and sexy instead of dull and boring. By demonstrating results. By capturing people’s imagination or assuaging their fears. Surreptitiously, quietly replacing older practices when reasons for doing them are no longer remembered. When the old guard dies out or gives up.

Attitudes rarely change through compelling discussions or persuasive argumentation. I look to Mary Lynn Manns and Linda Rising’s Fearless Change: Patterns for Introducing New Ideas for inspiration.

I take stock of how much energy I want to invest in changing attitudes and how much investment people have in the way they are doing things now. I don’t think of myself as a professional change agent. Yet, as a consultant I am often brought in when organizations (not necessarily every person, mind you) want to do things differently.

People generally aren’t receptive to new ideas or practices or technologies when they feel threatened, dismissed, disrespected, underappreciated, or misunderstood. I am successful at introducing new techniques when they are presented as ways to reduce or manage risk, increase productivity or reliability, improve performance, or address whatever hot button the people I am exposing to these new ways of working are receptive to. Labeling techniques as “agile” or “lean” may create a buzz in those who are already receptive. But the reaction can be almost allergic in those who are not. The last thing I want to do is foster divisiveness. Labels do that.

If I get people comfortable taking just that next small step, that is often enough for them to carry on and make even more significant changes. Changing practices takes patience and persistence. At the end of the day I can only convince, demonstrate and empathize; I cannot compel people to make changes.