Working: share your experience

One of my all-time favorite books is Studs Terkel’s Working. In it, he captured people talking about what they do and how they feel about it. People tell amazing stories about hard work. The book is about “ulcers as well as accidents, about shouting matches as well as fistfights, about nervous breakdowns as well as kicking the dog around.” I hope you don’t find software development so grim (I don’t). But maybe you have an intriguing story to tell.

OOPSLA is soliciting Practitioner Reports. The submission deadline is just a short time away: March 19th. If you submit a report and it is accepted, Ralph Johnson or someone on his committee will help you tell your story. There are many more interesting stories to be told, and if you have one, please consider telling it at OOPSLA. Last year there was a great story about how software objects finally made it to space in a JPL satellite through the persistence of a guy who really, really believed in the benefits of OO programming. A grad student at Portland State presented his experience exploring Traits, work he began as an undergrad. I remember another report where a developer recounted his use of refactoring tools to write transformation rules to transmogrify a massive Smalltalk legacy application’s database access layer. And I recall another telling how a framework for scientific applications evolved. There are always interesting stories to tell, and OOPSLA is a place where you can tell them, whether they be about your latest adventures with web services, domain-driven design, aspects, agile or open source development, or the challenges of developing applications in distributed teams.

Martin Fowler is no Kent Beck

I know the difference between those two…. When authors make mistakes, readers notice. In my latest IEEE Software Design Column, “Driven…to Discovering Your Design Values,” I quoted Martin Fowler as claiming that test-driven development,

“gives you this sense of keeping just one ball in the air at once, so you can concentrate on that ball properly and do a really good job with it.”

Kent quoted Martin in his book Test-Driven Development: By Example.
It gets a little tricky when you cite someone quoting someone else. Originally, I had the reference to the book inline with the quote attributed to Martin (but my citation only listed the book title, not the author). My editor moved that citation to the back of my article and understandably filled in Martin as the author. It’s an easy mistake to make, and I didn’t double-check the references when they were refactored. I assumed my editor would fill in the author appropriately and double-check the citation.

My fault. You heard it from me first. When anything is refactored, whether it is code or citations or comments, you need to check twice. Since I don’t have an xUnit tool to write tests for citations and quotations, this has to be done by visual inspection. I’ll know better next time. Thanks, Mirko and Shinobu, for caring enough to email me about this mistake.

Translating Object Design

Recently I received a copy of the Chinese translation of my book, Object Design: Roles, Responsibilities, and Collaborations. It took nearly three years to translate. I hope it sells well, although authors get a mere pittance for selling translation rights (think $100 or less, and I have to split that with my co-author). So I won’t be making money even if it becomes a best seller in China. Creating and marketing a best-selling book can be dicey, as an interesting blog posting by someone in the book publishing business points out. (I didn’t write this book to retire on; I wrote it to get the word out about role-based object design.)

I found it fascinating to compare the English and Chinese editions. As I thumbed through the book, I noticed several features were missing that we thought essential when we originally worked with our editor (in hindsight, our requests added to the book’s cost but probably made little difference in how well it has sold). We convinced our editor to produce a two-color book so we could annotate drawings using blue. The editors then figured out ways to incorporate blue into subsection titles, boxed-in examples, etc. The Chinese edition was black and white and had no index. Margin comments had been boxed off and inserted along with the main text. That certainly made formatting a lot easier.

It was interesting to see the text sprinkled with English words and pseudo-English class names: e.g., Rebecca Wirfs-Brock, Brian Wilkerson, Rational Unified Process, Kay, GuessingLettersOnly, aMessageBuilder, Michael Jackson (the UK problem frames inventor, not THE singer). The Chinese edition numbered subsections, where the English language version did not. On the cover, our names were in a smaller font than the text announcing the forewords by Ivar Jacobson and John Vlissides. I’m guessing this was to sell more books. The cover included the same cover art, but the lettering and colors were much simpler, and the book’s title appeared in both Chinese and English. The paper is thinner, decreasing the thickness of the book by 50%. Printing quality wasn’t great. The translated book was 313 pages vs. 390 for the English version. I wonder whether Chinese writing is more compact or whether parts have been omitted. It is hard to tell.

I’m meeting one of the Japanese translators of our book at next week’s Architecture and Design World. I really look forward to meeting him in person. Translators who take their job seriously ponder words and meanings. And because of the differences between words in different languages, they can uncover nuances I never thought about. For example, I received this query from the Japanese translator about this sentence from our book: “As we conceive our design, we must constantly consider each object’s value to its immediate neighborhood.” He asked, “Does the meaning of neighborhood in your book correspond to the definition ‘objects who live near one another’ or ‘in a particular district or area’? If your answer is yes, could you make selection between ‘live near one another’ or ‘in a particular district or area’ because we have different Japanese words for each.”

One of my favorite books is Douglas Hofstadter’s Le Ton Beau De Marot: In Praise of the Music of Language. This book is all about the nuances of translation. Hofstadter, who also wrote the best-selling Gödel, Escher, Bach: An Eternal Golden Braid, explores the myriad constraints that must be chosen to be relaxed (or tightened) when translating, as well as the choices the translator makes to preserve the original author’s context or to translate it into one more familiar to the reader. As a vehicle to explore these ideas, he used a 500-year-old French poem that he gave to many friends and colleagues to translate (and he himself wrote dozens of translations). It’s fascinating reading. As a designer and requirements analyst, I often find myself discriminating very fine shades of meaning. There’s nothing straightforward or mechanical about this process. But to me, translating poetry seems much more challenging. And translators of technical books deserve credit and recognition for their own contributions.

Can you really estimate complexity with use cases?

I visited with some folks last week who failed to get as much leverage from writing use cases as they’d hoped. In the spirit of being more agile, at the same time they adopted use cases they also streamlined their other traditional development practices. So they didn’t gather and analyze other requirements as thoroughly as they had in the past. Their use cases were high level (sometimes these are called essential use cases) and lacked technical details or detailed descriptions of process variations or of the complex information that needed to be managed by the system. But their problem domain is complex, varied, prickly, and downright difficult to implement in a straightforward way (and use cases written at this level of detail failed to reveal that complexity). Because of the missing detail, they found it difficult to use these use cases to estimate the work involved to implement them. In short, these use cases didn’t live up to their expectations.

Were these folks hoodwinked by use case zealots with an agile bent? In Writing Effective Use Cases, Alistair Cockburn illustrates a “hub-and-spoke” model of requirements. A figure in his book puts use cases in the center of a “requirements wheel” with other requirements being spokes. Cockburn states that, “people seem to consider use cases to be the central element of the requirements or even the central element of the project’s development process.”

Putting use cases in the center of all requirements can lull folks into believing that if they have limited time (or if they are trying to “go agile”) they’ll get a bigger payoff by only focusing on the center. And indeed, if you adopt this view of “use cases as center”, it’s easy to discount other requirements perspectives as being less important. If you only have so much time, why not focus on the center and hope the rest will somehow fall into place? If you’re adopting agile practices, why not rely upon open communications between customers (or product owners or analysts) and the development team to fill in the details? Isn’t this enough? Maybe, maybe not. Don’t expect to get early accurate estimates by looking only at essential use cases. You’d be just as well off reading tea leaves.

Cockburn proposes that, “use cases create value when they are named as user goals and collected into a list that announces what the system will do, revealing the scope of a system and its purpose.” He goes on to state that, “an initial list of goals will be examined by user representatives, executives, expert developers, and project managers, who will estimate the cost and complexity of the system starting from it.” But if the real complexities aren’t revealed by essential use cases, naive estimates based on them are bound to be inaccurate. The fault isn’t with use cases. It’s in the hidden complexity (or perhaps naive optimism or dismissal of suspected complexity). A lot of special case handling and a deep, complex information model make high-level use case descriptions a deceptive tool for estimation—unless everyone on the project team is brutally honest that they are just a touchpoint for further discussion and investigation. If the devil is in the details, the only way to make reasonable estimates is to figure out some of those details and then extrapolate estimates based on what is found. So domain experts who know those details had better be involved in estimating complexity. And if technical details are going to introduce complexity, estimates that don’t take those into account will also be flawed. Realistically, better estimates can be had if you implement a few core use cases (those that are mutually agreed upon as being representative and that prove out the complexities of the system) and extrapolate from there. But if details aren’t explained, or if you don’t perform some prototyping in order to make better estimates, you won’t discover the real complexities until you are further along in development.

I’m sure there are other reasons for their disappointment with use cases, but one big reason was a misguided belief that high-level use cases provide answers instead of just being a good vehicle for exploring and integrating other requirements. In my view, use cases can certainly link to other requirements, but they represent just a usage view of a system: an important view for many systems, but not the only one. If they are a center, they are just one of many “centers” and sources of requirements.

Just Enough Structured Analysis

Today I happened upon a notable source. Ed Yourdon is writing once again about structured analysis. According to Ed,

“This is an update, condensation, and pragmatic revision of my 1989 tome, Modern Structured Analysis, which is still employed by malicious professors to torture innocent students in universities around the world—the decision to update the material, and to rewrite what was probably far too ponderous a tome (672 pages) even in the days when people actually had enough time to read books printed on dead trees—[is based on the fact that] today, we’re too busy to spend much time thinking about anything, and we’re also far too busy to read more than a couple hundred pages of the bare essentials on any topic. What we want is just enough … “

Ed plans on completing his book in 2007. There are a handful of chapters available now, including one on Data Flow Diagrams and another on Process Specifications (which shows many different ways to represent what’s going on inside a bubble on a data flow diagram). At OOPSLA last year I had the pleasure of hearing stories from Ed, including how he’d recently been asked, “Aren’t you dead?” Ed’s very much alive. I’m not sure when I’ll next create any of these models, but I want to know about them from the source.

Pattern drift

When I first reviewed Design Patterns, I recommended it be published as a loose-leaf notebook. I suggested that the authors provide regular updates (this was before the Internet was readily available!). I anticipated frequent updates and many more additions. 23 patterns didn’t seem like nearly enough.

Fast forward to 2006. Design Patterns has been in print unchanged for 12 years. Although recognized as a landmark book, it needs refreshing. In fairness, the book was so popular that there was little motivation to do so. An update is supposedly in the works.

I give my students the original text, but they struggle to read it and find relevance (C++ and graphics examples are a stretch for most). I always point them to other sources, both online and in print, to fill in the gaps. Many authors have since written their own takes on specific patterns. That is a good thing. In 1998, in a C++ Report article, John Vlissides acknowledged that pattern definitions aren’t cast in stone:

It seems you can’t overemphasize that a pattern’s structure diagram (class diagram) is just an example, not a specification. It portrays the implementation we see most often. As such, the Structure diagram will probably have a lot in common with your own implementation, but differences are inevitable and actually desirable. At the very least you will rename the participants as appropriate for your domain. Vary the implementation trade-offs, and your implementation might start looking a lot different from the Structure diagram.

In Refactoring to Patterns, Josh Kerievsky quotes Vlissides and then, after illustrating the original Composite pattern, gives his own take on a single-class implementation.

Hmm. A single concrete class that could support either leaf or composite behaviors. Now that’s a thought—but is it still recognizable as a composite pattern? Sure, but what is it that makes a composite a composite and not just another structuring mechanism? Is it just a fancy name for a “tree structuring mechanism” or is there something more?
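Kerievsky’s actual code isn’t reproduced here, but a minimal Java sketch of the single-class idea might look like the following (the Tag name and XML rendering are my own illustration, not his listing):

```java
import java.util.ArrayList;
import java.util.List;

// One concrete class plays both roles: a Tag with no children
// behaves as a leaf; a Tag with children behaves as a composite.
// There is no separate Leaf/Composite class pair at all.
public class Tag {
    private final String name;
    private final List<Tag> children = new ArrayList<>();

    public Tag(String name) { this.name = name; }

    public void add(Tag child) { children.add(child); }

    // Rendering recurses over children; a childless Tag simply
    // produces an empty element, no special-casing needed.
    public String toXml() {
        StringBuilder sb = new StringBuilder("<" + name + ">");
        for (Tag child : children) {
            sb.append(child.toXml());
        }
        return sb.append("</" + name + ">").toString();
    }

    public static void main(String[] args) {
        Tag order = new Tag("order");
        order.add(new Tag("item"));
        System.out.println(order.toXml()); // <order><item></item></order>
    }
}
```

The recursive structure is still there, which is why it still reads as Composite; what’s gone is the type distinction between leaves and composites.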

Bob Martin, in Agile Software Development, presents another variation. He illustrates Composite with a Shape interface that defines a single method, draw(). That interface is realized by classes that are either primitive shapes or are composed of other shape objects.
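Bob’s exact listing isn’t reproduced here, but the shape of that design, sketched in Java with invented names (Circle and CompositeShape are my placeholders), is roughly:

```java
import java.util.ArrayList;
import java.util.List;

// The interface declares only what every shape can do.
interface Shape {
    void draw(StringBuilder canvas); // a StringBuilder stands in for real graphics
}

// A leaf: no child-management methods to stub out or throw from.
class Circle implements Shape {
    public void draw(StringBuilder canvas) { canvas.append("circle;"); }
}

// Only the composite knows about children.
class CompositeShape implements Shape {
    private final List<Shape> shapes = new ArrayList<>();
    public void add(Shape s) { shapes.add(s); }
    public void draw(StringBuilder canvas) {
        for (Shape s : shapes) s.draw(canvas);
    }
}

public class ShapeDemo {
    public static void main(String[] args) {
        CompositeShape picture = new CompositeShape();
        picture.add(new Circle());
        CompositeShape inner = new CompositeShape();
        inner.add(new Circle());
        picture.add(inner); // composites nest, since they are Shapes too
        StringBuilder canvas = new StringBuilder();
        picture.draw(canvas);
        System.out.println(canvas); // circle;circle;
    }
}
```

Note the tradeoff: a client holding a plain Shape reference can’t add children without knowing it has a CompositeShape, which is precisely the uniformity the GOF Structure diagram buys by pushing add()/remove() up into the component abstraction.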

In 1994, when Design Patterns was published, interfaces weren’t present in popular programming languages. The authors relied on abstract class definitions instead. Since I spend my time with C# and Java, today I’d likely recast many GOF patterns using interfaces instead of abstract classes, especially when there isn’t any common meaningful behavior to inherit.

But Bob’s example illustrates another important design choice. He didn’t force-fit composite behavior into a common abstraction shared by both composite and leaf objects. He isn’t alone in making this tradeoff. Many of my students, after struggling to define meaningful operations common to both, throw up their hands and grumble that the GOF authors made the wrong tradeoffs when they specified the Composite pattern. This sentiment is echoed on a Rice University webpage:

In Design Patterns, the abstract component AComponent is shown as having accessor methods for child AComponents. They are not shown here because it is debatable as to whether one wants the Client to fundamentally view the Component as a single component or as a collection of components. Design Patterns models all Components as collections while the above design models them all as single components. The exact nature of those accessor methods is also debatable.

Debating design tradeoffs is healthy. That’s why I give my students GOF undistilled, and we discuss tradeoffs as they learn patterns. This helps them gain design confidence as they articulate their values and say what they like and don’t like about the patterns as presented. But sometimes I think they’d prefer a simple canonical pattern form they could just use without much thought. I often point them to the Data & Object Factory website, which has a quick pocket-guide discussion of each pattern. But the Composite pattern sample implementation there defines an abstract Shape class with empty add() and remove() methods. And leaf classes implement add or remove by writing a console message: “can’t add/draw a shape to an xxx.” A toy solution if I ever saw one! Perhaps if I paid $79 to purchase their Design Patterns Framework I’d see a more realistic implementation.

This leads me to wonder: what makes a pattern useful, how much change can or should it undergo, and how much stewardship should there be over pattern drift, pattern evolution, and pattern explanations? I don’t expect patterns to be fixed and unchangeable. They should wiggle around a bit. But I like thoughtful discussions and reasonable examples. I wish there were a community that maintained an active PatternPedia repository, where authors would be encouraged to keep their patterns up to date and where there were useful teaching examples, thoughtful reviews, and summaries of both new and classic patterns. The Portland Pattern Repository is a springboard for patterns, but it doesn’t seem very active. The Hillside website is useful, but it just points to other sources. I have in mind more a cross between Wikipedia and Amazon, with an edge and an active editor/convener whose job is to keep us informed of the latest breaking pattern news. Sure, I can search the Internet and books for patterns. But my search feels scattered. I never know when I’ll stumble across some arcane shift, a mangling of a pattern’s intent, or a good trend to follow. It’s too hit and miss.

Currently, most patterns are copyrighted by their authors and locked up in relatively static media: books, conference proceedings, or magazine articles, or static online versions of the same. There’s no central source, no common repository for a growing body of pattern wisdom gained from experience. So when pattern interpretations shift, as they invariably do, it happens in a quirky, ad hoc manner. I don’t mind Kerievsky’s compact interpretation of Composite. I just wish his version were accessible to those who didn’t buy his book, and that there were a place for open debate about the merits of this implementation choice, readily linked to other Composite pattern interpretations. Is this asking too much? Isolated works make it difficult to change, refine, and invigorate patterns within a larger community of users, developers, and pattern authors. What would it take to create a patterns commons? I’d be interested in hearing your thoughts.

John Vlissides

John Vlissides died November 24th. A wiki page is dedicated to his memory.

One of my favorite books is John’s Pattern Hatching. I love that book. Reading it is like conversing with a wise, witty, insightful friend. As parting advice in Pattern Hatching, John advises pattern writers to, “Write clearly and unpretentiously. Favor a down-to-earth style rather than a stuffy academic one. People understand and appreciate a conversational tone, making them more receptive to the material. Make sure everything you write is something you could hear yourself saying to a friend.”

Good advice. When I asked John to write a foreword for our design book, I was anxious about whether our writing and writing style would pass muster. They did (whew!). I was stunned by his generous, kind, and encouraging words. John has been a good and wise friend, mentor, and colleague, who has encouraged and inspired many in the software community. We will miss him very much.

OOPSLA, Creativity, and Practice

I’m home for two weeks after spending a week in San Diego at OOPSLA and last week teaching object design. It is good to be home, as I can now configure my new tablet PC and start using it. It’s bad to be home, as it is raining too hard and spoiling my plans for getting my perennial garden in shape for the winter. But truth be told, the rain leaves me hunkered down inside, forcing me to write, to reflect, and to start new projects.

OOPSLA this year was full of creative types: George Platts led a number of workshops and experiences; Robert Hass, past poet laureate of the USA, gave the keynote. This was no surprise with Dick Gabriel as program chair. Dick is a man of many talents. In addition to his heavy-duty computer side—having made Lisp implementations practical being one of Dick’s early accomplishments—he is a published poet, musician, patterns instigator, Sun fellow, and scholar. A highlight for me was getting Dick to autograph his new book of poetry, Drive On, and then reading it on the plane ride home.

Sunday morning I attended the tutorial “How has the arts, sports or life stimulated, inspired and informed your work in computer science?” led by George Platts. George is an artist and game master who is a well-known creativity/fun instigator at software pattern conferences. As it was a Sunday morning tutorial, I expected George to drive (and me to sit and quietly soak up his words). Silly me. After showing us an incredible film of an amazing panoply of pyrotechnics, mechanical feats, and oozing chemical reactions crafted to produce a Rube Goldberg-like perpetual motion machine, he had us sit down to discuss how art or sports stimulated or inspired our work.

Two thoughts struck me about how arts and sports have stimulated my work. In college I fenced (with a foil—don’t ever call it a sword). Much preparation went into a competition. We repeatedly practiced standard moves (all with Italian names). Only after much practice with attacks and counter-attack moves would we do practice competitions. Being a left-hander gave me a distinct advantage, as my body was not where it was expected to be. Lefties fencing lefties are on equal footing, as we, too, are accustomed to fencing right-handers. So even while I was at an advantage (being short makes for a smaller target and being a lefty makes for an unusual target), during the heat of a competition I’d forget much and just go on raw instinct. Only when moves and countermoves become kinetic memory do you get really good. I never got good, as I spent too much time getting my programs to work instead of devoting energy to perfecting my fencing technique.

Bringing up this notion of practice led us to discuss what constitutes “practice” or “repetition of scales” for software developers. What do developers or designers or analysts do over and over and over again until it becomes second nature and makes them good at what they do? Programming? Applying design patterns? Writing use cases? Learning how to ask probing questions? Well, maybe. I’m not sure we software types have a clear equivalent of scales. Does repeatedly programming yet another JSP make you better at it? Building consistency into your design makes your design better. But does it make you a better designer?

A second artful inspiration I’ve had is from Betty Edwards, author of Drawing on the Artist Within. Betty has inspired me as a teacher of design in how I try to break down design ideas and thinking for others. Betty claims that people are crappy artists because they don’t know how to see, and that by learning special ways of seeing, most of us could become passable renderers of what we see. She believes everyone can be taught how to draw likenesses of what they see. I had a really bad pottery teacher in college who asked us to “feel what was in the clay and then create.” I was frustrated and created lumpy, awkward pots because I lacked technique and this instructor didn’t teach any. As a teacher of software design, I don’t like it when my students create lumpy, malformed objects. I teach them a number of techniques for seeing good formations of objects—role stereotypes, a smattering of patterns, the notion of domain entities and value objects from Eric Evans’ writing, a sense of control style choices. But sometimes these ways of seeing don’t click, and my students create strange designs. Or worse yet, they get frustrated and just want to know what steps to go through to create passable designs; to heck with all this technique. All I can say is that design takes practice and reflection and technique. I don’t know how to teach design as a rote process.

As a result of George’s tutorial, I got to know Henry Barager of Instantiated Software Inc. Besides being a skilled software architect, Henry’s a whiz at cryptic crosswords. Over lunch one day at OOPSLA, Henry taught me about cryptic crosswords by working through one with me (Henry did most of the work, but he patiently let me solve a few entries after explaining the basic idea). The key to solving a cryptic crossword entry is to figure out how to separate the word or phrase you are solving for from the encrypting part. Then there are clues in the encrypting part which may lead you to take some letters and jumble them (key words may imply that you create an anagram, wrap one word inside another, truncate a word, etc.) or not. For example: “Flower came up.” The answer is rose. A rose is a flower, and “came up” is another meaning for rose. Simple, right? Well, try this: “Piece of technology in broken device tossed out.” Give up? It is evicted (tossed out = evicted; that’s the definition. The rest is the encrypting part: the piece of technology is the “t,” placed in broken device, “evice d,” that is, “device” jumbled or broken).

Explaining the idea behind cryptic crosswords is fairly simple. Solving entries takes a lot of effort and getting your brain into a problem-solving frame of mind. Solving them in real time as Henry does requires skill, experience, and intelligence. Teaching others how to solve them takes another kind of skill. The same goes for object design. Learning object concepts is trivial. Crafting simplistic solutions is, too. Putting together elegant designs that work for complex problems is much harder. It requires practice and reflection, as well as learning techniques from masters who shouldn’t try to solve all the hard problems for you. I wish I’d had someone who would’ve demonstrated and helped me practice good technique when I was learning to shape pottery or to draw. I was fortunate to rub shoulders with some very bright Smalltalk folks when I was learning how to think in objects. Thanks to all the folks at Tektronix and Instantiations for teaching me how to see and build object designs.

Musings of an OOPSLA elder

I don’t think of myself as an “elder.” But that is what Linda Rising, who led the 20th OOPSLA retrospective, labeled those who were at the first OOPSLA. I am one of five who received a perfect attendance ribbon (Allen Wirfs-Brock, Brian Foote, Ralph Johnson, and Ed Gehringer are the others) for having attended all OOPSLAs. At the very first OOPSLA I felt like an outsider. I wondered how I could get involved with this conference. Excitement was in the air. Objects were the next big idea. Just exactly what could I do that would have an impact? My paper on Color Smalltalk was rejected (the reviewers commented that it talked too much about hardware details), so I presented it as a poster. It was good that they rejected it. Our work was premature. Three years later, when Tektronix Color Smalltalk was finally a product, I wrote a paper about the design principles and class libraries in Color Smalltalk that was accepted. This success made me believe in my writing ability, led to my paper with Brian Wilkerson on Responsibility-Driven Design in 1990, and launched my enduring interest in design.

Thursday I had another elder moment. I was on a panel with Ed Yourdon, Larry Constantine, Grady Booch, Kent Beck, and Brian Henderson-Sellers that looked back at echoes of the past and structured design, and into the future. Larry Constantine provoked us to bring theory, technique, and transparent tools into all we do. Kent brought the house down by quoting from Structured Design. He noted that while Ed and Larry got a lot right, they missed out on the fact that systems need to change. Refactoring wasn’t part of Ed and Larry’s vocabulary. Ed, who has been an expert witness on software cases for the past few years, noted that there often isn’t even a shred of a plan or design or any documentation for software systems. Grady mentioned that increasing abstraction has been a big factor and challenged us to move to even higher levels of abstraction. More down to earth, I spoke about how objects enabled me to think clearly, and how the power of abstraction, encapsulation, and thinking in terms of small neighborhoods of collaborating, responsible objects was a big step forward. What’s next? To me, it seems that even more effective methods and practices, powerful development and testing environments, expressive languages, patterns, and thinking tools are in our future. Innovation in our industry is a constant. Yet every once in a while it is good to reflect on what we got right and remember influences from the past. But I’m forward looking too. After every OOPSLA I come home charged with new ideas and the urge to do more, collaborate, and continue learning. What a blast!

Why Objects?

As I’ve been working on a position statement for an OOPSLA panel reflecting on the roots of modern software development practices while looking to the future, I’ve been thinking hard about why I got hooked on object technology. Compared with structured programming and design, objects seemed significantly better at handling complexity. Object programming languages were an earth-shattering improvement over the procedural and assembly languages I used when I first encountered structured design techniques. Instead of simply following conventions, object programming language constructs forced me to bundle together meaningful operations and data. Object-oriented methodologies generally incorporate the principles of structured design, but OOD seems much more than an incremental improvement over SD. Instead of focusing on a thread of control and managing its complexity via procedural decomposition and structured control constructs, object design enables me to break a computation into thousands of semi-autonomous entities with structured roles and responsibilities. Objects offer me a completely different way to think about computation. This way of thinking empowers me to deal with a level of complexity that I could never have handled using only structured design techniques. Object technology encourages me to form abstractions (objects) and to design how small neighborhoods of them interact.

Responsibility-driven design offers thinking tools that enable developers to conceive of an implementation in terms of interacting roles and their responsibilities. It provides a vocabulary for describing designs that helps developers communicate complex ideas and make tradeoffs more effectively. Agile practices, by emphasizing working code that satisfies customers, seek to reduce accidental complexity by admonishing you to design simply and grow complexity only when needed. Eric Evans, in Domain-Driven Design: Tackling Complexity in the Heart of Software, offers tactics for identifying, preserving, and sharing a common domain model. Refactoring tools have taken the tedium out of making changes, and modern application development environments have made it possible for development teams to “hum” by testing and building incrementally. These all represent progress.

But at the end of the day, they cannot reduce the complexity inherent in using the diverse tools, platforms, and technologies that make up a typical sprawling IT system. While OOD/OOP gave us an order-of-magnitude improvement over previous techniques and tools, we still don’t have the order-of-magnitude better approach we need to sort out today’s complex environments and minimize the gaps and seams that are inherent when diverse technology comes together in a complex system. In the meantime, thank goodness for the framework builders who give us various ways of linking objects with relational databases, and for the little languages and tools that take the tedium out of the repetitive (error-prone) tasks of gluing things together. We live in a complex world where objects will continue to make a lasting, significant contribution. What will be the next breakthrough in software development that will subsume the principles of OOD (and transitively SD) and provide the next order-of-magnitude improvement? I’m not sure. While I don’t think there are any silver bullets out there, I look forward to discovering and encountering even more effective practices, technologies, and techniques that allow us to address inherent complexity head on.