2021 Year End Review

Kapa’a, Hawaii (photo by Rebecca)

Here’s a quick recap of blog posts I wrote in 2021.

Agile Experience Reports

Juggling Multiple Scrum Teams I introduce Iuri Ilatanski’s experience report about life as a multi-tasking Scrum Master. Juggling involves meeting each team’s specific needs. I was Iuri’s “shepherd”—his sounding board and advocate—as he wrote this report presented at Agile 2021. Thank you, Iuri, for being so open to discussion, reflection, and the hard work of revising your writing.

Agile Experience Reports: A Fresh Look at Timeless Content I spent August organizing the vast Agile Alliance experience reports collection hosted on the Agile Alliance’s website. The collection includes reports from 2014 to 2021 as well as five XP conferences. Experience reports are personal stories that pack a punch. There are many gems of wisdom here.

Domain Driven Design

Splitting a Domain Across Multiple Bounded Contexts Sometimes it can be more productive to meet the specific needs of individual users rather than to spend the time designing common abstractions in support of a “unified” model.

Design and Reality We shouldn’t assume domain experts have all the language they need to describe their problem (and all that you need to do as a software designer is to “capture” that language and make those real-world concepts evident in your code).

Models and Metaphors Listening to the language people use in modeling discussions can lead to new insights. Sometimes we find metaphors that, when pushed on, lead to a clearer understanding of the problem and clarity in our design.

Decision Making

Noisy Decisions After reading Noise: A Flaw in Human Judgment by Daniel Kahneman, Olivier Sibony, and Cass Sunstein, I wrote about noisy decisions in the context of software design and architecture. These authors define noise as undesirable variability in human judgment. Often, we want to reduce noise, and there are ways we can do so, even in the context of software.

Is it Noise or Euphony? At other times, however, we desire variability in judgments. In these situations variability isn’t noise, but instead an opportunity for euphony. And if you leverage that variability, you just might turn up some unexpected, positive results.

Heuristics Revisited

Too Much Salt? We build a more powerful heuristic toolkit when we learn the reasons why (and when) particular heuristics work the way they do. I now think it is equally important to seek the why behind what you are doing as you cultivate your personal heuristics.

Models and Metaphors

When a complex technical domain isn’t easily captured in a model, look for metaphors that bring clarity.

One of us (Mathias) consulted for a client that acted as a broker for paying copyright holders for the use of their content. To do this, they figured out who the copyright holders of a work were. Then they tracked usage claims, calculated the amounts owed, collected the money, and made the payments. Understanding who owned what was one of the trickier parts of their business.

-“It’s just a technical problem.”

-“But nobody really understands how it works!”

-“Some of us understand most of it. It just happens to be a complicated problem.”

-“Let’s do a little bit of modeling anyway.”

Case Study

Determining ownership was a complicated data matching process which pulled data from a number of data sources:

  • Research done by the company itself
  • Offshore data cleaning
  • Publicly available data from a wiki-style source
  • Publicly available, curated data
  • Private sources, for which the company paid a licence fee
  • Direct submissions from individuals
  • Agencies representing copyright holders

The company had a data quality problem. Because of the variety of data sources, there wasn’t a single source of truth for any claim. The data was often incomplete and inconsistent. On top of that, there was a possibility for fraud: bad actors claimed ownership of authors’ work. Most people acted in good faith. Even then, the data was always going to be messy, and it took considerable effort to sort things out. The data was in constant flux: even though the ownership of a work rarely changes, the data did.


Data Matching

The engineers were always improving the “data matching”. That’s what they called the process of reconciling the inconsistencies, and providing a clear view on who owned what and who had to pay whom. They used EventSourcing, and they could easily replay new matching algorithms on historic data. The data matching algorithms matched similar claims on the same works in the different data sources. When multiple data sources concurred, the match succeeded.
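The article doesn’t show any of this code, but the idea of replaying history through a candidate matching algorithm can be sketched. Everything below (ClaimRecorded, MajorityVote, replay) is a hypothetical TypeScript illustration, not the team’s actual design:

```typescript
// Hypothetical sketch: because every claim was stored as an event, a candidate
// matching algorithm can be evaluated against the full history.
interface ClaimRecorded {
  workId: string;
  source: string;
  claimedOwner: string;
}

interface MatchingAlgorithm {
  apply(event: ClaimRecorded): void;           // fold one historic event in
  ownerOf(workId: string): string | undefined; // current best guess
}

// A deliberately naive algorithm for illustration: the owner most sources agree on wins.
class MajorityVote implements MatchingAlgorithm {
  private votes = new Map<string, Map<string, number>>();

  apply(event: ClaimRecorded): void {
    const perWork = this.votes.get(event.workId) ?? new Map<string, number>();
    perWork.set(event.claimedOwner, (perWork.get(event.claimedOwner) ?? 0) + 1);
    this.votes.set(event.workId, perWork);
  }

  ownerOf(workId: string): string | undefined {
    const perWork = this.votes.get(workId);
    if (!perWork) return undefined;
    return [...perWork.entries()].sort((a, b) => b[1] - a[1])[0][0];
  }
}

function replay(history: ClaimRecorded[], algorithm: MatchingAlgorithm): MatchingAlgorithm {
  history.forEach(event => algorithm.apply(event));
  return algorithm;
}

// Replaying a (tiny) stored history through the candidate algorithm:
const candidate = replay(
  [
    { workId: "work-42", source: "wiki", claimedOwner: "A" },
    { workId: "work-42", source: "licensedDb", claimedOwner: "B" },
    { workId: "work-42", source: "research", claimedOwner: "B" },
  ],
  new MajorityVote()
);
candidate.ownerOf("work-42"); // "B"
```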

Initially, when most sources concurred on a claim, the algorithm ignored a lone exception. When there was more contention about a claim, it was less obvious what to do. The code reflected this lack of clarity. Later the team realised that a conflicting claim could tell them more: It was an indicator of the messiness of the data. If they used their records of noise in the data, they could learn about how often different data sources, parties, and individuals agreed on successful claims, and improve their algorithm.

For example, say a match was poor: 50% of sources point to one owner and 50% point to another owner. Based on that information alone, it’s impossible to decide who the owner is. But by using historical data, the algorithm could figure out which sources had been part of successful matches more often. They could give more weight to these sources, and tip the scales in one direction or the other. This way, even if 50% of sources claim A as the owner and 50% claim B, an answer can be found.
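As a rough illustration of that weighting idea, here is a minimal sketch; the names (SourceStats, resolveOwner) and the success-rate numbers are invented, and the real algorithm was certainly more involved:

```typescript
// Hypothetical sketch: break a 50/50 tie by weighting each source
// by how often it has been part of successful matches in the past.
interface Claim {
  source: string;
  owner: string;
}

// successRate: fraction of this source's past claims that ended up
// in a successful match (derived from historical data).
type SourceStats = Record<string, { successRate: number }>;

function resolveOwner(claims: Claim[], stats: SourceStats): string | undefined {
  const weights = new Map<string, number>();
  for (const claim of claims) {
    const weight = stats[claim.source]?.successRate ?? 0.5; // unknown sources get a neutral weight
    weights.set(claim.owner, (weights.get(claim.owner) ?? 0) + weight);
  }
  // Pick the owner with the highest accumulated weight, if any.
  let best: string | undefined;
  let bestWeight = 0;
  for (const [owner, weight] of weights) {
    if (weight > bestWeight) {
      best = owner;
      bestWeight = weight;
    }
  }
  return best;
}

// Two sources back different owners, but source history tips the scales.
const owner = resolveOwner(
  [
    { source: "wiki", owner: "A" },
    { source: "licensedDb", owner: "B" },
  ],
  { wiki: { successRate: 0.6 }, licensedDb: { successRate: 0.9 } }
);
// owner === "B"
```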


Domain Modelling

The code mixed responsibilities: pulling data, filtering, reformatting, interpreting, and applying matching rules. All the cases and rules made the data matching very complicated. Only a few engineers knew how it worked, and Mathias noticed that even they struggled to explain it clearly. The business people he talked to couldn’t explain anything at all about how the system worked. They simply referred to it as the “data matching.” The team wasn’t concerned about this. In their eyes, the complexity was just something they had to deal with.

Mathias proposed a whiteboard modeling session. Initially, the engineers resisted. After all, they didn’t feel this was a business domain, just a purely technical problem. However, Mathias argued, the quality of the results determined who got paid what, and mistakes meant customers would eventually move to a competitor. So even if the data matching was technical, it performed an essential function in the Core Domain. The knowledge about it was sketchy: engineering couldn’t explain it, and business didn’t understand it. Because of that, they rarely discussed it, and when they did, it was in purely technical terms. If communication is hard, if conversations are cumbersome, you lack a good shared model.

Through modeling, the matching process became less opaque to the engineers. We made clearer distinctions between the different steps: pulling data, processing it, identifying a match, and coming to a decision. The model included sources, claims, reconciliations, and exceptions. We drew the matching rules on the whiteboard as well, making those rules explicit first-class concepts in the model. As the matching process became clearer, the underlying ideas that led to the system design started surfacing. From the “what,” we moved to the “why.” This put us in a good position to start discovering abstractions.


Trust

Gradually, the assumptions the algorithm was built on surfaced in the conversations. We stated those assumptions, wrote them on stickies, and put them on the whiteboard. One accepted assumption was that when a data source is frequently in agreement with other sources, it is less likely to be wrong in the future. If a source is more reliable, it should be trusted more; therefore claims from that source pulled more weight in deciding who has a claim to what. When doing domain discovery and modeling, it’s good to be observant and listen to subtleties in the language. Words like “reliable,” “trust,” “pull more weight,” and “decision” were being used informally in these conversations. What works in these situations is to have a healthy obsession with language. Add this language to the whiteboard. Ask questions: what does this word mean, and in what context do you use it?

Through these discussions, the concept of “trust” grew in importance. It became explicit in the whiteboard models. It was tangible: you could see it, point to it, move it around. You could start telling stories about trust. Why would one source be more trusted? What would damage that trust? What edge cases could we find that would affect trust in different ways?

Trust as an Object

During the next modelling session, we talked about trust a lot. From a random word that people threw into the conversation, it had morphed into a meaningful term. Mathias suggested a little thought experiment: What if _Trust_ was an actual object in the code? What would that look like? Quickly, a simple model of Trust emerged. Trust is a Value Object, and its value represents the “amount” of trust we have in a data source, or the trust we have in a claim on a work or usage, or the trust we have in the person making the claim. Trust is measured on a scale of -5 to 5. That number determines whether a claim is granted or not, whether it needs additional sources to confirm it, or whether the company needs to do further research.
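The article stays at the whiteboard level, but a minimal sketch of Trust as a Value Object might look like the following; the factory method, thresholds, and method names are assumptions made for illustration:

```typescript
// Hypothetical sketch of Trust as a Value Object on a -5..5 scale.
class Trust {
  private constructor(private readonly value: number) {}

  static of(value: number): Trust {
    if (!Number.isInteger(value) || value < -5 || value > 5) {
      throw new Error("Trust must be an integer between -5 and 5");
    }
    return new Trust(value);
  }

  // The thresholds below are made up for illustration.
  grantsClaimOutright(): boolean {
    return this.value >= 4;
  }

  needsCorroboration(): boolean {
    return this.value >= 1 && this.value < 4;
  }

  needsFurtherResearch(): boolean {
    return this.value < 1;
  }

  equals(other: Trust): boolean {
    return this.value === other.value;
  }
}

const trustInLicensedDb = Trust.of(5);
trustInLicensedDb.grantsClaimOutright(); // true
```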

It was a major mindshift.

The old code dynamically computed similar values to determine “matches.” These computations were spread and duplicated across the code, hiding in many branches. The team didn’t see that all these values and computations were really aspects of the same underlying concept. They didn’t see that the computations could be shared, whether you’re matching sources, people, or claims. There was no shared abstraction.

But now, in the new code, those values are encapsulated in a first-class concept: Trust objects. This is where the magic happens: we move from a whiteboard concept to making Trust an essential element in the design. The team cleaned up the ad hoc logic spread across the data matching code and replaced it with a single Trust concept.

Trust entered the Ubiquitous Language. The idea that degrees of Trust are ranked on a scale from -5 to 5, also became part of the language. And it gave us a new way to think about our Core Domain: We pay owners based on who earns our Trust.

Trust as a Process

The team was designing an EventSourced system, so naturally the conversation moved to what events could affect Trust. How does Trust evolve over time? What used to be matching claims in the old model now became events that positively or negatively affected our Trust in a claim. Earning Trust (or losing it) was now thought of as a process. A new claim was an event in that process. Trust was now seen as a snapshot of the Trust-earning process. If a claim was denied, but new evidence emerged, Trust increased and the claim was granted. Certain sources, like the private databases that the company bought a license for, were highly trusted and stable. For others, like the wiki-style sources where people could submit claims, Trust was more volatile.
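One way to picture “Trust as a snapshot of the Trust-earning process” is to fold trust-affecting events into a value. The event types and point adjustments below are invented; only the -5 to 5 scale comes from the story:

```typescript
// Hypothetical trust-affecting events; the real system had its own event types.
type TrustEvent =
  | { type: "ClaimCorroborated"; by: string }
  | { type: "ClaimDisputed"; by: string }
  | { type: "NewEvidenceSubmitted" };

// How much each event moves the needle is an assumption for illustration.
function adjustment(event: TrustEvent): number {
  switch (event.type) {
    case "ClaimCorroborated":
      return +1;
    case "ClaimDisputed":
      return -2;
    case "NewEvidenceSubmitted":
      return +1;
  }
}

// Trust as a snapshot: fold the history of events into a single value,
// clamped to the -5..5 scale.
function trustFrom(history: TrustEvent[]): number {
  return history.reduce(
    (value, event) => Math.max(-5, Math.min(5, value + adjustment(event))),
    0
  );
}

// A claim that was disputed, then backed by new evidence and a second source.
trustFrom([
  { type: "ClaimDisputed", by: "wiki" },
  { type: "NewEvidenceSubmitted" },
  { type: "ClaimCorroborated", by: "licensedDb" },
]); // 0
```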


Business Involvement

During the discussions about the new Trust and Trust-building concepts, the team went back to the business regularly to make sure the concepts worked. They asked the business for insights into how Trust should be assigned, and what criteria to use. We saw an interesting effect: people in the business became invested in these conversations and joined in modeling sessions. Data matching faded from the conversations, and Trust took over. There was a general excitement about being able to assign and evolve Trust. The engineers’ new model became a shared model within the business.

Trust as an Arithmetic

The copyright brokerage domain experts started throwing scenarios at the team: What if Source A, with a Trust of 0, made a claim that was corroborated by Source B, with a Trust of 5? The claim itself was now highly trusted, but what was the impact on Source A? One swallow doesn’t make spring, so surely Source A shouldn’t be granted the same level of Trust as Source B. A repeated pattern of corroborated Trust, on the other hand, should be reflected in higher Trust for Source A.

During these continued explorations, people from the business and engineering listed the rules for how different events impacted Trust, and coded them. Seeing the rules in code sparked a new idea: Trust could have its own arithmetic, a set of rules that defined how Trust was accumulated. For example, a claim with a Trust of 3 that was corroborated by a claim with a Trust of 5 would now be assigned a new Trust of 4. The larger arithmetic addressed various permutations of claims corroborating claims, sources corroborating sources, and patterns of corroboration over time. The Trust object encapsulated this arithmetic and managed the properties and behaviors for it.
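A standalone sketch of such an arithmetic, for illustration only: the one rule taken from the story is that Trust 3 corroborated by Trust 5 yields 4. Rounding toward the corroborating claim generalizes that single example (a guess, not the team’s rule), and the dispute rule is entirely made up:

```typescript
// Hypothetical sketch of a Trust arithmetic encapsulated by the Trust object.
class Trust {
  constructor(readonly value: number) {}

  corroboratedBy(other: Trust): Trust {
    // Meet in the middle, rounding toward the corroborating claim (assumed rule).
    const midpoint = (this.value + other.value) / 2;
    const rounded =
      other.value >= this.value ? Math.ceil(midpoint) : Math.floor(midpoint);
    return new Trust(Math.max(-5, Math.min(5, rounded)));
  }

  disputedBy(other: Trust): Trust {
    // Invented rule: a dispute from a trusted source drags the claim down.
    return new Trust(Math.max(-5, this.value - Math.max(1, other.value)));
  }
}

new Trust(3).corroboratedBy(new Trust(5)).value; // 4
```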

From an anemic Trust object, we had now arrived at a richer model of Trust that was responsible for all these operations. The team came up with polymorphic Strategy objects, which allowed them to swap out different mechanisms for assigning and evolving Trust. The old data matching code had mixed fetching and storing information with the sprawling matching logic. Now the team found it easy to pull that plumbing into its own layer, separate from the clean Trust model.
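The Strategy objects themselves aren’t shown in the article; here is a hypothetical sketch of how swappable Trust-assignment strategies might look (interface and class names invented, granting threshold assumed):

```typescript
// Hypothetical polymorphic strategies for assigning Trust to a claim.
interface Claim {
  source: string;
  corroboratedBy: string[];
}

interface TrustStrategy {
  trustFor(claim: Claim): number; // -5..5
}

// One possible strategy: trust grows with the number of corroborating sources.
class CorroborationCountStrategy implements TrustStrategy {
  trustFor(claim: Claim): number {
    return Math.min(5, claim.corroboratedBy.length);
  }
}

// Another: certain sources are trusted outright, e.g. the licensed databases.
class PrivilegedSourceStrategy implements TrustStrategy {
  constructor(private readonly privileged: Set<string>) {}
  trustFor(claim: Claim): number {
    return this.privileged.has(claim.source) ? 5 : 0;
  }
}

// The resolver depends only on the interface, so strategies can be swapped.
class ClaimResolver {
  constructor(private readonly strategy: TrustStrategy) {}
  isGranted(claim: Claim): boolean {
    return this.strategy.trustFor(claim) >= 4; // threshold is an assumption
  }
}

new ClaimResolver(new PrivilegedSourceStrategy(new Set(["licensedDb"]))).isGranted({
  source: "licensedDb",
  corroboratedBy: [],
}); // true
```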


The Evolution of the Model

In summary, this was the evolution:

  1. Ad hoc code that computes values for matches.
  2. Using Trust in conversations that explained how the current system worked.
  3. Trust as a Value Object in the code.
  4. Evolving Trust as a process, with events (such as finding a matching claim) that assigned new values of Trust.
  5. Trust as a shared term between business and engineering, that replaced the old language of technical data matching.
  6. Exploring how to assign Trust using more real-world scenarios.
  7. Building an arithmetic that controls the computation of Trust.
  8. Polymorphic Strategies for assigning Trust.

When you find a better, more meaningful abstraction, it becomes a catalyst: it enables other modeling constructs, allowing other ideas to form around that concept. It takes exploration, coding, conversations, trying scenarios, … There’s no golden recipe for making this happen. You need to be open to possibility, and take the time for it.


Discussion

The engineers originally introduced the concept of “matching,” but that was an anemic description of the algorithm itself, not the purpose. “If this value equals that value, do this.” Data matching was devoid of meaning. That’s what Trust introduces: conceptual scaffolding for the meaning of the system. Trust is a magnet, an attractor for a way of thinking about and organizing the design.

Initially, the technical details of the problem were so complicated, and provided such interesting challenges to the engineers, that that was all they talked about with the business stakeholders. Those details got in the way of designing a useful Ubiquitous Language. The engineers had assumed that their code looked the way it needed to look. In their eyes, the code was complex because the problem of matching was complex. The code simply manifested that complexity. They didn’t see the complexity of that code as a problem in its own right. The belief that there wasn’t a better model to be found, obscured the Core Domain for both business and engineering.

The domain experts were indeed experts in the copyrights domain, and had crisp concepts for ownership, claims, intellectual property, the laws, and the industry practices. But that was not their Core Domain. The real Core was the efficient, automated business they were trying to build out of it. That was their new domain. That explains why knowledge of copyright concepts alone wasn’t sufficient to make a great model.

Before they developed an understanding of Trust, business stakeholders could tell you detailed stories about how the system should behave in specific situations. But they lacked the language to talk about these stories in terms of the bigger idea that governed them. They were missing crisp concepts for them.


Good Metaphors

We moved from raw code, to a model based on the new concept of Trust. But what kind of thing is this Trust concept? Trust is a metaphor.[1] Actual trust is a human emotion, and partly irrational. You trust someone instinctively, and for entirely subjective reasons that might change. Machines don’t have these emotions. We have an artificial metric in our system, with algorithms to manipulate it, and we named it Trust. It’s a proxy term.

This metaphor enables a more compact conversation, as evidenced by the fact that engineers and domain experts alike can discuss Trust without losing each other in technical details. A sentence like “The claims from this source were repeatedly confirmed by other sources” was replaced by “This source has built up trust,” and everyone knew what that entailed.

The metaphor allows us to handle the same degree of complexity, but we can reason about determining Trust without having to understand every detail at the point where it’s used. For those of us without Einstein brains, it’s now a lot easier to work on the code; it lowers the cognitive load.

A good metaphor in the right context, such as Trust, enables us to achieve things we couldn’t easily do before. The team reconsidered a feature that would allow them to swap out different strategies for matching claims. Originally they had dismissed the idea, because, in the old code, it would have been prohibitively expensive to build. It would have resulted in huge condition trees and sprawling dependencies on shared state. They’d have to be very careful, and it would be difficult to test that logic. With the new model, swapping out polymorphic Strategy objects is trivial. The new model allows testing low level units like the Trust object, higher level logic like the Trust-building process, and individual Claims Strategies, with each test remaining at a single level of abstraction.

Our Trust model not only organizes the details better, but it is also concise. We can go to a single point in the code and know how something is determined. A Trust object computes its own value, in a single place in the code. We don’t have to look at twenty different conditionals across the code to understand the behavior; instead we can look at a single strategy. It’s much easier to spot bugs, which in turn helps us make the code more correct.

A good model helps you reason about the behavior of a system. A good metaphor helps you reason about the desired behavior of a system.

The Trust metaphor unlocked a path to tackle complexity. We discovered it by listening closely to the language used to describe the solution, using that language in examples, and trying thought experiments. We’re not matching data anymore, we’re determining Trust and using it to resolve claims. Instead of coding the rules, we’re now encoding them. We’re better copyright brokers because of this.


Bad Metaphors

Be wary of bad, ill-fitting metaphors. Imagine the team had come up with Star Ratings as the metaphor. Sure, it also works as a quantification, but it’s based on popularity and calculates an average. We could still have built all the same behavior of the Trust model, but with a lot of bizarre rules, like “Our own sources get 20 five-star ratings.” When you notice that you have to force-fit elements of your problem space into a metaphor, and there’s friction between what you want to say and what that metaphor allows you to say, you need to get rid of it. No metaphor will make a perfect fit, but a bad metaphor leads you into awkward conversations without buying you clarity.

To make things trickier, whenever you introduce a new metaphor, it can be awkward at first. In our case study, Trust didn’t instantly become a fully explored and accepted metaphor. There’s a delicate line between the early struggles of adopting a new good metaphor, and one that is simply bad. Keep trying, work on using your new metaphor, see if it buys you explanatory power, and don’t be afraid to drop it if it does not.

And sometimes, there simply isn’t any good metaphor, or even a simpler model to be found. In those cases, you just have to crunch it. There’s no simplification to be found. You just have to work out all the rules, list all the cases, and deal with the complexity as is.

Conclusion

To find good metaphors, put yourself in a position where you’ll notice them in conversation. Invite diverse roles into your design discussions. Have a healthy obsession with language: What does this mean? Is this the best way to say it? Be observant about this language, listen for terms that people say off the cuff. Capture any metaphors that people use. Reinforce them in conversations, but be ready to drop them if you feel you have to force-fit them. Is a metaphor bringing clarity? Does it help you express the problem better? Try scenarios and edge cases, even if they’re highly unlikely. They’ll teach you about the limits of your metaphor. Then distill the metaphor, agree on a precise meaning. Use it in your model, and then translate it to your code and tests. Metaphors are how language works, how our brains attach meaning, and we’re using that to our advantage.


Written by Rebecca Wirfs-Brock and Mathias Verraes

  1. If you want to learn more about metaphors and how they shape language, read “Metaphors we Live By” (George Lakoff and Mark Johnson, University of Chicago Press, 2003).

Design and Reality

Reframing the problem through design.

    “The transition to a really deep model is a profound shift in your thinking and demands a major change to the design.”

    Domain-Driven Design, Eric Evans

    There is a fallacy about how domain modelling works. The misconception is that we can design software by discovering all the relevant concepts in the domain, turn them into concepts in our design, add some behaviors, and voilà, we’ve solved our problem. It’s a simplistic perception of how design works: a linear path from A to B:

    1. understand the problem,
    2. apply design,
    3. end up with a solution.

    That idea was so central to early Object-Oriented Design, that one of us (Rebecca) thought to refute it in her book:

    “Early object design books including [my own] Designing Object-Oriented Software [Wirfs-Brock et al 1990], speak of finding objects by identifying things (noun phrases) written in a design specification. In hindsight, this approach seems naive. Today, we don’t advocate underlining nouns and simplistically modeling things in the real world. It’s much more complicated than that. Finding good objects means identifying abstractions that are part of your application’s domain and its execution machinery. Their correspondence to real-world things may be tenuous, at best. Even when modeling domain concepts, you need to look carefully at how those objects fit into your overall application design.”

    Object Design, Rebecca Wirfs-Brock

    Domain language vs Ubiquitous Language

    The idea has persisted in many naive interpretations of Domain-Driven Design as well. Domain language and Ubiquitous Language are often conflated. They’re not the same.

    Domain language is what is used by people working in the domain. It’s a natural language, and therefore messy. It’s organic: concepts are introduced out of necessity, without deliberation, without agreement, without precision. Terminology spreads across the organization or fades out. Meaning shifts. People adapt old terms into new meanings, or terms acquire multiple, ambiguous meanings. It exists because it works, at least well enough for human-to-human communication. A domain language (like all language) only works in the specific context it evolved in.

    For us system designers, messy language is not good enough. We need precise language with well understood concepts, and explicit context. This is what a Ubiquitous Language is: a constructed, formalized language, agreed upon by stakeholders and designers, to serve the needs of our design. We need more control over this language than we have over the domain language. The Ubiquitous Language has to be deeply connected to the domain language, or there will be discord. The level of formality and precision in any Ubiquitous Language depends on its environment: a meme sharing app and an oil rig control system have different needs.

    Drilling Mud

    Talking of oil rigs:

    Rebecca was invited to consult for a company that makes hardware and software for oil rigs. She was asked to help with object design and modelling, working on redesigning the control system that monitors and manages sensors and equipment on the oil rig. Drilling causes a lot of friction, and “drilling mud” (a proprietary chemical substance) is used as a lubricant. It’s also used as a carrier for the rocks and debris you get from drilling, lifting it all up and out of the hole. Equipment monitors the drilling mud pressure, and by changing the composition of the mud during drilling, you can control that pressure. Too much pressure is a really bad thing.

    And then an oil rig in the gulf exploded.

    As the news stories were coming out, the team found out that the rig was using a competitor’s equipment. Whew! The team started speculating about what could have happened, and thinking about how something like that could happen with their own systems. Was it faulty equipment, sensors, the telemetry, communications between various components, the software?

    Scenarios

    When in doubt, look for examples. The team ran through scenarios. What happens when a catastrophic condition occurs? How do people react? When something fails, it’s a noisy environment for the oil rig engineers: sirens blaring, alarms going off, … We discovered that when a problem couldn’t be fixed immediately, the engineers, in order to concentrate, would turn off the alarms after a while. When a failure is easy to fix, the control system logs reflect that the alarm went on and was turned off a few minutes later.

    But for more consequential failures, even though these problems took much longer to resolve, they still showed up in the logs as being resolved within minutes. Then, when people studied the logs, it looked like the failure was resolved quickly. But that’s totally inaccurate. This may look like a software bug, but it’s really a flaw in the model. And we should use it as an opportunity to improve that model.

    The initial modeling assumption is that alarms are directly connected to the emergency conditions in the world. However, the system’s perception of the world is distorted: when the engineers turn off the alarm, the system believes the emergency is over. But it’s not, turning an alarm off doesn’t change the emergency condition in the world. The alarms are only indirectly connected to the emergency. If it’s indirectly connected, there’s something else in between, that doesn’t exist in our model. The model is an incomplete representation of a fact of the world, and that could be catastrophic.

    A Breakthrough

    The team explored scenarios, specifically the weird ones, the awkward edge cases where nobody really knows how the system behaves, or even how it should behave. One such scenario is when two separate sensor measurements raise alarms at the same time. The alarm sounds, an engineer turns it off, but what happens to the second alarm? Should the alarm still sound or not? Should turning off one turn off the other? If it didn’t turn off, would the engineers think the off switch didn’t work and just push it again?

    By working through these scenarios, the team figured out there was a distinction between the alarm sounding, and the state of alertness. Now, in this new model, when measurements from the sensors exceed certain thresholds or exhibit certain patterns, the system doesn’t sound the alarm directly anymore. Instead, it raises an alert condition, which is also logged. It’s this alert condition that is associated with the actual problem. The new alert concept is now responsible for sounding the alarm (or not). The alarm can still be turned off, but the alert condition remains. Two alert conditions with different causes can coexist without being confused by the single alarm. This model decouples the emergency from the sounding of the alarm.
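    None of this code appears in the article, but the distinction the team arrived at can be sketched. The AlertCondition and AlarmPanel names, and the detail that a new alert un-silences the alarm, are assumptions made for illustration:

```typescript
// Hypothetical sketch: alert conditions are decoupled from the alarm.
class AlertCondition {
  constructor(
    readonly cause: string, // e.g. "mud pressure above threshold"
    public resolved: boolean = false
  ) {}
}

class AlarmPanel {
  private silenced = false;
  private readonly alerts: AlertCondition[] = [];

  raise(alert: AlertCondition): void {
    this.alerts.push(alert); // the alert condition is tracked and logged
    this.silenced = false;   // a new alert un-silences the alarm (an assumption for this sketch)
  }

  silenceAlarm(): void {
    this.silenced = true;    // engineers can turn off the noise, but the alert conditions remain
  }

  alarmSounding(): boolean {
    return !this.silenced && this.alerts.some(a => !a.resolved);
  }

  openAlerts(): AlertCondition[] {
    return this.alerts.filter(a => !a.resolved);
  }
}

// Two separate measurements raise two alerts; silencing the alarm
// does not mark either emergency as resolved.
const panel = new AlarmPanel();
panel.raise(new AlertCondition("mud pressure above threshold"));
panel.raise(new AlertCondition("sensor offline"));
panel.silenceAlarm();
panel.alarmSounding();     // false: the noise is off
panel.openAlerts().length; // 2: both emergencies are still active
```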

    The old model didn’t make that distinction, and therefore it couldn’t handle edge cases very well. When at last the team understood the need for separating alert conditions from the alarms, they couldn’t unsee it. It’s one of those aha-moments that seem obvious in retrospect. Such distinctions are not easily unearthed. It’s what Eric Evans calls a Breakthrough.

    An Act of Creation

    There was a missing concept, and at first the team didn’t know something was missing. It wasn’t obvious, because there wasn’t a name for “alert condition” in the domain language. The oil rig engineers’ job isn’t designing software or creating a precise language; they just want to be able to respond to alarms and fix problems in peace. Alert conditions didn’t turn up in a specification document, or in any communication between the oil rig engineers. The concept was not used implicitly by the engineers or the software; no, the whole concept did not exist.

    Then where did the concept come from?

    People in the domain experienced the problem, but without explicit terminology, they couldn’t express the problem to the system designers. So it’s us, the designers, who created it. It’s an act of creative modeling. The concept is invented. In our oil rig monitoring domain, it was a novel way to perceive reality.

    Of course, in English, alert and alarm exist. They are almost synonymous. But in our Ubiquitous Language, we agreed to make them distinct. We designed our Ubiquitous Language to fit our purpose, and it’s different from the domain language. After we introduced “alert conditions,” the oil rig engineers incorporated the term into their language. This change in the domain is driven by the design. This is a break with the linear, unidirectional understanding of moving from problem to solution through design. Instead, through design, we reframed the problem.
    Is it a better model?

    How do we know that this newly invented model is in fact better (specifically, more fit for purpose)? We find realistic scenarios and test them against the alert condition model, as well as other candidate models. In our case, with the new model, the logs will be more accurate, which was the original problem.

    But in addition to helping with the original problem, a deeper model often opens new possibilities. This alert conditions model suggests several:

    • Different measurements can be associated with the same alert.
    • Alert conditions can be qualified.
    • We can define alarm behaviors for simultaneous alert conditions, for example by spacing the alarms, or picking different sound patterns.
    • Critical alerts could block less critical ones from hogging the alarm.
    • Alert conditions can be lowered as the situation improves, without resolving them.

    These new options are relevant, and likely to bring value. Yet another sign we’d hit on a better model is that we had new conversations with the domain experts. A lot of failure scenarios became easier to detect and respond to. We started asking, what other alert conditions could exist? What risks aren’t we mitigating yet? How should we react?

    Design Creates New Realities

    In a world-centric view of design, only the sensors and the alarms existed in the real world, and the old software model reflected that accurately. Therefore it was an accurate model. The new model that includes alerts isn’t more “accurate” than the old one, it doesn’t come from the real world, it’s not more realistic, and it isn’t more “domain-ish”. But it is more useful. Sensors and alarms are objective, compared to alert conditions. Something is an alert condition because in this environment, we believe it should be an alert condition, and that’s subjective.

    The model works for the domain and is connected to it, but it is not purely a model of the problem domain. It better addresses the problems in the contexts we envision. The solution clarified the problem. Having only a real world focus for modelling blinds us to better options and innovations.

    These creative introductions of novel concepts into the model are rarely discussed in the literature about modelling. Software design books talk about turning concepts into types and data structures, but what if the concept isn’t there yet? Forming distinctions, not just abstractions, can help clarify a model. These distinctions create opportunities.

    The model must be radically about its utility in solving the problem.

    “Our measure of success lies in how clearly we invent a software reality that satisfies our application’s requirements—and not in how closely it resembles the real world.”

    Object Design, Rebecca Wirfs-Brock

    Written by Mathias Verraes and Rebecca Wirfs-Brock. Special thanks to Eric Evans for the spot on feedback and constructive advice.