
Surfacing Worldviews in Design


Implicit worldviews underlie our design choices. Exposing them can generate better options.

Design choices carry along the worldviews of the designer. This is often not apparent, especially when design ideas look like obvious technical improvements. Let's look at a design challenge faced by a fictional Maker Lab.

In this Maker Lab, people in the community can come in and make all kinds of things: metal work, robotics, furniture, glasswork, 3D printing, laser cutting, lighting, electronics, and more. The Lab provides tools, machines, and inventory that makers can use in the maker space. The makers are supposed to use the hand scanners and scan the barcodes on equipment and materials as they take them out for use. Sometimes makers need something for a few days, so it's normal that they don't put everything back right away. Or they might make parts that are then stored for later use or for sharing.

As happens in any community space, things get lost or misplaced. You don't always know whether something is really lost or just being used somewhere for an extended period of time. Either way, it's a problem, and management decides to do something about it.

Intervention

The Lab's software engineer is asked to add a new feature to the inventory app: it should track who was the last person to scan an item. The idea is that if you can check who was the last person to touch it, you can ask them about the item's whereabouts. The feature is quickly put into production, and for a while, things get a bit better. But after a while, it gets worse: items are rarely being scanned anymore, if at all. Things get lost more often, and turn up much later in unexpected places.
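To make the feature concrete, here is a minimal sketch of what such tracking might look like. The story doesn't describe the Lab's actual app, so the names and structure here are hypothetical, purely for illustration.

```python
# Illustrative sketch only; InventoryItem and record_scan are hypothetical names,
# not taken from the Lab's real inventory app.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class InventoryItem:
    barcode: str
    name: str
    last_scanned_by: Optional[str] = None   # maker who last scanned the item
    last_scanned_at: Optional[datetime] = None


def record_scan(item: InventoryItem, maker_id: str) -> None:
    """Overwrite the item's last-scan info; no scan history is kept."""
    item.last_scanned_by = maker_id
    item.last_scanned_at = datetime.now()
```

Notice that a model like this only records scans, so it quietly equates "last to scan" with "last to touch" — exactly the assumption the rest of the story shows to be wrong.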

Eventually they figure out what happened: when makers couldn't find the item or tool they were looking for, they would ask the person who last scanned it. This led to some heated discussions, and even accusations of theft. But of course, when something is lost, the last person to scan it isn't necessarily the last person to touch it. For example, someone leaves a tool at their station because they'll need it again soon. But then someone else wants to use the station and moves the tool to make space. Or a maker grabs the tool from the station for a quick job, doesn't scan it, and forgets to put it back. The makers who conscientiously scanned all their tools were getting all the blame. They felt they were being punished for trying to do a good job. And after all, this is a hobby environment, not a NASA lab. Eventually they too stopped bothering, and the problem of lost items got even worse.

Consequences

What happened here is that a technical solution was implemented to “help” deal with a social problem. Social problems tend to require more nuanced solutions than technical ones. Seemingly simple and straightforward interventions can have unintended consequences, those consequences can have further consequences, and so on. It's impossible to predict these chains of consequences. Sometimes the result is the opposite of the effect you expected.

What happens next? Management discovers that the tracking isn't sufficient, and imposes new, stricter rules on using items. A person is assigned to guard the inventory, and makers have to go through them to get the items they need. The guard makes sure makers bring back materials before leaving, even if they intend to use them again the next day. These “improvements” add more work for the makers, most of whom were already compliant, with no clear benefit to them. They are made to feel they can't be trusted. Gradually, the culture declines, and makers stop coming. Often in such situations, you never really find out why.

Another scenario

Now let's imagine another scenario of how the Maker Lab inventory problem might have played out. Instead of rushing to a technical solution, management takes stock of how people in the lab behave. What aspects and qualities of the larger system make the Maker Lab what it is? People enjoy coming, and most of them actually do try to put things back correctly. Some do so because it's the rules, and they want to be good community members. Others intrinsically understand that by putting things back, they increase the likelihood of finding them later. They get value from finding items, and they create value by putting them back.

If 80% of people are doing things well, should they be punished for the other 20%? Perhaps we can find out how to grow that 80% to an acceptable 95%. From that point of view, the question changes from “How can we force people into good behavior?” to “What bottlenecks can we remove to get the remaining people into this mindset?” It's not just about preserving what already works; it's about expanding on it.

So management decides on a gradual redesign of how inventory is handled, starting with communicating expectations. They make some improvements in how they onboard new makers and welcome them to the lab. They add signs explaining the value of a well-organized lab. They even assign a person to help locate things when they do get lost, and to thank people when they put stuff back. It's subtle, but it focuses on rewarding good behavior rather than enforcing it.

A social system like the lab often requires a social approach rather than merely a technical intervention. You consider the whole system, and by finding and reinforcing positive actions instead of making big sweeping changes, you can gradually tease out better ways of working.

Worldviews

At the heart of these scenarios are two conflicting worldviews.

In the first, management views the makers as unmotivated community members who need to be coerced into behaving correctly. They look for rules and enforcement mechanisms to change behavior. In the second scenario, management assumes that the makers are intrinsically motivated to do good things and be part of a community. They look for ways to enable that behavior, remove barriers, and get out of the way.

There are contexts where coercion is in fact the right approach. When it comes to security, for example, the appropriate assumption is that some users are malevolently trying to abuse the system. The same goes for areas like safety and regulatory compliance.

But many cases aren't as clear-cut. When looking for solutions, or when a solution is handed to us (as in a feature request), we need to surface the worldview that underlies it: Does it treat all problems as technical ones? Does it favor coercion over rewards? Does it assume that users are unwilling participants, or that they are intrinsically motivated?

By making these assumptions explicit, we can evaluate different solutions more thoroughly. Instead of debating the pros and cons of potential solutions based only on their visible aspects, we can debate which worldview is the most appropriate for the current context, and what the consequences of promoting that worldview in our design might be.