Musings of an OOPSLA elder

I don’t think of myself as an “elder”. But that is what Linda Rising, who led the 20th OOPSLA retrospective, labeled those who were at the first OOPSLA. I am one of five who received a perfect attendance ribbon (Allen Wirfs-Brock, Brian Foote, Ralph Johnson and Ed Gehringer are the others) for having attended all OOPSLAs. At the very first OOPSLA I felt like an outsider. I wondered how I could get involved with this conference. Excitement was in the air. Objects were the next big idea. Just exactly what could I do that would have an impact? My paper on Color Smalltalk was rejected (the reviewers commented that it talked too much about hardware details), so I presented it as a poster. It was good that they rejected it. Our work was premature. Three years later, when Tektronix Color Smalltalk was finally a product, I wrote a paper about the design principles and class libraries in Color Smalltalk that was accepted. This success made me believe in my writing ability and led to my paper with Brian Wilkerson on Responsibility-Driven Design in 1990, and launched my enduring interest in design.

Thursday I had another elder moment. I was on a panel with Ed Yourdon, Larry Constantine, Grady Booch, Kent Beck, and Brian Henderson-Sellers that looked back at structured design and echoes of the past, and ahead into the future. Larry Constantine provoked us to bring theory, technique and transparent tools into all we do. Kent brought the house down by quoting from Structured Design. He noted that while Ed and Larry got a lot right, they missed out on the fact that systems need to change. Refactoring wasn’t part of Ed and Larry’s vocabulary. Ed, who has been an expert witness on software cases for the past few years, noted that there often isn’t even a shred of a plan or design or any documentation for software systems. Grady mentioned that increasing abstraction has been a big factor and challenged us to move to even further levels of abstraction. More down to earth, I spoke about how objects enabled me to think clearly, and about the power of abstraction, encapsulation, and thinking in terms of small neighborhoods of collaborating, responsible objects as a big step forward. What’s next? To me, it seems that even more effective methods and practices, powerful development and testing environments, expressive languages, patterns, and thinking tools are in our future. Innovation in our industry is a constant. Yet every once in a while it is good to reflect on what we got right and remember influences from the past. But I’m forward looking too. After every OOPSLA I come home charged with new ideas and the urge to do more, collaborate, and continue learning. What a blast!

Parametric Diversions Revisited

OK, I admit it. After writing about H.S. Lahman’s talk on invariants and parametric polymorphism, I wanted to see what the rest of the world thinks parametric polymorphism is (and isn’t).

The Wikipedia entry for polymorphism states,

“polymorphism is the idea of allowing the same definitions to be used with different types of data (specifically, different classes of objects), resulting in more general and abstract implementations.”

Hold that parenthetical thought—specifically different classes of objects resulting in more general implementations.

Well, in the billing example I defined a single generic algorithm that worked on a single class of data (a RateSpec). The clever encoding of the RateSpec threshold and rate tables allowed a single algorithm to cover a wide range of data values: some customers could only be charged a single base rate, others could have different thresholds. The RateSpec object provided a uniform view of this data from the algorithm’s perspective. I wasn’t technically using different classes to represent different encodings. I was being economical by only defining a single class that could encapsulate many different rate and threshold encodings. Is this really parametric polymorphism?

The Wikipedia entry goes on to say:

“Using parametric polymorphism, a function or datatype can be written generically so that it can deal equally well with objects of various types. For example, a function append that joins two lists can be constructed so that it does not depend on one particular type of list: it can append lists of integers, lists of real numbers, lists of strings, and so on. Parametric polymorphism was the first type of polymorphism developed, first identified by Christopher Strachey in 1967. It was also the first type of polymorphism to appear in an actual programming language, ML in 1976. It exists today in Standard ML, O’Caml, Haskell, and others. Some argue that templates should be considered an example of parametric polymorphism, though instead of actually reusing generic code they rely on macros to generate specific code (which can result in code bloat).”
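Wikipedia’s append example is easy to make concrete. Here is a sketch in Python (my choice, not mentioned in the entry), where a type variable plays the role of the type parameter:

```python
from typing import List, TypeVar

T = TypeVar("T")  # a type parameter standing for any element type


def append(xs: List[T], ys: List[T]) -> List[T]:
    # One generic definition serves lists of integers, strings, and so on.
    # The body never inspects the element type, which is the hallmark of
    # parametric polymorphism: the same code runs for every T.
    return xs + ys


numbers = append([1, 2], [3])         # a List[int]
words = append(["spec"], ["driven"])  # a List[str]
```

A checker like mypy verifies that both arguments share an element type; at runtime the identical code executes regardless of T, unlike C++ templates, which generate separate code per instantiation.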

There’s a subtle distinction between modern implementation techniques that are labeled parametric polymorphism and what H.S. was getting at. Before hearing H.S.’s talk, I’d thought of parametric polymorphism as just being another word for parameterized types and/or template classes. That does seem to be a fairly common modern definition. But H.S. Lahman put a slight twist on things. His main point was that careful design of data along with a generic algorithm can accommodate wide variations without lots of classes, case statements or conditional logic. In fact, the key is to design a single, uniform view of seemingly disparate data. In a complex system (the Shreveport, LA water rates aren’t complicated enough), different RateSpec implementations would likely be needed. And I suspect that there isn’t one algorithm to calculate rates. But in my simple example, they weren’t needed. So while I technically might not have been using parametric polymorphism, I did achieve uniformity by encapsulating what varies in a single RateSpec class whose instances would be composed of different rate and threshold table attributes. And that is what makes this design simple and flexible.

Parametric polymorphism, or driving behavior with data

At Software Development Best Practices 2005 I attended as many design and architecture talks as I could fit in. I enjoyed H.S. Lahman’s talk on Invariants and Parametric Polymorphism. His talk illustrated how, instead of abusing inheritance, you can use data to drive general algorithms. This is something I’ve done on many an occasion, and no, I don’t think it violates responsibility-driven design principles. In fact, it is a vital technique for creating extensible designs where extensions can be done externally to your application (by business people creating appropriate data).

An example helps illustrate the concept. Consider the problem of calculating charges for a water and sewage bill (my first COBOL program as an analyst/programmer for the city of Vancouver, WA made these calculations). Although there were many, many different types of customers (industrial, commercial, residential of various sorts, government, non-profit, etc.), the basic algorithm for computing the customer’s bill was to apply a minimum charge based on the customer type and meter size, then apply a rate stepping function: base rate + (first tier rate * units in first tier) + (second tier rate * units in second tier). Sewage was another similar calculation. Searching the internet, I was happy to find a similar published example of water and sewage rates for the city of Shreveport, Louisiana.

The key to making a single calculation work for a variety of different computations is inventing an encoding for a business rule or algorithm or policy (your invariant) that can be driven by descriptive data that can be treated uniformly by a single algorithm. This data can be encapsulated in a specification object. Of course, given a myriad of Rates and Tier values, these would most likely be encoded in a relational database or some tabular scheme that would be objectified into a specification object.

Computing monthly charges is pretty simple (ignoring floating point arithmetic precision for the sake of simplicity).
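A minimal sketch of such a specification object and its stepping calculation might look like this in Python (the class, attribute names, and all rate values are my own invention for illustration, not Shreveport’s actual rates):

```python
class RateSpec:
    """Specification object encapsulating a base rate plus a tier table of
    (units_in_tier, rate_per_unit) pairs, lowest tier first."""

    def __init__(self, base_rate, tiers):
        self.base_rate = base_rate
        self.tiers = tiers

    def monthly_charge(self, units):
        # One invariant algorithm: walk the tier table, charging each block
        # of units at its tier's rate. Different customers differ only in
        # their encoded data, never in the code.
        charge = self.base_rate
        remaining = units
        for tier_units, rate in self.tiers:
            used = min(remaining, tier_units)
            charge += used * rate
            remaining -= used
            if remaining <= 0:
                break
        return charge


# A customer charged only a base rate: an empty tier table.
flat = RateSpec(12.50, [])
# A tiered customer: first 10 units at 2.00, everything above at 1.50.
tiered = RateSpec(5.00, [(10, 2.00), (float("inf"), 1.50)])
```

The same loop computes both bills; `flat` and `tiered` differ only in the data their RateSpec instances carry, which is the whole point of driving the invariant algorithm with descriptive data.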

Parametric polymorphism is a fancy name for using parameterized data to drive behavior. A bad alternative would be to create many classes of objects to accomplish the same thing. Imagine the nightmare of maintaining a different class for each type of rate and threshold combination! Fact is, though, there is a lot of rate information to maintain. No getting around that. Rather than being hardwired into a program, the rate and threshold specs can be maintained by non-programmers. Even better.

More generically, decision tables can compactly represent information that can be used to drive complex computations. There are open source frameworks that support integrating decision tables with Java. If you’ve tried these on a project, I’d be interested in hearing about your experiences.
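As a sketch of the idea (the table contents and function name are hypothetical, not taken from any particular framework), a decision table can be as simple as a keyed lookup that drives the computation:

```python
# Hypothetical decision table mapping (customer_type, meter_size) to a
# minimum monthly charge. In practice this would live in a database or a
# spreadsheet maintained by non-programmers, and be loaded at runtime.
MINIMUM_CHARGE = {
    ("residential", '5/8"'): 7.25,
    ("residential", '1"'): 11.80,
    ("commercial", '2"'): 38.40,
}


def minimum_charge(customer_type, meter_size):
    # The computation is driven entirely by the table: adding a new
    # customer type or meter size is a data change, not a code change.
    try:
        return MINIMUM_CHARGE[(customer_type, meter_size)]
    except KeyError:
        raise ValueError(f"no rate on file for {customer_type}/{meter_size}")
```

An unknown combination raises an error rather than returning a silently wrong default, so gaps in the table surface immediately instead of producing bad bills.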

The key is to effectively exploit objects and data to drive flexible, scalable solutions. If you have really quirky computations and rules that can’t be tamed and codified in a simple manner, you can still exploit these techniques. But you will likely have to untangle and codify several decisions in order to drive several different computations. No one said life is easy. But effectively encoding decisions and variations is key to building flexible, scalable solutions that don’t have to be solved by brute-force, ugly, rigid code.

Fitting problem framing into everything else you do

At Software Development Best Practices 2005 I presented a tutorial, Introducing Problem Frames. Problem frames are a way of thinking about software problems and approaching the task of writing descriptions of desired and expected behavior. More can be found about them in Michael Jackson’s definitive book. There’s also a website devoted to problem frames. I find Jackson’s paper, Problem Frames and Software Engineering, a good summary of problem frames for the mildly curious. Jackson introduces five basic problem types: workpiece, information, transformation, commanded and required behavior. Each frame can be described in terms of:

  • A decomposition of a problem into a particular set of interacting domains;
  • A characterization of domains as being lexical (symbolic), causal (responding to and/or causing events), or biddable (typically humans who can be asked but not forced to respond);
  • A characterization of the shared interfaces (events, state changes) between domains, and;
  • A depiction of a requirement and its relation to particular domains.

My tutorial illustrates Jackson’s problem framing using an email client application (thanks to colleagues Jim and Nathan who worked with me on this example). I hope to expand on it in the future. The example illustrates various frames and concerns but doesn’t go into great detail specifying requirements. That’s OK. The point was to introduce frames, not introduce the art of writing specifications after identifying frames.

For now, I’m content to have an example in hand which illustrates the basic mechanics of problem framing applied to a purely software system. Jackson’s examples and emphasis are on software interacting with the real world. He made this very clear in a posting to our Yahoo Problem Frame discussion group:

…the emphasis on physical phenomena is very important—even central—to problem frames. If your problem is “given a very large integer, find its factors” there aren’t really any physical phenomena involved at all. Of course the integer must be somehow presented to the machine, and this will involve physical phenomena of keyboard strokes or something of the kind; but these phenomena are just incidental to the problem.

I don’t think that the problem frames ideas are totally useless for a problem without significant phenomena, but I think they lose a lot of their value and appeal. Problem frames are about engineering in the sense that Vincenti quotes: “Engineering … [is] the practice of organizing the design and construction of any artifice which transforms the physical world around us to meet some recognized need.”

— Michael

Most folks I work with aren’t developing software to control physical devices. Yet I believe that framing could be applied to many IT software problems and help clarify what the system should do, as long as framers can transcend Jackson’s real-world focus and look inward to the interior domains prevalent in their software world and understand how to describe the shared phenomena between them.

However, I think there are several hurdles to overcome before problem framing will have a wider IT audience. Practicing analysts already have many tools for specifying systems: context diagrams, event-response tables, business rules, use cases, user navigation models, user classes, personas, data and information models, decision tables. Where do frames fit with what they are already doing? In my tutorial, I made a point that the end-products of framing aren’t just those simple frame diagrams (they are in fact only a placeholder for discussing and writing—if you are so inclined—requirements and descriptions of domains and phenomena). But framing still has to compete with other analysis activities. And the descriptions and requirements have to fit in with other behavioral descriptions analysts write. And at the end of the day, one of the tutorial attendees remarked, “Thanks for introducing problem frames to us, Rebecca. I’m not sure I am going to use them formally as an analysis tool, but I suspect just knowing about problem frames will help me write better specifications using the tools we already use.” We’ll see.

Then there is that sometimes difficult distinction to be made between specifying what the system should be vs. what it should do. People are slowly adopting techniques for specifying measurable non-functional requirements using Planguage. These ideas also compete with problem framing for mindshare.

Contrast framing with the agile practice of not writing down requirements (according to Gerard Meszaros, who presented Agile Requirements with User Stories at Agile 2005, user stories are an “I.O.U. for a conversation”). Any agile person would throw up their hands and shout, “Enough! Just give me practical advice on how to apply framing techniques directly to what I do, and I’ll consider them. But I don’t want to write a lot of formal descriptions.” Framing has to be more than a set of frame diagrams that lead to descriptions or it isn’t going to find an audience in the agile marketplace, even though I believe that framing is an invaluable thinking tool when having that conversation with the customer.

Those are some of the challenges I see: coming up with practical techniques for applying problem framing to complement and clarify the work busy people are already doing.

Loaded words

What do you do when people react negatively to terms you use to describe ideas? If you are like one clever manager I met at Software Development Best Practices, you turn around and let the team take ownership of the way they are going to speak about things. This manager of managers in a health care company recounted how he introduced Scrum into his organization. After talking about Scrum values and practices, he got pushback on the names of Scrum activities. “Scrum? Sounds like a fight. We don’t like that. Sprints? Why the goofy terminology? We don’t like the sound of it. Sounds like people are always running hard. And besides, we’re not athletic.” So he asked his group to propose alternative names. Instead of sprints, his group calls them iterations. And yeah, they know they need to be short. They are following scrum practices; they just don’t call a spade a spade. He’s convinced that they are the better for it. It isn’t so important what they’re called as how they’re applied. They’ve even renamed daily standups. And they have them mid-morning so everyone can attend (as the team’s work hours are staggered).

Another case in point. At Agile 2005, Jon Spence from Medtronic presented an experience report recounting how he got his company to adopt agile practices on a project. Medtronic makes defibrillators and pacemakers. It was somewhat tricky introducing agile concepts into his organization. Jon had to tone down the edginess of the agile message. He can’t imagine the Agile Manifesto hanging in the hallways at his company. For one thing, one of its tenets, favoring “working software over comprehensive documentation,” would be highly controversial. Medtronic builds FDA-regulated products that require extensive documentation. According to Jon, the Agile Manifesto would cause an “allergic reaction” at Medtronic. He said he wasn’t going to bring back copies of it to pass around (they were handed out at the conference). No sir. Those would be fighting words. And Jon wants to avoid controversy so he can focus on introducing agile practices. What proved effective was talking about delivering code incrementally with higher quality using a balanced set of practices that provide a safety net. Those were the right words to convince management. His project delivered on its promises and he and others are now spreading agile practices to other project teams.

I appreciate powerful words that people can rally around. But they don’t have to be edgy. By avoiding loaded words you can more effectively get your message across. If the agile manifesto doesn’t have the right words for your organization (and you don’t want to be branded a radical) you may need to discover different ways to talk about agile practices. It isn’t always necessary to use inflammatory words and shake people up to cause change.

It’s official…specs are “bad”

…according to Linus Torvalds. I have to chime in on Linus’ newsgroup posting and the attendant buzz it sparked on the net this week (and on the Linux Kernel mailing list). Linus stated:

So there’s two MAJOR reasons to avoid specs:
– they’re dangerously wrong. Reality is different, and anybody who thinks specs matter over reality should get out of kernel programming NOW. When reality and specs clash, the spec has zero meaning. Zilch. Nada. None.
It’s like real science: if you have a theory that doesn’t match experiments, it doesn’t matter _how_ much you like that theory. It’s wrong. You can use it as an approximation, but you MUST keep in mind that it’s an approximation.
Specs have an inevitable tendency to try to introduce abstractions levels and wording and documentation policies that make sense for a written spec. Trying to implement actual code off the spec leads to the code looking and working like CRAP.

He went on to conclude:

“But the spec says …” is pretty much always a sign of somebody who has just blocked out the fact that some device doesn’t. So don’t talk about specs. Talk about working code that is _readable_ and _works_. There’s an absolutely mindbogglingly huge difference between the two.

This posting launched an onslaught of discussion. Linus is right. Reality always differs from a specification of how software is supposed to behave. That’s a reflection on how difficult it is to write precise specifications of behavior and on how many decisions during implementation are left open. Still, I’m not willing to say “no specs, ever” even though I’m a signer of the Agile Manifesto and on the board of the Agile Alliance. We need to get better at recognizing what types of descriptions do add value and under what circumstances. And become more aware of when and where precision is needed (and when it drags us down).

Linus points out that specs often introduce abstractions and concepts that shouldn’t be directly implemented in code. I never expect to directly translate what someone writes into code without using my brain. I design and think before and during and after coding realizing that nothing substitutes for testing/proving out a design and implementation against the real environment it works in.

But that doesn’t mean specs have no value. Working, readable code isn’t the only thing that matters. It matters very much in the short and long term. But try understanding design rationale by just reading code. Or reading the test code. It’s difficult, if not impossible. I find value in design documentation that explains the tricky bits. This type of documentation is especially valuable when those coding aren’t going to hang around to offer explanations.

A spec is an approximation of what is desired. I certainly don’t expect it to tell me everything. There can be enormous value in writing descriptions of what software should do, especially when it is important to communicate design parameters and system behaviors instead of just providing an implementation. Most developers aren’t good at writing specs, let alone descriptions/discussions about their code and design choices. But that doesn’t mean they should stop writing them and resort to “organic code growth” in every situation. A firm believer in agile practices, I don’t insist on writing merely for fun or because it is expected. But if I need a spec, I write it. And if it doesn’t reflect reality or is misunderstood, I change it if there is value in keeping it up to date. There may not be. And if that’s the case, I don’t update it. It depends on the project and the need. It helps if I write these descriptions for someone who wants to read them (and will actually use them rather than toss them aside). I’ve got to know my audience. That often takes experimentation. Maybe I need to include sample prototype code in addition to design notes/models/sketches. Maybe I don’t. Communicating ideas to a diverse audience is especially hard. But specs aren’t the problem. It’s that effectively communicating how something works or should work is more difficult than cutting code. I prefer working code over piles of outdated, difficult diagrams and explanations. But that doesn’t duck the issue. Specs aren’t inherently bad. Most spec writers would rather be doing something else. And that is a problem.

It’s not OK..or is it?

Inspired by the TV show Starved, which chronicles the lives of friends with eating disorders who attend meetings with other food-challenged folks (where inappropriate behavior is censored with the chant, “it’s not OK”), I imagine a support group for software agilists gone astray:

“Hi, I’m Dave and I don’t like to pair program. If I spend a few quiet hours alone before everyone shows up, I can get a whole lot more done.”
“Dave, it’s not OK.”

“Hi, I’m Beth and I prefer to sketch out my design before I write any code.”
“Beth, it’s not OK.”

“Hi, I’m Rick and I plan the work for my team and then show them my plan.”
“Rick, it’s not OK.”

Or is it?
I recoil from absolutes. The chant “It’s not OK!” grates against my core values. Sure, sometimes behaviors may be inappropriate, but there’s got to be a better way to address the issue. Imagine another world:

“Dave, you don’t like pair programming. I want our team to really try pairing. Maybe as a group we should tone down all our chatter. It can get pretty loud sometimes and that makes it hard to focus. If you really don’t think pairing is going to work for you, you can still be agile, but you might find it more to your style to work on the mark project. They’re writing unit tests, doing daily builds and have short iterations, but they’re not following all XP practices. One thing the mark team insists on instead of paired programming is to have paired checkins for all new modules and any changes close to the end of an iteration.”

“Beth, I like that you know UML and use it effectively. When you do draw design ideas everyone seems to understand things better. I think you have a knack for making ideas understandable. When you take the time to sketch out what’s really needed, I suspect you save rework time. Maybe we should consider doing even more design pre-work for complex functionality. Let’s set up an experiment to measure the time we spend refactoring vs. the time we spend doing some upfront design for a couple of challenging user stories on our next sprint.”

“Rick, I like that you want to plan ahead. But instead of planning for your group, why not get them involved in planning? They’ll be more committed if they set their own goals.”

Hard line agilists used to say that until you know how to play by the rules don’t break them. But I think that hard line stance is changing. In the second edition of Extreme Programming Explained, Kent talks about core XP practices and ways to move towards your values. At Agile 2005, in a keynote Bob Martin talked about the trend of adopting agile practices from a “Chinese Menu”. It is a better strategy to adopt agile practices that fit your development lifestyle. As any dieter knows, any successful eating plan has to fit into your lifestyle and work to your strengths. Some dieters can succeed eating a little chocolate each day. Some can’t. No food should be censored or out of bounds unless it’s too difficult to handle.

The same goes for agile development practices. Deviations from typical (published) agile practices shouldn’t automatically be censored in a knee-jerk fashion. That’s counterproductive. But don’t cheat on your agile goals, either. If you find a particular recommended practice too hard to adopt, ask why and dig deeper. Maybe that practice doesn’t fit with your team or your company or with the way you work. Maybe you’ve got to change something first. Or maybe it just isn’t a good fit. But if a particular practice causes you to stray from your goals, take a long hard look at why it’s counterproductive and how you might clean up your act. Sorting it out will require some honest thinking, experimentation, and reflection. And that’s OK.