Our Heuristics are Shaped Through Experience

This post is part two of some reflections on a conversation I had with Chelsea Troy about our testing heuristics. You may also want to read part one and Chelsea’s writeup.

I shared with Chelsea how my Smalltalk development background contributed to my testing and design heuristics. I was involved in the early days of Smalltalk at Tektronix as a principal engineer in the AI Machines group. After a yearlong stint managing the software group through product introduction, I switched back to full-time engineering. Among other things, I added features to Smalltalk including color graphics, fonts, and support for low-level OS calls. All our code was visible to our users, and we had a strong engineering culture.

I learned how to work effectively in the Smalltalk environment by studying existing code, figuring out what it did, and understanding its coding and design style. I also observed more experienced Smalltalk programmers. Kent Beck, Ward Cunningham, and other Tek Labs folks were some of the very earliest Smalltalk application programmers. Ward and Kent worked together, developing prototypes and exploring what Smalltalk was good at. Many ideas about Extreme Programming (and TDD) and object design can be traced back to these programming experiences.

The Smalltalk image was always running. It contained the entire development environment and had a browser where you could look at existing code and add your own. Much of my time was spent experimenting with and reading existing code, then trying to fit my new code in. The code I wrote was a mix of new classes as well as extensions or modifications to existing ones. To show someone else how to use your code, you’d create a workspace—a scratchpad window—and put in snippets of commented code for them to read, edit, and evaluate. By convention, methods were categorized (see the third pane of the System Browser below, which shows the categories for the abstract Collection class). Other classes also had a testing category, but it was not used for what you might think! The testing category for the class Collection included methods for querying (i.e. testing) its contents.

What a typical Smalltalk-80 system image looked like (courtesy of https://randoc.wordpress.com/2018/07/20/tektronix-smalltalk-workstations-4400-and-4300-series/)

So how did programmers test Smalltalk code? I didn’t have any conventions to follow for organizing my tests (and not inconsequentially, leaving test code around would clutter up the Smalltalk image). Since I could highlight code anywhere and execute it, I tested code as I wrote it. I could step through code with a debugger, change it on the fly, and run it again. I tested my code into existence, but didn’t leave around any tests.

In an article I wrote about Color Smalltalk, here’s how I described this experience: “… the workspace, lets programmers experiment with code without actually incorporating the experimental code into the valid, running environment. A programmer can write, execute and debug code in a workspace, then pull it into the Smalltalk application when the new code is tested and operational.”

While this statement is mostly true, it is also misleading. Anything I did as a programmer would add more objects to and change the state of the running Smalltalk image. Code you executed in a workspace changed the image (sometimes with catastrophic results, especially if you were tinkering with basic low-level system functionality as I was). But the Smalltalk environment and tools made it so easy to back up a step or two, revise your code, and try again, even with code that mucked with low-level stuff.

Kent’s Smalltalk experience heavily influenced how he thought about incremental development. But when it came to testing, I suspect he tried to boil down his Smalltalk experiences into practices that would be more “failsafe” for programmers who didn’t work in such a dynamic and forgiving development environment. Kent’s thinking about testing has evolved since he wrote his books. In an interview with Andrew Binstock in 2019, Kent and Andrew chat about this evolution:

Binstock: Do you still work on strictly a test-first basis?

Beck: No. Sometimes, yes.

Binstock: OK. Tell me how your thoughts have evolved on that. When I look at your book Extreme Programming Explained, there seems to be very little wiggle room in terms of that. Has your view changed?

Beck: Sure. So there’s a variable that I didn’t know existed at that time, which is really important for the trade-off about when automated testing is valuable. It is the half-life of the line of code. If you’re in exploration mode and you’re just trying to figure out what a program might do and most of your experiments are going to be failures and be deleted in a matter of hours or perhaps days, then most of the benefits of TDD don’t kick in, and it slows down the experimentation—a latency between “I wonder” and “I see.” You want that time to be as short as possible. If tests help you make that time shorter, fine, but often, they make the latency longer, and if the latency matters and the half-life of the line of code is short, then you shouldn’t write tests.

Binstock: Indeed, when exploring, if I run into errors, I may backtrack and write some tests just to get the code going where I think it’s supposed to go.

Beck: I learned there are lots of forms of feedback. Tests are just one form of feedback, and there are some really good things about them, but depending on the situation you’re in, there can also be some very substantial costs. Then you have to decide, is this one of these cases where the trade-off tips one way or the other? People want the rule, the one absolute rule, but that’s just sloppy thinking as far as I’m concerned.

Practicing TDD ensures developers write tests. The underlying value heuristic is, “any tests are better than no tests.” But if we take Kent’s more recent thoughts to heart, we shouldn’t test without thinking through some consequences. Kent’s more recent guiding heuristic: Test when it matters and when you need a safety net. Think through both the benefits and costs of testing. If you are exploring, don’t let testing slow you down.

There is no single “definitive” answer to the question, “when should I test?”

Develop Test Strategies Based on System Context

Chelsea asked, “So, how do you determine what kinds of tests to write?”

I don’t have a definitive answer to this question, either. So, I shared a few stories. I’ve worked with clients unschooled in TDD. They write code, test it a little, and then throw these initial tests away. They build successful products. The tests they tend to keep are regression tests: tests that demonstrate a quirky bug has been fixed (and ensure that it stays that way). It’s always a bet.

If code is stable, and the tests always pass, running tests all the time isn’t buying any new information. Even worse, passing tests can give you a false sense of security about your code’s quality. So why do we write tests?

I like to focus on writing tests that check that stable (relatively unchanging) system expectations still hold, and that demonstrate ways new capabilities can be safely added. I also try to write tests that capture expectations I have around my system’s behavior.

For complex systems, though, this can be difficult. Unforeseen side effects can pop up in strange places (changing code in one place can unexpectedly cause code in a distant part of the system to break). It’s impossible to test for every possible edge case, and you don’t know all the dependencies.

I remember Kent Beck telling an oddball story about writing his first TDD code when he went to work at Facebook. His code, which passed all his tests, suddenly caused tests for other parts of the system to fail. Rather than revert his code, those familiar with the system decided to throw out those failing tests. Seems weird, but they knew those tests were brittle and based on some wrong assumptions. When you find problems with tests, think carefully about whether it is appropriate to add tests to ensure that things don’t break, whether your existing tests are brittle, or whether your assumptions are wrong.

Data Scientists Have Different Testing Values

When you need to process massive amounts of data, and the code for processing that data is predictable, there is little value in repeatedly running functional tests that always pass.

I worked for a number of years for a client doing healthcare analytics on patient medical data. Sometimes, their heuristic for verifying an algorithm was: test that the new code works by comparing its results against code written in an entirely different system and programming language. They would take a massive cut of the data, run it through both implementations, and compare the results.
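A rough sketch of that comparison step, in JavaScript (the result shapes, keys, and tolerance here are invented for illustration; the reference results would come from the other system):

```javascript
// Compare our pipeline's aggregate results against a reference implementation.
// Keys, value shapes, and the tolerance are hypothetical.
function compareAgainstReference(ourResults, referenceResults, tolerance = 0.001) {
  const discrepancies = [];
  for (const [cohortId, ourValue] of Object.entries(ourResults)) {
    const referenceValue = referenceResults[cohortId];
    if (referenceValue === undefined) {
      discrepancies.push({ cohortId, reason: 'missing from reference results' });
    } else if (Math.abs(ourValue - referenceValue) > tolerance) {
      discrepancies.push({ cohortId, ourValue, referenceValue });
    }
  }
  return discrepancies; // someone still has to reason about each entry
}
```

The comparison itself is easy to automate; deciding what each discrepancy means is not.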

Another heuristic they sometimes used to test new algorithms and capabilities was to run their code and compare their results against those reported in published research papers. Where the results differed, they needed to reason about those differences (sometimes it was a problem with their code; at other times, their code was more accurate at choosing cohorts or their statistical algorithms were better). Someone had to critically analyze the results, reason about why the discrepancies were there, and determine what, if anything, to do about them. This process couldn’t be automated.

Chelsea works with data scientists at Mozilla on sanitizing personal data for searches. The rules for this are complicated, language-specific, and sometimes people enter search terms in more than one language. She finds data scientists don’t share the same testing values as many software developers do.

Data scientists make informed assumptions about aggregated data. If those assumptions don’t hold, they reassess the data processing rules and revisit their assumptions. To them, testing alone is insufficient to ensure system quality. Monitoring actual system behavior against expected data characteristics, however, is critical. When the data characteristics being monitored fall outside of expected tolerances, that triggers developers to look into the situation. Developers then run some automated tests to determine if something is wrong with their code. If those automated tests pass, they call on a data scientist to analyze a sample of the data and decide what to do. Something has changed, and there likely needs to be a change either to the assertions about the data’s characteristics or to the rules for handling it.
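Here is a minimal sketch of that kind of monitoring check in JavaScript (the metric names and tolerance bounds are invented; real checks would run on a schedule against aggregated production data):

```javascript
// Expected characteristics of the data, expressed as tolerances.
// Metric names and bounds are hypothetical.
const expectedCharacteristics = [
  { metric: 'pctSearchTermsSanitized', min: 0.97, max: 1.0 },
  { metric: 'pctTermsWithUnrecognizedLanguage', min: 0.0, max: 0.05 },
];

// Returns the metrics that fall outside their expected tolerances.
function checkCharacteristics(observed) {
  return expectedCharacteristics
    .filter(({ metric, min, max }) =>
      observed[metric] === undefined ||
      observed[metric] < min ||
      observed[metric] > max)
    .map(({ metric }) => metric);
}

// A non-empty result doesn't mean the code is broken; it means a person
// (first a developer, then perhaps a data scientist) needs to investigate.
```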

Trialing new Heuristics

Chelsea and I appreciate what we can learn from people with different backgrounds: data scientists, QA folks, testers, and new colleagues. There are many different ways to test and design software. And if I don’t hold onto my preferred heuristics too tightly, I might learn something.

But how do I decide when to try out some new heuristics or to stick with what I know?

If things are going well, I’m not as motivated to try out new ideas. I need a small nudge. I’ll try some new-to-me heuristics if I feel I have some wiggle room. Let me experiment, practice, and think through the consequences. Give me a bit of time to let new values and practices soak in.

When I start to work on a new system or with folks from different backgrounds, that, too, is an opportunity to try out new ways of working.

But under pressure, I find myself narrowing my focus and sticking with what I know best (even if it is a poor option). So, if I can, I catch myself and take a small step back from problem solving. I pause, take a breath, and ask: If my heuristics aren’t currently working for me, what are some options?

If I want to introduce a test-first TDDer to my testing approach, I might suggest a modest experiment: “Let’s work together on some design and coding problems and compare our two approaches. Let’s find out what tests we come up with following my test-driven development approach. Let’s try your test-first TDD on a similar problem and see what tests we come up with. Let’s see what we learn.”

At the very minimum, I hope we’d learn of our shared value: we both value tested code. We might learn from each other more about the kinds of tests we like to write. Or how many tests we think are needed. Or how we rework existing tests. We might share some heuristics for deciding what next test to write or what isn’t worth testing. Through experimentation and reflection, we can grow and learn from each other.

Testing, Testing…our Heuristics

We gather heuristics through storytelling and conversations

Recently Chelsea Troy and I chatted over Zoom about software testing heuristics. I met Chelsea last year at DDD Europe. In this and a couple of snack-sized posts, I will reflect on some highlights of our conversation. Chelsea has also written about our conversation.

A Leading Question Leads to Some Heuristics

I started by asking, “What is important about testing that people should get but don’t?”

Chelsea answered that while Test-Driven Development (TDD) is useful, it doesn’t solve all testing needs. If developers are oversold on the benefits of TDD, they can become jaded about testing in general. They shouldn’t. TDD doesn’t include specific practices that address resilience or reliability. But it is useful for developing and testing deterministic code.

Chelsea shared the experience of learning first-hand how TDD didn’t have all the answers to testing. She worked on a team of TDD enthusiasts developing a mobile app for a client. Although the team thought they knew how to develop quality software, their initial prototype, developed following TDD, didn’t address these challenging requirements: being usable under extreme weather conditions, having a simple UX, and functioning when only intermittently connected to the internet and their backend software. They needed to add more design and testing techniques to their toolbox, along with their TDD testing. Chelsea also said that she learned a lot about testing for these kinds of requirements from their client’s QA team.

Some heuristics we’ve touched on:

  • Use TDD to develop and test functionality of deterministic software.
  • Use other strategies to design and test for software system qualities such as usability, performance, reliability, or resilience.
  • Match your testing strategies and tactics to your application’s development and execution context.

A Brief Introduction to Heuristics

I have been intrigued by software development heuristics ever since I read Billy Vaughn Koen’s Discussion of the Method: Conducting the Engineer’s Approach to Problem Solving. Koen defines a heuristic as, “anything that provides a plausible aid or direction in the solution of a problem but is in the final analysis unjustified, incapable of justification, and potentially fallible.” Heuristics are never guaranteed. When a heuristic fails, you back up and try another one.

I enjoy hunting for heuristics while designing and coding with others. Open-ended conversations where we swap stories and reflect on our heuristics are another great opportunity. Generally, I look for three kinds of heuristics:

  1. Action heuristics. Things we do to solve our immediate problem. There are many action heuristics. Design patterns are one well-known form of action heuristic. We know these heuristics by name because authors took the time to write them up as named software patterns. But there are many testing and development techniques both smaller and larger than patterns. For example, in Test-Driven Development (TDD), the practice of “write a test, then write code to pass the test” is a heuristic for incrementally designing and implementing tested code.
  2. Value heuristics. Values motivate our actions. Underlying TDD is the value: Testing should be an integral part of design and coding.

Our values determine what actions seem appropriate. Because I value understandable code, I take several actions to make my code more comprehensible: I give methods, functions, and variables meaningful names; keep code in methods short; and write code at the same level of detail in a method, factoring out lower-level details into helper methods.

Values depend on context. As the context shifts, so do our values. This doesn’t mean we are fickle; just pragmatic. Most of the time we aren’t conscious of making these shifts. When cutting and pasting code from Stack Overflow, I don’t value code understandability so much as the ability to quickly determine whether that code addresses my current problem. If it does, I rewrite that code to make it clearer and to fit with the style of my existing codebase. In production code, I do value understandability.

  3. Guiding heuristics. Heuristics that lead to related actions. For example, Chelsea shared one guiding heuristic: Don’t treat test code the same as production code; instead, make each test understandable in isolation. This leads her to write self-contained test methods. She doesn’t like a test where she has to read the code that it calls on before she can understand the test. She also isn’t a fan of applying the DRY (Don’t Repeat Yourself) heuristic to test code.
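Here is a small sketch of what a self-contained test might look like, using a Jest-style test runner and invented domain names: everything needed to understand the test sits inside the test itself, even if a few lines repeat across neighboring tests.

```javascript
// Self-contained: setup, action, and assertion all live inside the test,
// so it can be read (and modified) in isolation. Invoice is hypothetical.
test('an overdue invoice accrues a five percent late fee', () => {
  const invoice = new Invoice({ total: 100.0, dueDate: '2021-01-01' });
  invoice.applyLateFee({ asOf: '2021-02-01', feeRate: 0.05 });
  expect(invoice.total).toBe(105.0);
});
```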

Comparing competing heuristics

Chelsea mentioned that understandable tests can also serve as valuable design documentation and discovery tools. It’s easier to modify test code that is self-contained, rerun it, and explore how the software responds.

I asked Chelsea whether she would put aside her heuristic of keeping tests self-contained if there were compelling reasons. What if setting up conditions for tests took a long time (for example, doing a cut of a database in order to build an in-memory cache of test data)? What if complex code was repeated in similar tests but slightly altered? Did someone make cut-and-paste-modify-and-reuse errors, or were there valid reasons for those differences?

Factoring common initialization code out of tests into shared setup code provides a “standard” execution context for a suite of tests. It also makes it easier to vary that context and rerun the test suite. Factoring out code common to several tests and clearly labeling what it does eliminates having to second-guess the reasons for slight variations in test code.

Depending on your situation and personal preferences, you may choose one heuristic, “Keep code in tests so you can understand and easily manipulate it,” or the other, “Factor out expensive or error-prone code into common code shared by tests.” These heuristics compete with each other. Neither is better. They are simply alternative ways to structure your test code.
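To make the competition concrete, here is a sketch of the second heuristic using Jest-style setup hooks (the fixture loader and pricing functions are invented): the expensive, shared initialization is factored out and clearly labeled, and each test states only what varies.

```javascript
// Shared, clearly labeled setup: a "standard" execution context built once
// and reused by the suite. loadPricingFixture and priceFor are hypothetical.
let pricingData;

beforeAll(() => {
  // Imagine an expensive step, such as building an in-memory cache of test data.
  pricingData = loadPricingFixture('standard-catalog');
});

test('a standard item is charged at list price', () => {
  expect(priceFor('standard-item', pricingData)).toBe(20.0);
});

test('a discounted item is charged below list price', () => {
  expect(priceFor('discounted-item', pricingData)).toBeLessThan(20.0);
});
```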

The Value of Knowing your Values

If people don’t know your values (and how they differ from their values), they may not understand why you prefer to work the way you do. For example, while I value testing, I don’t practice test-first development.

If you understand TDD to mean strictly writing tests before writing any code, your TDD heuristic is: begin by writing a small test, run it to see it fail, then write just enough code to make that test pass. Don’t add any more code than necessary to make the test pass. Repeat until you’ve fully implemented your code.
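One turn of that cycle might look like this, sketched in JavaScript with a Jest-style runner (ShoppingCart is an invented example):

```javascript
// Step 1 (red): write a small test first. With no ShoppingCart implementation
// yet, this test fails when it is first run.
test('a new cart has a total of zero', () => {
  const cart = new ShoppingCart();
  expect(cart.total()).toBe(0);
});

// Step 2 (green): write just enough code to make that one test pass, no more.
class ShoppingCart {
  total() {
    return 0;
  }
}

// Step 3: repeat. The next test (say, adding an item) forces the next
// small increment of real behavior.
```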

At the end of a TDD cycle, you have a bunch of tests and fully functioning code that passes those tests. Working this way, you typically implement a single class at a time. You test and implement lower-level functionality, then repeat the process to develop the code that uses that functionality. Your software tends to grow from the “bottom” up.

I value testing, but typically design and implement several classes that work together at the same time. Once I prove to myself that my overall design hangs together (through some sort of simulation), I implement it. When finished, I check in code for several classes along with tests that demonstrate their behavior. My code is tested, but I don’t leave around lots of low-level tests.

For example, I may use a strategy pattern to calculate charges for different items on an invoice. I would initially implement each individual strategy class and check that it worked as I expected. But I’d remove most if not all tests for those individual strategies once I proved to myself that they worked. Their code is simple enough to read at a glance. Once I get low level classes working (especially if they don’t retain any state), I don’t need to keep tests around to ensure that they work. Once implemented, they rarely change. If I do need to revise them, at that point I might reconsider my testing heuristics (and add some tests that reflect these changes). The valuable tests I tend to preserve are those that determine which strategy to use, how to add new kinds of strategies, and different ways to apply discounts and special pricing.
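Here is a rough sketch of the kind of code and the kind of test I mean, with invented pricing rules: the individual strategies can be read at a glance, so the tests worth keeping are the ones that check the right strategy gets chosen.

```javascript
// Simple charge strategies -- each readable at a glance and rarely changed,
// so I wouldn't keep low-level tests for them. The rules are hypothetical.
const standardPricing  = (item) => item.listPrice * item.quantity;
const bulkPricing      = (item) => item.listPrice * item.quantity * 0.9;
const clearancePricing = (item) => item.listPrice * item.quantity * 0.5;

// Selecting the strategy is where an invoice could silently go wrong,
// so this is the logic worth keeping tests around for.
function pricingStrategyFor(item) {
  if (item.clearance) return clearancePricing;
  if (item.quantity >= 100) return bulkPricing;
  return standardPricing;
}

// The kind of test I'd preserve (Jest-style):
test('large orders are priced with the bulk strategy', () => {
  const item = { listPrice: 2.0, quantity: 150, clearance: false };
  expect(pricingStrategyFor(item)).toBe(bulkPricing);
});
```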

Let’s contrast my testing heuristics with those of test-first TDDers.

We both share this value heuristic: Value code that has tests over code (even if it works) that doesn’t have tests.

Test-first TDDers apply this heuristic: Write tests as you incrementally design code. Interleave testing and coding, repeatedly. Start with the simplest test and the simplest implementation. Only implement enough functionality so that your latest test passes. Build functionality and tests in small increments; each increment moving you closer to your final tested design.

They also have this guiding heuristic: You produce a cleaner design if you write tests first before writing any code.

I don’t share that heuristic.

My heuristic for developing designed, tested code is: Consider the design of one or more classes working together to achieve some functionality. Model your design using some lightweight technique, such as CRC cards (Class-Responsibility-Collaborators) or whiteboard sketches. Once you know what each class’s responsibilities are and how the classes interact, implement them. Write simple tests and debug as you implement, but remove those tests if they are low level (and other tested code exercises their functionality). Keep only a lean set of illustrative tests that demonstrate how the classes work together and ensure that your design will continue to function properly.

At the end of my design/development cycle, I may write additional tests, revise existing ones, or remove insignificant tests. I use this grooming and cleanup step, before committing my code, as one way to double check my work.

Chelsea summarized my TDD heuristic as: Put tests in at the right level of abstraction once you know what your design is about.

Chelsea cautions, however, that if you don’t know what the right level of abstraction is and you follow test-first TDD heuristics by rote, you end up with tests at too low a level. Also, if you don’t have heuristics for pruning them, you end up with too many.

I view most testing I do while I implement my design as temporary scaffolding. Since I’ve already sketched out design ideas before coding, tests are not my primary tool for design. I test to verify my design. If I need to adjust my design as I implement it (and I expect to), that’s OK. I keep tweaking it and my code, and continue testing.

I suspect the biggest difference in our two approaches is that test-first TDDers don’t view their tests as temporary scaffolding, and I don’t view the cycle of test-first TDD as the only (or best) way to understand what a design should be. We both value tested, well-designed code.

Bringing to light the different values that underlie competing heuristics can be illuminating. But how can we get others to appreciate and try out our heuristics? How can we approach new-to-us heuristics with an open mind? I’ll touch on these topics and more in my next post.

Life in the Mob

The latest report in the Agile Experiences Program has just been published. It is a story by Jason Kerney about what it is like to join a Mob Programming Team.

Jason’s report started with a conversation over lunch at Agile 2014. Jason was sitting with his manager, Woody Zuill, who recounted how mob programming was born and how it works in practice. Woody introduced Jason as the newest member of their mob.

Ever on the lookout for good stories (I’m Director of the Agile Experience Report Program), I asked Jason what it was like to join the mob. And thus began a conversation which, after a lot of hard work by Jason, turned into this latest report.

When he first joined the team, Jason was hyper-vigilant. Not wanting to let his new team down, he found it hard to take any breaks. For the first few weeks he would go home from work exhausted. By accident one day he discovered that if he stepped away from the team for a while, he could catch up in just a few minutes. He had stumbled on a sustainable rhythm for mob programming and went home from work energized.

Jason’s initial hyper-vigilance reminds me of my first two weeks on the job as a forest service lookout. I was constantly scanning for fires through my binoculars. I got eyestrain. After two weeks, I just couldn’t keep up my constant surveillance. So I backed off to looking for fires every 15 minutes. That was plenty of attention. No fire is going to explode and burn down the entire forest in 15 minutes.

Jason also shares a keen observation about the collective way his team dynamically works. He likens the way they each pay attention to different things to a howling wolf pack (where no two wolves sing at the same frequency). Wolves just join in and find an agreeable pitch. Mob programmers who are paying attention to their teammates find a way to contribute what’s “missing”. Jason finds mob programming powerful because:

“In coding, there are a lot of things to think about, architecture, design, the problem at hand, coding standards, testing, deployment, business impact and security to name a few. I have found that our mob treats each of these like a wolves’ howling frequency. We each take one or a few and pay attention to those. If someone else appears to be covering it, we choose something else. None of this is an active decision, it just happens.”

Don’t dismiss Mob Programming as a simple variant of an XP programming team. There’s more to it.

If you have an itch to write and share your agile experiences, please feel free to contact me at rebecca at wirfs hyphen brock dot com. I know that writing is hard work. I want to hear your intriguing stories and help you tease out your wisdom. I can help you find a voice in your writing. We learn from experience (especially when we reflect on it). And it’s a wondrous gift to share your experiences with others. Thanks for sharing, Jason.

Why Process Matters

I’ve been working on a talk for Smalltalks 2014 about Discovering Alexander’s Properties in Your Code and Life.

I don’t want it to be an esoteric review of Alexander’s properties.

That won’t satisfy my audience or me.

I want to impart information about how Alexander’s physical properties might translate to properties of our software code as well as illustrate poignant personal examples in the physical world.

But equally important, I want to impress upon my audience that process is vital to making lively things (software and physical things). In his The Process of Creating Life: Nature of Order, Book 2, Alexander states,

“Processes play a more fundamental role in determining the life or death of the building than does the ‘design’.”

Traditionally, building architects hand off their designs as a set of formal drawings for others to build. Does this remind you of waterfall software development? There isn’t anything inherently wrong with constructing formal architectural drawings…but they never end up reflecting accurately what was built. Due to errors in design, situational decisions based on new discoveries made as things are built, better construction techniques, changing requirements, and limitations in tools or materials, a building is never constructed exactly as an architect draws it up.

Builders know that. Good ones exercise their judgment as they make on-the-spot tactical redesign decisions. Architects who are deeply involved in the building process know that.

Alexander is rather unhappy with how buildings are typically created and suggests that any “living” process (whether it be for building design or software or any other complex process) incorporate the following ten characteristics.

He challenges us software makers to do better, too:

“The way forward in the next decades, towards programs with highly adapted human performance, will be through programs which are generated through unfolding, in some fashion comparable to what I have described for buildings.”

As software designers and implementers we know that nothing is ever built exactly as initially conceived. Not even close. Over the past decade or so we have made significant strides in our processes and tools that enable us to be more effective at adaptively and incrementally building software. My thoughts on some ways we have tackled these characteristics are interspersed in italics, below.

Characteristics of Living Processes

1. Step-by-step adaptive. Small increments with opportunity for feedback and correction.
Incremental delivery, retrospectives, stakeholder reviews
Repetitive incremental design cycles:
Design a little – implement – refactor, rework, refine – design…
Design/test cycles: Write specifications of behavior, write some code that correctly works according to the specification, test and adapt…
Tests and production code equally valued

2. Whatever the greater whole is, that is always the main focus of attention and the driving force.
Working deployable software, minimally-marketable features

3. The entire process is governed and guided by the formation of living centers (that help each other)
Code with defined boundaries, separate responsibilities, and planned for interconnections

4. Steps take place in a specific sequence to control the unfolding.
We have a rhythm to our work. Whether it is test-first or test-frequent development, conversations with customers to define behavioral “specifications”, or other specific actions. In order to control unfolding we need to understand what we need to build, build it, then refine as we go. And we have tools that let us manage and incrementally build and record our changes.

5. Parts created must become locally unique.
Build the next thing so it fits with and expands the wholeness of what we are building. Consider our options. Refactor and rework our design. Make functions/classes/code cohesive. Bust up things that are too big into smaller elements. Revise.

6. The formation of generic centers is guided by patterns.
We have in mind a high-level software architecture that guides our design and implementation.

7. Congruent with feeling and governed by feeling.
Instead of just making a test pass, see if what you just wrote feels right (or if it feels like an ugly hack). Reflect on how and what we are building. Don’t be merely satisfied with making your code work. How do you feel about what you’ve just built? How do those using your software react to it? How do those who have to maintain and live with your code feel about it?

8. For buildings, the formation of structure is guided by the emergence of an aperiodic grid, which brings coherent geometric order
Software is structured, too…we’ve got to be aware of how we are structuring our code.

9. Oriented by a form language that provides concrete methods of implementing adapted structure through simple combinatory rules
We use accepted “schemas” to create coherent software systems. We have software architecture styles, framework support, and even pattern languages emerging…

10. Oriented by the simplicity transformation, and is pruned steadily
We can consistently refactor and rework our code with the goal of simplifying in order to enable building more functionality. We rebuild to create sustainable software structures. Even if we come back to some old working code and see how to simplify it, we can rework it taking into consideration what we’ve learned in the meantime.

Yet, let’s not be complacent. Agile or Lean or Clean Code or Scrum practices don’t address every process characteristic Alexander mentions. I am not sure that all these characteristics are important for building lively software. Alexander is not a builder of software systems, although he spent a lot of time talking with some pioneers and leaders of the software patterns movement.

Some of Alexander’s process ideas sound expensive and time consuming. Do we always need to reflect on how we feel about what we code? Sometimes we need to build quickly, not painstakingly. We need to prove its worth, and then refine our software. Our main thought may be simply making it work, not how it makes us or others feel. So how do we add liveliness to this quickly fashioned software? What’s a good process for that? Michael Feathers wrote Working Effectively with Legacy Code, but there is a lot more to consider. Maybe that quickly fashioned software has tests, maybe it doesn’t; maybe some parts have a reasonable structure, and maybe other parts should be tossed.

We often build disposable and hopefully short-lived software. Problems crop up when that code gets rudely hacked to extend its capabilities and live past its expiration date.

There are most likely different processes for creating lively software, based on where you start, where you think you are headed, and how lively it needs to be (not everything needs to be fashioned with such care).

People are continually building new and better tools and libraries. There is a rich and growing ecosystem of innovative open source software. Process matters. I think we have a lot still to learn about building lively software. It is a heady time to be building complex software systems.

Making Strong, Lively Centers

Making things with lively, cohesive centers (whether software, buildings, landscapes, educational experiences, or artfully designed bento boxes) involves hard work, practice, skill, reflection, and the development of a discriminating eye.

One great example of hard work over a long period of time was this bonsai boat tree I saw in Kyoto. This tree is over 600 years old!

Can you imagine the effort and attention the bonsai gardeners spent over the centuries to create, grow, and maintain this beautiful shape with its many centers?

I wish I could sit with great software designers and architects, soak up their wisdom, and then effortlessly incorporate that wisdom into my own code. I would love to write lively code without breaking a sweat. But that hasn’t been my experience.

My first Smalltalk code wasn’t very good. I didn’t immediately get the shift from procedural thinking, where I had to worry about controlling every aspect of the call chain, to that flowing object-oriented style where learning how to delegate responsibility was key.

To understand how to make my Smalltalk code lively (because of stronger centers) took practice and experimentation, reflection, and more practice. And letting go of preconceived notions that no longer fit.

As I program in yet another programming language, I can’t avoid bringing along techniques I learned earlier. Some fit. Some do not. (I keep re-framing my notions of how to implement a good design.) And I keep adding useful programming techniques to my toolkit.

Techniques for constructing well-designed code are programming-language specific, even though the underlying good design principles seem universal.

It took a while for me to realize that to become a better Smalltalk programmer I had to let go of my incessant urge to understand and control every little detail (I had to do that as an 8086 assembly language programmer, my prior language). Trust in polymorphism. Delegate. Don’t try to do too much in any one method. Don’t pass in too many arguments. Let objects take responsibility for their actions.

Even as I learned to let go of details, I still made dumb mistakes.

Initially I didn’t understand the difference between elegant and overly clever code (I liked Smalltalk blocks—er, closures). I didn’t realize the overhead of lots of closures that held on to context. I thought it was clever that my font management code held blocks that could read fonts from the file system (embedding references to external files in them for goodness sakes).

Seasoned Smalltalkers don’t make these mistakes. See this wiki page for a short discussion of Smalltalk and Closures and this Stack Overflow posting.

Was I tone deaf when it came to using blocks? I don’t think so. I just wasn’t paying attention to the right details. And I wasn’t looking in the right places for inspiration or guidance.

Instead of performing my own experiments, ideally I should’ve been studying and emulating good examples, such as the Smalltalk collection hierarchy’s use of closures. There, code blocks are used elegantly to execute differential behavior. The Smalltalk collection hierarchy is one of the most beautiful sets of classes I’ve ever seen.

Fortunately, I had people around me who took the time to rewrite my code and explain to me why they did what they did. Consequently, I learned to write simpler, less clever, less resource intensive, more maintainable Smalltalk code.

Recently I have been programming in JavaScript. I was motivated to develop JavaScript code to front-end the client-side Java reference app we developed and use in our Enterprise Application Design course. For that initial programming exercise I took the stance that I’d use pretty much “stock” JavaScript libraries (hence my learning about jQuery) and keep things pretty simple.

Since that first whiff of JavaScript programming, I’ve been honing my JavaScript by learning more libraries and plugins and improving my programming skills. I am no expert. Not yet.

I’ve learned effective techniques somewhat randomly because I am not surrounded by JavaScript experts who teach me their craft. Combing through the Internet for advice and inspiration is haphazard and compounded by the fact that our notion of good programming practices evolves over time as languages and tools and libraries grow and evolve.

But now, after more time and experience, I can appreciate several coding practices that contribute to maintainable JavaScript. Such as:

Modules. At first, the coding technique to define a module just seemed confusing. It is. But modularity, which helps to define and separate code “centers,” is really important. Not only does it strengthen a “center” by making it more defined (and encapsulated), it also makes it easier to integrate with other code.
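For example, the revealing-module idiom (sketched here with an invented cart module) keeps a module’s internals private and exposes only a small named surface; it also naturally limits variable scope, which is the next point.

```javascript
// A revealing-module sketch: state and helpers stay private inside the
// function scope; only the returned object is visible to other code.
var cartModule = (function () {
  var items = [];                      // private, not a global

  function itemTotal(item) {           // private helper
    return item.price * item.quantity;
  }

  function addItem(item) {
    items.push(item);
  }

  function total() {
    return items.reduce(function (sum, item) {
      return sum + itemTotal(item);
    }, 0);
  }

  // The module's public surface: a small, well-defined "center".
  return { addItem: addItem, total: total };
})();

cartModule.addItem({ price: 2.0, quantity: 3 });
console.log(cartModule.total()); // 6
```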

Being aware of variable scope and limiting it.

Not constantly searching for and mucking with DOM objects on every event. Initially I was content if my jQuery searches were “optimized”. Now I think about how to avoid repeated DOM references by caching appropriate state in my own variables.

Not blindly nesting anonymous callbacks, but defining functions and then using them.
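A small jQuery-flavored sketch of those last two habits (the element IDs, the unit price, and the handler are invented): cache the DOM lookups once, and give the callback a name instead of nesting an anonymous function inline.

```javascript
// Look up the elements once and cache them, rather than re-querying the DOM
// inside every event handler. Element IDs are hypothetical.
var $quantityField = $('#quantity');
var $totalDisplay  = $('#total');

// A named handler is easier to read, reuse, and unbind than an inline
// anonymous callback.
function updateTotal() {
  var quantity = parseInt($quantityField.val(), 10) || 0;
  $totalDisplay.text(quantity * 2.5); // assumed fixed unit price, for illustration
}

$quantityField.on('change', updateTotal);
```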

These techniques contribute to better-defined untangled code centers. But I want to caution you: don’t blindly follow coding best practices without knowing about and buying into the rationale behind them. Arguably your code might be better if you do. But you won’t learn how to exercise judgment until you know more about why you are doing what you are doing. Understanding how to write code that has strong, lively centers takes time, feedback, and the right kind of experience.

When I first started programming in JavaScript I could not have appreciated these techniques. I needed to gain more experience before I could see their value. With time writing more code, looking at more good and bad code, discussions with others, and reflection, I have gotten better at JavaScript. I’m not sure what steps I could leave out to shorten this process. It certainly is easier to learn how to write lively code if you work with others who care deeply about the code they write and who willingly point out and explain the good bits to you when you are ready to absorb them. If you are fortunate to have wise souls around you, take advantage of their wisdom…then put in the time you need to become better.