Other people’s messes

There’s a funny thing: when I walk into the kitchen and there’s sunscreen on the counter that I left there before yesterday’s bike ride, a plate that I put to soak, and a book about Klimt that I had hoped to read with my coffee, this is fine.

It doesn’t feel messy. I almost feel a comforting sense of continuity with my past self.

But when other people leave crab rangoons and cups and bowls and mail and washcloths lying around, ugh! mess!

It’s somehow different when I have the story behind it.

Code is like this too. If I remember writing it, then I know this nutty interface name was a placeholder. This ratty console.log helped me debug a tricky crash. I’m in the process of refactoring to use the new functions, but the old ones are still around in some places.

When that’s anyone else’s code, it looks like trash.

This tells me: if the code is my own toy, and I’m likely to be back to it before I forget who I was when I wrote it, then all those intermediate states are fine. They might help me get back into this groove, even.

But if the code is shared, or if I won’t be back in it later this week, make small changes completely. One at a time, in series, so that each commit leaves the code clean. Put the sunscreen on and put it away. Wash the plate and put it in the dishwasher immediately. Take my coffee upstairs to drink with the book.

Sometimes I wish I lived alone, so I’d only have my own mess to deal with. I don’t want to code alone, though, except on toy projects. I do wish I lived with people who cleaned up after themselves. I resolve to make smaller messes in our shared code, one at a time.

Mostly we orient

Observe, Orient, Decide, Act. This is the OODA loop, first recognized in fighter pilots and then in the Toyota Production System. It represents every choice of action in humans and higher level systems: take in sensory data, form a model of the world, choose the next action, make a change in the world.

At least in fighter pilots, and in all our daily life, most of this is automatic. We can’t help observing while we are awake. We constantly decide and act; it is part of being alive. The leverage point here is Orient.

The model we form of the world guides our decisions, both conscious and unconscious. Once the pilot has a geometric plane of battle in mind, the decisions are obvious. Once you see the bottleneck in production, you can’t look away from it. When I have an idea what’s going on in my daughter’s mind, I can talk to her.

Our power to change our actions, our habits, and our impact on the world lies in Orient. When we direct our attention to finding new models of the world, whole new possibilities of action open to us.

Fighter pilots can see what is possible when they picture the battle in the best geometric plane. Production managers need to look at the flow of work. In software, I look at the flow of data through services and functions — different from when I used to see in objects or think about spots in memory.

The power of breaking work into smaller chunks is the chance to re-Orient in between them. TDD gives us lots of little stable points to stop and think. Pairing lets one person think about where we are in the problem space while the other is busy acting. Mob programming gives us the chance to negotiate an orientation among the whole group.

That co-orientation is crucial to collaboration. With that, we can predict each other’s decisions and understand each other’s actions. If we have a shared model of the world and where we are going, plus trust in the competence of our team in their respective specialties, that’s when we can really fly.

(This post is based on a conversation with Zack Kanter.)

Brains and eyes: hierarchies in vision

We see with our brains. Then we check with our eyes.

Our retina takes in light, varying by brightness and color. It transmits information along the optical nerve to the primary visual cortex. There, specialized cells activate on outlines and contours in various orientations (horizontal, vertical, oblique). This part of the brain separates objects from backgrounds.

Along the pathway from there to the inferior temporal cortex, face-contours go one way, object-contours go another. Here and in higher-level processing, meaning and categories are assigned to images. Then we perceive.

All of this is affected by memories of things we’ve seen before. Visible edges are supplemented by inferred ones. Depth is judged by remembered sizes, among other clues; binocular vision is only useful close-up. What we think we’re looking at determines where our eyes move in their saccades, and this determines what we get a clear view of. Vision depends on context and history.

like, some light comes into the eyeball and hits the retina, which passes up data about colors and positions to the primary visual cortex, which comes up with contours and edges and depth and passes that on up to higher levels

This highly inexpert summary comes from listening to The Age of Insight, by Eric Kandel, neuroscientist. (Audible does not provide a PDF of diagrams, grr.)

Andy Clark goes farther in Surfing Uncertainty. At every level, from retinal nerve cell on up, signals from the outside are compared to expectations. Only surprises are transmitted up the hierarchy. Our vision starts with guesses, which are broken down into what we expect to see at smaller and smaller scales, and at each scale these guesses are tested against the incoming light signals.
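The scheme Clark describes — compare expectation to signal, pass only the mismatch upward — can be sketched in a few lines. This is a toy illustration of the idea, not neuroscience; the tolerance value and the numbers are invented.

```python
# Toy sketch of predictive processing: a level holds an expectation,
# and only the mismatch ("surprise") is passed up the hierarchy.

def surprise(expected, signal, tolerance=0.1):
    """Return the part of the signal that differs from expectation."""
    diff = signal - expected
    return diff if abs(diff) > tolerance else 0.0

# A higher level predicts "sky brightness ~ 0.9" for a patch of retina.
expectation = 0.9

# Most incoming light matches the guess: nothing travels upward.
print(surprise(expectation, 0.92))  # 0.0

# A dark bird crosses the patch: only the difference (about -0.6)
# is transmitted up to the next level.
print(surprise(expectation, 0.3))
```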

expectations come from higher level brain function; they get broken up into what we expect to see in each area and then each cell. Each level compares these to what it’s getting from outside, and informs higher levels of differences.

This makes sense to me. When I hear stuff like “the retinal ganglion cells get the light signals and assemble them into colors and position, and then the primary visual cortex deduces edges and contours, and then the inferior temporal cortex recognizes objects and faces” I think: gah, that sounds like so much work.

Why would we do that work? I know very well that I see a sky and trees and billboards and road. Why would I ask my eyes to process the incoming data? If my retina cells don’t see blue (or gray or white) in the top part of the visual range, then I want to notice it. Otherwise, geez, take a breather. Read the billboards, they’re all different.

One day while carpooling to work, in the passenger seat, I played a game. I looked out the window and tried to see what was there. Not what my brain is trained to see, the buildings and billboards placed there by humans for humans to look at. I noticed some wild growth, some derelict corners and alleys, and many cell phone towers. Each time, I tried not to judge (categorize, evaluate) what I saw, but keep seeing.

It was exhausting! By the time I got to work, my brain was done. I didn’t get any useful code written that day. This is not what my eyes are doing most of the time.

In video transmission, we send deltas, not pixels. And we can use all kinds of protocols to describe common deltas, expected changes, to reduce bandwidth use. Our brains do that, too.
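That delta scheme is easy to sketch. Here is a toy version with frames as short lists of numbers — real codecs are vastly more elaborate, but the bandwidth intuition is the same:

```python
# Toy delta encoding, as in video transmission: send the full first
# frame, then only the pixels that changed since the previous frame.

def encode_deltas(frames):
    """Yield the first frame whole, then (index, new_value) changes only."""
    prev = frames[0]
    yield ("keyframe", list(prev))
    for frame in frames[1:]:
        changes = [(i, v) for i, (p, v) in enumerate(zip(prev, frame)) if p != v]
        yield ("delta", changes)
        prev = frame

frames = [
    [5, 5, 5, 5],   # keyframe: sent in full
    [5, 5, 7, 5],   # one pixel changed: send just that change
    [5, 5, 7, 5],   # nothing changed: an empty delta, near-zero bandwidth
]
for kind, payload in encode_deltas(frames):
    print(kind, payload)
```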

The hierarchy of vision communicates in both directions. Expectations down, surprises up. At every level, an interplay between meaning and incoming signals. Hypothesis, test. Result, new hypothesis, test. It’s a duck, OK yeah. It’s a rabbit, OK yeah.

Thinking about vision this way gives me new appreciation for how our past experience changes what we see. It also gives me new ways of thinking about hierarchies: the helpful ones pass information in both directions.

We see with our brains and our eyes and many nerve cells in between, working together in both directions. I wonder if we can work this well together in our organizations.

Rules are not easy

Sometimes in software design we get this idea, “We’ll make this a rule engine. Then the business can write the rules, and they’ll be able to change them without changing the code. That’ll make it more flexible.”

The rules are code; they change the behavior of the system. Rules interact in ways that are hard to anticipate. It’s harder to write rules than to write code.

It seems like we make business decisions in terms of rules, because we talk about them that way.

People make uncomplicated decisions by rule. We make complicated decisions by aesthetic (from expertise), and these are difficult or impossible to express in rules.

Real-life rules often contradict each other. A human with a feeling for the situation can prioritize among them.
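A toy rule engine makes the problem concrete. The two rules below are each reasonable on their own, yet they contradict on the same order, and the “declarative” outcome silently depends on rule order. The rules, fields, and data are invented for illustration:

```python
# Two reasonable business rules that contradict on the same order.

rules = [
    ("expedite big customers", lambda o: "expedite" if o["vip"] else None),
    ("hold flagged orders",    lambda o: "hold" if o["flagged"] else None),
]

def decide(order):
    """First matching rule wins -- so rule ORDER silently changes behavior."""
    for name, rule in rules:
        action = rule(order)
        if action:
            return name, action
    return None, "default"

# A VIP order that is also flagged for fraud review: both rules match.
order = {"vip": True, "flagged": True}
print(decide(order))            # ('expedite big customers', 'expedite')

rules.reverse()                 # reorder the "declarative" rules...
print(decide(order))            # ('hold flagged orders', 'hold')
```

A human with a feeling for the situation resolves this without noticing; a rule engine needs the priority spelled out for every such collision.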

For instance, “How do you position a picture in a column of text?” Back in the day, people laid out newspaper pages by hand, positioning pictures using some rules and also their eyes. How does a browser do it? Careful people have created nine precise rules for positioning float elements. Excerpt:

4. A floating box’s outer top may not be higher than the top of its containing block. When the float occurs between two collapsing margins, the float is positioned as if it had an otherwise empty anonymous block parent taking part in the flow. The position of such a parent is defined by the rules in the section on margin collapsing.

you don’t need to actually read this

If you think “Rules are declarative, they’re easier to reason about than imperative code” then go format a complicated web site with CSS. Make changes in the hundreds of lines of CSS, and see if you can predict the results. Now see if you can predict the results of changing someone else’s CSS.

Writing rules is hard. Designing a syntax and semantics that let people write rules to cover all the cases in the world, even harder. Do you really want to embark on that? Is it really more effective than changing some code when the business wants change?

As humans, we make aesthetic judgements for complicated decisions. This is one of our superpowers. Putting those judgements into rules is never easy; don’t pretend it is. And no, you don’t need to implement a rule engine.

Thanks to @nokusu for teaching me about floats and margins and other layout fun.

Human, or Person

Sometimes I think about, what if aliens kept humans as pets? Raised them from babies in isolation, without human language, without society. What would a human be like, outside of other humans?

Not a person.

As Abeba Birhane points out beautifully in her talk at NCrafts and article in Aeon, we aren’t people alone. We form ourselves in our interactions. “A person is a person through other persons.” A baby (while homo sapiens) is less of a person than an adult, because the adult has connections, the adult participates in many systems within society.

Ever hear that saying “You are the sum of the five people you hang out with most”? Yeah. Good approximation.

We get meaning from interaction with others. We form our self through this. Most of our relationships are not transactional, we’re not in it for some end goal — the interactions have intrinsic value. They build and reflect who we are. That’s enough!

If you have a friend you wish you could do more for, know that being a friend is itself a thing. Listening, being present, sharing a reality with a person — this is already a thing. Often we help each other out in tangible ways, and that feels great. But a simple “I hear you, I see you” — we can’t live without that, we can’t be a person without that.

Without each other, might as well sit by the food bowl and yowl.

Rules in context: D&D edition

In Dungeons & Dragons (the tabletop game), there are universal laws. These are published in the Player’s Guide. They set parameters for the characters, like how powerful they should be relative to monsters. The Player’s Guide outlines weapons, combat procedures, and success rates. It describes spells, what they do and how long they last. What is a reasonable amount of gold to pay for a weapon, and how much learning (XP) comes from a fight.

The Player’s Guide does not tell you: everything else. What happens when a player attempts to save a drowning baby using a waffle?

The Player’s Guide represents the universal laws of D&D. The rules exist because they’ve been shown (over time, this is the 5th edition) to enable games that are fun.

Yet the prime directive of D&D is: what the DM says, goes. (The DM is the dungeon master, the person telling the story in collaboration with the players.) The DM can override the rules when necessary. More often, the DM makes up rules to suit the situation. The rulebooks do not cover everything the players might choose to do, and that’s both essential and by design.

In D&D, the DM sets the stage with a situation. Then the players respond, describing how the characters they control act in this situation. The DM determines what happens as a result of their actions.

In our game today, Tyler was DM. Tyler DMs by the “Rule of Cool”: “If it’s cool, let them do it. If it’s not cool, don’t make them do it.” One character, TDK Turtle, ran out of the inn with a waffle in hand. On his next turn, he tried to use the waffle to save a drowning baby.

Could that ever work? The DM decides. How unlikely is this? More unlikely than Turtle rolled. And yet Tyler came up with a consequence: Turtle threw the waffle in the river, our dog jumped in to eat the waffle, the baby grabbed onto the dog, and thus the dog saved the baby.
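For readers outside D&D, the mechanic behind “more unlikely than Turtle rolled” is an ability check: roll a d20, add a modifier, and meet or beat a difficulty class (DC) the DM picks on the spot. The modifier below is invented; 30 is the 5th-edition guideline for a nearly impossible task.

```python
# Sketch of an improvised D&D ability check for "save a drowning baby
# with a waffle." The DM invents the DC; the dice do the rest.

import random

def ability_check(roll, modifier, dc):
    """A roll of d20 + modifier meets or beats the DC to succeed."""
    return roll + modifier >= dc

NEARLY_IMPOSSIBLE = 30          # 5e guideline for an absurd attempt

roll = random.randint(1, 20)    # Turtle's d20
print(ability_check(roll, modifier=2, dc=NEARLY_IMPOSSIBLE))  # False: 20 + 2 < 30
```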

Every D&D campaign (series of games with the same DM and roughly the same players) has its own contextual rules. These build up over time. Our party has a dog because yesterday we rescued this pet from a Kuo-toa tribe that was trying to worship it as a Doge. (The Kuo-toa worship gods of random construction. Where by random I mean, DM’s choice. This DM chose Doge, because it advanced the plot.)

What works for a group of players, we stick with. What doesn’t, we leave behind. If it’s cool, do it. If not, don’t. Results drive future practices.

Our teams are like this. Humans work within universal laws of needing to eat and sleep and commute. Organizations impose constraints. Within these bounds, we come up with what works for us, what makes us laugh, and what helps us advance the plot of the system we are building.

Not every baby-saving-waffle-toss is the same. Not every party has this dog. Let teams build their own process, and don’t expect it to transfer. Do look for the wider rules that facilitate a productive game, and try those more broadly.

Zooming in and out, for software and love

The most mentally straining part of programming (for me) is focusing down on the detail of a line of code while maintaining perspective on why we are doing this at all. When a particular implementation gets hard, should I keep going? back up a step and redesign? or back way up and solve the problem in a different way?

Understanding the full “why” of what I’m doing helps me make decisions from naming to error handling to library and tool integrations. But it’s hard. It takes time to shift my brain from the detail level to the business level and back. (This is one way pairing helps.)

That zooming in and out is tough, and it’s essential. This morning I learned that it is also essential in love. Maria Popova quotes poets and philosophers on how love requires understanding, then:

We might feel that such an understanding calls for crouching closer and closer to its subject, be it self or other, in order to examine it with narrow focus and shallow depth of field, but this is a misleading intuition — the understanding of love is an expansive understanding, requiring us to zoom out of our habitual solipsism so as to regard ourselves and the object of our love from a great distance against the backdrop of universal life.

Maria Popova, Brain Pickings

Abeba Birhane, cognitive scientist, points out that Western culture is great at picking things apart, breaking problems up into their smallest possible components. She quotes Prigogine and Stengers: “We are so good at it. So good, we often forget to put the pieces back together again.”

She also sees this problem in software. “We forgot why we are doing it. What does this little component have to do with the big picture?” (video)

Software, love, everywhere. Juarrero brings this together when she promotes hermeneutics as the way to understand complex systems. Hermeneutics means interpretation, finding meaning, especially of language. (Canonical example: Jews studying the Torah, every word in excruciating detail, in the context of the person who wrote it, in the context of their place and time and relations.) Hermeneutics emphasizes zooming in and out from the specific words to the work as a whole, and then back. We can’t understand the details outside the broader purpose, and the purpose is revealed by all the details.

This is the approach that can get us good software. (Not just clean code, actual good software.) I recommend this 4-minute definition of hermeneutics; it’s super dense and taught me some things this morning. Who knows, it might help your love life too.

Certainty, Uncertainty, or the worst of both

Descartes looked for certainty because he wanted good grounds for knowledge, a place of fixity to build on, to make predictions.

Juarrero counters that uncertainty allows for novelty and individuation.

In software, we like to aim for certainty. Correctness. Except in machine learning or AI; we don’t ask or expect our algorithms to be “correct,” just useful.

The predictions made by algorithms reproduce the interpretations of the past. When we use these to make decisions, we are reinforcing those interpretations. Black people are more likely to be arrested. Women are less likely to be hired.

Machine learning based on the past, choosing the future — this reinforces bias. It suppresses novelty and individuation. It is the worst of both worlds!
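A toy model shows the loop. The “model” below just predicts from historical rates; once its predictions drive decisions, a past disparity hardens into a rule. The groups, numbers, and threshold are all invented for illustration:

```python
# Toy illustration of a bias feedback loop: a "model" that predicts by
# historical rate, whose predictions then make the next decisions.

def hire_rate(history, group):
    """Fraction of past candidates from this group who were hired."""
    records = [h for h in history if h["group"] == group]
    return sum(h["hired"] for h in records) / len(records)

# Past decisions: group A was hired 3 times in 4, group B once in 4.
history = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 1}, {"group": "A", "hired": 0},
    {"group": "B", "hired": 1}, {"group": "B", "hired": 0},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
]

# Predict-then-decide: a candidate is hired iff their group's past rate
# clears a threshold. The disparity in the data becomes a hard rule.
def decide(group, threshold=0.5):
    return hire_rate(history, group) >= threshold

print(decide("A"))  # True  -- the past becomes the future
print(decide("B"))  # False -- novelty and individuation suppressed
```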

This doesn’t mean we should eschew this technology. It means we should add to it. To combine the fluidity of the human world with the discreteness of machines, as Kevlin Henney puts it. We need humans working in symmathesy with the software, researching the factors that influence its decisions and consciously altering them. We can tweak the algorithms toward the future we want, beyond the past they have observed.

Machine learning models come from empirical data. Logical deduction comes from theory. As Gregory Bateson insisted: progress happens in the interaction between the two. It takes a person to tack back and forth.

We can benefit from the reasoning ability we wanted from certainty, and still support novelty and individuation. It takes a symmathesy.

This post is based on Abeba Birhane’s talk at NCrafts this year. Video

The next architecture book you must read

Today, another tweet about “how can I write the cleanest, best architected code?” gets piles of book references in response.

Yes, we want to be good at writing code. We want to write the best code. The best code for what? “Writing code” is an abstraction, like a transitive verb without an object. I can’t just “write code,” I must “write code to….”

The work of software development is not typing; it is making decisions. To make those decisions, we have to understand the details of code and technology, yes. We also have to understand the context and purpose, what we are writing the code to do.

My advice for “What should I read in order to write better code?” is usually: a book or magazine or internal memos about the business. Better still is having conversations about the business with the experts inside your company, and to do that well, you need the vocabulary.

We need both the specific technical understanding and the business understanding. It’s so much easier to push for technical understanding, because the business understanding is specific to each context. I can’t make a wide-audience tweet recommending a book; you have to find that closer-in.

Supplement Twitter with kitchen conversations or internal Slack channels that give you a broader perspective on the purpose of your work in the specific context you work in.

Inertia in the interface

What makes software hard to change?

As a developer, it’s easy to focus on the internal properties of the software system. The code needs refactoring, the framework is old, we need more tests, or else fewer tests.

If your software is in production, these are not the biggest obstacle to change.

The important changes are the ones visible to the outside world, the ones that change the interface. And changing the interface means more than your software has to change: the systems that use your software need to change too.

The more useful your software is, the more other systems depend on it, the scarier change is. You’re risking more than your piece of the world. You need progressive delivery, careful data migrations. Backwards compatibility: twice as many tests, and special cases in the code. You need to design the whole change.
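Those special cases look something like this: one function accepting both the old and new request shapes while callers migrate. The field names and versions below are invented for illustration.

```python
# Backwards compatibility in miniature: support v1 ("total_cents",
# an integer in cents) and v2 ("amount", a number in dollars) at once.

def get_amount(request):
    """Read the order amount in dollars from either request shape."""
    if "amount" in request:                 # new interface
        return request["amount"]
    elif "total_cents" in request:          # old interface, still in use
        return request["total_cents"] / 100
    raise ValueError("unrecognized request shape")

print(get_amount({"amount": 12.5}))         # 12.5
print(get_amount({"total_cents": 1250}))    # 12.5 -- old callers still work
```

Twice the shapes means twice the tests, and the clutter stays until the last caller migrates. That clutter is the cost of being depended on.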

And then, for your work to be useful, people have to use it. You must get the word out through social channels: talking to people, advocacy, (dare I say it) marketing.

This is all fine. It is right. No complaints.

Because our work is not to change software, it is to change a system. To add and broaden capabilities in systems larger than ours.

If our code is full of if/else statements for backwards compatibility, if our database integration is cluttered by a migration, if we spend more time crafting the deploy strategy than we did writing the code, cool.

The more people use the code, the harder it is to change, and the more valuable.

Caveat: yes, there are performance and scale improvements that are useful. But functionality changes come in a wider variety and happen more frequently, so they’re what I’m talking about here.

For more on this topic, see: From Puzzles to Products