Domain-specific laws

“there appear new laws and even new kinds of laws, which apply in the domain in question.”

David Bohm, quoted by Alicia Juarrero

He’s talking about the qualitative transformation that happens in a system when certain quantitative transition points are passed.

Qualitative transformation

I notice this when something that used to be a pain gets easier, sufficiently easier that I stop thinking about it and just use it. Like git log. There is such a thing as svn log but it’s so slow that I used it once ever in my years of svn. The crucial value in git log is that it’s so fast I can use it over and over again, each time tweaking the output.

  • git log
  • git log --oneline
  • git log --oneline | grep test
  • etc.

Now git log has way more functionality, because I can combine it with other shell commands; it’s fast enough for that. This changes the system in more ways than “I use the commit log”: because I use the log, I make more commits with better messages. Now my system history is more informative than it used to be, all because the log command is faster.
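
For instance, a throwaway one-liner like this (a sketch; the time window and the question are arbitrary) only gets written when every run comes back in milliseconds:

    # count commits per author in the last 90 days, then eyeball the result
    git log --since="90 days ago" --format='%an' | sort | uniq -c | sort -rn

Each tweak costs seconds, so exploring the history is cheap.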

The REPL has that effect in many languages. We try stuff all the time instead of thinking about it or looking it up, and as a result we learn faster, which changes the system.
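
A trivial sketch of what that looks like in a Python session, where asking the REPL beats looking it up:

    >>> "refactor: tidy log output".split(":", 1)
    ['refactor', ' tidy log output']
    >>> "refactor: tidy log output".startswith("fix")
    False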

Non-universal laws

I love the part about “laws, which apply in the domain in question.” There are laws of causality which are not universal, which apply only in specific contexts. The entire system history (including all its qualitative transformations) contributes to these contexts, so it’s very hard to generalize these laws even with conditions around them.

But can we study them? Can we observe the context-specific laws that apply on our own team, in our own symmathesy?

Can we each become scientists in the particular world we work in?

A definition of play, and how to live

In games, we choose an objective that has no intrinsic value. Get points, run out of cards, reach the finish line. We take aim, and we restrict our actions with rules, because this leads us to actions that we enjoy. Thinking, interacting with other players, running all-out. We play the game because it’s fun. We try to win because that makes it fun. (Some people get happy if they win. But if you don’t enjoy the play, you won’t keep coming back.)

This is play, because the ends don’t have particular value, but the means of getting there give us satisfaction.

We can take this strategy in life too:

Choose the ends that lead you to the means that get you what you need.

I call it a quest, an unreachable star, this aim that we choose not because we expect to get there (that would be a milestone) but because it leads us in useful directions.

The book Obliquity explains it well: there are some things we can’t get by aiming for them, such as profit or happiness. So you choose an end (“build the best airplane”) that leads you to means (engineering, research, investment, production quality) that get you what you need in order to keep going (profit). Choose an end (“build up my community”) that leads you to means (forming relationships, organizing, helping people) that get you what you need (joy).

When the end has some intrinsic value of its own, like the airplane, or the community, or operating useful software — then we call it work.

People say “Do what you love.” This is how to do that: find an objective that matters to others, which also leads you to means that bring satisfaction. Some people find hard physical work satisfying, others mental exertion, others human interaction. It doesn’t need to be your favorite activity; fun does not equal joy.

When both the ends and the means are fulfilling, then work and play align.

Each milestone (produce an engine, get someone to like you, code up a feature) has many routes to reach it. If you aim for the quickest route, you might end up messing up your quest (the quickest code to write is harder to operate) or worse, missing out on what you need (long-term profit is down, the community is poisoned, the work is unsatisfying). How do we restrict our means to the ones that take us toward our quest, not just our milestone, and also give us what we need to keep going?

In games, we use rules. In life, we use values.

Change at different timescales

On recommendation from @mtnygard and others, I have acquired a copy of How Buildings Learn (Stewart Brand, 1994). Highlights so far:

Buildings are designed not to adapt, but they adapt anyway, because the usages are changing constantly. “The idea is crystalline, the fact fluid.” They’re designed not to adapt because “‘Form ever follows function’ … misled a century of architects into believing that they could really anticipate function.”

We think of buildings as static, because they change at a timescale slower than our notice. They change over generations. Humans operate at some natural timescale: things slower than us look unchanging, and things faster look transient, insignificant. We are uncomfortable with change in stuff we think of as permanent, like buildings or culture or language. It isn’t permanent, just slower than us.

A jar of honey on its side will leak. The lid doesn’t trap the honey, just slows it down. Imperceptible movement is invisible, until the whole cupboard is sticky.

Software’s pace of change can be unnaturally fast, the opposite of buildings. That makes people uncomfortable. Updating buildings, “we deal with decisions taken long ago for remote reasons.” In software, “long ago” might be last year.

As usages change, so must our environs, in brick and in computers.

What changes faster than usages? Fashion. “Buildings are treated by fashion as big, difficult clothing, always lagging embarrassingly behind the mode of the day. This issue has nothing to do with function.” The latest hot tech may not improve on the value of your legacy software. “The meaningless change of fashion often obstructs necessary change.”

Reasons, heuristics, and revealed intentions

In a complex system, we can’t predict consequences. As Cynefin expresses it: in complexity, we determine causality in retrospect, not ahead of time.

So how do we make decisions? We have to guess, try things, see what happens, react to that. We use reasons and heuristics.

There are some consequences we do predict, and these are reasons. I wake up my children because I predict that getting up before noon will help them sleep at night (not really, but some other parent might). A store puts groceries in bags because it helps me carry them, so I will buy more. I add a feature to my software because I expect users to like it.

Will these predictions come true? Can’t know until we try it.

Are these valid reasons? Yes, because there’s an explanation and a theory behind them. They’re good reasons to try the thing.

Yet, we can’t predict all the other consequences. Will getting up early make my children hate me? Will the plastic bags clog the ocean and destroy human life as we know it? Will the feature complicate the UI until the software becomes unappealing? or will the new feature interact with future features in a way that makes everything harder? (yes. the answer to this last one is yes.)

That’s where heuristics come in. Children should get up at a reasonable hour. For anything we make, we should complete the system with a way to reuse it and eventually destroy it. We should not have more than five buttons on a page. We should keep code decoupled. We should put features behind feature flags.

Heuristics include morals, ethics, principles, values. All of these account for consequences that are hard to predict, or hard to measure the importance of. They guide our actions toward a larger good, a longer-term good, bigger than our immediate situation. Bateson talked about religions as guidance toward caring for the higher-level system. Values preserve safety, quality, and other system properties that degrade through purposive work.

We put the reasons and heuristics together and come up with a decision, using our magic dealing-with-ambiguity human power. Was it good?

Sure, I’m not gonna judge your decision, as long as you have a story for it. What matters is what happens next.

We take action, and then we look at the consequences. Did our predictions come true? More important, what happened that we did not expect? Now is the time to construct that causality in retrospect.

And then take more action! Tweak, revert, compensate, whatever it takes until we get results we do want.

Action is a process. Our intentions are revealed not by one isolated thing we do or tweet, but by how we respond to the results.

Designing Change vs Change Management

Our job as developers is to change software. And that means that when we decide what to do, we’re not designing new code, we’re designing change.

Our software (if it is useful) does not work in isolation. It does not poof transition to a new state and take the rest of the world with it.

If our software is used by people, they need training (often in the form of careful UI design). They need support. Hint: your support team is crucial, because they talk to people. They can help with change.

If our software is a service called by other software, then that software needs to change too, if it’s going to use anything new that we implemented. Hint: that software is changed by people. You need to talk to the people.

If our software is a library imported by other software, then changing it does nothing at all by itself. People need to upgrade it.

The biggest barriers to change are outside your system.

Designing change means thinking about observability (how will I know it worked? how will I know it didn’t hurt anything else?). It means progressive delivery. It often means backwards compatibility, gradual data migrations, and feature flags.
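
Here is a minimal sketch of that in code, assuming Python; flag_enabled and the Counter standing in for metrics are hypothetical, not any particular feature-flag library’s API. The point is the shape: the new path ships dark, rolls out to a fraction of traffic, and emits signals we can watch.

    import random
    from collections import Counter

    metrics = Counter()  # stand-in for a real metrics client

    def flag_enabled(name: str, fraction: float) -> bool:
        """Hypothetical flag check: enable the new path for a fraction of traffic."""
        return random.random() < fraction

    def checkout(cart: list[float]) -> float:
        if flag_enabled("new-checkout", fraction=0.05):  # progressive delivery: start small
            metrics["checkout.new"] += 1       # observability: how often? did it hurt anything?
            return round(sum(cart) * 1.08, 2)  # new behavior (say, tax included)
        metrics["checkout.old"] += 1           # old path stays intact: backwards compatible, cheap rollback
        return round(sum(cart), 2)

    for _ in range(100):
        checkout([9.99, 4.50])
    print(metrics)  # watch the split before widening the rollout

Both paths coexist until the consequences of the new one are visible and acceptable; only then does the old one go away.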

Our job is not to change code, it is to change systems. Systems that we are part of, and that our code is part of (symmathesy).

If we look at our work this way, then “Change Management” sounds ridiculous. Wait, there’s a committee to tell me when I’m allowed to do my job? Like, they might as well call it “Work Management.”

It is my team’s job to understand enough of the system context to guess at the implications of a change and check for unintended consequences. We don’t all have that, yet. We can add visibility to the software and infrastructure, so that we can react to unintended consequences, and lead the other parts of the system forward toward the change we want.

Never and Always mean “right now”

I have this saying that I learned from dating in high school and college:

In love and anger, ‘never’ and ‘always’ mean ‘right now.’

When in love, there is a feeling of “I will always love you” and sometimes “I could never be with anyone else.”

Cuddled up to a lover, there is a feeling of “I never want to move.”

These remind me of the flow of action in probability spaces shaped by intentions (from Dynamics in Action). When I have an intention (to stay right here) and no intention of changing it, then that intention stretches out into forever.

It’s a great feeling, because the moment feels like eternity.

It doesn’t say anything about eternity or the rest of my life. It says something about the purity of my intentions right now.

Action is a process. Reasons matter.

Knowledge work has to be done for the right reason to be done well.

If I choose to implement a feature because I’ll get some reward for it, like a gold star on my next review, then I will implement it far enough to check the box. I won’t get it smooth, or optimal. I will optimize the wrong parts or not at all.

If I implement that feature because I believe it’s the next thing I can do to improve our customer’s experience, then I’ll optimize for that experience. I’ll make all the tiny decisions with that in mind. I’ll check whether I’ve slowed the site down, or interfered with other features, or cluttered the interface.

Creation is a process, not a transaction.

We are guided throughout that process by an intention. If that intention is to check a box, we’ll check the box, and leave the rest of the system to fall as it will.

In Dynamics in Action, Alicia Juarrero models human action as a process: we start with an intention, which shapes the probability space of what we do next. We do something, and then we check the results against our intention, and then we respond to difference.

Action is a process, not an atomic motion.

Development is action, and it is knowledge work not hand work. To do it well, to shape the system we work in and work on, we line our intentions up with the goals of the system. Not with some external reward or punishment. Intention shapes action, over time, in response to feedback. Reasons matter.

Respond to input? No, we create it

Naturally intelligent systems do not passively await sensory stimulation.

They are constantly active, trying to predict (and actively elicit) the streams of sensory information before they arrive…. Systems like that are already (pretty much constantly) poised to act.

We act for the evolving streams of sensory information that keep us viable and serve our ends… perception and action in a circular causal flow.

Andy Clark, Surfing Uncertainty

This book is about how human brains perceive based on predictions and prediction errors. I used to think we take in light from our eyes, assemble all that into objects that we perceive, and then respond to those perceptions. But it’s different: we are always guessing at what we expect to perceive, breaking that down into objects and then into expected light levels, and then processing only the differences from that expectation. We turn that into possibilities for our next action.

We don’t sit around waiting to notice something and then respond! We’re always predicting, and then thinking what would we like to see or explore next, and then acting toward that imagined reality.

Teams are intelligent systems like this too. Development teams (symmathesies) operating software in production have expectations of what it should be outputting. We are constantly developing better visibility, and stronger expectations. Then we want that output (data, usage volume) to change, so we take action on the software.

Teams and developers, as intelligent systems in conjunction with software, need to choose our own actions. Acting and perceiving are interlinked. Some of that perception is from the higher levels of organization, like: what does our company want to accomplish? What are people asking us to change? But in the end we have to act in ways we choose, and look for differences in what we perceive, to learn and grow.

Notice that “act as I choose” is very different from “do what I want” or worse, “do what I feel like doing” (which rarely corresponds to what will make me happy). I choose what to do based on input from people and systems around me, according to what might be useful for the team and organization and world.

If my boss wants something done in the code, they’d better convince me that it’s worth doing. Because only if I understand the real “why” can I predict what success looks like, and then I can make the million big and tiny decisions that are the work of software development. Does memory use matter? What is permanent API and what is internal? Which off-happy-path cases are crucial, and how can I make the rest fall back safely? Where should this new function or service live?

If I choose to do the work only because they said to, only so that I can check off a box, I am not gonna make these umpteen decisions in ways that serve future-us.

Our job is making choices. We need the background and understanding to choose our high-level work, so that we can make skillful choices at the low levels.

Intelligent systems don’t wait around to be told what to do. We are constantly looking for the next input that we like better, and creating that input. Act in order to perceive in order to act. This is living.