Inner coherence vs usefulness

Around 1900, when modernist art was emerging, art historians talked about the significance of art in context: the painting is not complete without the beholder.

Before that, the beholder wasn’t so important. People looked for art to express some universal truth through beauty.

Before cultures and artists considered the role of the beholder, they made art that didn’t need you. The art has inner coherence.

A lot of software development aims for inner coherence. Code that is elegant, that is well-designed and admirable on its own.

I used to like that, too. But now I want to think about code only in context. Software is incomplete without use.

If my code is full of feature flags and deprecated fields for backwards compatibility, that’s a sign that it is used. The history of the software is right there to see. I don’t want to hide that history; I want to make it very clear so that I can work within it.
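As a minimal sketch of what that visible history might look like, here is a hypothetical Python fragment (the post has no code; the names `UserProfile`, `render_profile`, and the `FLAGS` dictionary are invented for illustration). The deprecated field and the flag both record where the software has been:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserProfile:
    first_name: str
    last_name: str
    # Deprecated: kept so callers that haven't migrated keep working.
    full_name: Optional[str] = None

    def __post_init__(self):
        # Old callers may still read full_name; derive it for them.
        if self.full_name is None:
            self.full_name = f"{self.first_name} {self.last_name}"

# The rollout history is right there in the code, not hidden.
FLAGS = {"new_profile_page": False}

def render_profile(user: UserProfile) -> str:
    if FLAGS["new_profile_page"]:
        return f"{user.last_name}, {user.first_name}"
    # The old path stays served until the flag flips everywhere.
    return user.full_name
```

The clutter is not an accident to be cleaned up; it is the record of real use that we work within.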

My job isn’t to change code, it’s to change systems, so that we can adapt and grow increasingly useful instead of obsolete.

This old painting may be beautiful, but it doesn’t affect me the way Gustav Klimt’s work does. (some drawings, NSFW) He was one of the early modernist artists to speak of contextual, rather than universal, truths.

Within our teams, contextual truths have the most power. In my software, it’s a contextual coherence of its larger system that I care about.

Tiny dramas, tiny deploys

It is better to practice risky things often and in small chunks with a limited blast radius, rather than to avoid risky things.

Charity Majors, “Test in production? Yes.”

Charity is writing about deploys. Not-deploying may be safer for tonight, but in the medium term it leads to larger deploys and bigger, trickier failures.

In the long term, slow change means losing relevance and going out of business.

In relationships, the same applies. If I have some feeling or fact that my partner might not like, I can say it or not. It never feels like the right time to say it. There is no “right time,” there is only now. There is positive reinforcement for holding back, because then our evening continues pleasantly. No drama.

This leads to an accumulation of feelings and facts they don’t know about. Then when it does become urgent to talk about those, they react with feelings of betrayal: Why didn’t you tell me about this sooner?

In the long term, lack of sharing means growing apart and breaking up.

My new strategy in relationships is: tiny dramas, all the time. The more tiny dramas we have, the fewer big dramas. Also we get practice at handling drama in a way that is safe, because it’s minor. I take any mental question of “should I say this?” as a clue, an opportunity! Yes, say it. Unless it’s a really bad time, it’s the best time.

And the complementary strategy: whenever my partner tells me something scary, like something I did that they don’t like or some feeling they had that might upset me, my first response is “Thank you.” Usually it is not a drama anyway, it’s fine. When I do have feelings about it, we can talk about them. Reassurance helps a lot, especially when I recognize and appreciate the risk they took by telling me in this moment.

If a small deploy causes failure, please respond with “Thank you for not making this part of a bigger deploy.”

We have built a glass castle, where we ought to have a playground.

Charity again, on our lack of safe tooling and therefore fear of production

Reasons, heuristics, and revealed intentions

In a complex system, we can’t predict consequences. As Cynefin expresses it: in complexity, we determine causality in retrospect, not ahead of time.

So how do we make decisions? We have to guess, try things, see what happens, react to that. We use reasons and heuristics.

There are some consequences we do predict, and these are reasons. I wake up my children because I predict that getting up before noon will help them sleep at night (not really, but some other parent might). A store puts groceries in bags because it helps me carry them, so I will buy more. I add a feature to my software because I expect users to like it.

Will these predictions come true? We can’t know until we try.

Are these valid reasons? Yes, because there’s an explanation and a theory behind them. They’re good reasons to try the thing.

Yet, we can’t predict all the other consequences. Will getting up early make my children hate me? Will the plastic bags clog the ocean and destroy human life as we know it? Will the feature complicate the UI until the software becomes unappealing? or will the new feature interact with future features in a way that makes everything harder? (yes. the answer to this last one is yes.)

That’s where heuristics come in. Children should get up at a reasonable hour. Anything we make, we should complete the system with a way to reuse and eventually destroy it. We should not have more than five buttons on a page. We should keep code decoupled. We should put features behind feature flags.

Heuristics include morals, ethics, principles, values. All of these account for consequences that are hard to predict, or hard to measure the importance of. They guide our actions toward a larger good, a longer-term good, bigger than our immediate situation. Bateson talked about religions as guidance toward caring for the higher-level system. Values preserve safety, quality, and other system properties that degrade through purposive work.

We put the reasons and heuristics together and come up with a decision, using our magic dealing-with-ambiguity human power. Was it good?

Sure, I’m not gonna judge your decision, as long as you have a story for it. What matters is what happens next.

We take action, and then we look at the consequences. Did our predictions come true? More important, what happened that we did not expect? Now is the time to construct that causality in retrospect.

And then take more action! Tweak, revert, compensate, whatever it takes until we get results we do want.

Action is a process. Our intentions are revealed not by one isolated thing we do or tweet, but by how we respond to the results.

Designing Change vs Change Management

Our job as developers is to change software. And that means that when we decide what to do, we’re not designing new code, we’re designing change.

Our software (if it is useful) does not work in isolation. It does not poof transition to a new state and take the rest of the world with it.

If our software is used by people, they need training (often in the form of careful UI design). They need support. hint: your support team is crucial, because they talk to people. They can help with change.

If our software is a service called by other software, then that software needs to change too, if it’s going to use anything new that we implemented. hint: that software is changed by people. You need to talk to the people.

If our software is a library imported by other software, then changing it does nothing at all by itself. People need to upgrade it.

The biggest barriers to change are outside your system.

Designing change means thinking about observability (how will I know it worked? how will I know it didn’t hurt anything else?). It means progressive delivery. It often means backwards compatibility, gradual data migrations, and feature flags.
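One piece of designing change, progressive delivery, can be sketched in a few lines of Python (an assumption for illustration; the post names the practice but not an implementation). The idea is stable bucketing: each user hashes to a fixed bucket, so as we raise the rollout percentage, users join the new behavior and never flicker back and forth between requests:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Return True if this user is inside the rollout percentage.

    Hashing (feature, user) gives every user a stable bucket in 0..99,
    so raising `percent` only ever adds users to the new code path.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100
    return bucket < percent
```

Pair a gate like this with observability, and the change becomes something you can watch, widen, or roll back, rather than a single leap.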

Our job is not to change code, it is to change systems. Systems that we are part of, and that our code is part of (symmathesy).

If we look at our work this way, then “Change Management” sounds ridiculous. Wait, there’s a committee to tell me when I’m allowed to do my job? Like, they might as well call it “Work Management.”

It is my team’s job to understand enough of the system context to guess at the implications of a change and check for unintended consequences. We don’t all have that, yet. We can add visibility to the software and infrastructure, so that we can react to unintended consequences, and lead the other parts of the system forward toward the change we want.

Never and Always mean “right now”

I have this saying that I learned from dating in high school and college:

In love and anger, ‘never’ and ‘always’ mean ‘right now.’

When in love, there is a feeling of “I will always love you” and sometimes “I could never be with anyone else.”

Cuddled up to a lover, there is a feeling of “I never want to move.”

These remind me of the flow of action in probability spaces shaped by intentions (from Dynamics in Action). When I have an intention (to stay right here) and no intention of changing it, then that intention stretches out into forever.

It’s a great feeling, because the moment feels like eternity.

It doesn’t say anything about eternity or the rest of my life. It says something about the purity of my intentions right now.

Action is a process. Reasons matter.

Knowledge work has to be done for the right reason to be done well.

If I choose to implement a feature because I’ll get some reward for it, like a gold star on my next review, then I will implement it far enough to check the box. I won’t get it smooth, or optimal. I will optimize the wrong parts or not at all.

If I implement that feature because I believe it’s the next thing I can do to improve our customer’s experience, then I’ll optimize for that experience. I’ll make all the tiny decisions with that in mind. I’ll check whether I’ve slowed the site down, or interfered with other features, or cluttered the interface.

Creation is a process, not a transaction.

We are guided throughout that process by an intention. If that intention is to check a box, we’ll check the box, and leave the rest of the system to fall as it will.

In Dynamics in Action, Alicia Juarrero models human action as a process: we start with an intention, which shapes the probability space of what we do next. We do something, and then we check the results against our intention, and then we respond to difference.

Action is a process, not an atomic motion.

Development is action, and it is knowledge work not hand work. To do it well, to shape the system we work in and work on, we line our intentions up with the goals of the system. Not with some external reward or punishment. Intention shapes action, over time, in response to feedback. Reasons matter.

Respond to input? No, we create it

Naturally intelligent systems do not passively await sensory stimulation.

They are constantly active, trying to predict (and actively elicit) the streams of sensory information before they arrive…. Systems like that are already (pretty much constantly) poised to act.

We act for the evolving streams of sensory information that keep us viable and serve our ends… perception and action in a circular causal flow.

Andy Clark, Surfing Uncertainty

This book is about how human brains perceive based on predictions and prediction errors. I used to think we take in light from our eyes, assemble all that into objects that we perceive, and then respond to those perceptions. But it’s different: we are always guessing at what we expect to perceive, breaking that down into objects and then into expected light levels, and then processing only the differences from that expectation. We turn that into possibilities for our next action.

We don’t sit around waiting to notice something and then respond! We’re always predicting, and then thinking what would we like to see or explore next, and then acting toward that imagined reality.

Teams are intelligent systems like this too. Development teams (symmathesies) operating software in production have expectations of what it should be outputting. We are constantly developing better visibility, and stronger expectations. Then we want that output (data, usage volume) to change, so we take action on the software.

Teams and developers, as intelligent systems in conjunction with software, need to choose our own actions. Acting and perceiving are interlinked. Some of that perception is from the higher levels of organization, like: what does our company want to accomplish? What are people asking us to change? But in the end we have to act in ways we choose, and look for differences in what we perceive, to learn and grow.

Notice that “act as I choose” is very different from “do what I want” or worse, “do what I feel like doing” (which rarely corresponds to what will make me happy). I choose what to do based on input from people and systems around me, according to what might be useful for the team and organization and world.

If my boss wants something done in the code, they’d better convince me that it’s worth doing. Because only if I understand the real “why” can I predict what success looks like, and then I can make the million big and tiny decisions that are the work of software development. Does memory use matter? What is permanent API and what internal? Which off-happy-path cases are crucial, and how can I make the rest fall back safely? Where should this new function or service live?

If I choose to do the work only because they said to, only so that I can check off a box, I am not gonna make these umpteen decisions in ways that serve future-us.

Our job is making choices. We need the background and understanding to choose our high-level work, so that we can make skillful choices at the low levels.

Intelligent systems don’t wait around to be told what to do. We are constantly looking for the next input that we like better, and creating that input. Act in order to perceive in order to act. This is living.

Reductionism with Command and Control

In hard sciences, we aim to describe causality from the bottom up, from elementary particles. Atoms form molecules, molecules form objects, and the reason objects bounce off each other is reduced to electromagnetic interactions between the molecules in their surfaces.

Molecules in DNA determine production of proteins which result in cell operations which construct organisms.

This is reductionism, and it’s valuable. The elementary particle interactions follow universal laws. They are predictable and deterministic (to the limits of quantum mechanics). From this level we learn fundamental constraints and abilities that are extremely useful. We can build objects that are magnetic or low friction or super extra hard. We can build plants immune to a herbicide.

Bottom-up causality. It’s science!

In Dynamics in Action, Juarrero spends pages and pages asserting and justifying that causality in systems is not only bottom-up; the whole impacts the parts. Causality goes both ways.

Why is it foreign to us that causality is also top-down?

In business, the classic model is all top-down. Command and control hierarchies are all about the big dog at the top telling the next level down what to do. Intention flows from larger (company) levels to smaller (division), and on down to the elementary humans at the sharp end of work.

Forces push upward from particles to objects; intentions flow downward through an org chart

Of course when life is involved, there is top-down causality as well as bottom-up. Somehow we try to deny that in the hard sciences.

Juarrero illustrates how top-down and bottom-up causality interact more intimately than we usually imagine. In systems as small as a forming snowflake, levels of organization influence each adjacent level.

We see this in software development, where our intention (design) is influenced by what is possible given available building blocks (implementation). A healthy development process tightens this interplay to short time scales, like daily.

Software design in our heads learns from what happens in the real world implementation

Now that I think about it: human (and organizational) intention obviously flows downward, impacted by limitations and human psychology pushing upward; and physical causality flows upward, impacted by which things are near each other and move together. Why is it even strange to us that causality moves both ways?

Probability Spaces as reality

In Dynamics in Action, Alicia Juarrero describes human action as a selection, a sample, from probability space. Everything we could do, and the likelihood of each, is a function of our situation, our habits, and our intentions. From this we select some action in each moment.

Karl Popper calls these possible actions propensities, and he asserts that they are as real as electromagnetic fields.

You can’t see fields, but they’re real. You can measure their effects. What if the probability space of human action is just as real as fields?

It’s harder to measure this probability space; we get one sample. (There’s more we can do, but that’s another topic.) But if we take this probability space of action as real, what possibilities would that open for our thinking?

Treat “what happened” as a collapse, an oversimplification, of a higher-dimensional reality of everything that could happen.

How is this useful? Well, we can see our work as shaping that probability space. When I create software for other people to use, I’m shaping the probability space of their actions. When I ask questions, I’m opening (or narrowing) possibilities in the set of their possible actions. When I push “like” on Facebook or twitter, I’m bumping the probability of a person repeating that sentiment. We can start imagining the effects of our choices on probability spaces, instead of on concrete behavior (in a way more concrete, more mathematically modelable, than “influence”).
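This shaping can be made concrete with a toy sketch in Python (my own illustration, not a model from Juarrero or Popper; the action names and weights are invented). An intention reweights the probability space of possible actions before one is sampled; it makes some actions more likely without determining which one actually happens:

```python
import random

def shaped_probabilities(actions, base_weights, intention_boost):
    """Reweight a probability space of actions by an intention.

    `intention_boost` maps an action to a multiplier; actions the
    intention favors become more probable, the rest keep weight 1.
    """
    weights = [w * intention_boost.get(a, 1.0)
               for a, w in zip(actions, base_weights)]
    total = sum(weights)
    return [w / total for w in weights]

actions = ["refactor", "ship feature", "write tests"]
base = [1.0, 1.0, 1.0]

# An intention to improve quality reshapes the space...
probs = shaped_probabilities(actions, base, {"write tests": 3.0})

# ...and what "happened" is a single sample collapsed out of it.
choice = random.choices(actions, weights=probs, k=1)[0]
```

The distribution `probs` is the higher-dimensional reality; `choice` is the one sample we get to observe.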

I can ask my child, not “why did you do that?”, but “what other actions also made sense to you, that you didn’t happen to go with?” and “what factors in the situation made this particular action feel more cromulent than usual?” We are not only what we do; we are everything we could do.

It’s like this example from Wardley’s book: if you could only see a projection of the chess game — what piece moved, but not from where or to where, if you didn’t know the board existed — then you can’t have as much strategy. Maybe if we accept probability spaces as part of reality, we can work toward illuminating and influencing them, instead of projecting reality down into only what happened.

Look for this in yourself and others. Tell me if you find anything interesting!

One skill is not enough

It takes more than one skill to be useful these days.

Development, communication, writing, interviewing, management, business — these skills don’t do anything by themselves.

Developing what? communicating what? Which business? You need a domain of expertise to apply these abstract skills to.

I can’t communicate what I don’t understand. Nor develop good software in a foreign-to-me domain. Nor run a business without a particular business to run.

Today on On Being, they remarked that the straight-line path to success is outdated. Good grades may open a few doors, but grades aren’t success.

Find many different mentors, take not-normal paths. This is where we get interesting, and where real opportunities emerge.