Reasons, heuristics, and revealed intentions

In a complex system, we can’t predict consequences. As Cynefin expresses it: in complexity, we determine causality in retrospect, not ahead of time.

So how do we make decisions? We have to guess, try things, see what happens, react to that. We use reasons and heuristics.

There are some consequences we do predict, and these are reasons. I wake up my children because I predict that getting up before noon will help them sleep at night (not really, but some other parent might). A store puts groceries in bags because it helps me carry them, so I will buy more. I add a feature to my software because I expect users to like it.

Will these predictions come true? We can’t know until we try.

Are these valid reasons? Yes, because there’s an explanation and a theory behind them. They’re good reasons to try the thing.

Yet, we can’t predict all the other consequences. Will getting up early make my children hate me? Will the plastic bags clog the ocean and destroy human life as we know it? Will the feature complicate the UI until the software becomes unappealing? Or will the new feature interact with future features in a way that makes everything harder? (Yes. The answer to this last one is yes.)

That’s where heuristics come in. Children should get up at a reasonable hour. For anything we make, we should complete the system with a way to reuse it and eventually destroy it. We should not have more than five buttons on a page. We should keep code decoupled. We should put features behind feature flags.

Heuristics include morals, ethics, principles, values. All of these account for consequences that are hard to predict, or hard to measure the importance of. They guide our actions toward a larger good, a longer-term good, bigger than our immediate situation. Bateson talked about religions as guidance toward caring for the higher-level system. Values preserve safety, quality, and other system properties that degrade through purposive work.

We put the reasons and heuristics together and come up with a decision, using our magic dealing-with-ambiguity human power. Was it good?

Sure, I’m not gonna judge your decision, as long as you have a story for it. What matters is what happens next.

We take action, and then we look at the consequences. Did our predictions come true? More important, what happened that we did not expect? Now is the time to construct that causality in retrospect.

And then take more action! Tweak, revert, compensate, whatever it takes until we get results we do want.

Action is a process. Our intentions are revealed not by one isolated thing we do or tweet, but by how we respond to the results.

Reductionism with Command and Control

In hard sciences, we aim to describe causality from the bottom up, from elementary particles. Atoms form molecules, molecules form objects, and the reason objects bounce off each other is reduced to electromagnetic interactions between the molecules in their surfaces.

Molecules in DNA determine the production of proteins, which drive cell operations, which construct organisms.

This is reductionism, and it’s valuable. The elementary particle interactions follow universal laws. They are predictable and deterministic (to the limits of quantum mechanics). From this level we learn fundamental constraints and abilities that are extremely useful. We can build objects that are magnetic or low friction or super extra hard. We can build plants immune to a herbicide.

Bottom-up causality. It’s science!

In Dynamics in Action, Juarrero spends pages and pages asserting and justifying that causality in systems is not only bottom-up; the whole impacts the parts. Causality goes both ways.

Why is it foreign to us that causality is also top-down?

In business, the classic model is all top-down. Command and control hierarchies are all about the big dog at the top telling the next level down what to do. Intention flows from larger (company) levels to smaller (division), and on down to the elementary humans at the sharp end of work.

Forces push upward from particles to objects; intentions flow downward through an org chart

Of course when life is involved, there is top-down causality as well as bottom-up. Somehow we try to deny that in the hard sciences.

Juarrero illustrates how top-down and bottom-up causality interact more intimately than we usually imagine. In systems as small as a forming snowflake, levels of organization influence each adjacent level.

We see this in software development, where our intention (design) is influenced by what is possible given available building blocks (implementation). A healthy development process tightens this interplay to short time scales, like daily.

Software design in our heads learns from what happens in the real-world implementation

Now that I think about it: human (and organizational) intention flows downward, shaped by limitations and human psychology pushing upward; physical causality flows upward, shaped by what is near what and what moves together pressing downward. Why is it even strange to us that causality moves both ways?

Probability spaces as reality

In Dynamics in Action, Alicia Juarrero describes human action as a selection, a sample, from probability space. Everything we could do, and the likelihood of each, is a function of our situation, our habits, and our intentions. From this we select some action in each moment.

Karl Popper calls these possible actions propensities, and he asserts that they are as real as electromagnetic fields.

You can’t see fields, but they’re real. You can measure their effects. What if the probability space of human action is just as real as fields?

It’s harder to measure this probability space; we get one sample. (There’s more we can do, but that’s another topic.) But if we take this probability space of action as real, what possibilities would that open for our thinking?

Treat “what happened” as a collapse, an oversimplification, of a higher-dimensional reality of everything that could happen.

How is this useful? Well, we can see our work as shaping that probability space. When I create software for other people to use, I’m shaping the probability space of their actions. When I ask questions, I’m opening (or narrowing) possibilities in the set of their possible actions. When I push “like” on Facebook or Twitter, I’m bumping the probability of a person repeating that sentiment. We can start imagining the effects of our choices on probability spaces, instead of on concrete behavior (in a way more concrete, more mathematically modelable, than “influence”).
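To make that “mathematically modelable” bit concrete, here’s a toy sketch in Python. Every action name and weight in it is invented for illustration; the point is only the shape of the idea: an observed action is one draw from a distribution, and a “like” reshapes the distribution rather than causing any particular next act.

```python
import random

# Toy model: a probability space over a person's possible actions.
# The actions and weights are made up for illustration.
actions = {
    "repeat the sentiment": 0.10,
    "post something new":   0.30,
    "say nothing":          0.60,
}

def nudge(space, action, factor):
    """Bump one action's weight by a factor, then renormalize
    so the weights form a probability distribution again."""
    space = dict(space)
    space[action] *= factor
    total = sum(space.values())
    return {a: w / total for a, w in space.items()}

def sample(space):
    """Observe one action: 'what happened' is a single draw
    from the space of everything that could happen."""
    acts, weights = zip(*space.items())
    return random.choices(acts, weights=weights)[0]

# A "like" doesn't determine the next action; it reshapes the space.
after_like = nudge(actions, "repeat the sentiment", 2.0)
print(sample(after_like))
```

The sketch also shows the collapse from the previous section: `sample` throws away the whole distribution and hands you one event, which is all we ever get to see.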

I can ask my child, not “why did you do that?”, but “what other actions also made sense to you, that you didn’t happen to go with?” and “what factors in the situation made this particular action feel more cromulent than the others?” We are not only what we do; we are everything we could do.

It’s like this example from Wardley’s book: if you could only see a projection of the chess game (what piece moved, but not from where or to where) and you didn’t know the board existed, then you couldn’t have as much strategy. Maybe if we accept probability spaces as part of reality, we can work toward illuminating them and influencing them, instead of projecting reality down into only what happened.

Look for this in yourself and others. Tell me if you find anything interesting!