Reasons, heuristics, and revealed intentions

In a complex system, we can’t predict consequences. As Cynefin expresses it: in complexity, we determine causality in retrospect, not ahead of time.

So how do we make decisions? We have to guess, try things, see what happens, react to that. We use reasons and heuristics.

There are some consequences we do predict, and these are reasons. I wake up my children because I predict that getting up before noon will help them sleep at night (not really, but some other parent might). A store puts groceries in bags because it helps me carry them, so I will buy more. I add a feature to my software because I expect users to like it.

Will these predictions come true? We can’t know until we try.

Are these valid reasons? Yes, because there’s an explanation and a theory behind them. They’re good reasons to try the thing.

Yet we can’t predict all the other consequences. Will getting up early make my children hate me? Will the plastic bags clog the ocean and destroy human life as we know it? Will the feature complicate the UI until the software becomes unappealing? Or will the new feature interact with future features in a way that makes everything harder? (Yes. The answer to this last one is yes.)

That’s where heuristics come in. Children should get up at a reasonable hour. Anything we make, we should complete the system with a way to reuse it and eventually destroy it. We should not have more than five buttons on a page. We should keep code decoupled. We should put features behind feature flags.
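To make that last heuristic concrete, here’s a minimal sketch of a feature flag in TypeScript. All the names are hypothetical (real teams usually pull flags from a config service), but the shape is the point: the new behavior stays gated until we’ve watched its consequences.

```typescript
// A minimal feature-flag sketch. Flag names and storage are hypothetical.
type FlagName = "new-checkout-flow" | "bulk-export";

const flags: Record<FlagName, boolean> = {
  "new-checkout-flow": false, // dark until we see real consequences
  "bulk-export": true,
};

function isEnabled(flag: FlagName): boolean {
  return flags[flag] ?? false;
}

function renderCheckout(): string {
  // The guess we are trying sits behind the flag; the known behavior stays.
  return isEnabled("new-checkout-flow")
    ? renderNewCheckout()
    : renderOldCheckout();
}

function renderNewCheckout(): string { return "new checkout"; }
function renderOldCheckout(): string { return "old checkout"; }
```

The flag is the heuristic in code form: it admits up front that we can’t predict all the consequences, so it keeps a cheap path back.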

Heuristics include morals, ethics, principles, values. All of these account for consequences that are hard to predict, or hard to measure the importance of. They guide our actions toward a larger good, a longer-term good, bigger than our immediate situation. Bateson talked about religions as guidance toward caring for the higher-level system. Values preserve safety, quality, and other system properties that degrade through purposive work.

We put the reasons and heuristics together and come up with a decision, using our magic dealing-with-ambiguity human power. Was it good?

Sure, I’m not gonna judge your decision, as long as you have a story for it. What matters is what happens next.

We take action, and then we look at the consequences. Did our predictions come true? More important, what happened that we did not expect? Now is the time to construct that causality in retrospect.

And then take more action! Tweak, revert, compensate, whatever it takes until we get results we do want.

Action is a process. Our intentions are revealed not by one isolated thing we do or tweet, but by how we respond to the results.

Never and Always mean “right now”

I have this saying that I learned from dating in high school and college:

In love and anger, ‘never’ and ‘always’ mean ‘right now.’

When in love, there is a feeling of “I will always love you” and sometimes “I could never be with anyone else.”

Cuddled up to a lover, there is a feeling of “I never want to move.”

These remind me of the flow of action in probability spaces shaped by intentions (from Dynamics in Action). When I have an intention (to stay right here) and no intention of changing it, then that intention stretches out into forever.

It’s a great feeling, because the moment feels like eternity.

That feeling doesn’t say anything about eternity or the rest of my life. It says something about the purity of my intentions right now.

Action is a process. Reasons matter.

Knowledge work has to be done for the right reason to be done well.

If I choose to implement a feature because I’ll get some reward for it, like a gold star on my next review, then I will implement it far enough to check the box. I won’t get it smooth, or optimal. I will optimize the wrong parts or not at all.

If I implement that feature because I believe it’s the next thing I can do to improve our customers’ experience, then I’ll optimize for that experience. I’ll make all the tiny decisions with that in mind. I’ll check whether I’ve slowed the site down, or interfered with other features, or cluttered the interface.

Creation is a process, not a transaction.

We are guided throughout that process by an intention. If that intention is to check a box, we’ll check the box, and leave the rest of the system to fall as it will.

In Dynamics in Action, Alicia Juarrero models human action as a process: we start with an intention, which shapes the probability space of what we do next. We do something, and then we check the results against our intention, and then we respond to difference.
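Being a programmer, I find it easiest to see that as a loop. Here’s a loose sketch; this is my illustration, not Juarrero’s, and every name in it is made up. The `world` function stands in for whatever hands us consequences.

```typescript
// A loose sketch of action-as-process. Illustrative names, not Juarrero's.
type Intention = {
  goal: string;
  satisfiedBy: (result: string) => boolean;
};

// The world returns a consequence for whatever move we try.
type World = (move: string) => string;

function actToward(intention: Intention, world: World): string {
  // The intention shapes the probability space of what we try first.
  let move = `first attempt at ${intention.goal}`;
  let result = world(move);

  // Action is a process: compare the result to the intention,
  // respond to the difference, and act again.
  for (let tries = 0; tries < 10 && !intention.satisfiedBy(result); tries++) {
    move = `${move}, adjusted after "${result}"`; // tweak, revert, compensate
    result = world(move);
  }
  return result;
}
```

The intention doesn’t trigger one atomic motion; it shapes the whole loop, including how we respond when the result doesn’t match.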

Action is a process, not an atomic motion.

Development is action, and it is knowledge work, not hand work. To do it well, to shape the system we work in and work on, we line our intentions up with the goals of the system. Not with some external reward or punishment. Intention shapes action, over time, in response to feedback. Reasons matter.

Reductionism with Command and Control

In hard sciences, we aim to describe causality from the bottom up, from elementary particles. Atoms form molecules, molecules form objects, and the reason objects bounce off each other is reduced to electromagnetic interactions between the molecules in their surfaces.

Molecules in DNA determine the production of proteins, which carry out cell operations, which construct organisms.

This is reductionism, and it’s valuable. The elementary particle interactions follow universal laws. They are predictable and deterministic (to the limits of quantum mechanics). From this level we learn fundamental constraints and abilities that are extremely useful. We can build objects that are magnetic or low friction or super extra hard. We can build plants immune to a herbicide.

Bottom-up causality. It’s science!

In Dynamics in Action, Juarrero spends pages and pages asserting and justifying that causality in systems is not only bottom-up; the whole impacts the parts. Causality goes both ways.

Why is it foreign to us that causality is also top-down?

In business, the classic model is all top-down. Command and control hierarchies are all about the big dog at the top telling the next level down what to do. Intention flows from larger (company) levels to smaller (division), and on down to the elementary humans at the sharp end of work.

Forces push upward from particles to objects; intentions flow downward through an org chart

Of course when life is involved, there is top-down causality as well as bottom-up. Somehow we try to deny that in the hard sciences.

Juarrero illustrates how top-down and bottom-up causality interact more intimately than we usually imagine. In systems as small as a forming snowflake, levels of organization influence each adjacent level.

We see this in software development, where our intention (design) is influenced by what is possible given available building blocks (implementation). A healthy development process tightens this interplay to short time scales, like daily.

Software design in our heads learns from what happens in the real-world implementation

Now that I think about it: human (and organizational) intention obviously flows downward, shaped by limitations and human psychology pushing back upward; physical causality flows upward, shaped by what is near what and what moves together pressing back downward. Why is it even strange to us that causality moves both ways?