Morning Stance

It is 7:09. One child is out, and I have returned to bed. Alexa will wake me at 7:15.

Six minutes: I could make my bed or do tiny morning yoga. Six minutes of rest is useless; I’ll feel worse afterward. What am I likely to do?

I picture the probability space in front of me. Intention, habit, and a better start to the day push me toward yoga. Yet there’s a boundary there, a blockage: it is my current stance.

At 7:09, if I were standing, I’d likely do yoga. But at 7:09 and horizontal, I’m gonna stay horizontal. Only a change in surrounding conditions (beep, beep, beep!) will trigger motion.

Cat Swetel talks about stances. By changing your stance, you change your inclinations.

It is 7:10. I choose to change my stance. I stand up.

I make my bed.

One deliberate change of stance, and positive habits and intentions take it from there.

Zooming in and out, for software and love

The most mentally straining part of programming (for me) is focusing down on the detail of a line of code while maintaining perspective on why we are doing this at all. When a particular implementation gets hard, should I keep going? Back up a step and redesign? Or back way up and solve the problem in a different way?

Understanding the full “why” of what I’m doing helps me make decisions from naming to error handling to library and tool integrations. But it’s hard. It takes time to shift my brain from the detail level to the business level and back. (This is one way pairing helps.)

That zooming in and out is tough, and it’s essential. This morning I learned that it is also essential in love. Maria Popova quotes poets and philosophers on how love requires understanding, then:

We might feel that such an understanding calls for crouching closer and closer to its subject, be it self or other, in order to examine it with narrow focus and shallow depth of field, but this is a misleading intuition — the understanding of love is an expansive understanding, requiring us to zoom out of our habitual solipsism so as to regard ourselves and the object of our love from a great distance against the backdrop of universal life.

Maria Popova, Brain Pickings

Abeba Birhane, cognitive scientist, points out that Western culture is great at picking things apart, breaking problems down into their smallest possible components. She quotes Prigogine and Stengers: “We are so good at it. So good, we often forget to put the pieces back together again.”

She also sees this problem in software. “We forget why we are doing it. What does this little component have to do with the big picture?” (video)

Software, love, everywhere. Juarrero brings this together when she promotes hermeneutics as the way to understand complex systems. Hermeneutics means interpretation, finding meaning, especially of language. (Canonical example: Jews studying the Torah, every word in excruciating detail, in the context of the person who wrote it, in the context of their place and time and relations.) Hermeneutics emphasizes zooming in and out from the specific words to the work as a whole, and then back. We can’t understand the details outside the broader purpose, and the purpose is revealed by all the details.

This is the approach that can get us good software. (Not just clean code, actual good software.) I recommend this 4-minute definition of hermeneutics; it’s super dense and taught me some things this morning. Who knows, it might help your love life too.

Reasons, heuristics, and revealed intentions

In a complex system, we can’t predict consequences. As Cynefin expresses it: in complexity, we determine causality in retrospect, not ahead of time.

So how do we make decisions? We have to guess, try things, see what happens, react to that. We use reasons and heuristics.

There are some consequences we do predict, and these are reasons. I wake up my children because I predict that getting up before noon will help them sleep at night (not really, but some other parent might). A store puts groceries in bags because it helps me carry them, so I will buy more. I add a feature to my software because I expect users to like it.

Will these predictions come true? We can’t know until we try.

Are these valid reasons? Yes, because there’s an explanation and a theory behind them. They’re good reasons to try the thing.

Yet, we can’t predict all the other consequences. Will getting up early make my children hate me? Will the plastic bags clog the ocean and destroy human life as we know it? Will the feature complicate the UI until the software becomes unappealing? or will the new feature interact with future features in a way that makes everything harder? (yes. the answer to this last one is yes.)

That’s where heuristics come in. Children should get up at a reasonable hour. For anything we make, we should complete the system with a way to reuse and eventually destroy it. We should not have more than five buttons on a page. We should keep code decoupled. We should put features behind feature flags.

Heuristics include morals, ethics, principles, values. All of these account for consequences that are hard to predict, or hard to measure the importance of. They guide our actions toward a larger good, a longer-term good, bigger than our immediate situation. Bateson talked about religions as guidance toward caring for the higher-level system. Values preserve safety, quality, and other system properties that degrade through purposive work.

We put the reasons and heuristics together and come up with a decision, using our magic dealing-with-ambiguity human power. Was it good?

Sure, I’m not gonna judge your decision, as long as you have a story for it. What matters is what happens next.

We take action, and then we look at the consequences. Did our predictions come true? More important, what happened that we did not expect? Now is the time to construct that causality in retrospect.

And then take more action! Tweak, revert, compensate, whatever it takes until we get results we do want.

Action is a process. Our intentions are revealed not by one isolated thing we do or tweet, but by how we respond to the results.

Never and Always mean “right now”

I have this saying that I learned from dating in high school and college:

In love and anger, ‘never’ and ‘always’ mean ‘right now.’

When in love, there is a feeling of “I will always love you” and sometimes “I could never be with anyone else.”

Cuddled up to a lover, there is a feeling of “I never want to move.”

These remind me of the flow of action in probability spaces shaped by intentions (from Dynamics in Action). When I have an intention (to stay right here) and no intention of changing it, then that intention stretches out into forever.

It’s a great feeling, because the moment feels like eternity.

It doesn’t say anything about eternity or the rest of my life. It says something about the purity of my intentions right now.

Action is a process. Reasons matter.

Knowledge work has to be done for the right reason to be done well.

If I choose to implement a feature because I’ll get some reward for it, like a gold star on my next review, then I will implement it far enough to check the box. I won’t get it smooth or optimal. I will optimize the wrong parts, or not at all.

If I implement that feature because I believe it’s the next thing I can do to improve our customer’s experience, then I’ll optimize for that experience. I’ll make all the tiny decisions with that in mind. I’ll check whether I’ve slowed the site down, or interfered with other features, or cluttered the interface.

Creation is a process, not a transaction.

We are guided throughout that process by an intention. If that intention is to check a box, we’ll check the box, and leave the rest of the system to fall as it will.

In Dynamics in Action, Alicia Juarrero models human action as a process: we start with an intention, which shapes the probability space of what we do next. We do something, and then we check the results against our intention, and then we respond to difference.
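The loop in that model — intention, action, checking results, responding to the difference — can be sketched in code. This is my own illustrative sketch, not anything from Juarrero or the sources above; the function names and the numeric “gap” are assumptions made to keep the example concrete.

```python
def act_as_process(intention, observe, respond, max_steps=20, tolerance=1e-3):
    """Illustrative sketch of action-as-process: an intention shapes what we
    do next; after each act we check results against the intention and
    respond to the difference. (Names and numeric 'gap' are hypothetical.)"""
    for _ in range(max_steps):
        outcome = observe()          # look at what actually happened
        gap = intention - outcome    # compare results against the intention
        if abs(gap) <= tolerance:    # close enough: intention realized, for now
            return outcome
        respond(gap)                 # tweak, revert, compensate -- then go again
    return observe()

# Hypothetical usage: nudging a value toward an intended target.
state = {"value": 0.0}
final = act_as_process(
    intention=1.0,
    observe=lambda: state["value"],
    respond=lambda gap: state.update(value=state["value"] + 0.5 * gap),
)
```

The point of the sketch is that the action is the whole loop, not the single call to `respond` — one correction in isolation tells you little; the intention shows up in how the corrections keep steering toward it.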

Action is a process, not an atomic motion.

Development is action, and it is knowledge work, not hand work. To do it well, to shape the system we work in and work on, we line our intentions up with the goals of the system. Not with some external reward or punishment. Intention shapes action, over time, in response to feedback. Reasons matter.

Reductionism with Command and Control

In hard sciences, we aim to describe causality from the bottom up, from elementary particles. Atoms form molecules, molecules form objects, and the reason objects bounce off each other is reduced to electromagnetic interactions between the molecules in their surfaces.

Molecules in DNA determine the production of proteins, which drive cell operations, which construct organisms.

This is reductionism, and it’s valuable. The elementary particle interactions follow universal laws. They are predictable and deterministic (up to the limits of quantum mechanics). From this level we learn fundamental constraints and abilities that are extremely useful. We can build objects that are magnetic or low-friction or super extra hard. We can engineer plants immune to an herbicide.

Bottom-up causality. It’s science!

In Dynamics in Action, Juarrero spends pages and pages asserting and justifying that causality in systems is not only bottom-up; the whole impacts the parts. Causality goes both ways.

Why is it foreign to us that causality is also top-down?

In business, the classic model is all top-down. Command and control hierarchies are all about the big dog at the top telling the next level down what to do. Intention flows from larger (company) levels to smaller (division), and on down to the elementary humans at the sharp end of work.

Forces push upward from particles to objects; intentions flow downward through an org chart

Of course when life is involved, there is top-down causality as well as bottom-up. Somehow we try to deny that in the hard sciences.

Juarrero illustrates how top-down and bottom-up causality interact more intimately than we usually imagine. In systems as small as a forming snowflake, levels of organization influence each adjacent level.

We see this in software development, where our intention (design) is influenced by what is possible given available building blocks (implementation). A healthy development process tightens this interplay to short time scales, like daily.

Software design in our heads learns from what happens in the real-world implementation

Now that I think about it: human (and organizational) intention obviously flows downward, shaped by limitations and human psychology pushing upward; and physical causality flows upward, shaped downward by what is near what and what moves together. Why is it even strange to us that causality moves both ways?