Adding qualities to systems

The other day there was a tweet about a Chief Happiness Officer.

Later someone remarked about their Agile Transformation Office.

It seems like we (as a culture) think that we can add qualities to systems the way we add ingredients to a recipe.

Systems, especially symmathesies, aren’t additive! Agility, happiness, these are spread throughout the interactions in the whole company. You can’t inject these things, people.

You can’t make a node responsible for a system property. Maybe they like having a human to “hold accountable”? Humans are great blame receptacles; there’s always something a human physically could have done differently.

How religion is important

I begin to wonder whether I am mad or have hit on an idea which is much bigger than I am.

Gregory Bateson

As someone who grew up in a religion and then let go of it in my mid-twenties, it’s easy to say: religion is a useless fiction that persists because a powerful group finds it useful.

Bateson (an atheist in a family of atheists) has a bigger idea. He believes that religions exist to hold the “everything else” of whether and why we should do a thing. To hold all the systemic and invisible-to-consciousness reasons for an action. They are the foil to straight-line purpose.

“Supernatural entities of religion are, in some sort, cybernetic models built into the larger cybernetic system [our culture] in order to correct for noncybernetic computation in a part of that system [our conscious, purposive minds].” (this from a letter; thanks to @gdinwiddie for leading me to it.)

As people in our (capitalist) culture, we aim to meet goals. Those goals accomplish something, and have some side effects that are very hard to notice or measure. Bateson proposes that religion is designed to account for all of the rest of those effects.

Can we come up with a way to notice the effects of our actions, wider than the progress toward our goals, that is not based on the fiction of existing religions?

Five levels of learning

Gregory Bateson talks about distinct levels of learning. From behavior to enlightenment, each level represents change in the previous level.

Zero Learning: this is behavior, responding in the way you always do. The bell rings, oh it’s lunchtime, eat. This does not surprise you, so you just do the usual thing.

Learning I: this is change in behavior. Different response to the same stimulus in a given context. Rote learning is here, because it is training for a response to a prompt. Forming or removing habits.

Learning II: this is change in Learning I; so it’s learning to learn. It can be a change in the way we approach situations, problems, relationships. Character traits are formed here: are you bold, hostile, curious?

For example — you know me, so when you see me you say “Hi, Jess” — zero learning. Then you meet Avdi, so next time you can greet him by name — Learning I. Lately at meetups Avdi is working on learning everyone’s names as introductions are happening, a new strategy for him: Learning II.

Bateson sees learning in every changing system, from cells to societies.

In code — a stateless service processes a request: zero learning. A stateful application retains information and recognizes that user next time: Learning I. We change the app so it retains different data: Learning II.
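
Here’s a tiny sketch of that progression in TypeScript. The names and details are made up; it’s a toy, not a real system:

```typescript
// Zero learning: a stateless handler always gives the same response to the same request.
function greet(name: string): string {
  return `Hello, ${name}`;
}

// Learning I: the app retains state, so the same request gets a different response next time.
const seen = new Set<string>();
function greetWithMemory(name: string): string {
  if (seen.has(name)) return `Welcome back, ${name}`;
  seen.add(name);
  return `Hello, ${name}`;
}

// Learning II: we change *what* the app retains -- a visit count instead of a yes/no flag.
// The change is to the remembering itself, not to any one response.
const visits = new Map<string, number>();
function greetWithCount(name: string): string {
  const n = (visits.get(name) ?? 0) + 1;
  visits.set(name, n);
  return n === 1 ? `Hello, ${name}` : `Hello again, ${name} (visit ${n})`;
}
```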

Learning III: This is change in Learning II, so it is change in how character is formed. Bateson says this is rare in humans. It can happen in psychotherapy or religious conversions. “Self” is no longer a constant, nor independent of the world.

Letting go of major assumptions about life, changing worldviews, this makes me feel alive. The important shift is going from one model to two, and accepting that both are cromulent: my model is, there are many models. It is OK when a new model changes me; I’m not important (for whatever version of “I” is referenced).

Learning IV: would be a change in Learning III. Evolution achieves this. It doesn’t happen in individual humans, but in a culture it could. Maybe this is development of a new religion?

I wonder where team and organizational changes fall in this.

  • Zero learning: “A bug came in, so we fixed it.”
  • Learning I: “Now when bugs come in, we make sure there is a test to catch regressions.”
  • Learning II: “When a bug comes in, we ask: how could we change the way we work so that this kind of bug doesn’t happen?”
  • Learning III: “Bugs will always happen, so we continually improve our monitoring and observability in production, and we refine our delivery pipeline so rolling forward is smoother and easier all the time.”
  • Learning IV: a framework for agile transformation! hahahahahaha

Design by implementation

We can develop things faster than we can define them.

@jryanday

I’ve remarked before that @cdupuis can implement new features way faster than I can document them. This is not ridiculous; creating something is often faster than connecting it to existing people who can use it.

Ryan took this further: we can’t even define what we want faster than we can implement it, sometimes. This is not ridiculous; design informs implementation, and implementation informs design.

The process of implementation builds a mental model in the developer’s head. Making that model concrete and communicating it to someone else is hard, sometimes harder than getting the code working. Communicating that model in a way that oodles of people can use it is harder still, sometimes an order of magnitude harder. It includes documentation, yes, and marketing and customer service and advocacy. If you’re really serious, writing a specification. These all inform the next iterations of design and of implementation.

Developing things is only a beginning.

Distance outside of maps

Distance is seriously strange.

Yesterday on the southern coast of Spain, at an Italian restaurant run by a German, I had tiramisu, because I’ve never had tiramisu this close to Italy. People laughed, because Spain is farther from Italy than Germany or Poland is (geographically) – but for food purposes it’s closer, right?

Geographic distance is so nice, on a map, so clear and measurable.
And it’s almost never relevant.

Sydney is farther from SF than SF is from Sydney, by 2 hours of flying, because of wind.
St Louis is farther than San Francisco from Europe, because there are direct flights to SF.

Today in Frankfurt I went from A gates to Z gates. Sounds far! … except on the map, Z is right on top of A. Which does not make it close, because the path from A to Z goes through passport control.


Forget maps. They’re satisfying, fun, and deceptive, because they give us the feeling we understand distance.

Distance is fluid, inconstant. Gates are closer when the sidewalk is moving, farther when people are bunched up and slow.

In software systems, distance is all kinds of inconsistent. Networks get slow and computers get farther apart. WiFi goes down and suddenly they’re on another planet.

And here’s the thing about distance: it’s crucial to our understanding of time.
One thing distributed systems can teach us about the real world: there is no time outside of place. There is no ordering of events across space. There is only what we see in a particular spot.

Roland Kuhn spoke at J on the Beach about building reliable systems in the face of fluctuating distance like this. The hardest part is coming up with a consistent (necessarily fictional) ordering of events, so programs can make decisions based on those events.
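
One classic way to invent that consistent (fictional) ordering is a logical clock. This is not a summary of Roland’s talk, just a minimal sketch of the idea:

```typescript
// A Lamport logical clock: each node keeps a counter, ticks it on local events,
// stamps outgoing messages, and jumps past any stamp it receives.
// The resulting order respects causality; beyond that, it is invented.
class LamportClock {
  private time = 0;

  localEvent(): number {
    return ++this.time;
  }

  send(): number {
    return ++this.time; // stamp to attach to the outgoing message
  }

  receive(remoteStamp: number): number {
    this.time = Math.max(this.time, remoteStamp) + 1;
    return this.time;
  }
}

// Two nodes, no shared wall clock, and yet an ordering everyone can agree on:
const a = new LamportClock();
const b = new LamportClock();
const sent = a.send();            // a's send happens-before b's receive...
const received = b.receive(sent); // ...so b's stamp is guaranteed to be larger.
console.log(sent < received);     // true
```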

Humans deal all the time with ambiguity, with “yeah this person said stop over here, and also this machine kept going over there.” We don’t expect the world to have perfect consistency. Yet we wish it did, so we create facsimiles of certainty in our software.

Distributed systems teach us how expensive that is. How limiting.

Martin Thompson’s talk about protocols had real-life advice for collaborating over fluctuating distance. Think carefully about how we will interact, make decisions locally, deal with feedback and recover.

Distance is a thing, and it is not simple or constant. Time is not universal, it is always located in space. Humans are good at putting ambiguous situations together, at sensemaking. This is really hard to do in a computer.
Software, in its difficulty, teaches us to appreciate our own skills.

Predictability is hard, surprises are everywhere

Today at J on the Beach, Helena Edelson talked about a lot of things that Akka can do for event streaming, microservices, and generally handling a lot of data and keeping it (eventually) consistent across great distances. With opt-in causal consistency!

I am amazed by how hard it is to have consistent causality and data across geographic distance. It is counterintuitive, because our culture emphasizes logical deduction and universality. Science, as we learned it, is straight-line deterministic causality. The difficulty of making systems that work like that across geographic distance illustrates how little of our human-scale world works that way.

The “hard” sciences are narrow.

It is great that people are working on software that can have these properties of consistency, and that they’re studying the limitations. Meanwhile it is also fun to work on the other side of the problem: letting go of perfect predictability, and building (relative) certainty on top of uncertainty. Right now I’m in a talk by Philip Brisk about the 4th Industrial Revolution of IoT sensors and machine learning sensemaking. It turns out: now that we can synthesize DNA in a little machine, we can also find out what DNA was synthesized using only a microphone placed near the machine (and a lot of domain knowledge). Humans are fascinating.

Predictability is hard, and surprises are everywhere, and that is beautiful.

Mission Statement

“Code, as a medium, is unlike anything humans have worked with before. You can almost design right into it.”

me, in my Camerata keynote


But not totally, because we always find surprises. Complex systems are always full of surprises. That is their frustration and their beauty. 

We live in complex systems. From biology up through cultures and nations and economies, we breathe complexity. And yet in school we learned science as reductive.

In software, we now have seriously complex systems that we can play with on a time scale that helps us learn. We have incidents we can learn from, with many clues to the real events, to the rich causalities, and sometimes we can trace those back to social pressures in the human half of our software systems. What is more, we can introduce new clues. We can add tracing, and we can make better tools that help the humans (and also provide a trail of what we did). So we have access to complex systems that are (1) malleable and (2) observable. 

My work in automating delivery increases that malleability. My speaking about collaborative automation aims to increase observability.

My quest is: as people, let’s create software systems that are complex and malleable and observable enough that we learn how to work with and within complex systems. That we develop instincts and sciences to change systems from the inside, in ways that benefit the whole system as well as ourselves. And that we apply that learning to the systems we live and breathe in: biology, ecology, economy, culture.

That’s my mission as a symmathecist.

Strong Opinions Strongly Overrated

Today Avdi tweeted about this lovely article debunking “strong opinions, loosely held”.
The gist is, yo! stop talking like you’re sure about stuff. It makes it hard for other people to chime in, and it anchors you, making it hard to let go of your opinion.
The simple trick is to quantify your certainty, below 100%. “I’m 90% sure…” or “At 80% confidence…” This helps you because it’s easier to reduce confidence from 90 to 60, given new information, than to change an absolute (P=1) belief.

Like consequences, beliefs are not yes/no, black/white, all/nothing. We hold beliefs at different confidence levels. There’s a book about this: Probabilistic Knowledge, by Sarah Moss. With academic rigor, she proposes that we don’t believe with 50% confidence that Go is the best language for this. Instead, we believe that “There is a 50% chance that Go is the best language for this.” Modeling belief this way leads to some cool stuff that I haven’t read yet; we can do math on this. And it’s how our brains really work; we are constantly choosing action while uncertain.
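
Here’s a back-of-the-envelope sketch of what “doing math on beliefs” can look like: a plain Bayesian update, where new evidence nudges a 90% belief downward instead of flipping it. The numbers are made up.

```typescript
// Update a belief (a probability, not a yes/no) with one piece of evidence, Bayes-style.
// prior: current confidence the claim is true.
// likelihoodIfTrue / likelihoodIfFalse: how likely this evidence is under each hypothesis.
function updateBelief(prior: number, likelihoodIfTrue: number, likelihoodIfFalse: number): number {
  const pEvidence = prior * likelihoodIfTrue + (1 - prior) * likelihoodIfFalse;
  return (prior * likelihoodIfTrue) / pEvidence;
}

// "I'm 90% sure Go is the best language for this."
let belief = 0.9;
// A prototype turns up evidence that is twice as likely if we're wrong:
belief = updateBelief(belief, 0.3, 0.6);
console.log(belief.toFixed(2)); // 0.82 -- confidence moves; no identity crisis required
```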

This also shows up in How to Measure Anything, which uses experts and confidence intervals to get rough measurements of risk and expected outcomes. This can help with decision making. Dan North talks about this with software estimation: the trick is not to ask for a single number “how long will it take” but instead, “It’ll definitely take at least X” and “I’d be embarrassed if it took more than Y”. This gives you a 90% confidence interval, and then you can spend effort to narrow that interval (reduce uncertainty), if you choose.
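
And a sketch of what you can do with that interval once you have it: treat X and Y as a 90% confidence interval, pick a shape for the distribution (lognormal is a common choice for durations, though that part is my assumption, not Dan’s), and simulate.

```typescript
// Given "at least 10 days" and "embarrassed past 30 days" as a 90% interval,
// estimate the chance of blowing a 25-day deadline with a quick Monte Carlo run.
function missProbability(lowDays: number, highDays: number, deadlineDays: number, trials = 100_000): number {
  // For a lognormal, the 5th and 95th percentiles sit 1.645 standard deviations
  // from the mean in log space.
  const mu = (Math.log(lowDays) + Math.log(highDays)) / 2;
  const sigma = (Math.log(highDays) - Math.log(lowDays)) / (2 * 1.645);
  let misses = 0;
  for (let i = 0; i < trials; i++) {
    // Box-Muller transform: a standard normal sample from two uniform ones.
    const z = Math.sqrt(-2 * Math.log(1 - Math.random())) * Math.cos(2 * Math.PI * Math.random());
    if (Math.exp(mu + sigma * z) > deadlineDays) misses++;
  }
  return misses / trials;
}

console.log(missProbability(10, 30, 25)); // around 0.13 with these made-up numbers
```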

Many things become possible when you let go of belief being all or nothing. We can act on 80% confidence, and learn from additional data as we proceed. Real life happens in the gray areas.

Open Source needs Open Source companies

The other day, AWS announced its latest plans to work around the license of Elasticsearch, a very useful open source project cared for by Elastic.

“At AWS, we believe that maintainers of an open source project have a responsibility to ensure that the primary open source distribution… does not advantage any one company over another.”

Can I call BS on this?

I mean, I’m sure AWS does believe that, because it’s in their interest. But not because it’s true.

Great pieces of software are not made of code alone. They’re made of code, which is alive in the mental models of people, who deeply understand this software and its place in the wider world. These people can change the software, keep it working, keep it safe, keep it relevant, and grow it as the growing world needs.

These people aren’t part-time, volunteer hobbyists. They’re dedicated to this project, not to a megacorporation. And they eat, and they have healthcare.

Reusable software is not made of code alone. It includes careful APIs, with documentation, and lines of communication. The wider world has to hear about the software, understand its purpose, and then contribute feedback about problems and needs. These essential lines of communication are known as marketing, developer relations, sales, and customer service.

Useful software comes out of a healthy sociotechnical system. It comes out of a company.

If a scrappy, dedicated company like Elastic has a license that leans customers toward paying for the services that keep Elasticsearch growing, secure, and relevant — great! This benefits everyone.

Poor AWS, complaining that it doesn’t have quite enough advantages over smaller, specialized software companies.

Then the next sentence: “This was part of the promise the maintainer made when they gained developers’ trust to adopt the software.” Leaving aside the BS that a person who gives away software owes anyone anything —

The best thing a maintainer can do for the future of all the adopters is to keep the software healthy. That means watching over the sociotechnical system, and that means building a company around it. A giant company fighting against that is not on the side of open source.