Generativity

Generativity is about caring about what comes after you.

In a social context, it’s when old people care about subsequent generations, instead of maximizing their own happiness for the last few years.

In business, it’s when executives care about the health and outcomes of the whole executive team, instead of their own career after this job.

Generativity is building the next version of yourself, your team, and your organization. This includes current work and all the work you’ll be capable of in the future.

Companies care about the work their teams are capable of doing. For software development teams (among many others), that means productivity — the outward-facing work of the team.

productivity is externally visible; generativity improves the team

If we look at productivity as a property of individuals, if we measure and reward “how many tickets did you personally close?” then we give people reasons to work alone, to rush, to keep information to themselves. This cuts away at the potential of the team.

If we care about the team as a whole, both now and in the future, then we encourage generativity. Teach each other, communicate, craft software so that people can understand and work with it.

Generativity is hard to measure, but not so hard to detect. Ask, who helped you today? Who answered your questions and taught you something useful? What scripts or documentation or comments or variable naming made your work smoother?

long-term productivity comes from a healthy team

If productivity is the externally visible work that is attributed to you, then I define:

Generativity – the difference between your team’s outcomes with you, vs without you.

It is not about me, and it is not about right now. I want to make my team and my company better for the future. I want to be generative.

The artifact and the process

again: Every action has two results: a set of side effects on the world, and the next version of ourselves.

Sometimes in the creation of an artifact, the artifact itself is less important than the process of creating it. Wardley Maps are like that too – it’s about the thought that goes into creating them, and the conversations they enable. (video by @catswetel)

Most diagrams, especially on the whiteboard, communicate very well to the people who participated in their creation, and very little to anyone else.

It’s the co-creation that matters. The participation in creating the artifact. The output is the next version of our team, with this common ground we established in the process.

Every action has two results (Erlang edition)

Every action has two results: a set of side effects on the world, and the next version of ourselves.

I learned this from Erlang, a functional yet stateful programming language. Erlang uses actor-based concurrency. Its data is fully immutable, yet its programs are not: every time an actor receives a message, it can send messages to other actors, and then return a different version of itself, instantiated with different state.

Here’s an example from the tutorial in the Erlang docs:

%%% This is the server process for the "messenger"
%%% the user list has the format [{ClientPid1, Name1}, {ClientPid2, Name2}, ...]
server(User_List) ->
    receive
        {From, logon, Name} ->
            New_User_List = server_logon(From, Name, User_List),
            server(New_User_List);
%%% ...
    end.

This defines a server process (that is, an actor) which receives a logon message. When it does, it builds a new list of users containing the one that just logged on plus all the ones it knew about before. Then it calls itself with the new list! That recursive call is the last expression of the receive clause, so the next message will be received by the new version of the server, carrying the new state.
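
The same shape can be sketched in Python (my analogue, not from the Erlang docs): each message handler returns a new state, and the loop rebinds to it instead of mutating anything in place.

```python
# A rough Python analogue of the Erlang server above (illustrative only):
# each message produces a *new* user list; nothing is mutated in place.

def handle(user_list, message):
    """Return the next version of the server's state."""
    if message[0] == "logon":
        _, pid, name = message
        return user_list + [(pid, name)]   # new list; old one untouched
    if message[0] == "logoff":
        _, pid = message
        return [(p, n) for (p, n) in user_list if p != pid]
    return user_list  # unknown messages leave the state unchanged

def server(user_list, inbox):
    for message in inbox:
        user_list = handle(user_list, message)  # rebind, don't mutate
    return user_list

final = server([], [("logon", 1, "alice"), ("logon", 2, "bob"), ("logoff", 1)])
print(final)  # [(2, 'bob')]
```

Each call to `handle` yields the "next version of ourselves": the old state still exists, unchanged; the loop simply carries the new one forward.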

It’s like that with us, too. Today I made coffee, with a side effect of some dirty cups and fewer coffee beans, and a next version of me that was more alert. Today I checked twitter, with a side effect of nothing observable, and a next version of me ready to write this post. Now I’m writing this post, which will have side effects with unknown consequences, depending on what y’all do with it.

This works in our teams, too. Every task we complete changes the world, and it changes us. Maybe we add tests or implement a feature. In the process, we learn about the software system we participate in. Did we do this as a team, or will we catch each other up later? Is changing the software safer or harder than before?

When “productivity” measures focus on externally-visible outcomes, sometimes the internal system is left in a terrible state. Burnout in people, “technical debt” in code, and a degeneration of the mental models that connect us with the code we care for.

The consequences of our work matter now. The next version of us matters for the whole future, for everything after now.

An old idea in a new context is new.

“Ideas don’t spring fully formed from the mind of an individual. Ideas emerge between people.”

Avdi Grimm, in his closing keynote at Southeast Ruby, told how when he sets out to write a talk, he wants to go out on his deck and walk back and forth until he squeezes the ideas out of his brain. “That works a little, but it’s a slow trickle. Then I phone a friend, and the ideas gush out.”

People think together. Through direct conversations, and through what we write and what we make.

Also at Southeast Ruby, Brandon Weaver talked about how programming languages like Ruby evolve by incorporating ideas from other languages (with a magic book! and 62 original drawings! of lemurs!). When people write horrifying gems to make Ruby look like Scala, that’s a step in the evolution. Why do it? Because we can. To let people see something new. That’s art, man.

And in the opening keynote, I talked about how ideas don’t belong to one canonical source. If some idea has been around (published) since the 70s, and someone recently made it useful in a new library, that is new work! If you find an idea in an article and apply it in your context, that is new work! If you explain a concept to someone who didn’t understand it before, new work! Heck, if you send a link to someone that gives them the explanation that they needed in this situation, that contributes to the idea. It draws a line between the idea and a person, in a context where it is useful.

But what about attribution, about credit?

If you find use in work published by someone in academia, please go to lengths to give them credit, and link to their work publicly. Attribution is currency in academia; it’s scarce and necessary to careers.

My favorite part about naming people who carried an idea to me is that it shows that nothing springs fully formed out of my brain. Everything is a synthesis, a reapplication in a new situation, a restatement — all these are new work.

For me personally, attribution is not scarce. Spreading ideas has intrinsic value. That value also appears between people, in conversation. The reward is who I get to talk with.

In his keynote, Avdi quoted Abeba Birhane’s work, on “A person is a person through other persons.” None of this is worth anything, alone.

Human, or Person

Sometimes I think about, what if aliens kept humans as pets? Raised them from babies in isolation, without human language, without society. What would a human be like, outside of other humans?

Not a person.

As Abeba Birhane points out beautifully in her talk at NCrafts and article in Aeon, we aren’t people alone. We form ourselves in our interactions. “A person is a person through other persons.” A baby (while homo sapiens) is less of a person than an adult, because the adult has connections, the adult participates in many systems within society.

Ever hear that saying “You are the sum of the five people you hang out with most”? Yeah. Good approximation.

We get meaning from interaction with others. We form our self through this. Most of our relationships are not transactional, we’re not in it for some end goal — the interactions have intrinsic value. They build and reflect who we are. That’s enough!

If you have a friend you wish you could do more for, know that being a friend is itself a thing. Listening, being present, sharing a reality with a person — this is already a thing. Often we help each other out in tangible ways, and that feels great. But a simple “I hear you, I see you” — we can’t live without that, we can’t be a person without that.

Without each other, might as well sit by the food bowl and yowl.

Certainty, Uncertainty, or the worst of both

Descartes looked for certainty because he wanted good grounds for knowledge, a place of fixity to build on, to make predictions.

Juarrero counters that uncertainty allows for novelty and individuation.

In software, we like to aim for certainty. Correctness. Except in machine learning and AI, where we don’t ask or expect our algorithms to be “correct,” just useful.

The predictions made by algorithms reproduce the interpretations of the past. When we use these to make decisions, we are reinforcing those interpretations. Black people are more likely to be arrested. Women are less likely to be hired.

Machine learning based on the past, choosing the future — this reinforces bias. It suppresses novelty and individuation. It is the worst of both worlds!

This doesn’t mean we should eschew this technology. It means we should add to it. To combine the fluidity of the human world with the discreteness of machines, as Kevlin Henney puts it. We need humans working in symmathesy with the software, researching the factors that influence its decisions and consciously altering them. We can tweak the algorithms toward the future we want, beyond the past they have observed.

Machine learning models come from empirical data. Logical deduction comes from theory. As Gregory Bateson insisted: progress happens in the interaction between the two. It takes a person to tack back and forth.

We can benefit from the reasoning ability we wanted from certainty, and still support novelty and individuation. It takes a symmathesy.

This post is based on Abeba Birhane’s talk at NCrafts this year. Video

Designing Change vs Change Management

Our job as developers is to change software. And that means that when we decide what to do, we’re not designing new code, we’re designing change.

Our software (if it is useful) does not work in isolation. It does not poof transition to a new state and take the rest of the world with it.

If our software is used by people, they need training (often in the form of careful UI design). They need support. hint: your support team is crucial, because they talk to people. They can help with change.

If our software is a service called by other software, then that software needs to change too, if it’s going to use anything new that we implemented. hint: that software is changed by people. You need to talk to the people.

If our software is a library imported by other software, then changing it does nothing at all by itself. People need to upgrade it.

The biggest barriers to change are outside your system.

Designing change means thinking about observability (how will I know it worked? how will I know it didn’t hurt anything else?). It means progressive delivery. It often means backwards compatibility, gradual data migrations, and feature flags.
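
As a concrete sketch, progressive delivery often starts with a flag that routes a small, stable percentage of users to the new code path. The flag name and helper functions below are invented for illustration, not from any particular library.

```python
# A minimal, hypothetical sketch of a percentage-based feature flag.
# Bucketing is deterministic per user, so a user stays in or out of the
# rollout across requests while we watch metrics and ramp up.

import hashlib

def in_rollout(user_id: str, flag: str, percent: int) -> bool:
    """Deterministically place a user in a 0-99 bucket for this flag."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

def new_checkout(user_id: str) -> str:
    return f"new:{user_id}"   # the change we're introducing

def old_checkout(user_id: str) -> str:
    return f"old:{user_id}"   # the backwards-compatible path

def handle_request(user_id: str) -> str:
    # Start small (say 5%), check observability dashboards, then ramp.
    if in_rollout(user_id, "new-checkout", percent=5):
        return new_checkout(user_id)
    return old_checkout(user_id)

print(handle_request("user-42"))
```

Ramping up is then a config change (5, then 50, then 100), and rolling back is just as cheap, which is the point: the change is designed to be observed and reversed.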

Our job is not to change code, it is to change systems. Systems that we are part of, and that our code is part of (symmathesy).

If we look at our work this way, then “Change Management” sounds ridiculous. Wait, there’s a committee to tell me when I’m allowed to do my job? Like, they might as well call it “Work Management.”

It is my team’s job to understand enough of the system context to guess at the implications of a change and check for unintended consequences. We don’t all have that, yet. We can add visibility to the software and infrastructure, so that we can react to unintended consequences, and lead the other parts of the system forward toward the change we want.

Action is a process. Reasons matter.

Knowledge work has to be done for the right reason to be done well.

If I choose to implement a feature because I’ll get some reward for it, like a gold star on my next review, then I will implement it far enough to check the box. I won’t get it smooth, or optimal. I will optimize the wrong parts or not at all.

If I implement that feature because I believe it’s the next thing I can do to improve our customer’s experience, then I’ll optimize for that experience. I’ll make all the tiny decisions with that in mind. I’ll check whether I’ve slowed the site down, or interfered with other features, or cluttered the interface.

Creation is a process, not a transaction.

We are guided throughout that process by an intention. If that intention is to check a box, we’ll check the box, and leave the rest of the system to fall as it will.

In Dynamics in Action, Alicia Juarrero models human action as a process: we start with an intention, which shapes the probability space of what we do next. We do something, and then we check the results against our intention, and then we respond to difference.

Action is a process, not an atomic motion.

Development is action, and it is knowledge work not hand work. To do it well, to shape the system we work in and work on, we line our intentions up with the goals of the system. Not with some external reward or punishment. Intention shapes action, over time, in response to feedback. Reasons matter.

Respond to input? No, we create it

Naturally intelligent systems do not passively await sensory stimulation. They are constantly active, trying to predict (and actively elicit) the streams of sensory information before they arrive…. Systems like that are already (pretty much constantly) poised to act.

We act for the evolving streams of sensory information that keep us viable and serve our ends… perception and action in a circular causal flow.

Andy Clark, Surfing Uncertainty

This book is about how human brains perceive based on predictions and prediction errors. I used to think we take in light through our eyes, assemble all that into objects that we perceive, and then respond to those perceptions. But it’s different: we are always guessing at what we expect to perceive, breaking that expectation down into objects and then into expected light levels, and then processing only the differences from what we expected. We turn those differences into possibilities for our next action.

We don’t sit around waiting to notice something and then respond! We’re always predicting, and then thinking what would we like to see or explore next, and then acting toward that imagined reality.

Teams are intelligent systems like this too. Development teams (symmathesies) operating software in production have expectations of what it should be outputting. We are constantly developing better visibility, and stronger expectations. Then we want that output (data, usage volume) to change, so we take action on the software.

Teams and developers, as intelligent systems in conjunction with software, need to choose our own actions. Acting and perceiving are interlinked. Some of that perception is from the higher levels of organization, like: what does our company want to accomplish? What are people asking us to change? But in the end we have to act in ways we choose, and look for differences in what we perceive, to learn and grow.

Notice that “act as I choose” is very different from “do what I want” or worse, “do what I feel like doing” (which rarely corresponds to what will make me happy). I choose what to do based on input from people and systems around me, according to what might be useful for the team and organization and world.

If my boss wants something done in the code, they’d better convince me that it’s worth doing. Because only if I understand the real “why” can I predict what success looks like, and then I can make the million big and tiny decisions that are the work of software development. Does memory use matter? What is permanent API and what internal? Which off-happy-path cases are crucial, and how can I make the rest fall back safely? Where should this new function or service live?

If I choose to do the work only because they said to, only so that I can check off a box, I am not gonna make these umpteen decisions in ways that serve future-us.

Our job is making choices. We need the background and understanding to choose our high-level work, so that we can make skillful choices at the low levels.

Intelligent systems don’t wait around to be told what to do. We are constantly looking for the next input that we like better, and creating that input. Act in order to perceive in order to act. This is living.

Adding qualities to systems

The other day there was a tweet about a Chief Happiness Officer.

Later someone remarked about their Agile Transformation Office.

It seems like we (as a culture) think that we can add qualities to systems the way we add ingredients to a recipe.

Systems, especially symmathesies, aren’t additive! Agility, happiness, these are spread throughout the interactions in the whole company. You can’t inject these things, people.

You can’t make a node responsible for a system property. Maybe they like having a human to “hold accountable”? Humans are great blame receptacles; there’s always something a human physically could have done differently.