What makes a dependency dependable?

Lately I’ve been working in Node, and that means I have access to all the open-source modules on npm. It is easy to add a dependency on any of them. But is it a good idea?

Not every library is production-grade.

Not every library (or service) is something that I want my production code to depend on.

The first thing I look at when a module comes up in an npm search is: how many weekly downloads does it have? This is a cheat. “It’s production grade if other people use it in production” is a reasonable heuristic, but it doesn’t tell us how code becomes worthy of extensive re-use.

What does it take to make a production-grade library?

Some static qualities:

  • The code works as expected.
  • It is fast enough.
  • It has a well-designed API that lets me write readable code to call it.
  • The documentation is accurate and complete enough.

We look at:

  • What version is it on?
  • When was the last commit?
  • How many committers are there?
  • How many answered questions about it are on StackOverflow?

These touch on something deeper: how does the library change?

When I build my software on top of a library (or runtime, framework, toolchain etc), I want that library to be solid — and I want it to become more solid over time. I don’t want it to shift under my feet!

I want to know:

  • When there’s a security problem, a fix is released promptly.
  • Useful features are added (but not too quickly).
  • New features follow the pattern of existing APIs.
  • When I upgrade, my code will continue to work.

For bonus points:

  • Before I upgrade, I’ll still have access to the documentation for the version I use.
  • There is advance notice of upcoming features.
  • Someone from the core team responds to GitHub issues, StackOverflow questions, and tweets of despair.

I expect a production-grade library to practice responsible change.

Responsible change considers the larger world, all the people and software depending on the system that’s changing. It starts with backwards compatibility, regression testing, and careful API design. It continues with documentation, expectation-setting, and responsiveness to questions.

Very little of this happens in the code. Responsible change is a property of something larger. Whether in a company or a team of volunteers, a production-grade dependency is part of a symmathesy. Software resilience comes from people — people who have a mental model of how the software works. People who are actively involved in keeping it safe, and making it more appropriate for the shifting present.

That’s why we look for a team of committers, for recent attention, for a broad base of interest.

Most npm modules have a single author and haven’t changed in years. That’s fine! No open-source author is responsible to anyone for putting more work into that code than they darn well please. There are other ways to use this code than to depend on it.

If I can’t count on code to change responsibly, I don’t want it to change at all. I prefer to freeze the version of the library forever. If it needs to change, that’s my responsibility. I fork the library — or for similar effect, cut and paste the code into my project (with attribution).

This reduces the code that I have to write, but it does not reduce the code that my team is responsible for.

There are two kinds of open source: production-grade code with a community or company behind it, and code that I now own.

Next time you type “npm install,” make a guess as to which one you’re getting.

Taking care of code … more and more code

(This is a shorter version of my talk for DeliveryConf, January 2020. Video of slides+audio; Slides as pdf)

Good software is still alive.

The other day, I asked my twelve-year-old daughter for recommendations of drawing programs. She told me about one (FireAlpaca?): “It’s free, and it updates pretty often.” She contrasted that with one that cost money “but isn’t as good. It never updates.”

The next generation appreciates that good software is updated regularly. Anything that doesn’t update is eventually terrible.

Software that doesn’t change falls behind. People’s standards rise, their needs change. At best, old software looks dumb. At worst, it doesn’t run on modern devices.

Software that doesn’t change is dead. You might say, if it still runs, it is not dead. Oh, sure, it’s moving around — but it’s a zombie. If it isn’t learning, it’ll eventually fall over, and it might eat your face.

I want to use software that’s alive. And when I make software, I want it to stay alive as long as it’s in use. I want it to be “done” when it’s out of production.

Software is like people. The only “done” is death.

Alive software belongs to a team.

What’s the alternative? Keep learning to keep living. Software needs to keep improving, at least in small ways, for as long as it is running.

We have to be able to change it, easily. If Customer Service says, “Hey, this text is unclear, can you change it to this?” then pushing that out should be as easy as updating some text. It should not be harder than when the software was in constant iteration.

This requires automated delivery, of course. And you have to know that delivery works. So you have to have run it recently.

But it takes more than that. Someone has to know — or find out quickly — where that text lives. They have to know how to trigger the deployment and how to check whether it worked.

More than that, someone has to know what that text means. A developer needs to understand that application. Probably, this is a developer who was part of its implementation, or the last major set of changes.

For the software to be alive, it has to be alive in someone’s head.

And one head will never do; the unit of delivery is the team. That’s more resilient.

Alive software is owned and cared for by an active team. Some people keep learning, keep teaching the software, and the shared sociotechnical system keeps living. The team and software form a symmathesy.

How do we keep all our software alive, while still growing more?

Okay, but what if the software is good enough right now? How do we keep it alive when there’s no big initiative to change it?

Hmm. We can ask, what kind of code is easy to change?

Code needs to be clean and modern.

Well, it’s consistent. It is up-to-date with the language versions and frameworks and libraries that we currently use for development.

It is “readable” by our current selves. It uses familiar styles and idioms.

What you don’t want is to come at the “simple” (from an outside perspective) task of updating some text, and find you need to install a bunch of old tools. Oh wait, there are security patches that need to happen before this will pass pre-deployment checks. Oh, now we have to upgrade more stuff before the modern versions of those libraries will work. You don’t want to have to resuscitate the software before you can breathe new life into it.

If changing the software isn’t easy enough, we won’t do it. And then it gets terrible.

So all those tool upgrades, security patches, library updates gotta have been done already, in the regular course of business.

Keeping those up to date gives us an excuse to change the code, trigger a release, and then notice any problems in the deployment pipeline. We keep confidence that we can deploy it, because we deploy it every week whether we need to or not.

People need to be on stable teams with customer contact.

More interesting than properties of the code: what are some properties of people who can keep code alive?

The team is stable. There’s continuity of knowledge.

The team understands the reason the software exists. The business significance of that text and everything else.

And we still care. We have contact with people who use this software, so we can check in on whether this text change works for them. We continue to learn.

Code belongs to one team.

More interesting still: what kind of relationship does the alive-keeping team have with the still-alive code?

Ownership. The code is under the care of a single team.

Good communication. We can teach the code (by changing it), so we have good deployment automation and we understand the programming language, etc. And the code can teach us — it has good tests, so we know when we broke something. It is accountable to us, in the sense that it can tell us the story of what happens. This means observability. With this, we can learn (or re-learn) how it works while it’s running. Keep the learning moving, keep the system living.

The team is a learning system, within a learning system.

Finally: what kind of environment can hold such a relationship?

(diagram of code, people, relationship, environment)

It’s connected; the teams are in touch with the people who use software, or with customer support. The culture accepts continued iteration as good, it doesn’t fear change. Learning flows into and out of the symmathesy.

It supports learning. Software is funded as a profit center, as operational costs, not as capital expenditure, where a project is “done” and gets depreciated over years. How the accounting works around development teams is a good indication of whether a company is powered by software, or subject to software.

Then there’s the tricky one: the team doesn’t have too much else on their plate.

How do we keep adding code to our responsibilities?

The team that owns this code also owns other code. We don’t want to update libraries all day across various systems we’ve written before. We want to do new work.

It’s like a garden; we want to keep the flowers we planted years ago healthy, and we also want to plant new flowers. How do we increase the number of plants we can care for?

And, at a higher level — how can we, as people who think about DevOps, make every team in our organization able to keep code alive?

Teams are limited by cognitive load.

This is not: how do we increase the amount of work that we do. If all we did was type the same stuff all the time, we know what to do — we automate it.

Our work is not typing; it’s making decisions. Our limitation is not what we can do, it is what we can know.

In Team Topologies, Manuel Pais and Matthew Skelton emphasize: the unit of delivery is a team, and the limitation of a team is cognitive load.

We have to know what that software is about, and what the next software we’re working on is about, and the programming languages they’re in, and how to deploy them, and how to fill out our timesheets and which kitchen has the best bubbly water selection, and who just had a baby, and — it takes a lot of knowledge to do our work well.

Team Topologies lists three categories of cognitive load.

The germane cognitive load, we want that.

Germane cognitive load is the business domain. It is why our software exists. We want complexity here, because the more complex work our software does, the less the people who use it have to bother with. Maximize the percentage of our cognitive load taken up by this category.

So which software systems a team owns matters; group by business domain.

Intrinsic cognitive load increases if we let code get out of date.

Intrinsic cognitive load is essential to the task. This is our programming language and frameworks and libraries. It is the quirks of the systems we integrate with. How to write a healthy database query. How the runtime works: browser behavior, or sometimes the garbage collector.

The fewer languages we have to know, the better. I used to be all about “the best language for the problem.” Now I recommend “the language your team knows best, as long as it’s good enough.”

And “fewer” includes versions of the language, so again, consistency in the code matters.

Extrinsic cognitive load is a property of the work environment. Work on this.

Finally, extrinsic cognitive load is everything else. It’s the timesheet system. The health insurance forms. It’s our build tools. It’s Kubernetes. It’s how to get credentials to the database to test those queries. It’s who has to review a pull request, and when it’s OK to merge.

This is not the stuff we want to spend our brain on. The less extrinsic cognitive load on the team, the more we have room for the business and systems knowledge, the more responsibility we can take on.

And this is a place where carefully integrated tools can help.

DevOps is about moving system boundaries to work better. How can we do that?

We can move knowledge within the team, and we can move knowledge out to a different team.

We can move work below the line.

Within the team, we can move knowledge from the social side to the technical side of the symmathesy. We can package up our personal knowledge into code that can be shared.

Automations encapsulate knowledge of how to do something

Automate bits of our work. I do this with scripts.

The trick is, can we make sharing it with the team almost as easy as writing it for ourselves?

Especially automate anything we want to remain consistent.

For instance, when I worked on the docs at Atomist, I wrote the deployment automation for them. I made a glossary, and I wanted it in alphabetical order. I didn’t want to put it in alphabetical order once; I wanted it to constantly be alphabetical. This is a case for automation.

I wrote a function to alphabetize the markdown sections, and told it to run with every build and push the changes back to the repository.

Autofixes like this also keep the third-party licenses up to date (all the npm dependencies and their licenses). That’s a legal requirement that no human is going to keep up with. Another one puts the standard license header on any code that’s committed without it, so I never copied the headers; I just let the automation do that. Formatting and linting, same thing.

If you care about consistency, put it in code. Please don’t nag a human.

Some of that knowledge can help with keeping code alive

Then there’s all that drudgery of updating versions and code styles — weeding the section of the garden we planted last year and earlier. How much of that can we automate?

We can write code to do some of our coding for us. To find the inconsistencies, and then fix some of them.

Encapsulate knowledge about -when- to do something

Often the work is more than knowledge of -how- to do something. It is also -when-, and that requires attentiveness, which is very expensive for humans. When my pull request has been approved, then I need to press merge. Then I need to wait for a build, and then I need to use that new artifact in some other repository.

Can we make a computer wait, instead of a person?

This is where you need an event stream to run automations in response to.
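A minimal sketch of such an automation, assuming a hypothetical event shape and a `github` client with a `merge` function (neither is a real API here):

```javascript
// React to events from the development event stream. The computer does
// the waiting: when a pull request review comes in approved, merge it.
function onEvent(event, github) {
  if (event.type === "pull_request_review" && event.state === "approved") {
    return github.merge(event.repo, event.number);
  }
  return null; // not an event this automation cares about
}
```

The same shape handles “build finished, now use the new artifact downstream”: match an event, then do the thing a person used to sit and watch for.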

Galo Navarro has an excellent description of how this helped smooth the development experience at Adevinta. They created an event hub for software development and operations related activities, called Devhose. (This is what Atomist works to let everyone do, without implementing the event hub themselves.)

We can move some of that to a platform team.

Yet, every automation we build is code that we need to keep alive.

We can move knowledge across team boundaries, with a platform team. I want my team’s breadth of responsibility to increase, as we keep more software alive, so I want its depth to be reduced.

Team Topologies describes this structure. The business software teams are called “stream aligned” because they’re working in a particular value stream, keeping software alive for someone else. We want to thin out their extrinsic cognitive load.

Move some of it to a platform team. That team can take responsibility for a lot of those automations, and for deep knowledge of delivery and operational tooling. Keep the human judgement of what to deploy when in the stream-aligned teams, and a lot of the “how” and “some common things to watch out for” in the platform team.

Some things a platform team can do:

  • onboarding
  • onboarding of code (delivery setup)
  • delivery
  • checks every team needs, like licenses

And then, all of this needs to stay alive, too. Your delivery process needs to keep updating for every repository. If delivery is event-based, and the latest delivery logic responds to every push (instead of what the repo was last configured for), then this keeps happening.

But keep thinning our platforms.

Platforms are not business value, though. We don’t really want more and more software there, in the platform.

We do want to keep adding services and automation that helps the team. But growing the platform team is not a goal. Instead, we need to make our platforms thinner.

There is such a thing as “done”

The best way to thin our software is outsourcing to another company. Not the development work, not the decisions. But software as a service, IaaS, logging, tooling of all sorts — hire a professional. Software someone else runs is tech debt you don’t have.

So maybe Galo could move Devhose on top of Atomist and retire some code.

Because any code that isn’t describing business complexity, we do want it to die. As soon as we can move onto someone else’s service, win. Kill it, take it out of production. Then, finally, it’s done.

So yeah. There is such a thing as done. “Done” is death. You don’t want it for your value-producing code. You do want it for all other code you run.

Don’t do boring work.

If keeping software alive sounds boring, then let’s change that. Go up a level of abstraction and ask, how much of this can we automate?

Writing code to change code is hard. Automating is hard.

That will challenge your knowledge of your own job, as you try to encode it into a computer. Best case, you get the computer doing the boring bits for you. Worst case, you learn that your job really is hard, and you feel smart.

Keep learning to keep living. Works for software, and it works for us.

How can we develop and operate increasingly useful software?

Most software gets harder to change as it ages. For modern applications, it is not enough to write a system and put it out there. We need continual improvement and adaptation to the growing world.

How can we develop and operate increasingly useful software?

To answer this, I need a mental model of how software teams work.

My model is symmathesy: a learning system of learning parts. A great software team is made of people, software, and tools. The people are always learning, the software is changing, the tools are improving.

With this view, let’s look at the question.

How can we (the people, tools, & software in a symmathesy) develop & operate (for learning, these must be together) increasingly (the ongoing quest) useful (the purpose; to learn, we need connection to its use) software (output of our system to the wider world)? People are products of our interactions; the future matters, so we need healthy growth; and code is a product of our interactions with it — so the route we take, or the process, matters.

The external output of the team is: useful software, running in production. To make it increasingly useful, grow the symmathesy. Growth in a system is defined as an increase in flow. In a symmathesy, that means growth is an increase in learning.

The people learn about the software by operating it, mediated by tools. Then we teach the software and tools by developing. Unless development & operations are in the same team, learning is blocked.

To increase usefulness, the team needs to learn about use. We can get input from outside, and study events emitted by the running software, mediated by tools.

What we know about usefulness leads the team to design change, and move the system to the next version of itself. That next version may be more useful to the outside, or it may be better at learning and teaching, the better to increase knowledge inside the team.

Someone today told me, “All great things in life come from compounding interest” — from feedback loops. (Compound interest is the simplest familiar example of a feedback loop.)
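The feedback is easy to see in the compound-interest case: each period’s output becomes the next period’s input.

```javascript
// $100 at 5%: compound interest feeds each year's balance back in,
// while simple interest keeps earning on the original $100 only.
function compound(principal, rate, years) {
  let balance = principal;
  for (let year = 0; year < years; year++) {
    balance *= 1 + rate; // this year's interest earns interest next year
  }
  return balance;
}

function simple(principal, rate, years) {
  return principal + principal * rate * years;
}
```

After ten years the loop gives about 163 versus 150 without the feedback, and the gap only widens from there.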

In a great software team, that “compound interest” is our relevant knowledge. Of the software, of its environment, of the people and systems who use it.

Maximize learning to increase the usefulness of software, at an accelerating pace.

Keynote: Collective Problem Solving in Music, Science, Art, and Software

(originally titled: “On the Origins of Opera and the Future of Programming”)



There’s a story to tell, about musicians, artists, philosophers, scientists, and then programmers.

There’s a truth inside it that leads to a new view of work, that sees beauty in the painful complexity that is software development.

Starting from The Journal of the History of Ideas, Jessica traces the concept of an “invisible college” through music and art and science to programming. Along the way she learns what teams are made of, where great developers come from, why changing software is harder for some of us than others, and why this is a fantastic time to be alive.


Learning as becoming

businesses… need a new worldview… that shifts the emphasis… from success as accomplishment to success as learning.

Jeff Sussna, in Designing Delivery

“Success as learning” doesn’t mean learning lessons. This is not learning as facts.

This is learning as becoming. It is learning that bakes into who we are and how we do things, bakes into people and software.

Success is becoming a person, a symmathesy, an organization that can do more than before.

Developers are system changers

Some people work in a system, and some people work on a system.

Like, you can be the person who washes the dishes, or the person who installs and maintains the dishwasher.

You can be the person who assembles the reports every week, or the person who automates that report assembly. (Jacob Stoebel told this story on >Code #148 today. That’s how he got into software development.)

You can conform to a system, or you can participate fully — part of serving the system is changing the system to better serve you.

Developers are inherently system changers. That’s what we do. No wonder we’re hard to manage!

No wonder software communities are full of turmoil and rabble-rousing and shifting technologies: we are a whole industry full of system changers.

Also on >Code today, we talked about personal automation. Chante Thurmond remarked on the tools that exist today to let people (not just developers!) customize, tweak, and automate their work. We can all craft the systems we operate in. More and more system changers.

This is the real change software makes in the world.


Generativity is about caring about what comes after you.

In a social context, it’s when old people care about subsequent generations, instead of maximizing their own happiness for the last few years.

In business, it’s when executives care about the health and outcomes of the whole executive team, instead of their own career after this job.

Generativity is building the next version of yourself, your team, and your organization. This includes current work and all the work you’ll be capable of in the future.

For software development teams (among many others), companies care about the work they’re capable of doing. They care about productivity — the outward-facing work of the team.

productivity is externally visible; generativity improves the team

If we look at productivity as a property of individuals, if we measure and reward “how many tickets did you personally close?” then we give people reasons to work alone, to rush, to keep information to themselves. This cuts away at the potential of the team.

If we care about the team as a whole, both now and in the future, then we encourage generativity. Teach each other, communicate, craft software so that people can understand and work with it.

Generativity is hard to measure, but not so hard to detect. Ask, who helped you today? Who answered your questions and taught you something useful? What scripts or documentation or comments or variable naming made your work smoother?

long-term productivity comes from a healthy team

If productivity is the externally visible work that is attributed to you, then I define:

Generativity – the difference between your team’s outcomes with you, vs without you.

It is not about me, and it is not about right now. I want to make my team and my company better for the future. I want to be generative.

The artifact and the process

again: Every action has two results: a set of side effects on the world, and the next version of ourselves.

Sometimes in the creation of an artifact, the artifact itself is less important than the process of creating it. Wardley Maps are like that too – it’s about the thought that goes into creating them, and the conversations they enable. (video by @catswetel)

Most diagrams, especially on the white board, communicate very well to the people who participated in their creation, and very little to anyone else.

It’s the co-creation that matters. The participation in creating the artifact. The output is the next version of our team, with this common ground we established in the process.

Every action has two results (Erlang edition)

Every action has two results: a set of side effects on the world, and the next version of ourselves.

I learned this from Erlang, a purely functional yet stateful programming language. Erlang uses actor-based concurrency. The language is fully immutable, yet the programs are not: every time an actor receives a message, it can send messages to other actors, and then return a different version of itself, instantiated with different state.

Here’s an example from the tutorial in the Erlang docs:

%%% This is the server process for the "messenger"
%%% the user list has the format [{ClientPid1, Name1}, {ClientPid2, Name2}, ...]
server(User_List) ->
    receive
        {From, logon, Name} ->
            New_User_List = server_logon(From, Name, User_List),
            server(New_User_List);
%%% ...

This defines a server process (that is, an actor) which receives a logon message. When it does, it builds a new list of users including the one it just received and all the ones it knew about before. Then it constructs a new version of itself with the new list, by calling server again as the last thing the receive clause does. The next message will be received by the new server.

It’s like that with us, too. Today I made coffee, with a side effect of some dirty cups and fewer coffee beans, and a next version of me that was more alert. Today I checked twitter, with a side effect of nothing observable, and a next version of me ready to write this post. Now I’m writing this post, which will have side effects with unknown consequences, depending on what y’all do with it.

This works in our teams, too. Every task we complete changes the world, and it changes us. Maybe we add tests or implement a feature. In the process, we learn about the software system we participate in. Did we do this as a team, or will we catch each other up later? Is changing the software safer or harder than before?

When “productivity” measures focus on externally-visible outcomes, sometimes the internal system is left in a terrible state. Burnout in people, “technical debt” in code, and a degeneration of the mental models that connect us with the code we care for.

The consequences of our work matter now. The next version of us matters for the whole future, for everything after now.

An old idea in a new context is new.

“Ideas don’t spring fully formed from the mind of an individual. Ideas emerge between people.”

Avdi Grimm, in his closing keynote at Southeast Ruby, told how when he sets out to write a talk, he wants to go out on his deck and walk back and forth until he squeezes the ideas out of his brain. “That works a little, but it’s a slow trickle. Then I phone a friend, and the ideas gush out.”

People think together. Through direct conversations, and through what we write and what we make.

Also at Southeast Ruby, Brandon Weaver talked about how programming languages like Ruby evolve by incorporating ideas from other languages (with a magic book! and 62 original drawings! of lemurs!). When people write horrifying gems to make Ruby look like Scala, that’s a step in the evolution. Why do it? Because we can. To let people see something new. That’s art, man.

And in the opening keynote, I talked about how ideas don’t belong to one canonical source. If some idea has been around (published) since the 70s, and someone recently made it useful in a new library, that is new work! If you find an idea in an article and apply it in your context, that is new work! If you explain a concept to someone who didn’t understand it before, new work! Heck, if you send a link to someone that gives them the explanation that they needed in this situation, that contributes to the idea. It draws a line between the idea and a person, in a context where it is useful.

But what about attribution, about credit?

If you find use in work published by someone in academia, please go to lengths to give them credit, and link to their work publicly. Attribution is currency in academia; it’s scarce and necessary to careers.

My favorite part about naming people who carried an idea to me is that it shows that nothing springs fully formed out of my brain. Everything is a synthesis, a reapplication in a new situation, a restatement — all these are new work.

For me personally, attribution is not scarce. Spreading ideas has intrinsic value. That value also appears between people, in conversation. The reward is who I get to talk with.

In his keynote, Avdi quoted Abeba Birhane’s work, on “A person is a person through other persons.” None of this is worth anything, alone.