Our job as developers is to change software. And that means that when we decide what to do, we’re not designing new code, we’re designing change.
Our software (if it is useful) does not work in isolation. It does not poof into a new state and take the rest of the world with it.
If our software is used by people, they need training (often in the form of careful UI design). They need support. hint: your support team is crucial, because they talk to people. They can help with change.
If our software is a service called by other software, then that software needs to change too, if it’s going to use anything new that we implemented. hint: that software is changed by people. You need to talk to the people.
If our software is a library imported by other software, then changing it does nothing at all by itself. People need to upgrade it.
Designing change means thinking about observability (how will I know it worked? how will I know it didn’t hurt anything else?). It means progressive delivery. It often means backwards compatibility, gradual data migrations, and feature flags.
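A feature flag, for instance, makes the change itself gradual: the new path ships dark and rolls out to a growing percentage of users while we watch for unintended consequences. A minimal sketch in Python (the flag name and hashing scheme are illustrative, not any particular product’s API):

```python
import hashlib

def flag_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into a gradual rollout.

    The same user always gets the same answer for a given flag,
    so their experience stays stable as the percentage ramps up.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a stable bucket in 0..99
    return bucket < rollout_percent

# Roll the (hypothetical) new checkout flow out to 10% of users,
# watch the metrics, then ramp toward 100.
if flag_enabled("new-checkout", "user-42", 10):
    pass  # new code path
else:
    pass  # old code path, kept working: backwards compatibility
```

The deterministic hash matters: a user who sees the new behavior keeps seeing it, which makes observed differences attributable to the change rather than to flickering.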
Our job is not to change code, it is to change systems. Systems that we are part of, and that our code is part of (symmathesy).
If we look at our work this way, then “Change Management” sounds ridiculous. Wait, there’s a committee to tell me when I’m allowed to do my job? Like, they might as well call it “Work Management.”
It is my team’s job to understand enough of the system context to guess at the implications of a change and check for unintended consequences. We don’t all have that, yet. We can add visibility to the software and infrastructure, so that we can react to unintended consequences, and lead the other parts of the system forward toward the change we want.
Knowledge work has to be done for the right reason to be done well.
If I choose to implement a feature because I’ll get some reward for it, like a gold star on my next review, then I will implement it far enough to check the box. I won’t get it smooth, or optimal. I will optimize the wrong parts or not at all.
If I implement that feature because I believe it’s the next thing I can do to improve our customer’s experience, then I’ll optimize for that experience. I’ll make all the tiny decisions with that in mind. I’ll check whether I’ve slowed the site down, or interfered with other features, or cluttered the interface.
Creation is a process, not a transaction.
We are guided throughout that process by an intention. If that intention is to check a box, we’ll check the box, and leave the rest of the system to fall as it will.
In Dynamics in Action, Alicia Juarrero models human action as a process: we start with an intention, which shapes the probability space of what we do next. We do something, and then we check the results against our intention, and then we respond to difference.
Action is a process, not an atomic motion.
Development is action, and it is knowledge work not hand work. To do it well, to shape the system we work in and work on, we line our intentions up with the goals of the system. Not with some external reward or punishment. Intention shapes action, over time, in response to feedback. Reasons matter.
Naturally intelligent systems do not passively await sensory stimulation.
They are constantly active, trying to predict (and actively elicit) the streams of sensory information before they arrive…. Systems like that are already (pretty much constantly) poised to act.
We act for the evolving streams of sensory information that keep us viable and serve our ends… perception and action in a circular causal flow.
Andy Clark, Surfing Uncertainty
This book is about how human brains perceive based on predictions and prediction errors. I used to think we take in light from our eyes, assemble all that into objects that we perceive, and then respond to those perceptions. But it’s different: we are always guessing at what we expect to perceive, breaking that down into objects and then into expected light levels, and then processing only the differences from that expectation. We turn that into possibilities for our next action.
We don’t sit around waiting to notice something and then respond! We’re always predicting, and then thinking what would we like to see or explore next, and then acting toward that imagined reality.
Teams are intelligent systems like this too. Development teams (symmathesies) operating software in production have expectations of what it should be outputting. We are constantly developing better visibility, and stronger expectations. Then we want that output (data, usage volume) to change, so we take action on the software.
Teams and developers, as intelligent systems in conjunction with software, need to choose our own actions. Acting and perceiving are interlinked. Some of that perception is from the higher levels of organization, like: what does our company want to accomplish? What are people asking us to change? But in the end we have to act in ways we choose, and look for differences in what we perceive, to learn and grow.
Notice that “act as I choose” is very different from “do what I want” or worse, “do what I feel like doing” (which rarely corresponds to what will make me happy). I choose what to do based on input from people and systems around me, according to what might be useful for the team and organization and world.
If my boss wants something done in the code, they’d better convince me that it’s worth doing. Because only if I understand the real “why” can I predict what success looks like, and then I can make the million big and tiny decisions that are the work of software development. Does memory use matter? What is permanent API and what internal? Which off-happy-path cases are crucial, and how can I make the rest fall back safely? Where should this new function or service live?
If I choose to do the work only because they said to, only so that I can check off a box, I am not gonna make these umpteen decisions in ways that serve future-us.
Our job is making choices. We need the background and understanding to choose our high-level work, so that we can make skillful choices at the low levels.
Intelligent systems don’t wait around to be told what to do. We are constantly looking for the next input that we like better, and creating that input. Act in order to perceive in order to act. This is living.
The other day there was a tweet about a Chief Happiness Officer.
Later someone remarked about their Agile Transformation Office.
It seems like we (as a culture) think that we can add qualities to systems the way we add ingredients to a recipe.
Systems, especially symmathesies, aren’t additive! Agility, happiness, these are spread throughout the interactions in the whole company. You can’t inject these things, people.
You can’t make a node responsible for a system property. Maybe they like having a human to “hold accountable”? Humans are great blame receptacles; there’s always something a human physically could have done differently.
Gregory Bateson talks about distinct levels of learning. From behavior to enlightenment, each level represents change in the previous level.
Zero Learning: this is behavior, responding in the way you always do. The bell rings, oh it’s lunchtime, eat. This does not surprise you, so you just do the usual thing.
Learning I: this is change in behavior. Different response to the same stimulus in a given context. Rote learning is here, because it is training for a response to a prompt. Forming or removing habits.
Learning II: this is change in Learning I; so it’s learning to learn. It can be a change in the way we approach situations, problems, relationships. Character traits are formed here: are you bold, hostile, curious?
For example — you know me, so when you see me you say “Hi, Jess” — zero learning. Then you meet Avdi, so next time you can greet him by name — Learning I. Lately at meetups Avdi is working on learning everyone’s names as introductions are happening, a new strategy for him: Learning II.
Bateson sees learning in every changing system, from cells to societies.
In code — a stateless service processes a request: zero learning. A stateful application retains information and recognizes that user next time: Learning I. We change the app so it retains different data: Learning II.
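To make that analogy concrete, a hedged sketch in Python (the greeting service is hypothetical):

```python
# Zero learning: a stateless service gives the same response
# to the same request, every time.
def greet(name: str) -> str:
    return f"Hello, {name}"

# Learning I: a stateful application changes its response
# based on what it has retained — it recognizes the user.
class GreetingService:
    def __init__(self) -> None:
        self.seen: set[str] = set()

    def greet(self, name: str) -> str:
        if name in self.seen:
            return f"Welcome back, {name}"
        self.seen.add(name)
        return f"Hello, {name}"

# Learning II never appears in this file at all: it is us,
# editing the class so that it retains different data.
```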
Learning III: This is change in Learning II, so it is change in how character is formed. Bateson says this is rare in humans. It can happen in psychotherapy or religious conversions. “Self” is no longer a constant, nor independent of the world.
Letting go of major assumptions about life, changing worldviews, this makes me feel alive. The important shift is going from one to two, and accepting that both are cromulent: my model is, there are many models. It is OK when a new model changes me; I’m not important (for whatever version of “I” is referenced).
Learning IV: would be a change in Learning III. Evolution achieves this. It doesn’t happen in individual humans, but in a culture it could. Maybe this is development of a new religion?
I wonder where team and organizational changes fall in this.
Zero learning: “A bug came in, so we fixed it.”
Learning 1: “Now when bugs come in, we make sure there is a test to catch regressions.”
Learning II: “When a bug comes in, we ask: how could we change the way we work so that this kind of bug doesn’t happen?”
Learning III: “Bugs will always happen, so we continually improve our monitoring and observability in production, and we refine our delivery pipeline so rolling forward is smoother and easier all the time.”
Learning IV: a framework for agile transformation! hahahahahaha
“Code, as a medium, is unlike anything humans have worked with before. You can almost design right into it.”
me, in my Camerata keynote
But not totally, because we always find surprises. Complex systems are always full of surprises. That is their frustration and their beauty.
We live in complex systems. From biology up through cultures and nations and economies, we breathe complexity. And yet in school we learned science as reductive.
In software, we now have seriously complex systems that we can play with on a time scale that helps us learn. We have incidents we can learn from, with many clues to the real events, to the rich causalities, and sometimes we can trace those back to social pressures in the human half of our software systems. What is more, we can introduce new clues. We can add tracing, and we can make better tools that help the humans (and also provide a trail of what we did). So we have access to complex systems that are (1) malleable and (2) observable.
My work in automating delivery increases that malleability. My speaking about collaborative automation aims to increase observability.
My quest is: as people, let’s create software systems that are complex and malleable and observable enough that we learn how to work with and within complex systems. That we develop instincts and sciences to change systems from the inside, in ways that benefit the whole system as well as ourselves. And that we apply that learning to the systems we live and breathe in: biology, ecology, economy, culture.
The other day, AWS announced its latest plans to work around the license of ElasticSearch, a very useful open source project cared for by Elastic.
“At AWS, we believe that maintainers of an open source project have a responsibility to ensure that the primary open source distribution… does not advantage any one company over another.”
Can I call BS on this?
I mean, I’m sure AWS does believe that, because it’s in their interest. But not because it’s true.
Great pieces of software are not made of code alone. They’re made of code, which is alive in the mental models of people, who deeply understand this software and its place in the wider world. These people can change the software, keep it working, keep it safe, keep it relevant, and grow it as the growing world needs.
These people aren’t part-time, volunteer hobbyists. They’re dedicated to this project, not to a megacorporation. And they eat, and they have healthcare.
Reusable software is not made of code alone. It includes careful APIs, with documentation, and lines of communication. The wider world has to hear about the software, understand its purpose, and then contribute feedback about problems and needs. These essential lines of communication are known as marketing, developer relations, sales, and customer service.
Useful software comes out of a healthy sociotechnical system. It comes out of a company.
If a scrappy, dedicated company like Elastic has a license that leans customers toward paying for the services that keep ElasticSearch growing, secure, and relevant — great! This benefits everyone.
Poor AWS, complaining that it doesn’t have quite enough advantages over smaller, specialized software companies.
Then the next sentence: “This was part of the promise the maintainer made when they gained developers’ trust to adopt the software.” Leaving aside the BS that a person who gives away software owes anyone anything —
The best thing a maintainer can do for the future of all the adopters is to keep the software healthy. That means watching over the sociotechnical system, and that means building a company around it. A giant company fighting against that is not on the side of open source.
The other day in Iceland, a tiny conference on the Future of Software Development opened with Michael Feathers addressing a recurring theme: complexity. Software development is drowning in accidental complexity. How do we fight it? he asks. Can we embrace it? I ask.
Complexity: Fight it, or fight through it, or embrace it? Yes.
Here, find tidbits from the conference to advance each of these causes, along with photographs from a beautiful cemetery I walked through in Reykjavik.
One way we resist complexity: keep parts small, by creating strong boundaries and good abstractions. Let each part change separately. The question is, what happens outside these boundaries?
Feathers complained that developers spend about ten percent of their time coding, and the rest of it configuring and hooking together various tools. This makes sense to me; we’ve optimized programming languages and libraries so that coding takes less time, and we’ve built components and services so that we don’t have to code queuing or caching or networking or databases. Hooking these together is our job as software developers. Personally, I want to do more of that work in code instead of screens or configuration, which is part of my mission at Atomist. Josh Stella of Fugue also says we should be programming the cloud, not configuring it.
Paul Biggar at Dark has another way to attack complexity: wall it off. Cross the boundaries, so that developers don’t have to. Or as Jason Warner put it, “do a lot more below the waterline.” The Dark programming system integrates the runtime and the database and the infrastructure and the code, so that system developers can respond to what happens in the real world, and change the whole system at once. This opens backend development to a whole realm of people who don’t have time to learn a dozen parts and their interconnections. The Future of Software Development is: more people will be doing it! People whose primary job is not coding.
In any industry, we can fight complexity through centralization. If everyone uses GitHub, then we don’t have to integrate with other source code management. Centralization is an optimization, and the tradeoffs are risk and stagnation. Barriers to entry are high, options are limited, and growth is in known dimensions (volume) not new ones (ideas).
Decentralization gives us choices, supports competing ideas, and prevents one company from having enough power to gain all the power. Blockchain epitomizes this. As Manuel Araoz put it: “Blockchain is adding intentional inefficiency to programming” in order to prevent centralization.
Centralization is also rear-facing: this thing we know how to do, let’s do it efficiently. Decentralization is forward-facing: what do we not yet know how to do, but could?
Building one thing very well, simply, is like building a stepping stone through the water we’re drowning in. But stones don’t flow. Exploration will always require living in complexity.
Fight through it
Given that complexity surrounds us, as it always will when we’re doing anything new, can we learn more ways to cope with it?
To swim forward in this complexity, we need our pieces to be discoverable, to be trouble-shoot-able, and to be experiment-with-able.
Keith Horwood from stdlib is working on the democratization of APIs, those pieces of the internet that make something happen in the real world. They’re making APIs easy to document and standardize. Stdlib aims to supplement, not replace, developer workflows: the tools pile higher, and this is normal. Each tool represents a piece of detailed know-how that we can acquire without all the details.
Keith Horwood and Rishabh Singh both pointed out that humans/programmers go from seeing/reading, to speaking/writing, and then to executing/programming: we observe the world, we speak into the world, and then we change the world. (I would argue that we hop quickly to the last step, both as babies and as developers.) To learn how to change a complex system is to change it, see what happens, change it again.
We use type systems and property tests to reason about what can’t happen. Example tests and monitoring reassure us what happens in anticipated circumstances. We get to deduce what really does happen from observability and logs.
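A property test, for example, checks a claim about what can’t happen across many generated inputs, rather than one example. A minimal hand-rolled sketch in Python (real projects would reach for a library like Hypothesis; the `slugify` function and its properties are illustrative):

```python
import random

def slugify(title: str) -> str:
    """Hypothetical function under test: make a URL-safe slug."""
    return "-".join(title.lower().split())

# Properties: a slug never contains spaces, and slugifying
# a slug changes nothing (idempotence) — for any input.
random.seed(0)
alphabet = "abc XYZ  12"
for _ in range(1000):
    title = "".join(random.choice(alphabet)
                    for _ in range(random.randint(0, 20)))
    slug = slugify(title)
    assert " " not in slug
    assert slugify(slug) == slug
```

The example test tells us one anticipated case works; the property tells us a whole class of surprises can’t happen.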
If we accept that we are part of this complex system that includes our code, perhaps we can find buoyancy: we can sail instead of drown.
Complexity is not anarchy; when it gels, a complex system is a higher form of order. It is an order that operates not in linear deductions, but in circles and spirals. These circles operate both within the system and between the system and its environment.
Feathers and I both spoke about the symbiosis of a development team with its code and its tools. I call it a symmathesy. We learn from our code, from the clues it leaves us in data and logs; and our code learns from us, as we change it. Both these forms of communication happen only through other software: observability tools to see what is happening in the software, and delivery tools to change what will happen. Once we view the system at this level, we can think about growing our whole team: people, running software, tools for visibility and control.
Rishabh Singh, Miltos Allamanis, and Eran Yahav showed machine-learning-backed tooling: programs that offer useful suggestions to humans who are busy instructing the computer. The spiral goes higher.
Kent Beck said that nothing has higher leverage than making a programming system while using those same tools to build a system. His talk suggested that we: (1) make very small changes in our local system; (2) let those changes propagate outwards gradually; and (3) reflect on what happened, together. We learn from the system we are changing, and from each other.
McLuhan’s Law: We shape our tools, and then our tools shape us.
Our tools don’t shape our behavior violently and inflexibly, the way rules and punishment do. They shape us by changing the probability of each behavior. They change what is easy. This is part of my mission at Atomist: enable more customization of our own programming system, and more communication between our tools and the people in the symmathesy.
As developers, we are uniquely able to shape our own world and therefore ourselves by changing our tools. Meanwhile, other people are gaining some of this leverage, too.
I believe there will be a day when no professional says “I can’t code” — only “coding is not my specialty.” Everyone will write small programs, what Ben Scofield calls idiomatic software, what I call personal automation. These programs will remain entwined with their author/users; we won’t pretend that the source code has value outside of this human context. (Nathan Herald had a good story about this, about a team that tried to keep using a tool after the not-professional-developer who wrote it left.)
This is a problem in development teams, when turnover is high. “Everyone who touches it is reflected in the code.” (who said that? Rajeev maybe?) I don’t have a solution for this.
The path forward includes more collaboration between humans and computers, and between computers and each other, guided by humans. It includes building solid steps on familiar ground, swimming lessons for exploration, and teamwork in the whole sociotechnical system so that we can catch the winds of complexity and make them serve us.