Today I brought up a load of laundry. When doing chores, I practice keeping WIP (work in progress) to a minimum. Finish each thing, then do another one. This is good training for code.
For instance, on the way up the stairs with the basket, I saw a tumbleweed of cat hair. I didn’t pick it up. Right now I’m doing laundry.
I put the basket on the bed, pulled out a pair of pants, folded it, and then stopped.
Do I put the pants on the bed and fold the rest? Or do I put the pants away right now, then start the next piece of clothing?
It depends which one thing I’m doing: folding this load of laundry? or putting a piece of clothing in its place?
It’s like in software development. Do we slice work by functionality or by layer?
Feature slicing, where we do all components (front end, back end, database, etc) of one small change and release that before moving on: this is like folding the pants and putting them away before picking up another item.
Layered work, where we first make the whole database schema, then develop the back end, then create the front end: this is like folding all the clothes and then putting them all away.
Pants on the bed are WIP. When clothes are on the bed, the cat sits on them and I can’t put them away. Then when I want to nap, my bed still has clothes on it. WIP is liability, not value. I can’t access my bed, and no one has clean pants.
Yet, folding the laundry and then putting it away is more efficient. I might fold three pairs of pants, and then put them away all at once. Four towels, one trip to the bathroom closet. The process as a whole is faster and smoother (excluding the cats).
Is layered work more efficient in software? NO. It always takes far longer, with worse results. A lot of rework happens, and then we settle for something that isn’t super slick.
Why is laundry different?
Because I’ve folded and put away this same pair of pants many times before. On the same shelf. Household chores are rarely a discovery process.
If I hadn’t done this before, then I might fold all of Evelyn’s pants in fourths. That is standard practice, and my pants fit nicely in my cabinet that way. When I go to put Evelyn’s pants away, I’d find that her shelf is deeper. It’s just right for pants folded in thirds. Folded in fourths, they don’t all fit; I run out of height.
Now it’s time for rework: fold all her pants again, in thirds this time.
With feature slicing, I would fold one pair of pants in fourths, put it on the shelf, notice that it doesn’t fit well, refold it in thirds, and find that it fits perfectly. Every subsequent pair of her pants, I’d fold in thirds to begin with.
Completing a thin feature slice brings learning back to all future feature slices.
For repetitive chores, we can choose efficiency. For new work, such as all software development, aim for learning. That will make us fast.
Games aren’t much “fun” when rules, rather than relationships, dominate the activity, when there is no attention to “flow,” “fairness,” “respect” and “nice.”
Dr. Linda Hughes, “Beyond the Rules of the Game: Why Are Rooie Rules Nice?”
At a past job, we played Hearts every day at lunch. Out of a core group of six to eight, at least four were always up for a game. I worked there for about five years; we played over a thousand games together. We wore out dozens of decks of cards.
On top of the core rules of Hearts, we accrued a whole culture. There was the “Slone shooter” (the worst possible score, named for a former member of the group). We said “Panda Panda” (winning a trick with the highest card) and “Where’s the Jerboa?” (the two of clubs comes out to start the game) — both originated in our favorite deck, which had a different animal on each card. Long after that deck retired, the cards retained their animal names.
We had rules of etiquette. The player in the lead was the target, and everyone else worked together to damage their score. Everyone was expected to make a logically justifiable play, except when succumbing to other players chanting “Panda Panda!” to summon the Ace of Hearts.
I’ve never since had that much fun at cards. The rules of the game are only the beginning.
See, you can think about games, or you can observe gaming. There are the rules as written, and then there’s the experience of the players.
It’s the same with work: you can talk about work-as-imagined, or you can look at work-as-done.
Work-as-imagined is the official process. It is how you’re supposed to get your work done. Work-as-done is real life.
This is why tabletop board games are more fun than their electronic equivalents. Everyone sees how the rules work, because we execute them ourselves. House rules evolve. When there’s ambiguity, the group decides what’s fair. Lovely traditions grow, jokes get funnier with repetition, and the game becomes richer than its rules.
It’s the same at work. Post-its on the wall give shared physical context, and they’re more flexible than any ticket-tracking software. (Software usually limits reality to what was imagined by its developers.)
Each collaborating team eventually develops its own small culture. Vocabulary, jokes, etiquette. These exist on top of (sometimes in spite of) decreed processes of work.
These interactions make work “fun.” They care about “fairness,” “respect,” and “nice.” They also lead to “flow” — flow of work through the team. Communication is smooth, collaboration is joyful and productive. This is how we win.
Throwing food in the trash feels wasteful. Sometimes I feel compelled to eat the food instead. Then I feel lethargic, I gain weight, and everything I do is slower.
Sometimes waste isn’t a problem. The world is not better for that food passing through my digestive system. Sometimes it’s preventing waste that hurts us.
Inefficient code uses compute and costs money. Even when the response time and throughput don’t impact the user, that’s still wasteful, right?
Waste that speeds you up
Say we optimize all our code. But then it’s harder to read and to change. We slow ourselves down with our fear of waste.
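As a minimal, hypothetical sketch of that tradeoff (the function names and the tax example are invented for illustration):

```python
# Hypothetical sketch: the same calculation, written twice.

def order_total(prices, tax_rate):
    """The plain version: obvious, easy to verify, easy to change."""
    subtotal = sum(prices)
    return subtotal * (1 + tax_rate)

def order_total_tuned(prices, tax_rate):
    """A hand-tuned version: the loop is inlined and the multiplier
    hoisted, shaving a little interpreter overhead. It saves some
    compute, and it will cost us when the tax rules get weirder."""
    total = 0.0
    multiplier = 1 + tax_rate
    for price in prices:
        total += price
    return total * multiplier
```

The tuned version spends a little less compute; the plain version spends a little less of every future reader's attention. Which waste is the expensive one depends on how often the code runs versus how often it changes.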
Duplicate code takes time to write and test. Maybe many teams use some handy functions for formatting strings and manipulating lists. It’s a waste to maintain this more than once!
Say we put that in a shared utility library. Now every change I make impacts a bunch of other teams, and every change they make surprises me. To save a bit of typing, we’ve taken on coupling and mushed up the code ownership. Everything we do is slower.
Waste that slows you down
On the other hand, duplicated business logic means every future change takes more work and coordination. That is some waste that will bite you.
In Office Space, there’s that one person who takes the requirements from one floor to another. His salary is a waste. Much worse: he wants to preserve his meager responsibilities, so he’ll prevent the people who use the software from talking to the developers. Everything is slower forever.
When you spot a distasteful waste, ask: does this waste speed me up, or does this waste slow me down forever?
I can be wasteful, and it’s okay sometimes. I’ll waste compute for faster and safer code changes. I’ll spend time duplicating utility code to avoid tripping up other teams.
Some waste is just waste: time spent once, or money spent ongoing, and that’s it. Other waste makes us more effective; it saves us the cognitive load of thinking about one more thing. Some inefficiencies are worth keeping.
Other waste is sticky. It drags you into spending more time in the future. It pulls you into handoffs and queues and coupling.
Fight the sticky waste that spirals into more drag in the future. Let the other waste be. Throw the extra food in the trash; your future self will move lightly for it.
In Dungeons & Dragons (the tabletop game), there are universal laws. These are published in the Player’s Guide. They set parameters for the characters, like how powerful they should be relative to monsters. The Player’s Guide outlines weapons, combat procedures, and success rates. It describes spells, what they do and how long they last. It says what a reasonable amount of gold is to pay for a weapon, and how much learning (XP) comes from a fight.
The Player’s Guide does not tell you: everything else. What happens when a player attempts to save a drowning baby using a waffle?
The Player’s Guide represents the universal laws of D&D. The rules exist because they’ve been shown (over time, this is the 5th edition) to enable games that are fun.
Yet the prime directive of D&D is: what the DM says, goes. (The DM is the dungeon master, the person telling the story in collaboration with the players.) The DM can override the rules when necessary. More often, the DM makes up rules to suit the situation. The rulebooks do not cover everything the players might choose to do, and that’s both essential and by design.
In D&D, the DM sets the stage with a situation. Then the players respond, describing how the characters they control act in this situation. The DM determines what happens as a result of their actions.
In our game today, Tyler was DM. Tyler DMs by the “Rule of Cool”: “If it’s cool, let them do it. If it’s not cool, don’t make them do it.” One character, TDK Turtle, ran out of the inn with a waffle in hand. On his next turn, he tried to use the waffle to save a drowning baby.
Could that ever work? The DM decides. How unlikely is this? More unlikely than the number Turtle rolled. And yet Tyler came up with a consequence: Turtle threw the waffle in the river, our dog jumped in to eat the waffle, the baby grabbed onto the dog, and thus the dog saved the baby.
Every D&D campaign (series of games with the same DM and roughly the same players) has its own contextual rules. These build up over time. Our party has a dog because yesterday we rescued this pet from a Kuo-toa tribe that was trying to worship it as a Doge. (The Kuo-toa worship gods of random construction. Where by random I mean, DM’s choice. This DM chose Doge, because it advanced the plot.)
What works for a group of players, we stick with. What doesn’t, we leave behind. If it’s cool, do it. If not, don’t. Results drive future practices.
Our teams are like this. Humans work within universal laws of needing to eat and sleep and commute. Organizations impose constraints. Within these bounds, we come up with what works for us, what makes us laugh, and what helps us advance the plot of the system we are building.
Not every baby-saving-waffle-toss is the same. Not every party has this dog. Let teams build their own process, and don’t expect it to transfer. Do look for the wider rules that facilitate a productive game, and try those more broadly.
The other day there was a tweet about a Chief Happiness Officer.
Later someone remarked about their Agile Transformation Office.
It seems like we (as a culture) think that we can add qualities to systems the way we add ingredients to a recipe.
Systems, especially symmathesies, aren’t additive! Agility, happiness, these are spread throughout the interactions in the whole company. You can’t inject these things, people.
You can’t make a node responsible for a system property. Maybe people like having a human to “hold accountable”? Humans are great blame receptacles; there’s always something a human physically could have done differently.
Gregory Bateson talks about distinct levels of learning. From behavior to enlightenment, each level represents change in the previous level.
Zero Learning: this is behavior, responding in the way you always do. The bell rings, oh it’s lunchtime, eat. This does not surprise you, so you just do the usual thing.
Learning I: this is change in behavior. Different response to the same stimulus in a given context. Rote learning is here, because it is training for a response to a prompt. Forming or removing habits.
Learning II: this is change in Learning I; so it’s learning to learn. It can be a change in the way we approach situations, problems, relationships. Character traits are formed here: are you bold, hostile, curious?
For example — you know me, so when you see me you say “Hi, Jess” — zero learning. Then you meet Avdi, so next time you can greet him by name — Learning I. Lately at meetups Avdi is working on learning everyone’s names as introductions are happening, a new strategy for him: Learning II.
Bateson sees learning in every changing system, from cells to societies.
In code — a stateless service processes a request: zero learning. A stateful application retains information and recognizes that user next time: Learning I. We change the app so it retains different data: Learning II.
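Here’s one way that progression might look, as a hypothetical Python sketch (the names and the greeting behavior are mine, not from Bateson):

```python
# Zero learning: a stateless service. Same stimulus, same response,
# every time. Nothing about this function changes between requests.
def handle(request):
    return f"Hello, {request['user']}!"

# Learning I: a stateful application. It retains information, so its
# response to the same user changes over time.
class Greeter:
    def __init__(self):
        self.seen = set()

    def handle(self, request):
        user = request["user"]
        if user in self.seen:
            response = f"Welcome back, {user}!"
        else:
            response = f"Hello, {user}!"
        self.seen.add(user)
        return response

# Learning II: a change in Learning I. We change *what* the app
# retains -- a visit count instead of a yes/no memory.
class CountingGreeter:
    def __init__(self):
        self.visits = {}

    def handle(self, request):
        user = request["user"]
        self.visits[user] = self.visits.get(user, 0) + 1
        return f"Hello, {user}! Visit #{self.visits[user]}"
```

The move from `Greeter` to `CountingGreeter` is the interesting one: it isn’t a change within a running program, it’s a change to the program itself, made by the humans in the system.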
Learning III: This is change in Learning II, so it is change in how character is formed. Bateson says this is rare in humans. It can happen in psychotherapy or religious conversions. “Self” is no longer a constant, nor independent of the world.
Letting go of major assumptions about life, changing worldviews, this makes me feel alive. The important shift is going from one to two, and accepting that both are cromulent: my model is, there are many models. It is OK when a new model changes me; I’m not important (for whatever version of “I” is referenced).
Learning IV: would be a change in Learning III. Evolution achieves this. It doesn’t happen in individual humans, but in a culture it could. Maybe this is development of a new religion?
I wonder where team and organizational changes fall in this.
Zero learning: “A bug came in, so we fixed it.”
Learning I: “Now when bugs come in, we make sure there is a test to catch regressions.”
Learning II: “When a bug comes in, we ask: how could we change the way we work so that this kind of bug doesn’t happen?”
Learning III: “Bugs will always happen, so we continually improve our monitoring and observability in production, and we refine our delivery pipeline so rolling forward is smoother and easier all the time.”
Learning IV: a framework for agile transformation! hahahahahaha
At the end of this post is an audacious idea about the present and future of software development. In the middle are points about mental models: how important and how difficult they are. But first, a story of the origins of Opera.
Part 1: Examples
The Florentine Camerata were a group of people who met in Florence in the 16th century. They had a huge impact on history and on each other.
Camerata literally means a small orchestra or choir. This Camerata was a diverse group of people who gathered and worked on a common problem: they were bored with polyphony, the esteemed music of their day. (Sample: Palestrina) Polyphony is very pretty: it has around four melodies, each of equal importance. Each has a logic of its own, rather than melody and accompaniment. Polyphony is intellectually rewarding, but you need technical understanding to appreciate it fully. What feeling it conveys comes through auditory qualities.
The Camerata asked the revolutionary question: what if you could understand the words?
The Camerata included (all quotes in this style are from Katz)
musicians, artists, poets, astrologers, philosophers, scientists who met informally under the aegis of Bardi and Corsi.
People with diverse skills and perspectives worked together. They had sponsorship; Giovanni de’ Bardi was Count of Vernio, and loved to surround himself with interesting people.
Their aim was to reform the polyphonic music of the day and they believed that the best way to do so was to renovate the ancient Greek practice of setting words to music
They shared a common goal; they were unsatisfied with what Vincenzo Galilei (member of the Camerata, father of Galileo) called “that corrupt and incomprehensible contemporary music.”
And they had a common strategy. They didn’t really know what the Greeks did, but this lent legitimacy to their ideas. Like citing computer science papers from the 70s.
Their real high-level objective was horizonal, and more specific than moving away from polyphony:
Their principal aim was to find the optimum formula for wedding words and music.
Here, “optimum” is measured as “maximally effective in communicating… the specific meanings and emotions appropriate to the text.”
The Camerata talked a lot, and listened to each other talk.
“I learned more from their learned discussions than I learned from descant in over thirty years” — Caccini, renowned singer
(I had to look up “descant.” It means long-winded discourse. Like you’re experiencing now, sorry.)
But they weren’t all talk.
the Camerata constituted not only a forum for theoretical discussions, but also a workshop, a “laboratory” for the creation and performance of music.
They practiced together! And experimented. The beginning of the scientific method is a big part of the Renaissance, and it intertwines with art. We have a new way of thinking, ways of asking the universe questions. Vincenzo Galilei varied lengths and tensions of strings and found the ratios that make chords.
The Camerata didn’t always get along. There was rivalry between Bardi and Corsi, the two chief sponsors. Bardi preferred talking, Corsi wanted to play more music. These fed each other. There was disagreement over style between Caccini and Peri, the two musical stars. Peri wanted to focus on the words, with a bit of music; Caccini wanted the singing to stand out, while also keeping the words understandable. These tensions led to a balance.
They did code review!
presentations made… were commented on formally by “defenders” and “censors” who were nominated for the occasion.
I like that criticism came from people designated to the role, not the asshole who doesn’t like you and takes it out on your code. (Technically, this took place in the Alterati, another meetup with a lot of the same people.)
Over the years, this team changed history. They invented the music-drama, and a style of music that conveyed more meaning. (Sample: Monteverdi, one of the first composers to adopt the Camerata’s style. If you know Italian, you can probably understand the words.)
What about the individuals? Their outcomes were exceptional too! As composers of operas and authors of scientific treatises, a half-dozen of them published notable work, and those are fewer than half of the Camerata members who have Wikipedia articles. Really, what are the chances, if you’re alive in the sixteenth century, that you have a Wikipedia article today? These people did pretty well for themselves. They grew out of the Camerata.
Also in Science
This pattern of a group of people coming together to solve a problem is not unique to music — it’s the common case.
the Camerata resembles the kind of “invisible college” which is the key to creativity in science.
This “invisible college” is an association of people who share ideas. Who build a new reality together, then spread it to advance the wider culture.
We like to give the Nobel Prize to one or two people. But who worked in their lab? Who did they correspond with?
When John von Neumann went to Los Alamos for the Manhattan Project, so did two or three mathematicians that he went to high school with. Really, you grow up in Hungary, what are your chances of getting to Los Alamos? They built each other up.
These invisible colleges share:
tacit understandings concerning appropriate methods of research
(processes and values)
(this means they fight over who was first; more on this later)
and the shorthand communication which shared work implies.
We can move quickly together because we share common ground, compatible mental models. This is super fun, when I get to this point with my team.
Also in Art
People work together to develop their individual styles. Usually in Paris, it seems.
the salon, the coffeehouse, the café as breeding places of artistic creativity
In the nineteenth century, a group of artists broke from the mainstream and developed Impressionism.
coping with a common puzzle which they, separately and as a group, tried to solve
Van Gogh lived in the Montmartre district with the other artists and dealers and critics. When I visited his museum in Amsterdam, my favorite part was all the paintings by his friends and associates; they developed each other as painters.
One of these (my personal favorite) was Paul Gauguin, the one Van Gogh cut his ear off over. Gauguin went on to influence Picasso.
Picasso was at the center of many social circles in Paris over the decades. Writers, photographers, philosophers.
One painter who dipped in and out of his camerata was Aleksandra Ekster, who took the ideas of Cubism back to Kiev, where her studio was its own place of idea exchange.
One of her high school classmates and friends was Alexander Bogomazov, and a print of his lives on my bedroom wall.
This brings us to the modern day, where we can find examples of this phenomenon in software teams.
Also in Software
One camerata emerged in London around 2003–2006. This group gave us Continuous Integration, Continuous Delivery, DevOps. Many of these people worked at ThoughtWorks. They weren’t all on the same project, but they talked to each other, and they solved the problem of: Does deployment have to be so hard?
Jez and Dan and Chris Read produced The Deployment Production Line. Later, Dan went to invent BDD, Sam Newman became a prophet of microservices, and more. I keep meeting conference speakers who were part of this group.
Another example: the early Spring team, around the same time. They came together online, from all around the world, to solve the problem of: do we really have to use J2EE? and made Java development less painful for everyone. Today, Java development is (approximately) everywhere, and Spring is everywhere Java is.
That group of developers and businesspeople produced an inordinate number of founders and CEOs and partners.
Personally, I’ve been part of three teams that grew me as a developer and as a person. The tech meetup scene in St Louis is a source of growth too. We have several groups of about twenty, and a lot of overlap so we can talk to the same people multiple times a month about interesting things.
In all of these examples, we can observe a phenomenon:
Great Teams Make Great People.
You don’t hire star developers, put them together, and poof get a great team. It’s the other way around. When developers form a great team, the team makes us into great developers.
Part 2: Theory
Why? Why do great teams make great people?
I have a theoretical framework to explain this.
It starts with Gregory Bateson. His father, William Bateson, invented the term genetics and pulled Mendel’s work out of obscurity (with the help of a team of people: his wife and other women, since proper biologists at the time scoffed at this work). Gregory Bateson had a camerata: the Macy Cybernetics Conferences, a series of ten conferences over several years sponsored by the Macy Foundation to advance medical research. Gregory Bateson was a pioneer of cybernetics, now known as Systems Thinking.
Systems thinking is to the present era what the scientific method was for the Renaissance. It is a new way of analyzing problems and experimenting.
Another of his contributions: his daughter, Nora Bateson.
She takes systems thinking a step past where it was in her father’s day. In the 1950s-1970s, scientists put tons of numbers into computers and attempted to model entire ecosystems. Catalogue all the parts, and all the interrelationships between the parts, and we can understand the system and predict outcomes, right?
Not so. And it isn’t for lack of numbers, and details, either. Nora Bateson points out that there’s more to a living system than parts and interrelations: the parts aren’t constant. We grow and learn within the system, so that the parts don’t stay the same and the interrelationships don’t stay the same. She gave a word to this, something deeper than any mechanical or model-able system, a learning system composed of learning parts: symmathesy (sim-MATH-uh-see).
When we look at a living system (a team, say, or a camerata), we can see the parts. People, and our environment (like our desks and computers).
Systems thinking says, we are more than the components; we are also the interrelationships. Agile recognizes that these matter.
But there’s more! Each member of a team learns every day. Our environment learns and adapts too, because we change it.
This is a symmathesy. “Sym” = together, “mathesy” = learning.
We work and grow in a symmathesy. This growth results in great people.
This also explains why it is important to bring our whole selves to work: to build this living system, we need to be alive in it.
Teams Developing Software
There’s something extra special about development teams: software is the most malleable material we’ve ever used in engineering, by thousands of times. There’s nothing else like it, and this changes the meaning of “team.”
A team consists of everyone required for me to be successful, regardless of what the org chart says. Success is running useful software that impacts the world outside my team.
My team includes the people I work with, and also the software itself. And the servers it runs on, the databases it uses, and all the tools that we use to interact with the running software: the code, version control, automated tests, deployment scripts, logging.
The software is part of the symmathesy. We learn from it, from its error messages and logs and the data it collects. It learns from us, because we change it!
As long as we can change it, the software is alive in our symmathesy. We form a sociotechnical system, people and running code. We build this system from within. No wonder this work is challenging!
Programming wasn’t always this hard. Back when I started, I worked on one program, making the changes someone asked for. For data questions, I asked, “How do we use the database?”
These days, I ask, “which database shall we use?” (Or databases.) And how will we get from here to there? I spend more time on which work we should do, and how we will know it’s useful. My perspective includes our whole software system and team, and surrounding systems too. The scope of the system we can hold in our head is the portion of the system we can change.
In order to change our system, we need a mental model of it. Each developer has a mental model of the software we work on. Each developer’s mental model is necessarily incomplete and out of date. And, they’re each different from everyone else’s. (This is called Woods’ Law, if I recall correctly.)
We spend time reconciling our mental models enough to communicate with each other; this is a coherence penalty.
Accurate mental models are incredibly powerful. They let us decide how to change a system, how to know whether it worked, and where to watch for extra consequences. They’re also incredibly hard to form and validate. I’ll call out two reasons in particular for this difficulty.
Line of Representation
A developer can’t look down and see the software running. It is made of dynamic processes running in some digital space that we can’t see into. Until TRON is a thing, we can’t go in there and talk to it. This barrier is called the Line of Representation. (DD Woods et al, STELLA Report)
We can only look at screens and type on keyboards; we use software to observe and control other software. Each of these tools is part of our symmathesy. Grow them, and they can grow the accuracy of our mental models.
I like this diagram because it lets me explain my job at Atomist in one sentence: we make tools to help you move tasks from above the line to below the line. Continuous integration is a step in this direction; Atomist takes it farther.
Downhill Invention, Uphill Analysis
This is counterintuitive: it’s easier to build a system from scratch, constructing the mental model as you go along, than to form an understanding of how an already-built system works.
Valentino Braitenberg illustrates this principle in his book Vehicles. He calls it the Principle of Downhill Invention, Uphill Analysis.
Not Invented Here syndrome? laziness. Greenfield development? Of course people like it! it’s much easier than getting your head around legacy code.
When you do have a decent mental model of a system, sharing that with others is hard. You don’t know how much you know. If you’re the purple developer in this picture:
then you have a mental model of this system because you built it. The green and blue developers have been assigned to help, but they can’t change the system because they don’t understand it. Meanwhile, you may be changing the system fast enough that it’s impossible for them to get a grasp on it, no matter how smart they are. (I’ve been in their situation.) The solution is to work together on the system, and invest attention in transferring your mental model. Until then, Blue and Green get in your way.
Which developer in this picture is ten times more productive than the others?
Individual vs Group Interests
This brings us to the conflict between advancing your own reputation and contributing to the group.
The race to be first has to be reconciled in science with the need and the norm of sharing.
Following the Camerata’s work, Caccini argued with Peri and Rinuccini and Cavalieri (who was not a member, but corresponded with the Camerata) over who invented the stile rappresentativo, that amazing innovation of one melody plus some accompaniment.
When people are evaluated as individuals, there’s incentive to hold back from sharing. To hoard recognition.
Recognition and esteem accrue to those who have… made genuinely original contributions to the common stock of knowledge.
This is dangerous, because the useful ideas are the ones we contribute. The mental models we hoard make us look good; those we share make the whole team powerful.
Productivity is my personal output. Generativity is the difference between the team’s output with me and without me.
It’s possible to be quite productive personally and have negative generativity, holding the rest of the team back. It’s common to see a person whose own work is slow, but who helps and teaches everyone else. If we start recognizing and crediting generative actions, we build our symmathesy.
The output of everything we do is some side effect on the world and the next version of ourselves. Generativity is about growing the team’s output over time, and each member of the team grows at the same time.
It’s counterintuitive: to become great, put the team first. Aiming first for my own greatness, I limit myself.
Great developers aren’t born. They aren’t self-made. Great developers are symmathesized!
Part 3: Predictions
There’s one more piece of the Camerata’s story that surprised me, that gave me a new idea about today’s world.
The Camerata existed in the late Renaissance.
there was a sense of innovation in the air
When the world is ready for an idea, it doesn’t come to just one person.
the Camerata “were like midwives to a sixteenth century which was pregnant with the peculiar conjunction of social, ideological, and cultural ideas and practices from which the opera emerged.”
It was a time of increasing variation; regional and local distinctions emerged, in spite of the Church aiming for uniformity.
the very existence of such groups as social institutions was a product of the time
Right now, the software industry is letting teams try things. Developers are hard to hire, and the work we’re doing is hard, so we get to experiment, even though companies prefer uniformity.
We contribute to the larger world by learning and spreading new practices. Ideas from agile development have spread into other business areas and improved lives.
I’m glad to be alive at a time of cameratas.
Recognition of Art
Somewhere around the beginning of the Renaissance, there was a shift in the way painters, poets, and musicians worked. They used to be tradesmen: they had guilds and apprenticeships. Painters learned how to paint, and musicians how to play the instrument. This guaranteed competence; if you hired someone to paint a scene, they could paint something reasonable. It didn’t guarantee talent; guild membership was more about whose kid you were.
During the Renaissance these guilds lost power. Competence was harder to guarantee, but individual talent was recognized. Painters and poets specialized in subject matter. People noticed that there was some common factor to music, painting, poetry, drama — some indefinable essence that was more than competency with a brush.
Some spark. Some Art!
Before, as tradesmen, painters hung out with painters, sculptors with bronzeworkers, and so on. After Art became a thing, artists studied in academies, and they talked to intellectuals of many kinds.
There was a transformation of homogeneous artistic circles into “cultured” circles: poets, artists, amateurs, and laymen alike.
It reminds me of a move from typing Java in a cubicle into an agile team, with discussions including designers and testers and businesspeople. I’m no longer painting in acrylics. I’m painting something.
Software is not a craft.
We aren’t housepainters. Competence is not supreme in a shifting, complex system. Our users’ needs change, our technologies change, and every change we undertake is one we’ve never made before. We cannot possess all the skills we need; our secret weapon is learning, never perfection.
Software is not an art.
Neither are we creating a portrait or a poem, where there is such a thing as “done.” What we build affects people more directly than works of art. To do this it must keep running, it must work on their devices, it must keep changing. Software has a greater power to change people by changing the world, but these systems are more than one person can make, and they never complete.
Serious software development is the practice of symmathesy.
Software is not Art. Software is the next thing after art. It is something new, that we’ve never been able to do before.
And that implies that our current time is the next thing after the Renaissance. We are developing whole new ways of being human together. I am thrilled to be a part of this, and to be in the thick of it. To be more than a craftsperson, more even than a developer.
I am a Symmathecist, in the medium of software. (sim-MATH-uh-sist)
I want to practice symmathesy, to grow as part of a growing system, a system with people of many different skills. I want to care for and change software that changes the world outside my symmathesy, even as it changes me.
The only problem with this idea is that it has my name on it. Ideas are only valuable when they’re shared, spread, contributed to the common knowledge. Every person who spreads an idea is part of it. So join me, spread this.
Do you have a camerata? Can you help build one?
How do you practice symmathesy, consciously? How can we educate ourselves and each other in this practice? Are you a symmathecist, too?
There’s something magical about post-its falling off walls.
I think about this as my team spends time pruning tickets, closing ones that seemed important at the time, but now aren’t worth doing. These are ideas that need to fall through the cracks, to slide down the wall behind the file cabinet, to let us focus on the few that matter.
Instead we’re spending time and decision power on eliminating old ideas. Me, I’d rather dump the lot of them and resurrect the crucial few.
Then consider the connections between tasks: Christian says that before we create this command form, we need the http endpoint story completed. I could set up a relationship between these tickets, we could make a whole diagram and tie our current priorities into a line. Yet, interdependencies change and shift. We switch priorities quickly as we learn about customers.
The tenacious task tracker is stronger than sticky notes. At best, it’s out of date. At worst, it leads to ossification, to attachment to old plans. If it’s too hard to change this tracking…
Wait. I think it’s more than this. “Easy to change” isn’t enough. “Trivial to change” isn’t enough. The post-its fall on the floor, we pick them up, we put them back, “Oh actually this belongs over here. This one doesn’t even matter anymore. Ah, that was done weeks ago.” The wall of stickies requires active maintenance even to stay the same. It naturally degrades. This leaves room for evolution. Every post-it on the floor is an opportunity for a new start.
So we don’t set up dependencies between tasks. We leave those in our heads, or individual notebooks. I put them on seamaps sometimes, which I use to summarize the daily standup; these only last a day.
Embrace degradation and dump those piles of precious, stale ideas. Be cautious about weaving tight plans and documenting them: persistence has a hidden price.
TL;DR: Study all the interactions between people, code, and our mental models; gather data so we can make real improvements instead of guessing in our retros.
Software is hard to change. Even when it’s clean, well-factored, and everyone working on it is sharp and nice. Why?
Consider a software team and its software. It’s a sociotechnical system; people create the code and the code affects the people.
When we want to optimize this system to produce more useful code, what do we do? How do we make the developer⟺code interactions more productive?
As a culture, we started by focusing on the individual people: hire those 10x developers! As the software gets more complex, that doesn’t go far. An individual can only do so much.
The Agile software movement shifted the focus to the interactions between the people. This lets us make improvements at the team level.
The technical debt metaphor let us focus on how the code influences the developers. Some code is easier to change than other code.
We shape our tools, and thereafter our tools shape us. – McLuhan
Test-driven development focuses on a specific aspect of the developer⟺code interaction: tightening the feedback loop on “will this work as I expected?” Continuous Integration has a similar effect: tightening the feedback loop on “will this break anything else?”
All of these focuses are useful in optimizing this system. How can we do more?
There’s a component in this system that we haven’t explicitly called out yet. It lives in the heads of the coders. It’s the developer’s mental model of the software.
Each developer’s mental model of the software matches the code (or doesn’t)
Every controller must contain a model of the process being controlled. – Nancy Leveson, Engineering a Safer World
When you write a program, you have a model of it in your head. When you come to modify someone else’s code, you have to build a mental model of it first, through reading and experimenting. When someone else changes your code, your mental model loses accuracy. Depending on the completeness and accuracy of your mental model of the target software, adding features can be fun and productive or full of pain.
Janelle Klein models the developer⟺code interaction in her book Idea Flow. We want to make a change, so we look around for a bit, then try something. If that works, we move forward (the Confirm loop). If it doesn’t work, we shift into troubleshooting mode: we investigate, then experiment until we figure it out (the Conflict loop). We update our mental model. When we’re familiar with the software, we make forward progress (Confirm). When we’re not: pain! From the book:
To make a change: learn, modify, validate. If validation succeeds, Confirm! Back to learn. If validation fails, Conflict! On to troubleshoot, rework, and validate again.
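That loop can be sketched as a tiny state machine. This is a toy model of my own, not Idea Flow’s actual tooling; only the mode names come from the book’s vocabulary:

```python
from enum import Enum
from typing import Optional

class Mode(Enum):
    LEARN = "learn"
    MODIFY = "modify"
    VALIDATE = "validate"
    TROUBLESHOOT = "troubleshoot"
    REWORK = "rework"

def next_mode(mode: Mode, validation_passed: Optional[bool] = None) -> Mode:
    """Advance one step through the Confirm/Conflict loop."""
    if mode is Mode.LEARN:
        return Mode.MODIFY
    if mode in (Mode.MODIFY, Mode.REWORK):
        return Mode.VALIDATE
    if mode is Mode.VALIDATE:
        # Confirm: back to learning. Conflict: into troubleshooting.
        return Mode.LEARN if validation_passed else Mode.TROUBLESHOOT
    return Mode.REWORK  # TROUBLESHOOT always leads to rework
```

A Confirm pass cycles learn → modify → validate → learn; a Conflict pass detours through troubleshoot → rework before validating again.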
That 10x developer is the one with a strong mental model of this software. Probably they wrote it, and no one else understands it. Agile (especially pairing) lets us transfer our mental model to others on the team. Readable code makes it easier for others to construct an accurate mental model. TDD makes that Confirm loop happen many more times, so that Conflict loops are smaller.
We can optimize this developer⟺code interaction by studying it further. Which parts of the code cause a lot of conflict pain? Focus refactoring there. Who has a strong mental model of each part of the system, and who needs that model? Pair them up.
Idea Flow includes tools for measuring friction, for collecting data on the developer⟺code interaction so we can address these problems directly. Recording the switch from Confirm to Conflict tells us how much of our work is forward progress and how much is troubleshooting, so we can recognize when we’re grinding.
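As a rough sketch of what such a measurement could look like (the data shape and function name here are my invention, not Idea Flow’s actual API):

```python
def friction(switches):
    """Share of work time spent in Conflict (troubleshooting) mode.

    switches: list of (mode, minutes) pairs, where mode is
    "confirm" (forward progress) or "conflict" (troubleshooting).
    """
    total = sum(minutes for _, minutes in switches)
    conflict = sum(minutes for mode, minutes in switches if mode == "conflict")
    return conflict / total if total else 0.0

# One hypothetical developer-day of recorded mode switches:
day = [("confirm", 90), ("conflict", 45), ("confirm", 30), ("conflict", 75)]
print(friction(day))  # 0.5: half the day went to troubleshooting
```

Tracked per file or per feature, a number like this points refactoring effort at the code that actually causes Conflict pain.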
Even better, we have data on the causes of the grinding.
We can reflect and choose actions based on what’s causing the most pain, rather than on gut feel of what we remember on the day of the retrospective.
Picturing those internal models as part of the sociotechnical system changes my actions in subtle ways. For instance I now:
observe which of my coworkers are familiar with each part of the system.
refactor and then throw it away, because that improves my mental model without damaging anyone else’s.
avoid writing flexible code if I don’t need it yet, because alternatives inflate the mental model other people have to build.
spend more time reviewing PRs, to keep my model up to date.
We can’t do this by focusing on people or code alone. We have to optimize for learning. Well-factored code can help, but it isn’t everything. Positive personal interactions help, but they aren’t everything. Tests are only one way to minimize conflict. No individual skill or familiarity can overcome these challenges.
If we capture and optimize our conflict loops, consciously and with data, we can optimize the entire sociotechnical system. We can make collaborative decisions that let us change our software faster and faster.
Should we aim for correctness in software? It’s a raging debate in our industry today. I think the answer depends strongly on the kind of problem a developer is trying to solve: is the problem contracting or expanding? A contracting problem is well-defined, or has the potential to be well-defined with enough rigorous thought. An expanding problem cannot be; as soon as you’ve defined “correct,” you’re wrong, because the context has changed.
A contracting problem: the more you think about it, the clearer it becomes. This includes anything you can define with math or a stable specification: image conversion, file compression. There are others, ones we’ve solved so many times or used in so many ways that they’ve stabilized: web servers, grep. The problem space is inherently specified, or it has become well-defined over time. Correctness is possible here, because there is such a thing as “correct.” These programs are useful to many people, so correctness is worth the effort. Using such a program or library is freeing; it scales up the capacity of the industry as a whole, as this becomes something we don’t have to think about.
An expanding problem: the more you think about it, the more ways it can go. This includes pretty much all business software; we want our businesses to grow, so we want our software to do more and different things with time. It includes almost all software that interacts directly with humans. People change, culture changes, expectations get higher. I want my software to drive change in people, so it will need to change with us. There is no complete specification here. No amount of thought and care can get this software perfect. It needs to be good enough, it needs to be safe enough, and it needs to be amenable to change. It needs to give us the chance to learn what the next definition of “good” might be.
Safety
I propose we change our aim for correctness to an aim for safety. Safety means nothing terrible happens (for your business’s definition of terrible). Correctness is an extreme form of safety. Performance is a component of safety. Security is part of safety.
Tests don’t provide correctness, yet they do provide safety. They tell us that certain things aren’t broken yet. Process boundaries provide safety. Error handling, monitoring, everything we do to compensate for the inherent uncertainty of running software in production, all of these help enforce safety constraints.
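As a toy illustration (the function and names are hypothetical), a safety-oriented test pins down only the constraints we rely on, not the exact output:

```python
def summarize(text: str, limit: int = 80) -> str:
    """Hypothetical helper: shorten text for display."""
    return text if len(text) <= limit else text[: limit - 1] + "…"

def test_summary_is_safe():
    # We don't assert the exact output (correctness); we assert
    # the safety constraints: never too long, never a crash.
    for text in ["", "short", "x" * 500]:
        out = summarize(text)
        assert isinstance(out, str)
        assert len(out) <= 80

test_summary_is_safe()
```

The summary’s exact wording is free to change; the test only guards the property the rest of the system depends on.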
In an expanding system, business matters (like profit) determine what is “good enough.” Risk tolerance determines what is “safe enough.” Optimizing for the future means optimizing our ability to change.
In a contracting problem space, we can progress through degrees of safety toward correctness and optimal performance. Break out the formal specification; write great documentation.
Any piece of our expanding system that we can break out into a contracting problem space is a win. We can solve it with rigor, even make it eligible for reuse.
For the rest of it: embrace uncertainty, keep the important parts working, and make the code readable so we can change it. In an expanding system, tests are limited and limiting, and documentation becomes more wrong every day; the code is the specification. Aim for change.