Complication for the win

“How can we make our programs simpler?”
I don’t want to make them simpler. I want to make them complicated instead of complex.

Simplicity is not the goal.

Simpler implies they do less, they hold less rich domain knowledge. I want them to do a lot, to take work and thinking away from the user. Yet I still want them to be understandable by my team.

Like, say I’ve been to the grocery store and I’m stocking up on a lot of stuff. There’s a lot of stuff to bring in from the car.

How can I make this task simpler? Easy. Buy less stuff.

But I don’t want to buy less stuff. I’m gonna carry all this in. It feels like the simplest thing is to make one trip, load it all up in my arms. Head for the house and… shoot. Maybe I can kick the front door until someone opens it.

What seems “simple” might be the most complex.

Carrying all the groceries at once is a complex task. Every item I’m already carrying affects how I pick up the next one. Every extra pound feels significant, since I’m already encumbered. Every step is careful. And then I get to the kitchen and try to put it all down and, crack, the laundry detergent just came down hard on a carton of eggs.

Now that I’m older and wiser and have studied this phenomenon in software, I don’t try this anymore. I carry a few items in, then carry a few items in, etc etc until all the groceries are in the kitchen. This is more complicated because there are way more trips. It is less complex because each independent trip is simple.

Oodles of simple tasks are collectively complicated, but not complex.

I can carry two bags and a laundry detergent. One bag and the giant package of TP, easy. I can open the door for myself. I can put them safely on the counter.

I can take the detergent straight to the basement; every task doesn’t have to be the same as long as each one is simple.

As a bonus, my partner might start putting groceries away while I’m still carrying. The kids might join in and grab a few bags. When I’m carrying the whole mass they couldn’t grab one from me, because that would destabilize it all. But they can grab some from the car. Many simple tasks can be shared; one complex one cannot.

I’ll take complication over complexity any day.

In programs, it is tempting to see something as simpler if it has fewer parts. If one Order class has all the behaviors we need for an Order, that sounds simple. Then all those behaviors get intertwined, and the Order class needs to change for many reasons, and yet it’s scary to change because there are so many things we might break!

If we have a frobozz.purchasing.Order class that has behavior for building the order, and a frobozz.fulfillment.Order class with behavior around shipping the order, and a frobozz.billing.Order class with behavior around getting paid — gah! Three classes for one thing? Too complicated!

Yes, complicated. But far less complex.
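To make it concrete, here’s a minimal sketch of the split. Only the frobozz context names come from the example above; every field and method is invented for illustration:

```typescript
// Three small Orders, one per context. Each class changes for exactly one reason.
// (Hypothetical fields and methods; only the package names come from the text.)
namespace purchasing {
  export class Order {
    readonly lines: { sku: string; qty: number }[] = [];
    addLine(sku: string, qty: number) {
      this.lines.push({ sku, qty }); // building the order
    }
  }
}

namespace fulfillment {
  export class Order {
    shippedAt?: Date;
    constructor(readonly orderId: string) {}
    ship() {
      this.shippedAt = new Date(); // shipping the order
    }
  }
}

namespace billing {
  export class Order {
    constructor(readonly orderId: string, readonly totalCents: number) {}
    invoice() {
      return { orderId: this.orderId, amountDue: this.totalCents }; // getting paid
    }
  }
}
```

Three tiny classes, each boring on its own. More parts, but fewer reasons for any one of them to surprise us.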

Complication grows linearly, complexity… ugh.

twelve piles of groceries: twelve trips might be twelve times as complicated

Complicatedness can be measured (sort of) by a count of parts. Complicated systems can be broken down, and broken down again, until each part can be understood alone, although that may take a lot of work because there are so many parts.

TP balanced on cases of beer held by arms barnacled in plastic bags — each part is leaning on another, interdependent.

Complexity is different. It exists in the interdependence between parts, which stops you from safely changing any of them independently. A complex system can only be broken down so far; you have to consider much of it in one lump. The most complex part determines the complexity of the system. It’s a maximum, not a sum.

Don’t be afraid of programs with a lot of parts. As long as those parts are independent — they communicate only with data, for instance, not interdependent database calls — then it doesn’t matter how many there are. Complication is not our enemy.
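Here’s one hedged sketch of what “communicate only with data” can look like, continuing the invented Order example: purchasing hands fulfillment a plain record, and neither side reaches into the other’s internals or tables.

```typescript
// Purchasing's output is plain data; fulfillment consumes only that data.
// (Invented shapes and functions, for illustration.)
type OrderPlaced = { orderId: string; lines: { sku: string; qty: number }[] };

let nextId = 1;
function placeOrder(lines: { sku: string; qty: number }[]): OrderPlaced {
  return { orderId: `order-${nextId++}`, lines };
}

function planShipment(event: OrderPlaced): string[] {
  // Fulfillment knows only what it was handed, not how purchasing produced it.
  return event.lines.map((l) => `pick ${l.qty} x ${l.sku}`);
}

const placed = placeOrder([{ sku: "eggs", qty: 1 }, { sku: "detergent", qty: 1 }]);
console.log(planShipment(placed)); // ["pick 1 x eggs", "pick 1 x detergent"]
```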

Aiming for “simplicity” by reducing complication leads us further into the morass.

Complexity bites us. Embrace complication, and keep your eggs safe.

From complicated to complex

Avdi Grimm describes how the book Vehicles illustrates how simple parts can compose a very complex system.

Another example is Conway’s Game of Life: a nice organized grid, uniform time steps, and four tiny rules. These simple pieces combine to make all kinds of wild patterns, and even look alive!
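Those rules are small enough to sketch in full. A minimal step function in TypeScript, assuming a boolean grid that wraps at the edges:

```typescript
// One generation of Conway's Game of Life.
// A live cell survives with 2 or 3 live neighbors; a dead cell is born with exactly 3.
type Grid = boolean[][];

function step(grid: Grid): Grid {
  const rows = grid.length;
  const cols = grid[0].length;
  return grid.map((row, r) =>
    row.map((alive, c) => {
      let neighbors = 0;
      for (let dr = -1; dr <= 1; dr++) {
        for (let dc = -1; dc <= 1; dc++) {
          if (dr === 0 && dc === 0) continue;
          const rr = (r + dr + rows) % rows; // wrap around the edges
          const cc = (c + dc + cols) % cols;
          if (grid[rr][cc]) neighbors++;
        }
      }
      return alive ? neighbors === 2 || neighbors === 3 : neighbors === 3;
    })
  );
}
```

Seed it randomly and the wild patterns show up right away; nothing in those few lines tells you a glider is coming.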

These systems are complex: hard to predict; forces interact to produce surprising behavior.

They are not complicated: they do not have a bunch of different parts, nor a lot of little details.

In software, a monolith is complicated. It has many interconnected parts, each full of details. Conditionals abound, forming intricate rules that may surprise us when we read them.

But we can read them.

As Avdi points out, microservices are different. Break that monolith into tiny pieces, and each piece can be simple. Put those pieces together in a distributed system, and … crap. Distributed systems are inherently complex. Interactions can’t be predicted. Surprising behaviors occur.

Complicated programs can be understood; it’s just a lot to hold in your head. Build a good enough model of the relevant pieces, and you can reason out the consequences of change.

Complex systems can’t be fully understood. We can notice patterns, but we can’t guarantee what will happen when we change them.

Be careful in your aim for simplicity: don’t trade a little complication for a lot of complexity.

Moving beyond simplicity

We prefer simple models to complicated ones. Circles to ellipses, a single ancestor to a soup. But is that really because the simple explanation is more likely?

The value of keeping assumptions to a minimum is cognitive.

Philip Ball, The Tyranny of Simple Explanations

Simpler theories are more useful because we can think about them more easily. We can pass them from person to person.

But now we have software, which can encapsulate complicated theories almost as easily as simple ones. We can package them up into products and make use of them, without taking up brainspace with all those complications.

Maybe you still find simple models most likely in physics. Simple models are not accurate in biology – much less in human interactions.

In biology, everything going on had some reason it got that way (might be randomness) and many reasons that it stays that way. Most adaptations are exaptive — used for some purpose other than their original purpose. Most have many purposes, the proteins participate in multiple pathways, the fur is warm and also distinguishing, the brain anticipates danger and builds social structure.

In our own lives, every action has multiple stories. I’m playing this game to relax, I’m playing this game to avoid writing, I’m playing this game to learn from it.

We choose based on probability spaces from many models, stacked on top of each other. I buy this dress because I feel powerful in it, because I deserve it, because my date will like it, because I am sloppy with money. I value this relationship because I don’t know how to be alone, because they’re a kind person, because we belong together, because he reminds me of the father figure from my childhood.

This is right, this is healthy, to have all these stories. In real life, the simple explanation is never complete. Taking action for exactly one reason is unnatural. Like gaming a metric, it fails to account for the whole rest of the system.

Everything we do in a complex system has many effects, so it is right that we have many overlapping reasons. This doesn’t make life easy, only real.

When I started in software I loved that it was simple and predictable. Now I love that it is complex and messy, because that’s where I learn the important stuff.

Soft, or hard like mud

Soft skills are hard. “They take work to build and work to apply.”

@ruthmalan

The word “hard” describes sciences like physics and chemistry. It is confusing that “hard” can mean difficult, because these sciences aren’t more difficult than the “soft” ones like sociology and anthropology. They’re differently difficult.

The “hard” sciences are hard because they’re solid. We can stand on them. Physical laws are universal laws. They demand rigor, and rigor means proving that assertions apply in all cases, both through deduction and by checking against evidence.

The “soft” sciences are differently hard. They study human systems, far too complex to make universal causal predictions. Conclusions about one culture or group are valid in some other groups and invalid in others. Rigor in complexity means studying which cases your assertions apply to. It means observing, discerning, and wallowing in context.

I describe my career progression from solving puzzles to growing products like this:

Correctness, puzzles, and the “hard” sciences are hard like rocks. Rock climbing is very technical. It takes a lot of skill and strength and hard work to climb. At the top of the cliff, you know you’ve achieved something specific.

Change, people, and the “soft” sciences are hard like mud. Like wading through goopy mud. Techniques can help, but each depends on the kind of mud. Strength helps, but pushing too hard can get you more stuck. It always helps to be okay with getting messy. When you finally reach that piece of relatively solid ground, the view is still a swamp.

Why would you want to wade through mud?

why does a fish want to swim through water?

People, interrelationships, change — this is our world. Sometimes we can carve out puzzles we can solve for real. The few universal laws we can find are priceless. But the rest of it — life is in the mud, in the deep context. The skills to navigate it are not easy. They are not satisfying in the same way, either. But they are how we find meaning, how we participate in a system bigger than our own self.

I am okay with getting messy.

Reasons, heuristics, and revealed intentions

In a complex system, we can’t predict consequences. As Cynefin expresses it: in complexity, we determine causality in retrospect, not ahead of time.

So how do we make decisions? We have to guess, try things, see what happens, react to that. We use reasons and heuristics.

There are some consequences we do predict, and these are reasons. I wake up my children because I predict that getting up before noon will help them sleep at night (not really, but some other parent might). A store puts groceries in bags because it helps me carry them, so I will buy more. I add a feature to my software because I expect users to like it.

Will these predictions come true? We can’t know until we try.

Are these valid reasons? Yes, because there’s an explanation and a theory behind them. They’re good reasons to try the thing.

Yet, we can’t predict all the other consequences. Will getting up early make my children hate me? Will the plastic bags clog the ocean and destroy human life as we know it? Will the feature complicate the UI until the software becomes unappealing? or will the new feature interact with future features in a way that makes everything harder? (yes. the answer to this last one is yes.)

That’s where heuristics come in. Children should get up at a reasonable hour. Anything we make, we should complete the system with a way to reuse it and eventually destroy it. We should not have more than five buttons on a page. We should keep code decoupled. We should put features behind feature flags.
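That last heuristic is concrete enough to sketch in code. A minimal, hypothetical feature-flag check (the flag name and flows are invented; a real system would load flags from config or a service):

```typescript
// Ship the new thing dark, turn it on deliberately, and keep an easy way back.
const flags: Record<string, boolean> = { newCheckout: false }; // hypothetical flag store

function isEnabled(flag: string): boolean {
  return flags[flag] ?? false; // default off: surprises stay contained
}

function checkout(cart: string[]): string {
  if (isEnabled("newCheckout")) {
    return `new checkout flow for ${cart.length} items`; // the risky new behavior
  }
  return `classic checkout flow for ${cart.length} items`; // the behavior we already trust
}

console.log(checkout(["eggs", "detergent"]));
```

If the feature turns out to interact badly with everything else, flipping one boolean backs it out while we construct that causality in retrospect.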

Heuristics include morals, ethics, principles, values. All of these account for consequences that are hard to predict, or hard to measure the importance of. They guide our actions toward a larger good, a longer-term good, bigger than our immediate situation. Bateson talked about religions as guidance toward caring for the higher-level system. Values preserve safety, quality, and other system properties that degrade through purposive work.

We put the reasons and heuristics together and come up with a decision, using our magic dealing-with-ambiguity human power. Was it good?

Sure, I’m not gonna judge your decision, as long as you have a story for it. What matters is what happens next.

We take action, and then we look at the consequences. Did our predictions come true? More important, what happened that we did not expect? Now is the time to construct that causality in retrospect.

And then take more action! Tweak, revert, compensate, whatever it takes until we get results we do want.

Action is a process. Our intentions are revealed not by one isolated thing we do or tweet, but by how we respond to the results.

Growth Loops: circular causality is real

A few hundred years ago, we decided that circular causality was a logical fallacy. All causes are linear. If something moves, then something else pushed it. If you want to see why, you have to look smaller; all forces derive from microscopic forces.

Yet as a human, I see circular causality everywhere.

  • autocatalytic loops in biology
  • self-reinforcing systems (few women enter or stay in tech because there are few women in tech)
  • romantic infatuation (he likes me, that feels good, I like him, that feels good to him, he likes me 🔄)
  • self-fulfilling prophecies (I expect to fail, so I probably will 🔄)
  • self-regulating systems (such as language)
  • cohesive groups (we help out family, then we feel more bonded, then we help out 🔄)
  • downward spirals (he’s on drugs, so we don’t talk to him, so he is isolated, so he does drugs 🔄)
  • virtuous cycles (it’s easier to make money when you have money 🔄)

These are not illusory. Circular causalities are strong forces in our lives. The systems we live in are self-perpetuating — otherwise they wouldn’t stay around. The circle may be initiated by some historical accident (like biological advantage in the first days of agriculture), but the reason it stays true is circular.

Family therapy recognizes this. (In the “soft” sciences you’re allowed to talk about what’s right in front of you but can’t be derived from atomic forces 😛.) When I have a bad day and snip at my spouse, he withdraws a bit; this hurts our connection, so I’m more likely to snip at him 🔄. When I wonder if my kid is hiding something, I snoop around; she sees this as an invasion of privacy, she starts hiding things 🔄. When my partner tells me something that bothers him, and I say “thank you for expressing that” and then change it, next time he’ll tell me earlier, and it’ll be easier to hear and fix 🔄.

Note that nobody is forcing anyone to do anything. This kind of causality acts on propensities, on likelihoods. We still have free will, but it takes more work to overcome tendencies that are built into us by these interactions.

Or at work: When I think a coworker doesn’t respect me, I make sarcastic remarks; each time he respects me less 🔄. As a team, when we learn together and accomplish something, we feel a sense of belonging; this leads us to feel safe, which makes it easier to learn together and accomplish more 🔄.

Some of these cycles are merely self-sustaining. Many spiral us further and further in a particular direction. These are growth loops, which Kent Beck describes: “the more it grows, the easier it is to grow more.” There is power for us in setting up, nourishing, or thwarting our own cycles. Growth loops are more powerful than individual, discrete incentives. The most supportive families, the most productive teams have them.
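A toy simulation makes the difference visible. The numbers and the “score” here are made up; only the shapes of the two curves matter:

```typescript
// Discrete incentives add a fixed amount each step; a growth loop compounds.
let incentives = 0; // e.g. one-off pushes, each worth the same
let loop = 1;       // e.g. a community where each member attracts more members

for (let step = 1; step <= 10; step++) {
  incentives += 1;  // linear: same effort, same gain, every time
  loop *= 1.3;      // circular: the more it grows, the easier it is to grow more
  console.log(step, incentives, loop.toFixed(2));
}
// By step 10 the loop has overtaken the steady pushes, and the gap keeps widening.
```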

Because a growth loop moves a system in a particular direction, it’s more of a spiral than a circle. I want to draw a z-axis through it. Like snipping at my spouse:

a spiral toward disconnection, of me getting snippy and him getting sarcastic

As my spouse and I get snippy with each other, we spiral toward disconnection. When we talk early and welcome gentle feedback, we spiral toward connection. And when my team bonds and accomplishes things, we spiral toward belonging — with a side effect of accomplishments.

a spiral toward connection, of a team learning together and accomplishing stuff and bonding

I like to use this to explain why JavaScript is the most important programming language. It might be an inferior language by objective standards, but “objective standards” are like linear causality: limited. Reality is richer.

JavaScript started as the first language with a runtime built into the browser. This made it useful enough for people to learn it, and that’s key: a language is only useful if the know-how of using it is alive inside people. Then all these people create tools, resources, and businesses that make JavaScript more useful 🔄.

a spiral toward usefulness, from the first runtime to people learning and language improvements and tooling

Call this “the network effect” if you want to; network effects are one form of growth loop.

In our situation, JavaScript is the most useful language and it’s only getting more useful. It may not have the best syntax, or the best guarantees. It does have the runtime available to the most humans, the broadest community and a huge ecosystem. Thanks to all the web-based learning resources, it is also the most accessible language to learn.

When I start looking for these growth loops, I see them everywhere, even inside my head. I’m low on sleep, so I’m tired, so I don’t exercise, so I sleep poorly, so I’m tired 🔄. I’m not sleeping, so I get mad at myself for being awake, which is agitating and makes it harder to sleep 🔄. Once I recognize these, I can intervene. I notice that what I feel like doing and what will make me happy are not the same things. I step back from my emotions and feel compassion for myself and then let them go instead of feeding them. I stop negative loops and nourish positive ones.

Circular causality is real, and it is powerful. In biology, in our lives, and in our teams, it feels stronger than linear causality; it can override individual competition and incentives. It forms ecosystems and symmathesies and cultures. Linear causality is valuable to understand, because its consequences are universal, while every loop is contextual. But can we stop talking like the only legitimate explanations start from atoms? We influence each other at every level of systems. Circular causality is familiar to us. I want to get better at seeing it and harvesting it in my work.

When knowledge is the limiting factor

In Why Information Grows (my review), physicist César Hidalgo explains that the difference between the ability to produce tee shirts vs rockets is a matter of accumulating knowledge and know-how inside people, and weaving those people into networks. Because no one person can know how to build a rocket from rocks. No one person understands how a laptop computer works, at all levels. There’s a limit to the amount of knowledge a single person can cram into their head in a lifetime; Hidalgo calls this limit one personbyte.

Our software systems are more complicated than one person can hold. You can deal with this by making each system smaller (hello microservices), but then someone has to understand how they fit together. And someone cannot. It takes teams of people.

You can deal with the personbyte limitation by forming teams that work together closely. The more complex the product we’re building, the more knowledge we need, so the less duplicated knowledge we want on the team. Team size is limited too, because of coordination overhead.

If you think about it this way, you might recognize that in many software teams, our limitation is not how much we can do, but how much we can know. To change a sufficiently complex system, we need more knowledge than one or two people can hold. Otherwise we are very slow, or we mess it up and the unintended effects of our change create a ton more work.

If the limitation of my team is how much we can know, then mob programming makes a ton of sense. In mob programming, the whole team applies their knowledge to the problem, and people take turns typing. This way we’re going to do the single most important thing in the most efficient and least dangerous way our collective knowledge and know-how can provide. In the meantime we spread knowledge among the team, making everyone more effective.

Mob Programming applies all our knowledge and know-how to one task. (Maybe we never tried it before because screens this big used to be expensive?)

If your software system has reached the point where changing it is one step forward, two steps back — it might be a good time to try working as one unit, mob-programming style.

Which came first, the chicken or the egg?

a telling question.

This puzzler says something about our culture. It says we think in terms of causes that happen before their effects. That we don’t believe in reflexive causality.

In life, everything interesting is a circle. The mitochondria break down sugar, the proteins use the energy, they keep up the cell wall, the cell wall protects the mitochondria. The thriving of each one feeds the thriving of the others. I feel cranky, so I work with less patience, so I get way more errors, so I get more cranky. There aren’t many women in programming, women don’t picture themselves in programming, there are fewer women in programming. Poverty eats time and brainpower, leaving less for study and thinking, and stereotypes reinforce themselves. Suspect an employee is underperforming, and you can drive them to underperform. Or a nation’s economy is tight, citizens get paranoid and vote to increase barriers, their economy shrinks. A family helps each other, becomes a tighter and more successful unit, and helps each other even more.

Circles like these form systems. Enduring systems like a family, or short-lived whirlpools like me being cranky. Biologically, these emerge as organisms and ecosystems. As people, organizations and religions and markets and successful companies emerge from circles of mutual interdependence.

Most causality is circular. Example: when I go to conferences, I learn cool stuff by talking to people, which means I think of interesting things to research and talk about (extra help from conversations with people I met), which means I get invited to speak (extra help for learning cool stuff at the speakers’ dinner), which means I go to conferences; the circle repeats. Emergent property: Jessitron, international speaker.

Love has a circular causality. I find you interesting, that feels good, which makes you like me, which makes me more interesting, which feels good to me, so I like who I am with you, so I tell you sweet things. This cycles into a relationship.

There is no single cause of this phenomenon. That spark of compatibility when we met: in this case we can point to a beginning, to a “before.” But our whole life together wasn’t in it. We built that moment by moment, in the feedback loop of caring for each other.

“Which came first, the chicken or the egg?” is a fallacy: the fallacy of root cause. Of linear causality.

Linear causality happens, but in real life, it is the exception. An edge case. Everything interesting is a circle.

Reading: Why Information Grows

Yesterday in a zine, I read an in-process book review of The Rise and Fall of the Roman Empire. Reading that was a project for the review author; they wrote the review while still in the thrall of the book. That seems like the best time, not after I’ve closed the book. So quick! Before I listen to the epilogue:

César Hidalgo is a statistical physicist. His book, Why Information Grows, is about biology and economics. It scales from the quantum to molecules to life to ecosystems to human systems. The common theme at every level is: information, embodied in matter. Processed by systems of increasing complexity to produce even more information. This is how our planet keeps being interesting, while the universe as a whole marches toward the boringness of maximum entropy.

Traditional thermodynamics says entropy always increases, OK, and then it uses this principle to predict the behavior of closed systems near equilibrium.

Which is no system ever.

There is only one closed system: the universe. And it is nowhere near equilibrium. When you get into a planet like Earth, which then develops feedback loops like self-replicating proteins, which develop into feedback loops like ecosystems, and eventually into human systems — we’re only moving farther from equilibrium. Energy is constantly injected into the system, and we use that to process and create higher and higher levels of information. More and more interesting stuff.

Hidalgo extends this into economics, and uses it to explain geographic income disparity.

The trick is: information by itself is useless. To use it, a system/entity/person needs enough knowledge to know what it means, and enough know-how to do something with it.

His key metaphor is that products are embodied information. To create any product takes knowledge and know-how. From a little, like garments, to a lot, like airplanes. That knowledge and know-how must exist inside a country for that country to make such products. And knowledge and know-how are way, way harder to move around than raw materials. They exist inside people, and networks of people, and networks of networks of people. So a person can make a rug, a firm can make a jersey, and many firms together can make a satellite.

Furthermore, knowledge and know-how create knowledge and know-how. There’s a feedback loop: I know some aspects of how to build software, so I build it, and I work with other people, and we learn from each other, and the knowledge in our particular piece of the world increases. We also create new knowledge and know-how, both in ourselves and embodied in our software, which can give us and other people new tools to build new things. It spirals.

So software concentrates in Silicon Valley. Movies in Hollywood. Financial instruments in New York. Cars in Germany and Japan.

I’m also reading a book now, Cybernetic Revolutionaries, about a sophisticated cybernetic computer system built in Chile in 1971–72, a collaboration between many Chileans and a few key foreigners. Stafford Beer and some other British computer scientists worked with them on cybernetics, economic modelling, and programming. Gui Bonsiepe (German) worked with them on industrial design. The project and the whole country were greatly held back by US embargoes, which deprived them of parts and computers that Chile didn’t have the knowledge and know-how to build for themselves. The story illustrates the value of acquiring knowledge in a nation, and the limitations of not already having it.

Personally, what strikes me is this concept of knowledge and know-how being embodied in people — not in text! Text only helps if you already have the knowledge to attach meaning to it. And the way knowledge transfers is by working together. By pair programming, not by documenting your code.

I can build a thing; we build the thing together; now you can build the thing (and better). We know this is how we learn best. Why do we pretend that text is sufficient?

There’s also the part where he looks at cars or pillows as embodied information: the value of a product is that you can benefit from the information inside it without having to understand all of it. Software is very obviously that. The value in the software is what people can now do that, before, they couldn’t do without possessing a whole lot of the knowledge and know-how that the creators put into the software. Note that here, software is a medium, and the value is in the business, in whatever users can do with it.

In the Three Body Problem series, one of the characters describes inhabited planets as “low-entropy systems,” and humans and aliens as “low-entropy beings.” We get there by using and creating information, by means of the knowledge and know-how embodied in the organization of our components — proteins, cells, neurons, employees, firms, nations. This book has me thinking hard about how knowledge is created and moved. Because it is not through books or the internet — printed language is the feeblest form of knowledge transfer. Working alongside each other is the richest. Can we do more of that?

The future of software: complexity

The other day in Iceland, a tiny conference on the Future of Software Development opened with Michael Feathers addressing a recurring theme: complexity. Software development is drowning in accidental complexity. How do we fight it? he asks. Can we embrace it? I ask.

Complexity: Fight it, or fight through it, or embrace it? Yes.

Here, find tidbits from the conference to advance each of these causes, along with photographs from a beautiful cemetery I walked through in Reykjavik.

Fight it

One way we resist complexity: keep parts small, by creating strong boundaries and good abstractions. Let each part change separately. The question is, what happens outside these boundaries?

Carefully bounded grave sites, each very different, each organized in its own way. You wish your software system was this pretty.

Feathers complained that developers spend about ten percent of their time coding, and the rest of it configuring and hooking together various tools. This makes sense to me; we’ve optimized programming languages and libraries so that coding takes less time, and we’ve built components and services so that we don’t have to code queuing or caching or networking or databases. Hooking these together is our job as software developers. Personally, I want to do more of that work in code instead of screens or configuration, which is part of my mission at Atomist. Josh Stella of Fugue also says we should be programming the cloud, not configuring it.


Paul Biggar at Dark has another way to attack complexity: wall it off. Cross the boundaries, so that developers don’t have to. Or as Jason Warner put it, “do a lot more below the waterline.” The Dark programming system integrates the runtime and the database and the infrastructure and the code, so that system developers can respond to what happens in the real world, and change the whole system at once. This opens backend development to a whole realm of people who don’t have time to learn a dozen parts and their interconnections. The Future of Software Development is: more people will be doing it! People whose primary job is not coding.

In any industry, we can fight complexity through centralization. If everyone uses GitHub, then we don’t have to integrate with other source code management. Centralization is an optimization, and the tradeoffs are risk and stagnation. Barriers to entry are high, options are limited, and growth is in known dimensions (volume) not new ones (ideas).

Decentralization gives us choices, supports competing ideas, and prevents one company from having enough power to gain all the power. Blockchain epitomizes this. As Manuel Araoz put it: “Blockchain is adding intentional inefficiency to programming” in order to prevent centralization.

Centralization is also rear-facing: this thing we know how to do, let’s do it efficiently. Decentralization is forward-facing: what do we not yet know how to do, but could?

Building one thing very well, simply, is like building a stepping stone through the water we’re drowning in. But stones don’t flow. Exploration will always require living in complexity.

Fight through it

Given that complexity surrounds us, as it always will when we’re doing anything new, can we learn more ways to cope with it?

Plants and moss reach between grave sites. Complexity happens.

To swim forward in this complexity, we need our pieces to be discoverable, to be trouble-shoot-able, and to be experiment-with-able.

Keith Horwood from stdlib is working on the democratization of APIs, those pieces of the internet that make something happen in the real world. They’re making APIs easy to document and standardize. Stdlib aims to supplement, not replace, developer workflows: the tools pile higher, and this is normal. Each tool represents a piece of detailed know-how that we can acquire without all the details.

Keith Horwood and Rishabh Singh both pointed out that humans/programmers go from seeing/reading, to speaking/writing, and then to executing/programming: we observe the world, we speak into the world, and then we change the world. (I would argue that we hop quickly to the last step, both as babies and as developers.) To learn how to change a complex system is to change it, see what happens, change it again.

We use type systems and property tests to reason about what can’t happen. Example tests and monitoring reassure us what happens in anticipated circumstances. We get to deduce what really does happen from observability and logs.
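As a tiny example of reasoning about what can’t happen with types (TypeScript, invented names): a shipment can’t claim a ship date unless it has actually shipped.

```typescript
// A discriminated union: shippedAt can only exist when status is "shipped".
type Shipment =
  | { status: "pending" }
  | { status: "shipped"; shippedAt: Date };

function describe(s: Shipment): string {
  switch (s.status) {
    case "pending":
      return "not shipped yet";
    case "shipped":
      return `shipped on ${s.shippedAt.toISOString()}`; // a pending-with-a-date state is unrepresentable
  }
}

console.log(describe({ status: "shipped", shippedAt: new Date() }));
```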

Embrace it

If we accept that we are part of this complex system that includes our code, perhaps we can find buoyancy: we can sail instead of drown.

Chaos is not unordered. It took us a long time to understand its order.

Complexity is not anarchy; when it gels, a complex system is a higher form of order. It is an order that operates not in linear deductions, but in circles and spirals. These circles operate both within the system and between the system and its environment.

Feathers and I both spoke about the symbiosis of a development team with its code and its tools. I call it a symmathesy. We learn from our code, from the clues it leaves us in data and logs; and our code learns from us, as we change it. Both these forms of communication happen only through other software: observability tools to see what is happening in the software, and delivery tools to change what will happen. Once we view the system at this level, we can think about growing our whole team: people, running software, tools for visibility and control.

Rishabh Singh, Miltos Allamanis, and Eran Yahav showed machine-learning-backed tooling: programs that offer useful suggestions to humans who are busy instructing the computer. The spiral goes higher.

Kent Beck said that nothing has higher leverage than making a programming system while using those same tools to build a system. His talk suggested that we: (1) make very small changes in our local system; (2) let those changes propagate outwards gradually; and (3) reflect on what happened, together. We learn from the system we are changing, and from each other.

McLuhan’s Law: We shape our tools, and then our tools shape us.

Our tools don’t shape our behavior violently and inflexibly, the way rules and punishment do. They shape us by changing the probability of each behavior. They change what is easy. This is part of my mission at Atomist: enable more customization of our own programming system, and more communication between our tools and the people in the symmathesy.

a cute little idiomatic grave site

As developers, we are uniquely able to shape our own world and therefore ourselves by changing our tools. Meanwhile, other people are gaining some of this leverage, too.

I believe there will be a day when no professional says “I can’t code” — only “coding is not my specialty.” Everyone will write small programs, what Ben Scofield calls idiomatic software, what I call personal automation. These programs will remain entwined with their author/users; we won’t pretend that the source code has value outside of this human context. (Nathan Herald had a good story about this, about a team that tried to keep using a tool after the not-professional-developer who wrote it left.)

This is a problem in development teams, when turnover is high. “Everyone who touches it is reflected in the code.” (who said that? Rajeev maybe?) I don’t have a solution for this.

a smooth path through the cemetery

The path forward includes more collaboration between humans and computers, and between computers and each other, guided by humans. It includes building solid steps on familiar ground, swimming lessons for exploration, and teamwork in the whole sociotechnical system so that we can catch the winds of complexity and make them serve us.