The word “hard” describes sciences like physics and chemistry. It is confusing that “hard” can mean difficult, because these sciences aren’t more difficult than the “soft” ones like sociology and anthropology. They’re differently difficult.
The “hard” sciences are hard because they’re solid. We can stand on them. Physical laws are universal laws. They demand rigor, and rigor means proving that assertions apply in all cases, both through deduction and by checking against evidence.
The “soft” sciences are differently hard. They study human systems, far too complex for universal causal predictions. Conclusions about one culture or group are valid in some other groups and invalid in others. Rigor in complexity means studying which cases your assertions apply to. It means observing, discerning, and wallowing in context.
Correctness, puzzles, and the “hard” sciences are hard like rocks. Rock climbing is very technical. It takes a lot of skill and strength and hard work to climb. At the top of the cliff, you know you’ve achieved something specific.
Change, people, and the “soft” sciences are hard like mud. Like wading through goopy mud. Techniques can help, but each depends on the kind of mud. Strength helps, but pushing too hard can get you more stuck. It always helps to be okay with getting messy. When you finally reach that piece of relatively solid ground, the view is still a swamp.
Why would you want to wade through mud?
Why does a fish want to swim through water?
People, interrelationships, change — this is our world. Sometimes we can carve out puzzles we can solve for real. The few universal laws we can find are priceless. But the rest of it — life is in the mud, in the deep context. The skills to navigate it are not easy. They are not satisfying in the same way, either. But they are how we find meaning, how we participate in a system bigger than our own self.
In a complex system, we can’t predict consequences. As Cynefin expresses it: in complexity, we determine causality in retrospect, not ahead of time.
So how do we make decisions? We have to guess, try things, see what happens, react to that. We use reasons and heuristics.
There are some consequences we do predict, and these are reasons. I wake up my children because I predict that getting up before noon will help them sleep at night (not really, but some other parent might). A store puts groceries in bags because it helps me carry them, so I will buy more. I add a feature to my software because I expect users to like it.
Will these predictions come true? We can’t know until we try.
Are these valid reasons? Yes, because there’s an explanation and a theory behind them. They’re good reasons to try the thing.
Yet, we can’t predict all the other consequences. Will getting up early make my children hate me? Will the plastic bags clog the ocean and destroy human life as we know it? Will the feature complicate the UI until the software becomes unappealing? Or will the new feature interact with future features in a way that makes everything harder? (Yes. The answer to this last one is yes.)
That’s where heuristics come in. Children should get up at a reasonable hour. Anything we make, we should complete the system with a way to reuse and eventually destroy it. We should not have more than five buttons on a page. We should keep code decoupled. We should put features behind feature flags.
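That last heuristic can be made concrete in a few lines. Here is a minimal sketch of a feature-flag check; the flag names and the in-memory store are invented for illustration (a real system would use a flag service):

```python
# Minimal sketch of the feature-flag heuristic: new behavior ships dark,
# and we decide separately (per user, per environment) when to reveal it.
# The flag store and names here are invented for illustration.

FLAGS = {
    "new-checkout": {"enabled": True, "allowed_users": {"alice"}},
}

def flag_enabled(name: str, user: str) -> bool:
    flag = FLAGS.get(name)
    if flag is None:
        return False  # unknown flags default to off: the safe choice
    return flag["enabled"] and user in flag["allowed_users"]

def checkout(user: str) -> str:
    if flag_enabled("new-checkout", user):
        return "new checkout flow"
    return "old checkout flow"

print(checkout("alice"))  # new checkout flow
print(checkout("bob"))    # old checkout flow
```

The point of the heuristic is that trying the feature, and backing it out, become small reversible decisions instead of deployments.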
Heuristics include morals, ethics, principles, values. All of these account for consequences that are hard to predict, or hard to measure the importance of. They guide our actions toward a larger good, a longer-term good, bigger than our immediate situation. Bateson talked about religions as guidance toward caring for the higher-level system. Values preserve safety, quality, and other system properties that degrade through purposive work.
We put the reasons and heuristics together and come up with a decision, using our magic dealing-with-ambiguity human power. Was it good?
Sure, I’m not gonna judge your decision, as long as you have a story for it. What matters is what happens next.
We take action, and then we look at the consequences. Did our predictions come true? More important, what happened that we did not expect? Now is the time to construct that causality in retrospect.
And then take more action! Tweak, revert, compensate, whatever it takes until we get results we do want.
Action is a process. Our intentions are revealed not by one isolated thing we do or tweet, but by how we respond to the results.
A few hundred years ago, we decided that circular causality was a logical fallacy. All causes are linear. If something moves, then something else pushed it. If you want to see why, you have to look smaller; all forces derive from microscopic forces.
Yet as a human, I see circular causality everywhere.
cohesive groups (we help out family, then we feel more bonded, then we help out 🔄)
downward spirals (he’s on drugs, so we don’t talk to him, so he is isolated, so he does drugs 🔄)
virtuous cycles (it’s easier to make money when you have money 🔄)
These are not illusory. Circular causalities are strong forces in our lives. The systems we live in are self-perpetuating — otherwise they wouldn’t stay around. The circle may be initiated by some historical accident (like biological advantage in the first days of agriculture), but the reason it stays true is circular.
Family therapy recognizes this. (In the “soft” sciences you’re allowed to talk about what’s right in front of you but can’t be derived from atomic forces 😛.) When I have a bad day and snip at my spouse, he withdraws a bit; this hurts our connection, so I’m more likely to snip at him 🔄. When I wonder if my kid is hiding something, I snoop around; she sees this as an invasion of privacy, so she starts hiding things 🔄. When my partner tells me something that bothers him, and I say “thank you for expressing that” and then change it, next time he’ll tell me earlier, and it’ll be easier to hear and fix 🔄.
Note that nobody is forcing anyone to do anything. This kind of causality acts on propensities, on likelihoods. We still have free will, but it takes more work to overcome tendencies that are built into us by these interactions.
Or at work: When I think a coworker doesn’t respect me, I make sarcastic remarks; each time he respects me less 🔄. As a team, when we learn together and accomplish something, we feel a sense of belonging; this leads us to feel safe, which makes it easier to learn together and accomplish more 🔄.
Some of these cycles are merely self-sustaining. Many spiral us further and further in a particular direction. These are growth loops, which Kent Beck describes: “the more it grows, the easier it is to grow more.” There is power for us in setting up, nourishing, or thwarting our own cycles. Growth loops are more powerful than individual, discrete incentives. The most supportive families, the most productive teams have them.
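The difference between a discrete incentive and a growth loop is the difference between adding and compounding. A toy simulation, with all numbers invented:

```python
# A one-off incentive adds a fixed boost; progress afterwards is linear.
def one_off_incentive(start: float, boost: float, steps: int) -> float:
    value = start + boost
    for _ in range(steps):
        value += 1.0
    return value

# In a growth loop, each step's gain is proportional to what's already
# there: "the more it grows, the easier it is to grow more."
def growth_loop(start: float, rate: float, steps: int) -> float:
    value = start
    for _ in range(steps):
        value += value * rate
    return value

print(one_off_incentive(10, 5, 20))      # 35.0
print(round(growth_loop(10, 0.2, 20)))   # 383
```

After twenty steps the loop has left the one-off boost far behind, which is why nourishing (or thwarting) a cycle beats handing out individual incentives.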
Because a growth loop moves a system in a particular direction, it’s more of a spiral than a circle. I want to draw a z-axis through it. Like snipping at my spouse:
As my spouse and I get snippy with each other, we spiral toward disconnection. When we talk early and welcome gentle feedback, we spiral toward connection. And when my team bonds and accomplishes things, we spiral toward belonging — with a side effect of accomplishments.
Call this “the network effect” if you want to; network effects are one form of growth loop.
When I start looking for these growth loops, I see them everywhere, even inside my head. I’m low on sleep, so I’m tired, so I don’t exercise, so I sleep poorly, so I’m tired 🔄. I’m not sleeping, so I get mad at myself for being awake, which is agitating and makes it harder to sleep 🔄. Once I recognize these, I can intervene. I notice that what I feel like doing and what will make me happy are not the same things. I step back from my emotions and feel compassion for myself and then let them go instead of feeding them. I stop negative loops and nourish positive ones.
Circular causality is real, and it is powerful. In biology, in our lives, and in our teams, it feels stronger than linear causality; it can override individual competition and incentives. It forms ecosystems and symmathesies and cultures. Linear causality is valuable to understand, because its consequences are universal, while every loop is contextual. But can we stop talking like the only legitimate explanations start from atoms? We influence each other at every level of systems. Circular causality is familiar to us. I want to get better at seeing this and harvesting it in my work.
In Why Information Grows (my review), physicist César Hidalgo explains that the difference between the ability to produce T-shirts and the ability to produce rockets is a matter of accumulating knowledge and know-how inside people, and weaving those people into networks. No one person can know how to build a rocket from rocks. No one person understands how a laptop computer works, at all levels. There’s a limit to the amount of knowledge a single person can cram into their head in a lifetime; Hidalgo calls this limit one personbyte.
Our software systems are more complicated than one person can hold. You can deal with this by making each system smaller (hello, microservices), but then someone has to understand how they fit together. And no one person can. It takes teams of people.
You can deal with the personbyte limitation by forming teams that work together closely. The more complex the product we’re building, the more knowledge we need, so the less duplicated knowledge we want on the team. Team size is limited too, because of coordination overhead.
If you think about it this way, you might recognize that in many software teams, our limitation is not how much we can do, but how much we can know. To change a sufficiently complex system, we need more knowledge than one or two people can hold. Otherwise we are very slow, or we mess it up and the unintended effects of our change create a ton more work.
If the limitation of my team is how much we can know, then mob programming makes a ton of sense. In mob programming, the whole team applies its knowledge to the problem, and people take turns typing. This way we do the single most important thing in the most efficient and least dangerous way our collective knowledge and know-how can provide. In the meantime, we spread knowledge among the team, making everyone more effective.
If your software system has reached the point where changing it is one step forward, two steps back — it might be a good time to try working as one unit, mob-programming style.
This puzzler says something about our culture. It says we think in terms of causes that happen before their effects. That we don’t believe in reflexive causality.
In life, everything interesting is a circle. The mitochondria break down sugar, the proteins use the energy, they keep up the cell wall, the cell wall protects the mitochondria. The thriving of each one feeds the thriving of the others. I feel cranky, so I work with less patience, so I make way more errors, so I get more cranky. There aren’t many women in programming, women don’t picture themselves in programming, there are fewer women in programming. Poverty eats time and brainpower, leaving less for study and thinking, and stereotypes reinforce themselves. Suspect an employee is underperforming, and you can drive them to underperform. Or a nation’s economy is tight, citizens get paranoid and vote to increase barriers, their economy shrinks. A family helps each other, becomes a tighter and more successful unit, and helps each other even more.
Circles like these form systems. Enduring systems like a family, or short-lived whirlpools like me being cranky. Biologically, these emerge as organisms and ecosystems. As people, organizations and religions and markets and successful companies emerge from circles of mutual interdependence.
Love has a circular causality. I find you interesting, that feels good, which makes you like me, which makes me more interesting, which feels good to me, so I like who I am with you, so I tell you sweet things. This cycles into a relationship.
There is no single cause of this phenomenon. That spark of compatibility when we met: we can point to a beginning in this case, to a “before.” But our whole life together wasn’t in that spark. We built that moment by moment in the feedback loop of caring for each other.
“Which came first, the chicken or the egg?” is a fallacy: the fallacy of root cause. Of linear causality.
Linear causality happens, but in real life, it is the exception. An edge case. Everything interesting is a circle.
Yesterday in a zine, I read an in-process book review of The Rise and Fall of the Roman Empire. Reading that was a project for the review author; they wrote the review while still in the thrall of the book. That seems like the best time, not after I’ve closed the book. So, quick! Before I listen to the epilogue:
César Hidalgo is a statistical physicist. His book, Why Information Grows, is about biology and economics. It scales from the quantum to molecules to life to ecosystems to human systems. The common theme at every level is: information, embodied in matter. Processed by systems of increasing complexity to produce even more information. This is how our planet keeps being interesting, while the universe as a whole marches toward the boringness of maximum entropy.
Traditional thermodynamics says entropy always increases, OK, and then it uses this principle to predict the behavior of closed systems near equilibrium.
Which is no system ever.
There is only one closed system: the universe. And it is nowhere near equilibrium. When you get to a planet like Earth, which then develops feedback loops like self-replicating proteins, which develop into feedback loops like ecosystems, and eventually into human systems — we’re only moving farther from equilibrium. Energy is constantly injected into the system, and we use it to process and create higher and higher levels of information. More and more interesting stuff.
Hidalgo extends this into economics, and uses it to explain geographic income disparity.
The trick is: information by itself is useless. To use it, a system/entity/person needs enough knowledge to know what it means, and enough know-how to do something with it.
His key metaphor is that products are embodied information. To create any product takes knowledge and know-how. From a little, like garments, to a lot, like airplanes. That knowledge and know-how must exist inside a country for that country to make such products. And knowledge and know-how are way, way harder to move around than raw materials. They exist inside people, and networks of people, and networks of networks of people. So a person can make a rug, a firm can make a jersey, and many firms together can make a satellite.
Furthermore, knowledge and know-how create knowledge and know-how. There’s a feedback loop: I know some aspects of how to build software, so I build it, and I work with other people, and we learn from each other, and the knowledge in our particular piece of the world increases. We also create new knowledge and know-how, both in ourselves and embodied in our software, which gives us and other people new tools to build new things. It spirals.
So software concentrates in Silicon Valley. Movies in Hollywood. Financial instruments in New York. Cars in Germany and Japan.
I’m also reading a book now, Cybernetic Revolutionaries, about a sophisticated cybernetic computer system built in Chile in 1971–72, a collaboration between many Chileans and a few key foreigners. Stafford Beer and some other British computer scientists worked with them on cybernetics, economic modelling, and programming. Gui Bonsiepe (German) worked with them on industrial design. The project and the whole country were greatly held back by US embargoes, which deprived them of parts and computers that Chile didn’t have the knowledge and know-how to build for itself. The story illustrates the value of acquiring knowledge in a nation, and the limitations of not already having it.
Personally, I keep coming back to this concept of knowledge and know-how being embodied in people — not in text! Text only helps if you already have the knowledge to attach meaning to it. And the way knowledge transfers is by working together. By pair programming, not by documenting your code.
There’s also the part where he looks at cars or pillows as embodied information: products have value because you can get benefit from the information without having to understand all of it. Software is very obviously that. The value in the software is what people can now do that they couldn’t before, without possessing all the knowledge and know-how that the creators of the software put into it. Note that here, software is a medium, and the value is in the business, in whatever users can do with it.
In the Three Body Problem series, one of the characters describes inhabited planets as “low-entropy systems,” and humans and aliens as “low-entropy beings.” We get there by using and creating information, by means of the knowledge and know-how embodied in the organization of our components — proteins, cells, neurons, employees, firms, nations. This book has me thinking hard about how knowledge is created and moved. Because it is not through books or the internet — printed language is the feeblest form of knowledge transfer. Working alongside each other is the richest. Can we do more of that?
The other day in Iceland, a tiny conference on the Future of Software Development opened with Michael Feathers addressing a recurring theme: complexity. Software development is drowning in accidental complexity. How do we fight it? he asks. Can we embrace it? I ask.
Complexity: Fight it, or fight through it, or embrace it? Yes.
Here, find tidbits from the conference to advance each of these causes, along with photographs from a beautiful cemetery I walked through in Reykjavik.
Fight it

One way we resist complexity: keep parts small, by creating strong boundaries and good abstractions. Let each part change separately. The question is, what happens outside these boundaries?
Feathers complained that developers spend about ten percent of their time coding, and the rest of it configuring and hooking together various tools. This makes sense to me; we’ve optimized programming languages and libraries so that coding takes less time, and we’ve built components and services so that we don’t have to code queuing or caching or networking or databases. Hooking these together is our job as software developers. Personally, I want to do more of that work in code instead of screens or configuration, which is part of my mission at Atomist. Josh Stella of Fugue also says we should be programming the cloud, not configuring it.
Paul Biggar at Dark has another way to attack complexity: wall it off. Cross the boundaries, so that developers don’t have to. Or as Jason Warner put it, “do a lot more below the waterline.” The Dark programming system integrates the runtime and the database and the infrastructure and the code, so that system developers can respond to what happens in the real world, and change the whole system at once. This opens backend development to a whole realm of people who don’t have time to learn a dozen parts and their interconnections. The Future of Software Development is: more people will be doing it! People whose primary job is not coding.
In any industry, we can fight complexity through centralization. If everyone uses GitHub, then we don’t have to integrate with other source code management. Centralization is an optimization, and the tradeoffs are risk and stagnation. Barriers to entry are high, options are limited, and growth is in known dimensions (volume) not new ones (ideas).
Decentralization gives us choices, supports competing ideas, and prevents any one company from having enough power to gain all the power. Blockchain epitomizes this. As Manuel Araoz put it: “Blockchain is adding intentional inefficiency to programming” in order to prevent centralization.
Centralization is also rear-facing: this thing we know how to do, let’s do it efficiently. Decentralization is forward-facing: what do we not yet know how to do, but could?
Building one thing very well, simply, is like building a stepping stone through the water we’re drowning in. But stones don’t flow. Exploration will always require living in complexity.
Fight through it
Given that complexity surrounds us, as it always will when we’re doing anything new, can we learn more ways to cope with it?
To swim forward in this complexity, we need our pieces to be discoverable, to be trouble-shoot-able, and to be experiment-with-able.
Keith Horwood from stdlib is working on the democratization of APIs, those pieces of the internet that make something happen in the real world. They’re making APIs easy to document and standardize. Stdlib aims to supplement, not replace developer workflows: the tools pile higher, and this is normal. Each tool represents a piece of detailed know-how that we can acquire without all the details.
Keith Horwood and Rishabh Singh both pointed out that humans/programmers go from seeing/reading, to speaking/writing, and then to executing/programming: we observe the world, we speak into the world, and then we change the world. (I would argue that we hop quickly to the last step, both as babies and as developers.) To learn how to change a complex system is to change it, see what happens, change it again.
We use type systems and property tests to reason about what can’t happen. Example tests and monitoring reassure us what happens in anticipated circumstances. We get to deduce what really does happen from observability and logs.
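Here is a hand-rolled sketch of that first idea: asserting what can’t happen across many generated inputs. (Property-testing libraries like Hypothesis do this far more thoroughly; the `normalize` function under test is a stand-in.)

```python
import random

def normalize(tags):
    """Deduplicate and sort a list of tags (the code under test)."""
    return sorted(set(tags))

def check_what_cant_happen(trials: int = 1000) -> None:
    rng = random.Random(42)  # seeded, so any failure is reproducible
    for _ in range(trials):
        tags = [rng.choice("abcde") for _ in range(rng.randrange(10))]
        result = normalize(tags)
        # Properties: things that must never happen, for any input.
        assert len(result) <= len(tags), "output can't be longer than input"
        assert len(result) == len(set(result)), "no duplicates, ever"
        assert result == sorted(result), "always ordered"

check_what_cant_happen()
```

Example tests say “this input gives that output”; properties like these say “no input can ever produce that shape of output,” which is a different and complementary kind of reassurance.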
Embrace it

If we accept that we are part of this complex system that includes our code, perhaps we can find buoyancy: we can sail instead of drown.
Complexity is not anarchy; when it gels, a complex system is a higher form of order. It is an order that operates not in linear deductions, but in circles and spirals. These circles operate both within the system and between the system and its environment.
Feathers and I both spoke about the symbiosis of a development team with its code and its tools. I call it a symmathesy. We learn from our code, from the clues it leaves us in data and logs; and our code learns from us, as we change it. Both these forms of communication happen only through other software: observability tools to see what is happening in the software, and delivery tools to change what will happen. Once we view the system at this level, we can think about growing our whole team: people, running software, tools for visibility and control.
Rishabh Singh, Miltos Allamanis, and Eran Yahav showed machine-learning-backed tooling that offers useful suggestions to humans who are busy instructing the computer. The spiral goes higher.
Kent Beck said that nothing has higher leverage than making a programming system while using those same tools to build a system. His talk suggested that we: (1) make very small changes in our local system; (2) let those changes propagate outwards gradually; and (3) reflect on what happened, together. We learn from the system we are changing, and from each other.
McLuhan’s Law: We shape our tools, and then our tools shape us.
Our tools don’t shape our behavior violently and inflexibly, the way rules and punishment do. They shape us by changing the probability of each behavior. They change what is easy. This is part of my mission at Atomist: enable more customization of our own programming system, and more communication between our tools and the people in the symmathesy.
As developers, we are uniquely able to shape our own world and therefore ourselves by changing our tools. Meanwhile, other people are gaining some of this leverage, too.
I believe there will be a day when no professional says “I can’t code” — only “coding is not my specialty.” Everyone will write small programs, what Ben Scofield calls idiomatic software, what I call personal automation. These programs will remain entwined with their author/users; we won’t pretend that the source code has value outside of this human context. (Nathan Herald had a good story about this, about a team that tried to keep using a tool after the not-professional-developer who wrote it left.)
This is a problem in development teams, when turnover is high. “Everyone who touches it is reflected in the code.” (who said that? Rajeev maybe?) I don’t have a solution for this.
The path forward includes more collaboration between humans and computers, and between computers and each other, guided by humans. It includes building solid steps on familiar ground, swimming lessons for exploration, and teamwork in the whole sociotechnical system so that we can catch the winds of complexity and make them serve us.
TL;DR: the most productive development happens when one person knows the system intimately because they wrote it; this is in conflict with growing a system beyond what one person maintains.
Let’s talk about why some developers, in some situations, are ten times more productive than others.
hint: it isn’t the developers, so much as the situation.
When do we get that exhilarating feeling of hyperproductivity, when new features flow out of our fingertips? It happens when we know our tools like the back of our hands, and more crucially, when we know the systems we are changing. Know them intimately, like I know the contents of my backpack: I packed it, and I tuned the items in each pouch over years of travel. Know the contents of every module, both what they are and what we’d like them to be if we ever finish that refactoring. Know the edges: who uses every API, which changes will break whom, and we’re friends with all of the stakeholders. Know the underpinnings: which database fields are indexed, which are obsolete, and which have quirky special values. Know the infrastructure: where it runs in production and how to ssh in; where it runs in test, what version is deployed, and when it is safe to push a new one. Know the output: what looks normal in the logs and what’s a clue. We have scripts, one-liners that tail the logs in all three prod instances to our terminals so our magic eyes can spot the anomaly.
We know it because we wrote it, typically. It is extremely difficult to establish this level of intimacy with an existing system. Braitenberg calls this the Law of Downhill Invention, Uphill Analysis. Complex systems are easier to build than to figure out after they’re working.
We know it because we are changing it. The system is alive in our head. It’s a kind of symbiosis: we help the system run and grow, and the system works the way we wish. If we walk away for a month or two, begin a relationship with a different system, the magic is lost. It takes time to re-establish familiarity.
Except, I’m suspicious of this description. “We” is a plural pronoun. This depth of familiarity and comfort with a system is personal: it’s usually one person. The one person who has conceived this solution, who holds in their head both the current state and where they are aiming.
If you are this person, please realize that no one else experiences this work the way you do. For other people, every change is scary, because they don’t know what effect it will have. They spend hours forming theories about how a piece works, and then struggle to confirm this with experiment; they don’t have the testing setup you do. They study every log message instead of skimming over the irrelevant ones, the ones you’ve skipped over so often you don’t even see them anymore. By the time they do figure something out, you’ve changed it; they can’t gain comprehension of the system as quickly as you can alter it. When they do make a change, they spend lots of time limiting the scope of it, because they don’t know which changes will cause problems. They get it wrong, because they don’t know the users personally; communication is hard.
If you are this person, please go easy on everyone else. You are a synthetic biologist and can alter the DNA of the system from within; they are xenosurgeons and have to cut in through the skin and try not to damage unfamiliar organs.
If you work with this person, I’m sorry. This is a tough position to be in, to always feel inferior and like you’re breaking everything you touch. I’m there now, in some parts of our system. It’s okay for me because the host symbiont, the author and manipulator of that software, is super nice and helpful. He doesn’t expect me to work with it the same way he does.
If your team looks like this, here are some steps to take:
Consider: don’t change it. This really is the fastest way to develop software. One person, coordinating with no other developers, can move faster than a whole team when the system is small enough. Until! we need the system to grow bigger. Or! the system is crucial to the business (it’s an unacceptable risk for only one person to have power over it).
As the host symbiont who lives and breathes the system: strike the words “just”, “easy,” “obvious,” “simple,” and “straightforward” from your vocabulary. These words are contextual, and no other human shares your context.
Please write tests. Tests give people who are afraid of unintentional breakages a way to test their theories. Experimentation is crucial to learning how a system works, and tests make experiments possible. They also serve as documentation of intended behavior.
Pair program! By far the best way to transfer understanding of the system to another human is to change it together. (Or boost a whole team at once: mob program!)
Make a README for other developers. Describe the purpose of the system briefly, and document how you develop, test, and troubleshoot the system. Specify the command lines for running tests, for deployment, for accessing logs. Describe in detail how to obtain the necessary passwords. Write down all the environments where it runs, and the protocol around changing them.
Do you know your users better than anyone else? Remedy that. Bring other team members into the discussion. (There’s a sweet spot of a single developer-type who works within a business unit. When the software becomes too important for this scale, it gets harder.) Let all the devs get to know the users. Have happy hours. Form redundant communication channels. It’ll pay off in ways you never detect.
Slow down. Like seriously, if one person is developing at maximum speed on a project, no one else can get traction. You can’t move at full speed and also add symbionts. When it is important to bring in new people, don’t do anything alone. Pair on everything. Yes, this will slow you down. It will speed them up. Net, we’ll still be slower than you working alone. This is an inherent property of the larger system, which now includes interhuman coordination. There’s more overhead than when it was just you and your program. That’s OK; it’s a tradeoff for safety and scale from sheer speed.
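The “write tests” step above can start absurdly small. A sketch, with an invented `parse_price` function, of tests that double as documentation of intended behavior and as a safe harness for experiments:

```python
def parse_price(text: str) -> int:
    """Turn a display string like '$1,234.50' into integer cents."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return round(float(cleaned) * 100)

# Each test documents intended behavior, and gives a newcomer a way to
# test a theory ("I think this returns cents") in seconds, not hours:
# change the code, rerun, observe.
def test_dollars_and_cents():
    assert parse_price("$1,234.50") == 123450

def test_whole_dollars():
    assert parse_price("$7") == 700

test_dollars_and_cents()
test_whole_dollars()
```

For the newcomer, the tests are the testing setup the host symbiont already carries around in their head.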
Let’s acknowledge that there really are developer+situations that are 10x more productive than others. Let’s acknowledge that they don’t scale. Make choices about when we can take advantage of the sweet spot of local or individual automation, and when the software we’re building is too important for a bus factor of one.
Distinguish between an experimental prototype, when speed of change and redirection is crucial, versus a production app which needs backwards compatibility guarantees and documentation and all that seriousness — this requires a solid team.
Recognize that the most productive circumstance for development is a rare circumstance. Most of the time, I need to work on a system that someone else wrote. (that “someone else” could be “me, months or years ago.”) The temptation to rewrite is strong, because if I rewrite it then I’ll understand it.
There’s a time for 10x development, and a time for team development. When it’s time for team development, the 10x developer prevents it. If that’s your situation, please consider the suggestions in this post.
This is a raging debate in our industry today. I think the answer depends strongly on the kind of problem a developer is trying to solve: is the problem contracting or expanding? A contracting problem is well-defined, or has the potential to be well-defined with enough rigorous thought. An expanding problem cannot be; as soon as you’ve defined “correct,” you’re wrong, because the context has changed.
A contracting problem: the more you think about it, the clearer it becomes. This includes anything you can define with math or a stable specification: image conversion, compression (making files smaller for storage). There are others, problems we’ve solved so many times or used in so many ways that they stabilize: web servers, grep. The problem space is inherently specified, or it has become well-defined over time. Correctness is possible here, because there is such a thing as “correct.” These programs are useful to many people, so correctness is worth the effort. Using such a program or library is freeing; it scales up the capacity of the industry as a whole, as this becomes something we don’t have to think about.
An expanding problem: the more you think about it, the more ways it can go. This includes pretty much all business software; we want our businesses to grow, so we want our software to do more and different things with time. It includes almost all software that interacts directly with humans. People change, culture changes, expectations get higher. I want my software to drive change in people, so it will need to change with us. There is no complete specification here. No amount of thought and care can get this software perfect. It needs to be good enough, it needs to be safe enough, and it needs to be amenable to change. It needs to give us the chance to learn what the next definition of “good” might be.
Safety

I propose we change our aim for correctness to an aim for safety. Safety means nothing terrible happens (for your business’s definition of terrible). Correctness is an extreme form of safety. Performance is a component of safety. Security is part of safety.
Tests don’t provide correctness, yet they do provide safety. They tell us that certain things aren’t broken yet. Process boundaries provide safety. Error handling, monitoring, everything we do to compensate for the inherent uncertainty of running software in production, all of these help enforce safety constraints.
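As a sketch of safety-as-a-constraint: we can’t guarantee a remote call is correct, but we can guarantee that nothing terrible happens: bounded retries, backoff, and a fallback instead of a crash. All the names here are illustrative:

```python
import time

def safely(operation, fallback, retries=3, base_delay=0.01):
    """Run operation; on failure, retry a bounded number of times,
    then degrade to a fallback instead of failing terribly."""
    for attempt in range(retries):
        try:
            return operation()
        except Exception:
            time.sleep(base_delay * (2 ** attempt))  # bounded backoff
    return fallback

def flaky_fetch():
    raise IOError("upstream unavailable")  # simulated failure

print(safely(flaky_fetch, fallback="cached value"))       # cached value
print(safely(lambda: "fresh value", fallback="cached value"))  # fresh value
```

Nothing here makes the answer right; everything here keeps the wrong answer from becoming terrible.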
In an expanding system, business matters (like profit) determine what is “good enough.” Risk tolerance determines what is “safe enough.” Optimizing for the future means optimizing our ability to change.
In a contracting problem, we can progress through degrees of safety toward correctness and optimal performance. Break out the formal specification; write great documentation.
Any piece of our expanding system that we can break out into a contracting problem space is a win. We can solve it with rigor, even make it eligible for reuse.
For the rest of it: embrace uncertainty, keep the important parts working, and make the code readable so we can change it. In an expanding system, where tests are limited and limiting, and documentation becomes more wrong every day, the code is the specification. Aim for change.