Domain-specific laws

“there appear new laws and even new kinds of laws, which apply in the domain in question.”

David Bohm, quoted by Alicia Juarrero

He’s talking about the qualitative transformation that happens in a system when certain quantitative transition points are passed.

Qualitative transformation

I notice this when something that used to be a pain gets easier, sufficiently easier that I stop thinking about it and just use it. Like git log. There is such a thing as svn log but it’s so slow that I used it once ever in my years of svn. The crucial value in git log is that it’s so fast I can use it over and over again, each time tweaking the output.

  • git log
  • git log --oneline
  • git log --oneline | grep test
  • etc.

Now git log has way more functionality, because I can combine it with other shell commands, because it’s fast enough. This changes the system in more ways than “I use the commit log”: because I use the log, I make more commits with better messages. Now my system history is more informative than it used to be, all since the log command is faster.

The REPL has that effect in many languages. We try stuff all the time instead of thinking about it or looking it up, and as a result we learn faster, which changes the system.
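A sketch of what that looks like in a Python session (the particular calls here are just examples; the point is the speed of the question-answer loop):

```python
# In a REPL, checking is faster than remembering or looking it up.
# Each answer suggests the next question, at conversational speed.
parts = "2023-04-01".split("-")   # ['2023', '04', '01'] -- does it keep leading zeros? yes
"a,,b".split(",")                 # ['a', '', 'b'] -- empty fields are kept
"a,,b".split()                    # ['a,,b'] -- no argument means split on whitespace runs
```

Because each experiment costs a second, we run dozens of them, and the system (us included) learns faster.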

Non-universal laws

I love the part about “laws, which apply in the domain in question.” There are laws of causality which are not universal, which apply only in specific contexts. The entire system history (including all its qualitative transformations) contributes to these contexts, so it’s very hard to generalize these laws even with conditions around them.

But can we study them? Can we observe the context-specific laws that apply on our own team, in our own symmathesy?

Can we each become scientists in the particular world we work in?

Designing Change vs Change Management

Our job as developers is to change software. And that means that when we decide what to do, we’re not designing new code, we’re designing change.

Our software (if it is useful) does not work in isolation. It does not poof transition to a new state and take the rest of the world with it.

If our software is used by people, they need training (often in the form of careful UI design). They need support. hint: your support team is crucial, because they talk to people. They can help with change.

If our software is a service called by other software, then that software needs to change too, if it’s going to use anything new that we implemented. hint: that software is changed by people. You need to talk to the people.

If our software is a library imported by other software, then changing it does nothing at all by itself. People need to upgrade it.

The biggest barriers to change are outside your system.

Designing change means thinking about observability (how will I know it worked? how will I know it didn’t hurt anything else?). It means progressive delivery. It often means backwards compatibility, gradual data migrations, and feature flags.
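A minimal sketch of one of those tools, a percentage-based feature flag. The flag name and bucketing scheme here are hypothetical, not any particular product’s API; real systems use a flag service or config store. The idea is that a change rolls out to a growing slice of users, so we can observe consequences before they reach everyone:

```python
import hashlib

def flag_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket each user into 0-99, so the same user
    keeps getting the same answer as the rollout percentage widens."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < rollout_percent

# Start at 5%, watch the observability dashboards, then widen.
if flag_enabled("new-billing-path", "user-42", rollout_percent=5):
    ...  # new behavior, visible to a few users
else:
    ...  # old behavior, still the default for most
```

The deterministic hash matters: a user who saw the new behavior yesterday shouldn’t flip back today, or we can’t interpret what we observe.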

Our job is not to change code, it is to change systems. Systems that we are part of, and that our code is part of (symmathesy).

If we look at our work this way, then “Change Management” sounds ridiculous. Wait, there’s a committee to tell me when I’m allowed to do my job? Like, they might as well call it “Work Management.”

It is my team’s job to understand enough of the system context to guess at the implications of a change and check for unintended consequences. We don’t all have that, yet. We can add visibility to the software and infrastructure, so that we can react to unintended consequences, and lead the other parts of the system forward toward the change we want.

Reductionism with Command and Control

In hard sciences, we aim to describe causality from the bottom up, from elementary particles. Atoms form molecules, molecules form objects, and the reason objects bounce off each other is reduced to electromagnetic interactions between the molecules in their surfaces.

Molecules in DNA determine production of proteins which result in cell operations which construct organisms.

This is reductionism, and it’s valuable. The elementary particle interactions follow universal laws. They are predictable and deterministic (to the limits of quantum mechanics). From this level we learn fundamental constraints and abilities that are extremely useful. We can build objects that are magnetic or low friction or super extra hard. We can build plants immune to a herbicide.

Bottom-up causality. It’s science!

In Dynamics in Action, Juarrero spends pages and pages asserting and justifying that causality in systems is not only bottom-up; the whole impacts the parts. Causality goes both ways.

Why is it foreign to us that causality is also top-down?

In business, the classic model is all top-down. Command and control hierarchies are all about the big dog at the top telling the next level down what to do. Intention flows from larger (company) levels to smaller (division), and on down to the elementary humans at the sharp end of work.

Forces push upward from particles to objects; intentions flow downward through an org chart

Of course when life is involved, there is top-down causality as well as bottom-up. Somehow we try to deny that in the hard sciences.

Juarrero illustrates how top-down and bottom-up causality interact more intimately than we usually imagine. In systems as small as a forming snowflake, levels of organization influence each adjacent level.

We see this in software development, where our intention (design) is influenced by what is possible given available building blocks (implementation). A healthy development process tightens this interplay to short time scales, like daily.

Software design in our heads learns from what happens in the real world implementation

Now that I think about it: human (and organizational) intention obviously flows downward, shaped by limitations and human psychology pushing upward; and physical causality flows upward, shaped downward by which things sit near each other and move together. Why is it even strange to us that causality moves both ways?

Mission Statement

“Code, as a medium, is unlike anything humans have worked with before. You can almost design right into it.”

me, in my Camerata keynote


But not totally, because we always find surprises. Complex systems are always full of surprises. That is their frustration and their beauty. 

We live in complex systems. From biology up through cultures and nations and economies, we breathe complexity. And yet in school we learned science as reductive.

In software, we now have seriously complex systems that we can play with on a time scale that helps us learn. We have incidents we can learn from, with many clues to the real events, to the rich causalities, and sometimes we can trace those back to social pressures in the human half of our software systems. What is more, we can introduce new clues. We can add tracing, and we can make better tools that help the humans (and also provide a trail of what we did). So we have access to complex systems that are (1) malleable and (2) observable. 

My work in automating delivery increases that malleability. My speaking about collaborative automation aims to increase observability.

My quest is: as people, let’s create software systems that are complex and malleable and observable enough that we learn how to work with and within complex systems. That we develop instincts and sciences to change systems from the inside, in ways that benefit the whole system as well as ourselves. And that we apply that learning to the systems we live and breathe in: biology, ecology, economy, culture.

That’s my mission as a symmathecist.

Growth Loops: circular causality is real

A few hundred years ago, we decided that circular causality was a logical fallacy. All causes are linear. If something moves, then something else pushed it. If you want to see why, you have to look smaller; all forces derive from microscopic forces.

Yet as a human, I see circular causality everywhere.

  • autocatalytic loops in biology
  • self-reinforcing systems (few women enter or stay in tech because there are few women in tech)
  • romantic infatuation (he likes me, that feels good, I like him, that feels good to him, he likes me 🔄)
  • self-fulfilling prophecies (I expect to fail, so I probably will 🔄)
  • self-regulating systems (such as language)
  • cohesive groups (we help out family, then we feel more bonded, then we help out 🔄)
  • downward spirals (he’s on drugs, so we don’t talk to him, so he is isolated, so he does drugs 🔄)
  • virtuous cycles (it’s easier to make money when you have money 🔄)

These are not illusory. Circular causalities are strong forces in our lives. The systems we live in are self-perpetuating — otherwise they wouldn’t stay around. The circle may be initiated by some historical accident (like biological advantage in the first days of agriculture), but the reason it stays true is circular.

Family therapy recognizes this. (In the “soft” sciences you’re allowed to talk about what’s right in front of you but can’t be derived from atomic forces 😛.) When I have a bad day and snip at my spouse, he withdraws a bit; this hurts our connection, so I’m more likely to snip at him 🔄. When I wonder if my kid is hiding something, I snoop around; she sees this as an invasion of privacy, she starts hiding things 🔄. When my partner tells me something that bothers him, and I say “thank you for expressing that” and then change it, next time he’ll tell me earlier, and it’ll be easier to hear and fix 🔄.

Note that nobody is forcing anyone to do anything. This kind of causality acts on propensities, on likelihoods. We still have free will, but it takes more work to overcome tendencies that are built into us by these interactions.

Or at work: When I think a coworker doesn’t respect me, I make sarcastic remarks; each time he respects me less 🔄. As a team, when we learn together and accomplish something, we feel a sense of belonging; this leads us to feel safe, which makes it easier to learn together and accomplish more 🔄.

Some of these cycles are merely self-sustaining. Many spiral us further and further in a particular direction. These are growth loops, which Kent Beck describes: “the more it grows, the easier it is to grow more.” There is power for us in setting up, nourishing, or thwarting our own cycles. Growth loops are more powerful than individual, discrete incentives. The most supportive families, the most productive teams have them.
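As a toy illustration (made-up numbers, nothing from Kent Beck): the difference between a discrete incentive and a growth loop shows up even in trivial arithmetic. A fixed gain adds; a loop compounds, because each period’s gain feeds the next.

```python
def growth_loop(start: float, rate: float, periods: int) -> list[float]:
    """Each period's gain is proportional to everything already gained:
    the more it grows, the easier it is to grow more."""
    values = [start]
    for _ in range(periods):
        values.append(values[-1] * (1 + rate))  # the gain feeds back in
    return values

# A fixed incentive adds the same amount every period...
incentive = [1 + 0.2 * n for n in range(11)]
# ...while a loop compounds its own gains.
loop = growth_loop(1.0, 0.2, 10)
# After ten periods the loop (~6.2) has left the fixed gain (3.0) behind.
```

The same arithmetic runs in reverse for downward spirals, which is why thwarting a negative loop early is so much cheaper than fighting it later.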

Because a growth loop moves a system in a particular direction, it’s more of a spiral than a circle. I want to draw a z-axis through it. Like snipping at my spouse:

a spiral toward disconnection, of me getting snippy and him getting sarcastic

As my spouse and I get snippy with each other, we spiral toward disconnection. When we talk early and welcome gentle feedback, we spiral toward connection. Whereas, when my team bonds and accomplishes things, we spiral toward belonging — with a side effect of accomplishments.

a spiral toward connection, of a team learning together and accomplishing stuff and bonding

I like to use this to explain why JavaScript is the most important programming language. It might be an inferior language by objective standards, but “objective standards” are like linear causality: limited. Reality is richer.

JavaScript started by being the first runtime built into the browser. This made it useful enough for people to learn it, and that’s key: a language is only useful if the know-how of using it is alive inside people. Then all these people create tools, resources, and businesses that make JavaScript more useful 🔄.

a spiral toward usefulness, from the first runtime to people learning and language improvements and tooling

Call this “the network effect” if you want to; network effects are one form of growth loop.

In our situation, JavaScript is the most useful language and it’s only getting more useful. It may not have the best syntax, or the best guarantees. It does have the runtime available to the most humans, the broadest community and a huge ecosystem. Thanks to all the web-based learning resources, it is also the most accessible language to learn.

When I start looking for these growth loops, I see them everywhere, even inside my head. I’m low on sleep, so I’m tired, so I don’t exercise, so I sleep poorly, so I’m tired 🔄. I’m not sleeping, so I get mad at myself for being awake, which is agitating and makes it harder to sleep 🔄. Once I recognize these, I can intervene. I notice that what I feel like doing and what will make me happy are not the same things. I step back from my emotions and feel compassion for myself and then let them go instead of feeding them. I stop negative loops and nourish positive ones.

Circular causality is real, and it is powerful. In biology, in our lives, and in our teams, it feels stronger than linear causality; it can override individual competition and incentives. It forms ecosystems and symmathesies and cultures. Linear causality is valuable to understand, because its consequences are universal, while every loop is contextual. But can we stop talking like the only legitimate explanations start from atoms? We influence each other at every level of systems. Circular causality is familiar to us. I want to get better at seeing this and harvesting it in my work.

Symmathecist (n)

A quick definition, without the narrative

Symmathecist: (sim-MATH-uh-sist) an active participant in a symmathesy.

A symmathesy (sim-MATH-uh-see, coined by Nora Bateson) is a learning system made of learning parts. Software teams are each a symmathesy, composed of the people on the team, the running software, and all their tools.

The people on the team learn from each other and from the running software (exceptions it throws, data it saves). The software learns from us, because we change it. Our tools learn from us as we implement them or build in them (queries, dashboards, scripts, automations).


This flow of mutual learning means the system is never the same. It is always changing, and its participants are always changing.

An aggregate is the sum of its parts. 
A system is also a product of its relationships.
A symmathesy is also powered by every past interaction.

I aim to be conscious of these interactions. I work to maximize the flow of learning within the system, and between the system and its environment (the rest of the organization, and the people or systems who benefit from our software). Software is not the point: it is a means, a material that I manipulate for the betterment of the world and for the future of my team.

I am a symmathecist, in the medium of software.


Ascendency

What makes one system more organized than another? More developed, more … civilized? How can we measure advancement? In Ecology, the Ascendent Perspective, Robert Ulanowicz has an answer. Along the way, he answers even bigger questions, like: how do we reconcile the inexorable increase in entropy with the constant growth, learning, and making going on around us?

Concept: total system throughput

Take an ecosystem, or an economy. We can measure the size (magnitude) of the system by counting its activity. In the economy, this is GDP: how much money changes hands throughout the year? The same dollar might be spent over and over, and it counts toward GDP every time. In an ecosystem, we count carbon exchanges between species and stocks (like detritus on the ground). Or we count nitrogen, or phosphorus, or calories — whichever is the limiting factor for that system. If plankton gets eaten by a minnow which is eaten by a catfish, the carbon transferred counts every time.

The system grows in magnitude when more money/carbon/nitrogen enters it, or when more exchanges happen. But does this mean it’s more developed?

Concept: average mutual information

We count the transfers, the exchanges, to measure total system throughput. But are all transfers created equal? What about the patterns of movement?

Take this imaginary ecosystem that I just made up. There are some plants and some fish. Imagine the nitrogen flows from dirt to plants to fish to fish and back to dirt. I am not an ecologist.

at the bottom, dirt. nitrogen goes into two plants. each of them go into some minnows, which go to a fish, which goes to a bigger fish. Everybody contributes nitrogen back to the soil.

There are some flows, and ecologists can measure them, with lots and lots of work.

Now imagine this same system, except all the flows are equally distributed. Every species is the same as any other, they all eat each other and poop equally.

The same components, but now each one is connected to each one by an arrow of the same thickness.

The first picture looks more organized, right? In the second picture, it is maximally random where the nitrogen goes. The first ecosystem is more ordered. Nitrogen is moving through these pathways for reasons, not willy-nilly everywhere. Ulanowicz came up with a formula for quantifying the difference, and he calls it average mutual information (AMI). The part where the plants don’t eat the fish is informational: it says something about the system, about which pathway is more efficient.

Concept: ascendency

Both the total system throughput and the un-randomness in the system (AMI) contribute to its significance, its organization. So Ulanowicz multiplies these (or something similar; the math is in the book at a high level and in his other works at a detailed level) and that’s ascendency. How much does the system do, and how interestingly does it do it?
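The calculation fits in a few lines of Python. This follows the standard form of Ulanowicz’s definitions (TST as the sum of all flows, AMI in bits, ascendency as their product); the toy flow matrices are invented for illustration, not taken from the book:

```python
import math

def ascendency(flows: list[list[float]]) -> tuple[float, float, float]:
    """flows[i][j] is the transfer from compartment i to compartment j.
    Returns (total system throughput, average mutual information, ascendency)."""
    n = len(flows)
    tst = sum(sum(row) for row in flows)              # count every exchange
    out_tot = [sum(row) for row in flows]             # outflow per compartment
    in_tot = [sum(flows[i][j] for i in range(n)) for j in range(n)]
    ami = sum(
        (flows[i][j] / tst) * math.log2(flows[i][j] * tst / (out_tot[i] * in_tot[j]))
        for i in range(n) for j in range(n) if flows[i][j] > 0
    )
    return tst, ami, tst * ami

# Ordered flows (each compartment feeds exactly one other) vs. the
# same total throughput spread evenly everywhere:
ordered = [[0, 5, 0], [0, 0, 5], [5, 0, 0]]
even = [[0, 2.5, 2.5], [2.5, 0, 2.5], [2.5, 2.5, 0]]
# Same throughput, but the ordered system has the higher AMI,
# and therefore the higher ascendency.
```

The two matrices have identical throughput, so any difference in ascendency comes entirely from the pattern of who feeds whom.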

Concept: efficiency and overhead

For a given system throughput, a maximally ascendent system would have like one path in a circle. Only the most efficient path would be used. There wouldn’t even be two plants.

Thinking about it this way, you can divide the system throughput into the part that contributes to ascendency, and the rest. The rest is overhead. It’s all those extra pathways, which, what is even the point?

The point is flexibility. Resilience. A perfectly efficient system is maximally fragile. One disruption, one disease kills off the single plant, and the whole thing is done; everybody’s dead. No more flows, no more fishes.

Overhead, unpredictability, extra pathways: these are what keep the ecosystem alive and flourishing under changing conditions. They let it change without dying out.

Efficiency vs Resilience

This illustrates that there is a conflict between efficiency and resilience. Organization is important — it often leads to system growth — and also limiting.

The ascendency of the economy is higher if, for the same GDP, money flows toward fewer corporations. But is that the most resilient?

Our teams are more efficient if there is exactly one path of information flow. But human communication is always partial, and broad social pressures direct us more constructively than a single financial incentive.

The concept of ascendency is useful in many ways, as detailed in the book. It incorporates and raises broader philosophical points, which come up in my other post about this book. “The world as we perceive it is the outcome … of a balanced conflict — the opposition of propensities that build order arrayed against the inevitable tendency for structures to fall apart.”

Reading: Ecology, the Ascendent Perspective

Complexity in Ecological Systems, by Robert E. Ulanowicz

It’s about ecology, complexity, and a considerable bit of philosophy. Very readable. Unfortunately only available in dead-tree.

“Life itself cannot exist in a wholly deterministic world!”

Concept: auto-catalytic loops

This book explains how evolution works. Evolution works by “try shit and see what happens” — perturb and test, quickly discard the not-better.

Competition is one method of testing. It is a weak one, unsatisfying (to me) as an explanation of how life got so interesting so fast. And life keeps getting more and more interesting, faster than fits the time scale of generations.

Auto-catalytic loops are the next selection method, as described here. A water-plant’s leaves provide surface for bacteria. The water-plant even encourages the bacteria to grow there, why? Tiny zooplankton come to eat the bacteria, and are captured by the plant. The three form an auto-catalytic loop of helping each other; they can’t survive nearly as well alone. Once a loop like this forms, any organism that contributes to the loop, contributes to itself. It is selected for on its own merit, which is increased by other members of the loop: the loop advantages all its members. The loop becomes its own tiny system and emerges as a thing, its own evolutionary power. If another organism can serve the purpose of the zooplankton better, the loop will adopt that one and the zooplankton will miss out, maybe die out.

Certain dogs undergo strong selection pressures from the auto-catalytic loop with dog owners: the better they are at making humans happy (looking cute, convenient size) the more humans take care of them. (That’s all me, the book doesn’t make such stretches)

Everything interesting is a circle. It happens with our ideas too: religion supports unity which supports group advancement which strengthens need for inclusion in that group which strengthens religion. Beliefs turn into communities which reinforce the beliefs.

Or in programming, more frequent deploys lead to safer deploys which leads to more frequent deploys. You can join the TDD virtuous cycle, where you build clearer mental models of the code, which are preserved by automated testing, which leads to better design and more TDD. Or the strongly-typed-functional-programming virtuous cycle: using strong (mathematical) abstraction leads to a solid mental model, which feels good, which leads to further abstraction (which leads you to a place where people outside this cycle can’t understand the code, but you do), and the code is very predictable, which leads you to propound FP and learn more of it.

Concept: propensities

Auto-catalytic loops introduce a new form of causality. Why are things as they are? Because there’s some internally-consistent loop that keeps them that way. (The random historical events that led to the setup of that loop are less useful to understand.) Why are women not in computer programming? Because they don’t think of themselves as the type, because they don’t see other women doing it. Which means men don’t think of women as programmers, which leads them to assume women they meet aren’t competent programmers, which drives women out of programming, which means younger women don’t see themselves in programmers and don’t go into it. Or women aren’t seen as potential architects, and people push the strongest ones into management instead. It all perpetuates itself.

But that’s not causal, right? No one literally pushed a woman into management. They might have encouraged her, but she chose to make the switch. She has free will.

Causality as we are used to thinking of it, as forcing, as push, is a weak concept. It doesn’t explain human action. It doesn’t explain biology, it doesn’t explain squat about the systems we live in. Newton’s Laws are satisfying in their determinism, but they’re only an approximation, an edge case.

The bacteria don’t have to live on the water-plant. But they find it useful, and then even their individual demise supports the species by enhancing the auto-catalytic loop. No one forces them to live on the leaf, and they don’t always, but more and more often, they do.

Human action is the sum of many influences, many of them random. Including our consciously formed intentions. Including how happy our dog made us this morning. Including the perceived expectations of others, the social pressure we feel from our communities. Including the incentives set up at work. Including what smells good right this moment.

All of these influences on us are not forces in the “push” sense. Yet they matter. They change our propensities, the likelihood of doing one thing or another. And propensities matter. Because we do have free will: the ability to consciously influence our own decisions. (Okay, now I’m totally into the content and message of Dynamics in Action, which is the book whose references led me to Ecology, the Ascendent Perspective). We don’t have control over our decisions, though; our environment and every higher-level system (auto-catalytic loop or community) we are part of also affects what we do. We are not self-sufficient; humans aren’t human without other humans, without our relationships and institutions and cultures, nor without changing our personal environment.

It helps to recognize that push-forces, deterministic forces that compel 100%, are a rare edge case. Useful in analyzing collisions of solid objects. Statistical mechanics, useful in analyzing aggregate behavior of many, many unintelligent particles, is another edge case. The messy middle, where everything interesting happens, works in propensities: a generalization of “force” which changes probabilities without setting them to 1. (Now I’m totally into Popper’s A World of Propensities, which I reached from the references in Ecology, the Ascendent Perspective, and which is now my favorite paper.)

Auto-catalytic loops are more powerful forces of evolution than individual selection. They’re hard for us to see because they work in propensities, not forces. And indirectly, by changing the propensities of other organisms in the loop.

Science as I learned it in school (especially in my Physics major) doesn’t explain real life. No universal, deterministic law can. “There appears to be no irrefutable claims by any discipline to encompass ecological phenomena,” much less human workings! If science insists that “all causes are material and mechanical in origin,” that we can explain politics based on particles if we just push hard enough, science is not believable. It leaves us to supplement it with religion, because it does not explain the world we experience.

If we widen our thinking from forces to propensities, and from individual selection to auto-catalytic loops, we get closer. And there’s math behind this now. This book doesn’t include all of Ulanowicz’s math behind the concept of ascendency, but he references papers that do. There is math behind propensities, rigorous thinking. Scientific thinking, in the broader sense of science, of models that we can test against the real world. We have to move “test” beyond the edge case of controlled, isolated experiments. We have to appreciate knowledge that is not universal, because each auto-catalytic loop creates its own rules. What is beneficial in TDD is different from what is well-fitted in strongly-typed FP. Programming really is a harder field for women; it has nothing to do with the women as individuals and everything to do with the self-reinforcing loops in the culture. Reality is richer than deterministic laws.

Oops, I forgot to explain ascendency. Instead, this book (and its friends) inspired a whole personal philosophy. Win?

REdeploy (for the first time)

The inaugural REdeployConf wrapped up yesterday (as I write this). I’m already feeling withdrawal from intense learning and conversations. I’ll attempt to summarize them in this post.

The RE in REdeploy doesn’t mean “again” (lo, it is the first of its kind). RE stands for Resilience Engineering. It is a newish field, focused on sociotechnical systems that continue to function in shifting, surprising, always-failing-somewhere conditions (aka, reality).

John Allspaw opened the conference with: resilience is in the humans. Your software might be robust, but in the end, it does what it was told. Only humans respond in new ways to new situations. People can be prepared to be unprepared.

John Allspaw is so excited this conference exists

Resilience is the antidote to complexity. Except not a full antidote: the complexity is still there. It just doesn’t kill us. Complexity is not avoidable, because success begets complexity. A successful system has impact, and impact means interdependence, and interdependence means complexity.

What is resilience? Laura Maguire enumerated some definitions. Rebound, robustness, and graceful extensibility are partial definitions that build into the real one: Resilience is sustained adaptive capacity. It’s the ability to find new abilities, to change in response to changing conditions to maintain functioning. Resilient systems are not the same moment to moment, but they keep fulfilling their purpose (even as their purpose morphs).

four definitions of resilience, illustrated

Resilience Engineering is not a computer science discipline. It’s broader than that. Industries like nuclear power and air traffic control have deeper roots in the study of coping with failure. This isn’t your old-school Root Cause Analysis that asked “why did this fail?” This is systems thinking, asking “how does this succeed?” How do systems constantly subject to new failures keep running anyway? (hint: people.)

Avery Regier pointed out that root cause analysis can prevent a specific failure from recurring. But we find new failures all the time. Some new service is going to run out of space. Some new query is going to be slow. Some new customer is going to call a new API a whole lot more than we expected. Prevention is never going to cut it, so don’t spend all your resources there. Grow your powers of recovery, and you mitigate whole classes of failures.

Resilience Engineering recognizes that our systems include software and humans, so half the talks were about code and half about people. Matty Stratton extended trauma therapy to organizations, and Lee Kussmann gave strategies for personal resilience to stress (notes for both). On the code side, Cici Deng spoke about making safer changes at AWS Lambda: like most things in this science, improvement isn’t having the right answers — it’s asking better questions (notes). Aaron Blohowiak talked about speeding recovery and isolating failure domains at Netflix. Then Hannah Foxwell on HumanOps: there is no failover for You. People are more difficult to work with than software, so start there. (notes for both)

Mary Thengvall and J Paul Reed organized this conference to beget conversations, to seed a community in this space. Communities already exist around the SNAFUcatchers and the Lund program. This new one is an open, informal camerata of people who care about resilience in humans+computer systems within the software industry.

Mary and Paul lead the conversation

They succeeded! The conference was a conversation: speakers referred back to prior talks. Mary and Paul emceed with commentary before and after every talk, weaving them together, sharing their reactions and enthusiasm. At the end of each day, the speakers turned into a panel for Q&A. The questions drew from and among all the talks.

Liz asked, how can we move an organization toward resilience from the bottom? Matt and Cici went back and forth over “use data” and “data won’t convince some people.” Any solution must be opt-in, and then you need to collect stories. Stories move people. When every system is different, stories are what we have. We can’t do controlled experiments. What we can do is: dig into those stories to find the causes of success. This is what researchers like Laura Maguire do.

In one of the last questions, someone asked, “Where is accountability in all this?” Cici said, we have tons of talk about accountability in our culture already. I agree; every movement is relative to the culture it is moving. Other answers suggested: Accountability is assumed, not assigned. Personal theory: maybe accountability at the individual-human level is too narrow for the larger networks that we require to work with systems of complexity larger than a personbyte. Maybe teams need to be accountable for working safely and effectively, and people need to be accountable to their teams.

Aaron had a lovely rant during Q&A about the “sufficiently smart engineer.” This is the hypothetical engineer who would not make such mistakes. Who would understand the existing system thoroughly. This person is a myth. Our software is too complex for one person to hold in their head. You can’t hire a sufficiently smart engineer, and don’t feel bad that you aren’t one, because it’s not a thing. Instead, we need to build systems that support our own cognitive work.

Resilience Engineering is a new science. Its research does not take place in a lab, but in the field. “We refuse to simplify.” Laura Maguire closed with a description of next steps in research. In our own jobs, we can do resilience engineering by looking for who and what makes us more safe (learn from success), by keeping the messy details instead of seeking a clean story, and by maximizing for learning in our symmathesy-teams (including software, tools, and people). For instance, when you find a “root cause” of a failure, look for other situations when that trigger occurred and failure didn’t.

RE researchers study DevOps in real situations

Other fun stuff:

We witnessed the first open-source releases from Deere and Co.

Heidi Waterhouse got rate-limited on twitter from quoting the talks.

Paul Carleton told a story of Stripe’s journey from “We should restart old EC2 instances” to “Oh look, we’re chaos engineers now.” Matt Broberg told a scary story about stopping forward motion, about technical debt and social debt at Sensu, and the perils of IRC. (notes for Matt, Paul, and Laura)


Atomist sponsored — I hope we can sponsor every edition of this conference! We work on tools to help developers integrate the social and technical parts of our systems, so it’s relevant. This was our first lanyard sponsorship, and the lanyards were beautiful, in my very biased opinion.

Yesterday (as I write this) we recorded a >Code episode (#95) with Heidi Waterhouse, and she and I brought up topics from REdeploy about a dozen times. Me: “This conference is going to keep coming up, over and over, for the rest of my life.”

Thank you, Mary and Paul and Jeremy and everyone.

Systems and context at THAT Conference

It’s all that

THAT Conference is not THOSE conferences. It’s about the developer as more than a single unit: this year, in multiple ways.

I talked about our team as a system — more than a system, a symmathesy. Cory House said that if you want to change your life, change your systems. As humans, our greatest power lies in changing ourselves by changing our environment. It’s more effective than willpower.

Cory and his family on the grassy stage

Many developers brought family with them; THAT conference includes sessions for kids and partners. It takes place in the Kalahari resort in Wisconsin Dells. My kids spent most of their time in the water parks. Socialization at this conference was different: even though fewer than half of attendees brought family with them, it changes the atmosphere. There’s a reminder that we are more than individuals, and the world will go on after we are gone.

my friend Emma, daughter Linda, and me at the outdoor water park

Technical sessions broaden perspective. Joe Morgan put JavaScript coding styles in perspective: yes, they’re evolving and we have more options to craft readable code. But what is “readable” depends on the people and culture of your team. There are no absolutes, and it is not all about me.

Brandon Minnick told us the nitty-gritty of async/await in C#, and how to do things right. I learned that by default, all the code in an async function (except calls with await) runs on the same thread. This is not the case in Node, which messes with the thread-local variables we use for logging. But in C# it’s easy to lose exceptions entirely; the generated code swallows them. This makes me appreciate Node’s UnhandledPromiseRejectionWarning.
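The same hazard exists in JavaScript terms. Here’s a minimal sketch (with made-up function names): an exception thrown inside an async function only becomes catchable where the promise is awaited; drop the await (or a `.catch`) and the error escapes the try/catch entirely, which is exactly the case Node’s warning flags.

```javascript
// Hypothetical async function that always fails.
async function risky() {
  throw new Error("boom");
}

async function main() {
  try {
    // Awaited: the rejection surfaces here as a normal exception.
    await risky();
    return "no error";
  } catch (err) {
    return "caught: " + err.message;
  }
}

main().then((result) => console.log(result));
// If we had called risky() without await or .catch, the try/catch
// would never see the error — Node would print its
// UnhandledPromiseRejectionWarning instead.
```

The design point is the same one Brandon made for C#: the language only routes async errors back to you at an await point, so a fire-and-forget call is a silent error sink.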

Ryan Niemeyer gave us 35 tips and tricks for VSCode. I love this IDE because it is useful right away, and sweetly customizable when you’re ready. Since this session, I’ve got FiraCode set up, added some custom snippets for common imports, enabled GitLens for subtle in-line attributions, and changed several lines in a file simultaneously using multicursor. And now I can “add suggested import” without clicking the little light bulb: it’s cmd-. to bring up the “code actions” menu for arrow keys. Configuring my IDE is a tiny example of setting up my system to direct me toward better work.

Then there was the part where my kids and I goaded each other into the scary water slides. They start vertical. They count down “3, 2, 1, Launch,” the floor drops out from under you, and you fall into the whooshy tube of water. I am proud to have lived through this.

From personalizing your IDE, to knowing your programming language, to agreeing with your team on a shared style, our environment has a big effect on us.

McLuhan’s Law says: We shape our tools, and our tools shape us. This is nowhere more true than in programming, where our tools are programs and therefore malleable.

But our tools aren’t everything: we also shape our environment through whom we hang around and whom we listen to. Conferences are a great tool for broadening this. THAT Conference is an unusually wholesome general-programming conference, and I’m very happy to have spoken there. My daughters are also ready to go back (but not to do that scary water slide again).