Safety and Progress

At the Papers We Love conference, Dr Heidi Howard described two requirements for distributed consensus: Safety and Progress.

In distributed consensus, multiple servers decide on some value and then report it to their clients. Safety means that clients never learn different values; everything they learn is consistent. Progress means that clients do eventually get a value; the system doesn’t get stuck.

There are lots of ways to guarantee safety. The trick is to find ones that allow progress in all circumstances.
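As a minimal sketch (the function names and the list-of-learned-values representation are mine, not from the talk), the two properties can be written as checks over everything clients have learned:

```python
# Toy illustration of the two consensus properties, checked over a
# recorded list of the values that clients have learned.

def is_safe(learned: list[str]) -> bool:
    """Safety: no two clients ever learn different values."""
    return len(set(learned)) <= 1

def has_progress(learned: list[str]) -> bool:
    """Progress: some client has actually learned a value."""
    return len(learned) > 0

assert is_safe([]) and not has_progress([])       # safe but stuck
assert is_safe(["v1", "v1"]) and has_progress(["v1"])  # safe and progressing
assert not is_safe(["v1", "v2"])                  # progress without safety
```

Notice the easy way to satisfy safety: decide nothing, forever. That is exactly the deadlock described below.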

It reminds me of the conflict between security and development. Security teams are responsible for safety: preventing bad things. Development teams are responsible for progress: making good things happen.

Separate these two responsibilities and you get deadlock. The obvious ways to get safety prevent progress, and the fastest routes to progress erode safety.

Algorithms, processes, and designs that give you both progress and safety exist, but they’re subtle. You won’t find them by fighting with each other.

In distributed consensus, the algorithm designer holds responsibility for safety and progress. To build the features that advance your business with the security that keeps it safe, put these responsibilities on the same team.

Safety and progress can be at odds. Don’t bake this conflict into your org structure.

Keep them entwined. Guarantee safety while enabling progress.

Certainty, Uncertainty, or the worst of both

Descartes looked for certainty because he wanted good grounds for knowledge: a place of fixity to build on, from which to make predictions.

Juarrero counters that uncertainty allows for novelty and individuation.

In software, we like to aim for certainty. Correctness. Except in machine learning and AI: there we don’t ask or expect our algorithms to be “correct,” just useful.

The predictions made by algorithms reproduce the interpretations of the past. When we use those predictions to make decisions, we reinforce those interpretations. If Black people were more likely to be arrested in the past, the model predicts more arrests for them; if women were less likely to be hired, the model scores them lower.

Machine learning based on the past, choosing the future — this reinforces bias. It suppresses novelty and individuation. It is the worst of both worlds!
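Here is a toy sketch of that loop; the data, the “model,” and the feature names are all invented for illustration. A system that predicts by imitating past decisions reproduces whatever skew those decisions carried:

```python
# Illustrative toy, not a real hiring model: a "model" that imitates
# past decisions reproduces the bias in those decisions.
from collections import Counter

# Invented historical records: (features of a candidate, past decision).
history = [
    ({"qualified": True,  "group": "A"}, "hire"),
    ({"qualified": True,  "group": "A"}, "hire"),
    ({"qualified": True,  "group": "B"}, "reject"),  # a biased past decision
    ({"qualified": False, "group": "A"}, "reject"),
]

def predict(candidate):
    """Majority vote of past decisions for candidates in the same group."""
    votes = Counter(decision for features, decision in history
                    if features["group"] == candidate["group"])
    return votes.most_common(1)[0][0]

print(predict({"qualified": True, "group": "A"}))  # -> hire
print(predict({"qualified": True, "group": "B"}))  # -> reject, same credentials
```

And if each prediction is then appended to the history, the skew compounds with every decision: the past chooses the future, and then becomes more of itself.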

This doesn’t mean we should eschew this technology. It means we should add to it: combine the fluidity of the human world with the discreteness of machines, as Kevlin Henney puts it. We need humans working in symmathesy with the software, researching the factors that influence its decisions and consciously altering them. We can tweak the algorithms toward the future we want, beyond the past they have observed.
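Continuing the toy above, one hedged sketch of what “consciously altering” a factor might look like: a person audits which features drive the prediction and overrides the one that encodes the past we don’t want to repeat.

```python
def predict_consciously(candidate):
    """Human-chosen correction to the toy model above: ignore the
    group feature and decide on qualification, the factor we
    actually want to select for."""
    return "hire" if candidate["qualified"] else "reject"

print(predict_consciously({"qualified": True, "group": "B"}))  # -> hire
```

Real interventions are far subtler than deleting a feature (proxies leak the same information), but the point stands: a person chooses the factors, rather than the past choosing them.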

Machine learning models come from empirical data. Logical deduction comes from theory. As Gregory Bateson insisted, progress happens in the interaction between the two. It takes a person to tack back and forth between them.

We can benefit from the reasoning ability we wanted from certainty, and still support novelty and individuation. It takes a symmathesy.

This post is based on Abeba Birhane’s talk at NCrafts this year. Video