Design by implementation

We can develop things faster than we can define them.

@jryanday

I’ve remarked before that @cdupuis can implement new features way faster than I can document them. This is not ridiculous; creating something is often faster than connecting it to the people who can use it.

Ryan took this further: sometimes we can’t even define what we want faster than we can implement it. This is not ridiculous; design informs implementation, and implementation informs design.

Implementation informs design, and the process of implementation builds a mental model in the developer’s head. Making that model concrete and communicating it to someone else is hard, sometimes harder than getting the code working. Communicating that model in a way that oodles of people can use it is harder still, sometimes an order of magnitude harder. It includes documentation, yes, and marketing and customer service and advocacy. If you’re really serious, writing a specification. These all inform the next iterations of design and of implementation.

Developing things is only a beginning.

Distance outside of maps

Distance is seriously strange.

Yesterday on the southern coast of Spain, at an Italian restaurant run by a German, I had tiramisu, because I’ve never had tiramisu this close to Italy. People laughed, because Spain is farther from Italy than Germany or Poland (geographically) – but for food purposes it’s closer, right?

Geographic distance is so nice, on a map, so clear and measurable.
And it’s almost never relevant.

Sydney is farther from SF than SF is from Sydney, by 2 hours of flying, because of wind.
St Louis is farther from Europe than San Francisco is, because there are direct flights to SF.

Today in Frankfurt I went from A gates to Z gates. Sounds far! … except on the map, Z is right on top of A. Which does not make it close, because the path from A to Z goes through passport control.


Forget maps. They’re satisfying, fun, and deceptive, because they give us the feeling we understand distance.

Distance is fluid, inconstant. Gates are closer when the sidewalk is moving, farther when people are bunched up and slow.

In software systems, distance is all kinds of inconsistent. Networks get slow and computers get farther apart. WiFi goes down and suddenly they’re on another planet.

And here’s the thing about distance: it’s crucial to our understanding of time.
One thing distributed systems can teach us about the real world: there is no time outside of place. There is no ordering of events across space. There is only what we see in a particular spot.

Roland Kuhn spoke at J on the Beach about building reliable systems in the face of fluctuating distance like this. The hardest part is coming up with a consistent (necessarily fictional) ordering of events, so programs can make decisions based on those events.
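
For the curious, here is a minimal sketch (in Python, my own illustration, not anything from Roland’s talk) of the oldest trick for manufacturing that fictional ordering: Lamport logical clocks, where every node merges any timestamp it receives before ticking its own.

```python
# A minimal Lamport logical clock: each node keeps a counter, ticks it on
# local events, and merges on message receipt. The resulting timestamps give
# a consistent (and somewhat fictional) ordering of events across nodes.

class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        """A local event happened; advance the clock."""
        self.time += 1
        return self.time

    def send(self):
        """Stamp an outgoing message with the current time."""
        return self.tick()

    def receive(self, message_time):
        """Merge the sender's clock with ours, then tick."""
        self.time = max(self.time, message_time)
        return self.tick()


a, b = LamportClock(), LamportClock()

a.tick()            # event on node A            -> 1
stamp = a.send()    # A sends a message          -> 2
b.tick()            # unrelated event on node B  -> 1
b.receive(stamp)    # B receives A's message     -> max(1, 2) + 1 = 3

# Ties between concurrent events get broken arbitrarily (say, by node id),
# which is exactly the "necessarily fictional" part of the ordering.
```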

Humans deal all the time with ambiguity, with “yeah this person said stop over here, and also this machine kept going over there.” We don’t expect the world to have perfect consistency. Yet we wish it did, so we create facsimiles of certainty in our software.

Distributed systems teach us how expensive that is. How limiting.

Martin Thompson’s talk about protocols had real-life advice for collaborating over fluctuating distance. Think carefully about how we will interact, make decisions locally, deal with feedback and recover.

Distance is a thing, and it is not simple or constant. Time is not universal, it is always located in space. Humans are good at putting ambiguous situations together, at sensemaking. This is really hard to do in a computer.
Software, in its difficulty, teaches us to appreciate our own skills.

Predictability is hard, surprises are everywhere

Today at J on the Beach, Helena Edelson talked about a lot of things that Akka can do for event streaming, microservices, and generally handling a lot of data and keeping it (eventually) consistent across great distances. With opt-in causal consistency!
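
This isn’t Akka’s API; it’s a toy sketch (names and data made up) of the bookkeeping that causal consistency rests on: vector clocks, which let a node tell whether one update happened before another or whether they were concurrent and need merging.

```python
# A toy vector clock comparison. Each node tracks the latest event count it
# has seen from every node; comparing vectors tells us whether one update
# causally precedes another, or whether they are concurrent.

def happened_before(a: dict, b: dict) -> bool:
    """True if the event with clock `a` causally precedes the event with clock `b`."""
    nodes = set(a) | set(b)
    return (all(a.get(n, 0) <= b.get(n, 0) for n in nodes)
            and any(a.get(n, 0) < b.get(n, 0) for n in nodes))

write_1 = {"node-a": 1}               # first write, seen only by node-a
write_2 = {"node-a": 1, "node-b": 1}  # node-b wrote after seeing write_1
write_3 = {"node-a": 2}               # node-a wrote again, unaware of node-b

print(happened_before(write_1, write_2))  # True: causally ordered
print(happened_before(write_2, write_3))  # False
print(happened_before(write_3, write_2))  # False: concurrent, must be merged
```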

I am amazed by how hard it is to have consistent causality and data across geographic distance. It is counterintuitive, because our culture emphasizes logical deduction and universality. Science, as we learned it, is straight-line deterministic causality. The difficulty of making systems that work like that across geographic distance illustrates how little of our human-scale world works that way.

The “hard” sciences are narrow.

It is great that people are working on software that can have these properties of consistency, and that they’re studying the limitations. Meanwhile it is also fun to work on the other side of the problem: letting go of perfect predictability, and building (relative) certainty on top of uncertainty. Right now I’m in a talk by Philip Brisk about the 4th Industrial Revolution of IoT sensors and machine learning sensemaking. It turns out: now that we can synthesize DNA in a little machine, we can also find out what DNA was synthesized using only a microphone placed near the machine (and a lot of domain knowledge). Humans are fascinating.

Predictability is hard, and surprises are everywhere, and that is beautiful.

Strong Opinions Strongly Overrated

Today Avdi tweeted about this lovely article debunking “strong opinions, loosely held”.
The gist is, yo! stop talking like you’re sure about stuff. It makes it hard for other people to chime in, and it anchors you, making it hard to let go of your opinion.
The simple trick is to quantify your certainty, below 100%. “I’m 90% sure…” or “At 80% confidence…” This helps you because it’s easier to reduce confidence from 90 to 60, given new information, than to change an absolute (P=1) belief.
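
A quick Bayes-rule sketch (the likelihood numbers are made up purely for illustration) shows why: a 90% belief has room to move when evidence pushes against it, while a P=1 belief is mathematically stuck.

```python
# Why a P=1 belief never moves while a 90% belief can.
# The likelihoods below are invented for illustration.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of the belief after seeing the evidence (Bayes' rule)."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Evidence that is 6x more likely if the belief is false than if it's true.
print(update(0.90, 0.1, 0.6))  # 0.60 -- confidence drops from 90% to 60%
print(update(1.00, 0.1, 0.6))  # 1.0  -- an absolute belief never budges
```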

Like consequences, beliefs are not yes/no, black/white, all/nothing. We hold beliefs at different confidence levels. There’s a book about this: Probabilistic Knowledge, by Sarah Moss. With academic rigor, she proposes that we don’t believe with 50% confidence that Go is the best language for this. Instead, we believe that “There is a 50% chance that Go is the best language for this.” Modeling belief this way leads to some cool stuff that I haven’t read yet; we can do math on this. And it’s how our brains really work; we are constantly choosing action while uncertain.

This also shows up in How to Measure Anything, which uses experts and confidence intervals to get rough measurements of risk and expected outcomes. This can help with decision making. Dan North talks about this with software estimation: the trick is not to ask for a single number “how long will it take” but instead, “It’ll definitely take at least X” and “I’d be embarrassed if it took more than Y”. This gives you a 90% confidence interval, and then you can spend effort to narrow that interval (reduce uncertainty), if you choose.
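
In that spirit, here is a rough sketch of how you might compute with those two answers. Fitting a lognormal to the 5th and 95th percentiles is one common choice for quantities that can’t go negative; the specific numbers and the function below are my own assumptions, not the book’s exact recipe.

```python
# Turn "it'll take at least X" and "I'd be embarrassed past Y" into a
# distribution you can sample. Treat X and Y as the 5th and 95th percentiles
# of a lognormal (an assumption, reasonable for durations).

import math
import random

def estimate(lower, upper, simulations=100_000):
    """Sample durations whose 90% confidence interval is [lower, upper]."""
    z = 1.645  # z-score for the 5th/95th percentiles of a normal
    mu = (math.log(lower) + math.log(upper)) / 2
    sigma = (math.log(upper) - math.log(lower)) / (2 * z)
    samples = sorted(random.lognormvariate(mu, sigma) for _ in range(simulations))
    return {
        "median": samples[len(samples) // 2],
        "mean": sum(samples) / len(samples),
        "p95": samples[int(len(samples) * 0.95)],
    }

# "It'll definitely take at least 2 weeks; I'd be embarrassed past 10."
print(estimate(2, 10))
```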

Many things become possible when you let go of belief being all or nothing. We can act on 80% confidence, learning from additional data as we proceed. Real life happens in the gray areas.