Central vs Distributed Decisions

The octopus is my favorite animal.[1] Did you know that most of its neurons are in its arms? Its central nervous system is distributed: parts of the octopus can respond to local stimuli without coordinating.

Maybe our organizations should be more like octopi, and less like the central decision-making organism we perceive ourselves to be. It’s a lot more realistic, and responsive, than trying to simulate a centralized, unitary brain.

On humans and octopi: http://www.scilogs.com/a_mad_hemorrhage/man-versus-octopus/


[1] For thirty years my favorite animal has been the armadillo, but sorry armadillos, octopi are far more fascinating.

Moving the Dot

A long time ago at my Mom’s trendy fast-growing church, the preacher talked about Moving the Dot. This means recalibrating your strategy every year, so that you’re always targeting (say) 23-year-olds. That’s a different audience every year. It means you aim to please the people you want to join, not the people already there.

In contrast, my old church was still run by and calibrated for the people who founded it 50 years ago. That church has since folded.

Much of the Open Source community is looking like my old church these days. Insist that the culture stay the same because you like it, and that community will die with you.

Aim for a culture that makes new members feel comfortable and welcome, and the community has a future. What’s more important in your open-source communities – you, or the project’s future?

This article by Elizabeth Naramore explains how it works: making someone feel uncomfortable isn’t wrong, but it can be detrimental.  http://naramore.net/blog/uncomfortable

Gaining New Superpowers

When I first understood git, after dedicating some hours to watching a video and reading long articles, it was like I finally had power over time. I can find out who changed what, and when. I can move branches to point right where I want. I can rewrite history!

Understanding a tool well enough that using it is a joy, not a pain, is like gaining a new superpower. Like I’m Batman, and I just added something new to my toolbelt. I am ready to track down latent bug-villains with git bisect! Merge problems, I will defeat you with frequent commits and regular rebasing – you are no match for me now!

What if Spider-Man posted his rope spinner design online, and you downloaded the plans for your 3D printer, and suddenly you could shoot magic sticky rope at any time? You’d find a lot more uses for rope. Not like now, when it’s down in the basement and all awkward to use. Use it for everyday, not-flashy things like grabbing a pencil that’s out of reach, or rolling up your laptop power cable, or reaching for your coffee – ok, not that! Spilled everywhere. Live and learn.

Git was like that for me. I solve problems I didn’t know I had, like “which files in this repository haven’t been touched since our team took over maintenance?” or “when was this derelict function last used?” or “who would know why this test matters?”

Every new tool that I master is a new superpower. On the Mac or Linux, command-line utilities like grep and cut and uniq give me power over file manipulation – they’re like the swingy grabby rope-shooter-outers. For more power, Roopa engages Splunk, which is like the Batmobile of log parsing: flashy and fast, doesn’t fit in small spaces. On Windows, PowerShell is at your fingertips, after you’ve put some time in at the dojo. Learn what it can do, and how to look it up – superpowers expand on demand!

Other days I’m Superman. When I grasp a new concept, or practice a new style of coding until the flow of it sinks in, then I can fly. Learning new mathy concepts, or how and when to use types or loops versus recursion or objects versus functions – these aren’t in my toolbelt. They flow from my brain to my fingertips. Like X-ray vision, I can see through this imperative task to the monad at its core.

Sometimes company policy says, “You may not download vim” or “you must use this coding style.” It’s like they handed me a piece of Kryptonite.

For whatever problem I’m solving, I have choices. I can kick it down, punch it >POW!< and run away before it wakes up. Or, I can determine what superpower would best defeat it, acquire that superpower, and then WHAM! defeat it forever. Find its vulnerability, so that problems of its ilk will never trouble me again. Sometimes this means learning a tool or technique. Sometimes it means writing the tool. If I publish the tool and teach the technique, then everyone can gain the same superpower! For less work than it took me. Teamwork!

We have the ultimate superpower: gaining superpowers. The only hard part is, which ones to gain? And sometimes, how to explain this to mortals: no, I’m not going to kick this door down, I’m going to build a portal gun, and then we won’t even need doors anymore.

Those hours spent learning git may have been the most productive of my life. Or maybe it was learning my first functional language. Or SQL. Or regular expressions. The combination of all of them makes my unique superhero fighting style. I can do a lot more than kick.

Estimates and Our Brain

Why is it so hard to estimate how long a piece of work will take?

When I estimate how long to add a feature, I break it down into tasks. Maybe I’ll need to create a table in the database, add a drop-down in the GUI, connect the two with a few changes to the service calls and service back-end. I picture myself adding a table to the database. That should take about a day, including testing and deployment. And so on for the other tasks.

Maybe it works out like this:

  Create Table     = 1 day
  Service back-end = 2 days
  New drop-down    = 2 days
+ Service call     = 1 day
  ----------------------
  New feature      = 6 days

It almost never happens that way, does it? The estimate above is the happy path of feature development. Each component estimate is probably accurate. If there’s a 70% chance that each of four tasks works as expected, then the chance of the feature being completed on time is 0.7^4 ≈ 24%. Those aren’t very good odds.

It’s worse than that. Take the first task: create table. Maybe there’s a 70% chance of no surprises when we get to the details of schema design. And a 70% chance the tests work, nothing bites us. And a 70% chance of no problems in deployment. Then there’s only a 34% chance that Create Table will take a day. Break each of the others into three 70% pieces, and our chance of completing the feature on time drops to about 1%. Yikes! No wonder we never get this right!
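The compounding is easy to verify. Here’s a quick sketch, using the illustrative 70% per-step figure from above:

```python
# Probability that a feature lands on time when every step must go well.
# The 0.7 per-step success rate is the illustrative assumption from the text.

p_step = 0.7

# Four tasks, each treated as a single 70% bet: 0.7^4
p_feature_coarse = p_step ** 4
print(f"4 tasks, 1 bet each:  {p_feature_coarse:.0%}")

# Each task really hides three 70% sub-steps (design, tests, deployment):
p_task = p_step ** 3
print(f"1 task, 3 sub-steps:  {p_task:.0%}")

# Twelve 70% bets in a row for the whole feature:
p_feature_fine = p_step ** 12
print(f"4 tasks x 3 sub-steps: {p_feature_fine:.1%}")
```

Multiplying twelve merely-decent odds together is what drags the total down to roughly 1 in 70.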

We can picture the happy path of development. It’s much harder to incorporate failure paths – how can we? We can’t expect the deployment to fail because some library upgrade was incompatible with the version of Ruby in production (or whatever). The chance of each failure path is very low, so our brains approximate it to zero. For one likely happy path, there are hundreds of low-probability failure paths. All those different failures add up — and then multiply — until our best predictions are useless. The most likely single scenario is still the happy path and 6 days, but millions of different possible scenarios each take longer.

It’s kinda like distributed computing. 99% reliability doesn’t cut it when we need twenty service calls to work for the web page to load – our page will fail one attempt out of five. The more steps in our task, the more technologies involved, the worse our best estimates get.
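Same arithmetic, different domain. A sketch of the twenty-call page from above:

```python
# A page that needs 20 independent service calls, each 99% reliable.
p_call = 0.99
n_calls = 20

p_page_loads = p_call ** n_calls
print(f"Page loads: {p_page_loads:.1%}")
print(f"Page fails: {1 - p_page_loads:.1%}")  # roughly 1 attempt in 5
```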

Now I don’t feel bad for being wrong all the time.

What can we do about this?

1. Smooth out incidental complexity: some tasks crop up in every feature, so making them very likely to succeed helps every estimate. Continuous integration and continuous deployment spot problems early, so we can deal with them outside of any feature task. Move these ubiquitous subtasks closer to 99%.

2. Flush out essential complexity: the serious delays are usually here. When we write the schema, we notice tricky relationships with other tables. Or the data doesn’t fit well in standard datatypes, or it is going to grow exponentially. The drop-down turns out to require multiple selection, but only sometimes. Sensitive data needs to be encrypted and stored in the token service — any number of bats could fly out of this feature when we dig into it. To cope: look for these problems early. Make an initial estimate very broad, work on finding out which surprises lurk in this feature, then make a more accurate estimate.

Say, for instance, we once hit a feature a lot like this one that took 4 weeks, thanks to hidden essential complexity. Then my initial estimate is 1-4 weeks. (“What? That’s too vague!” says the business.) The range establishes uncertainty. To reduce it, spend the first day designing the schema and getting the details of the user interface, and then re-estimate. Maybe the drop-down takes some detail work, but the rest looks okay: the new estimate is 8-12 days, allowing for we-don’t-know-which minor snafus.
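To see why point 1 pays off, compare the odds before and after hardening the recurring sub-steps. A sketch, still assuming the illustrative 70% figure, and assuming CI/CD pushes the testing and deployment sub-steps to 99% each:

```python
# Effect of smoothing out incidental complexity: assume each of 4 tasks has
# 3 sub-steps. Design stays uncertain (70%), but CI/CD pushes testing and
# deployment to 99% each. All numbers are illustrative.

p_before = 0.7 ** 12                  # all 12 sub-steps at 70%
p_after = (0.7 ** 4) * (0.99 ** 8)    # design at 70%, test + deploy at 99%

print(f"Before CI/CD: {p_before:.1%}")
print(f"After CI/CD:  {p_after:.1%}")
```

The remaining uncertainty is all in the design work, which is exactly why flushing out essential complexity (point 2) is the other half of the answer.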

Our brains don’t cope well with low-probability events. The scenario we can predict is the happy path, so that’s what we estimate. Reality is almost never so predictable. Next time you make an estimate, try to think about the possible error states in the development path. When your head starts to hurt, share the pain by giving a nice, broad range.

Responsiveness over Control

Running coaches get super wound up about the workout schedule, “because it’s the one thing we can control” and because they studied it for years. “We have this cognitive bias to think that it is the absolute key because we invested so much time and effort trying to figure it out.”

This is us with code. We spent years learning how to program, we continually study ways of writing better and better code. This is completely necessary. And yet, that doesn’t make it the key to doing our jobs well.

What is the key? For great software I suspect the key is understanding the problem. In business, we rarely have enough hammock time, and the problem changes before we can fully grok it.

What is the key? In running it’s “respond to challenges and don’t expect perfection.” That’s essential in business software; it is how everything gets done.

From: http://www.scienceofrunning.com/2015/03/self-importance-and-myth-of-perfect.html