skills and/or understanding

There’s a story about some brilliant design that made carrots more accessible to everyone, and the man who made it happen:

Sam [Farber], a delightful person to work with. He understood the business, but what was important was, he understood design. If he could have been a designer himself he would have been, but he had none of the skills necessary.

Mark Wilson

Understanding design is not coupled to having the skills of design, to being able to do it yourself! And that understanding can combine with other areas — such as business, where Sam has both the understanding and the skills — to make a person really effective.

In software too! A person can understand software development without having coding skills. These people are valuable, when they connect that understanding of software with the business.

And a person can have coding skills without understanding software development. We all started somewhere.

If you move from day-to-day coding into management, architecture, or any business role, you use your understanding of software development. And you can update and deepen that understanding without maintaining your hard-core coding chops.

This is where conference sessions and keynotes shine: at deepening our understanding of software development. We develop our skills at work, in play, or in workshops.

Value this understanding, in yourself and others. And try to gain understanding of other parts of the world, such as design and business, even if you don’t have the skills. It’s the combination that gives us power.

Progress in Learning

Back in the day, artists mixed their own paints. They bought minerals, ground them up, and mixed them with binder. Then in the 1800s, metal paint tubes became a thing, and people bought smooth, bright paint in tubes.

Do they still teach mixing your own pigments in art school? Maybe, in one class. I took one class in Assembler when studying Computer Science. That gave me some useful perspective and some vocabulary, but the baby computer designs we looked at were nothing like the computers we use.

I’m betting that art schools don’t return to pigment-grinding for months out of every year of school. There are new styles to learn, higher-level methods of expression that build on top of paint tubes.

Why do my kids spend months out of every year learning to do arithmetic by hand? They could be learning set theory, new ways of thinking that come from higher-level math.

Why are they learning cursive instead of coding? We can express ourselves at higher levels through computers.

Human culture advances through products. Paint in a tube is a product that lets artists focus on painting. It lets them work in new locations and at new speeds. Monet and the other Impressionists painted as they did because they could finally paint outside, thanks to tubes of paint!

I don’t want new computer programmers to learn the basics the way I had to learn them. They don’t need to program in C, except maybe one class for perspective, to learn the vocabulary of memory overflow. Learn queueing theory, that’s a useful way of thinking. Don’t implement a bunch of queues, except as extra credit.

Some artists still mix their own pigments.

Viewing a painting produced entirely with hand-ground mineral pigments is a completely different experience than looking at one made with modern chemical paints. The minerals scintillate and their vibrations seem to extend from the canvas.

Laura Santi

We need a few specialists to implement data structure libraries and programming languages. There are contests for mental arithmetic, if you enjoy that game. Calligraphy is a great hobby. When I sit down to learn, I want to learn new ways to think.

When younger people skip the underpinnings and learn higher-level concepts, that’s called progress.

Five levels of learning

Gregory Bateson talks about distinct levels of learning. From behavior to enlightenment, each level represents change in the previous level.

Zero Learning: this is behavior, responding in the way you always do. The bell rings, oh it’s lunchtime, eat. This does not surprise you, so you just do the usual thing.

Learning I: this is change in behavior. Different response to the same stimulus in a given context. Rote learning is here, because it is training for a response to a prompt. Forming or removing habits.

Learning II: this is change in Learning I; so it’s learning to learn. It can be a change in the way we approach situations, problems, relationships. Character traits are formed here: are you bold, hostile, curious?

For example — you know me, so when you see me you say “Hi, Jess” — zero learning. Then you meet Avdi, so next time you can greet him by name — Learning I. Lately at meetups Avdi is working on learning everyone’s names as introductions are happening, a new strategy for him: Learning II.

Bateson sees learning in every changing system, from cells to societies.

In code — a stateless service processes a request: zero learning. A stateful application retains information and recognizes that user next time: Learning I. We change the app so it retains different data: Learning II.
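
The analogy can be made concrete with a toy Python sketch (all names here are invented for illustration):

```python
def greet_stateless(request):
    """Zero learning: the same response to the same request, every time."""
    return "Hello, " + request["name"]

class GreeterWithMemory:
    """Learning I: the app retains state, so its response to you changes."""
    def __init__(self):
        self.seen = set()

    def greet(self, name):
        if name in self.seen:
            return "Welcome back, " + name
        self.seen.add(name)
        return "Nice to meet you, " + name

# Learning II is us, the developers, changing what the app retains:
# say, replacing `self.seen` with a store of each user's preferred greeting.
```

Calling `greet` twice with the same name gets two different answers; that change in response is the Learning I of the analogy.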

Learning III: This is change in Learning II, so it is change in how character is formed. Bateson says this is rare in humans. It can happen in psychotherapy or religious conversions. “Self” is no longer a constant, nor independent of the world.

Letting go of major assumptions about life, changing worldviews, this makes me feel alive. The important shift is going from one to two, and accepting that both are cromulent: my model is, there are many models. It is OK when a new model changes me; I’m not important (for whatever version of “I” is referenced).

Learning IV: would be a change in Learning III. Evolution achieves this. It doesn’t happen in individual humans, but in a culture it could. Maybe this is development of a new religion?

I wonder where team and organizational changes fall in this.

  • Zero learning: “A bug came in, so we fixed it.”
  • Learning I: “Now when bugs come in, we make sure there is a test to catch regressions.”
  • Learning II: “When a bug comes in, we ask: how could we change the way we work so that this kind of bug doesn’t happen?”
  • Learning III: “Bugs will always happen, so we continually improve our monitoring and observability in production, and we refine our delivery pipeline so rolling forward is smoother and easier all the time.”
  • Learning IV: a framework for agile transformation! hahahahahaha

Key signatures in piano music: the underlying technology

Today, sheet music (at least, the major keys) went from magic to technology for me. Magic is spellcasting, is memorization. Technology is understanding, is derivation. Technology takes up so much less space in my head!

If you can read music enough to pick out a simple song but wonder why chords and their weird names seem so obvious to some people, this post is for you.

Those markings at the beginning of the line that show which notes are played as sharps (called a key signature) – I have been trying to memorize their names. This one is called D Major, and it means that all Fs and Cs are to be played as sharps, hit the black key to the right instead of the white key.

D Major signature

I know how to play music with C# and F#, but why on earth is this called D Major? And why is it a thing?

Today I read a few pages in a book about scales, and now I get it. It takes some history to understand.

See, a long time ago the Greeks made a lyre with four strings, and they tuned the strings into a tetrachord, four notes that sound good. On a piano, a tetrachord is made out of four notes with the following separations: a whole step, then a whole step, then a half step. A whole step goes two keys, counting white and black keys; a half step is one key. Like walking — right foot, left foot makes a whole step. One easy tetrachord starts with middle C:

From C to D is a whole step because there is a black key in between. From E to F is a half-step because there is no key in between. The C tetrachord is C, D, E, F.

The same formula makes a tetrachord starting from any key. The tetrachord starting with D goes D, E, F#, G.

A whole step from D is easy, skip the black key and hit E, a white key. A whole step from E means skipping the next key (F, a white key) and hitting the key after that (F#, a black key). Then a half step means the very next key (G, a white key). This is where the F# in the key signature is coming from.
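
The whole-whole-half recipe is just counting, which makes it easy to check in code. Here is a minimal Python sketch (the `NOTES` list and the `tetrachord` helper are my own, for illustration) that counts piano keys the same way, treating the twelve keys in an octave as a list:

```python
# The twelve keys in one octave, white and black, named with sharps.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def tetrachord(start):
    """Four notes separated by whole, whole, half steps (2, 2, 1 keys)."""
    i = NOTES.index(start)
    return [NOTES[(i + offset) % 12] for offset in (0, 2, 4, 5)]

print(tetrachord("C"))  # ['C', 'D', 'E', 'F']
print(tetrachord("D"))  # ['D', 'E', 'F#', 'G']
```

The offsets 0, 2, 4, 5 are the cumulative key-counts: start, whole step, whole step, half step.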

But wait! There’s more!

Put two tetrachords together to get an Ionian scale, also called a major scale. The tetrachords are separated by a whole step. In C, the first tetrachord is C, D, E, F. Take a whole step to G and start the next tetrachord. It goes G, A, B, C.

Eight notes, the last one the same as the first (one octave higher), make a major scale. The separations between the keys follow a fixed pattern: whole, whole, half, whole, whole, whole, half, straight from the two tetrachords and the whole step between them. Each scale uses a little over half the keys on the keyboard, and ignores the rest. Songs in C major use all white keys and none of the black keys. You want anything else, gotta put a sharp or flat symbol in front of the note.

What does this mean for D? The D major scale starts with the D tetrachord and adds a second tetrachord: D, E, F#, G; A, B, C#, D.

C# and F#! There they are, the two black keys with numbers on them! The “normal” keys to play in the D scale include C# and F# (black keys), but never C or F (white keys). Putting the D Major key signature in front of the music means that all the keys in the D scale look like ordinary notes.

With the C# and F# handled by the key signature, any special marking (sharp, flat, or natural) points out a note that is unexpected, that does not fit in with the rest.

The same pattern works for other key signatures; try constructing the Ionian scale for G out of two tetrachords separated by a whole step. You’ll find that the only black key used is F#, so this is G major:

G major key signature
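
The same counting derives any major scale, and therefore any key signature, mechanically. A short Python sketch (function names invented for illustration; because the note names here use only sharps, flat keys like F major come out spelled as A# rather than B♭):

```python
# The twelve keys in one octave, white and black, named with sharps.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def major_scale(start):
    """Two tetrachords (whole-whole-half) joined by a whole step."""
    i = NOTES.index(start)
    # Cumulative key-counts for steps: W W H, (W), W W H
    offsets = [0, 2, 4, 5, 7, 9, 11, 12]
    return [NOTES[(i + o) % 12] for o in offsets]

def key_signature(start):
    """The sharps the key signature must declare: the black keys in the scale."""
    return [n for n in major_scale(start) if n.endswith("#")]

print(major_scale("D"))    # ['D', 'E', 'F#', 'G', 'A', 'B', 'C#', 'D']
print(key_signature("D"))  # ['F#', 'C#']
print(key_signature("G"))  # ['F#']
print(key_signature("C"))  # []
```

Run it for D and the two sharps of the D Major signature fall out; run it for C and the list is empty, which is why C major is all white keys.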

These are historical explanations for the structure of major scales, of seven different notes (plus the same notes in other octaves) that sound good together. There are scientific explanations too, even ratios of wavelengths.

On the piano, this means only 7/12 of the keys are used in any song, ordinarily. Why have the other keys? They bring equality to the different notes: any key can start a scale, because the whole-steps and half-steps are there no matter where you start. C is not special, just convenient. The circle is complete. Actually 12 circles for the 12 keys in an octave. So many patterns are visible to me now!

Now I can name the key signatures and say why they have those names. I don’t have to memorize them, because I can derive them. Next I can learn chords and why certain ones like C and F and G7 appear together frequently. All of this from two pieces of vocabulary and some counting. Goodbye magic, hello math.

ElixirConf keynote: Elixir for the world

Video

http://confreaks.tv/videos/elixirconf2015-keynote-elixir-should-take-over-the-world

Slides with notes

Big PDF with notes (15M)

Slides only (on speakerdeck)

References

Camille Fournier on distributed systems: video
Caitie McCaffrey on stateful services: video 
Denise Jacobs, creativity: video
Marty Cagan on what’s better than agile: video
How to Measure Anything: book
Brené Brown on vulnerability, Grounded Theory: video book
Property testing: video
Property testing: QuickCheck CI 
Elm: Richard Feldman’s talk: video
My talk about React and Elm: video
Structure of Scientific Revolutions: about the book

Gaining new superpowers

When I first understood git, after dedicating some hours to watching a video and reading long articles, it was like I finally had power over time. I can find out who changed what, and when. I can move branches to point right where I want. I can rewrite history!

Understanding a tool well enough that using it is a joy, not a pain, is like gaining a new superpower. Like I’m Batman, and I just added something new to my toolbelt. I am ready to track down latent bug-villains with git bisect! Merge problems, I will defeat you with frequent commits and regular rebasing – you are no match for me now!
What if Spiderman posted his rope spinner design online, and you downloaded the plans for your 3D printer, and suddenly you could shoot magic sticky rope at any time? You’d find a lot more uses for rope. Not like now, when it’s down in the basement and all awkward to use. Use it for everyday, not-flashy things like grabbing a pencil that’s out of reach, or rolling up your laptop power cable, or reaching for your coffee – ok not that! spilled everywhere. Live and learn.

Git was like that for me. I solve problems I didn’t know I had, like “which files in this repository haven’t been touched since our team took over maintenance?” or “when was this derelict function last used?” or “who would know why this test matters?”

Every new tool that I master is a new superpower. On the Mac or Linux, command-line utilities like grep and cut and uniq give me power over file manipulation – they’re like the swingy grabby rope-shooter-outers. For more power, Roopa engages Splunk, which is like the Batmobile of log parsing: flashy and fast, doesn’t fit in small spaces. On Windows, PowerShell is at your fingertips, after you’ve put some time in at the dojo. Learn what it can do, and how to look it up – superpowers expand on demand!

Other days I’m Superman. When I grasp a new concept, or practice a new style of coding until the flow of it sinks in, then I can fly. Learning new mathy concepts, or how and when to use types or loops versus recursion or objects versus functions — these aren’t in my toolbelt. They flow from my brain to my fingertips. Like X-ray vision, I can see through this imperative task to the monad at its core.

Sometimes company policy says, “You may not download vim” or “you must use this coding style.” It’s like they handed me a piece of Kryptonite.

For whatever problem I’m solving, I have choices. I can kick it down, punch it >POW!< and run away before it wakes up. Or, I can determine what superpower would best defeat it, acquire that superpower, and then WHAM! defeat it forever. Find its vulnerability, so that problems of its ilk will never trouble me again. Sometimes this means learning a tool or technique. Sometimes it means writing the tool. If I publish the tool and teach the technique, then everyone can gain the same superpower! for less work than it took me. Teamwork!

We have the ultimate superpower: gaining superpowers. The only hard part is, which ones to gain? and sometimes, how to explain this to mortals: no, I’m not going to kick this door down, I’m going to build a portal gun, and then we won’t even need doors anymore.

Those hours spent learning git may have been the most productive of my life. Or maybe it was learning my first functional language. Or SQL. Or regular expressions. The combination of all of them makes my unique superhero fighting style. I can do a lot more than kick.

REST as a debugging strategy

In REST there’s this rule: don’t save low-level links. Instead, start from the top and navigate the returned hyperlinks, as they may have changed. Detailed knowledge is transitory.

This same philosophy helps in daily programming work.

Say a bug report comes in: “Data is missing from this report.” My pair is more familiar with the reporting system. They say, “That report runs on machine X, so let’s log in to X and look at the logs.”

I say, “Wait. What determines which machine a report runs on? How could I figure this out myself?” and “Are all log files in the same place? How do we know?”

The business isn’t in a panic about this report, so we can take a little extra time to do knowledge transfer during the debugging. Hopefully my pair is patient with my high-level questions.

I want to start from sources of information I can always access. Deployment configuration, the AWS console, etc. Gather the context outside-in. Then I can investigate bugs like this alone in the future. And not only for this report, but any report.

“How can we ascertain which database it connected to? How can I find out how to access that database?”
“How can I find the right source repository? Which script runs it, with which command-line options? What runs that script?”

Perhaps the path is:
– deployment configuration determines which machine, and what repository is deployed
– cron configuration runs a script
– that script opens a configuration file, which contains the exact command run
– database connection parameters come from a service call, which I can make too
– log files are in a company-standard location
– source code reveals the rest.

This is top-down navigation from original sources to specific details. It is tempting to skip ahead, and if both of us already knew the whole path and had confidence nothing changed since last week, we might skip into the dirty details, go right to the log file and database. If that didn’t solve the mystery, we’d step back and trace from the top, verifying assumptions, looking for surprises. Even when we “know” the full context, tracing deployment and execution top-down helps us pin down problems.

Debugging strategy that starts from the top is re-usable: solve many bugs, not just this one. It is stateless: not dependent on environmental assumptions that may have changed when we weren’t looking.

REST as more than a service architecture. REST as a work philosophy.

Stacking responsibilities

TL;DR – Support decisions with automation and information; give people breadth of responsibility; let them learn from the results of their choices.

When I started writing software in 1999, the software development cycle was divided into stages, ruled over by project management.

Business people decided what to build to support the customers. Developers coded it. Testers tested it. System Administrators deployed and monitored it. Eventually the customer got it, and then, did anyone check whether the features did any good?

These days Agile has shortened the cycle, and put business, development, and QA in the same room. Meanwhile, with all the tools and libraries and higher-level languages, feature development is a lot quicker, so development broadens into automating the verification step and the deployment. Shorter cycles mean we ask the customer for feedback regularly.

Now developers are implementing, verifying, deploying, monitoring. The number of tools and environments we use for all these tasks becomes staggering. Prioritization becomes a problem: when the only externally-visible deliverable is features, who will improve tests, deployment, and monitoring? We automate the customer’s work; when do we automate ours?

The next trend in development process helps with these: it divides responsibilities without splitting goals. Business works with customers, developers automate for business, and a slice of developers automate our work. Netflix calls this team Engineering Tools; at Outpace we call it Platform. Instead of handoffs, we have frequent delivery of complete products from each team.

Meanwhile, when developers own the features past production, another task emerges: evaluation of results. Automate that too! What is success for a feature? It isn’t deployment: it’s whether our customers find value in it. Gleaning that means building affordances into the feature implementation, making information easy to see, and then checking it out. We’re responsible for a feature until its retirement. Combine authority with information, and people rise to the occasion.[1]

Learning happens when one person sees the full cycle and its effects, and that person influences the next cycle. Experiments happen, our capabilities grow.

In this division of responsibilities, no one delegates decisions. Everyone shares a goal, and supports the rest of the organization in reaching that goal. The platform team doesn’t do deployments. It creates tools that abstract away the dirty details, supplying all information needed for developers to make decisions and evaluate the results. At Outpace, the Platform team is composed of people from the other development teams, so they share background and know each others’ pain. The difference is: the platform team has a mandate to go meta, to improve developer productivity. Someone is automating the automation, and every developer doesn’t have to be an expert in every layer.

The old way was like a framework: project managers take the requirements from the business, then the code from the developers, and pass them into the next step in the process. The new way is like libraries: the platform team provides what the developers need, who provide what the business needs, who provide what the customer needs. Details are abstracted away, and decisions are not.

When a developer’s responsibilities end with code that conforms to a document, it’s super hard to get incentives aligned to the larger needs. Once everyone is responsible for the whole cycle, we don’t need to align incentives. Goals align, and that’s more powerful. Remove misaligned incentives, give us a shared goal to work for, and people achieve. Give us some slack to experiment and improve, and we’ll also innovate.

————————————–
[1] via David Marquet