Progress in Learning

Back in the day, artists mixed their own paints. They bought minerals, ground them up, and mixed them with binder. Then in the 1800s, metal paint tubes became a thing, and people bought smooth, bright paint in tubes.

Do they still teach mixing your own pigments in art school? Maybe, in one class. I took one class in Assembler when studying Computer Science. That gave me some useful perspective and some vocabulary, but the baby computer designs we looked at were nothing like the computers we use.

I’m betting that art schools don’t return to pigment-grinding for months out of every year of school. There are new styles to learn, higher-level methods of expression that build on top of paint tubes.

Why do my kids spend months out of every year learning to do arithmetic by hand? They could be learning set theory, new ways of thinking that come from higher-level math.

Why are they learning cursive instead of coding? We can express ourselves at higher levels through computers.

Human culture advances through products. Paint in a tube is a product that lets artists focus on painting. It lets them work in new locations and at new speeds. Monet and the other Impressionists painted as they did because they could finally paint outside, thanks to tubes of paint!

I don’t want new computer programmers to learn the basics the way I had to learn them. They don’t need to program in C, except maybe one class for perspective, to learn the vocabulary of memory overflow. Learn queueing theory, that’s a useful way of thinking. Don’t implement a bunch of queues, except as extra credit.

Some artists still mix their own pigments.

Viewing a painting produced entirely with hand-ground mineral pigments is a completely different experience than looking at one made with modern chemical paints. The minerals scintillate and their vibrations seem to extend from the canvas.

Laura Santi

We need a few specialists to implement data structure libraries and programming languages. There are contests for mental arithmetic, if you enjoy that game. Calligraphy is a great hobby. When I sit down to learn, I want to learn new ways to think.

When younger people skip the underpinnings and learn higher-level concepts, that’s called progress.

Why is CSS a thing?

All I want is a web page. I want this one thing on the left and this other thing on the right — why is this so hard?? Can I just make a table in HTML like I used to do in the nineties? Why do I have to worry about stylesheets? And why are they so hard?

As a backend developer, I’m used to giving the computer instructions. Like “put this on the left and this on the right.” But that is not how web development works. For good reason!

As the author of a web page, I do not have enough information to decide how that page should be laid out. I don’t know who is using it, on what device, in what program, on what screen, in what window, with what font sizes.

You know who does know that stuff? The user agent. That’s a technical term for an application that presents documents to people. The browser is a user agent. The user agent could also create printed documents, or it could speak the document to a person whose eyes are unavailable.

The user agent runs on a particular device. Computer, phone, TV, whatever. It knows the limitations of the hardware. It can be configured by the user. The user agent can conform to various CSS specifications.

CSS is not a programming language. It is a syntax for rules, rules which give the browser (that user agent) clues about how to display the document. The browser combines that information with what it knows about the world to come up with a format to display (or speak) the document.

It turns out that rule-based programming is hard. It sounds like it should be easier than imperative code, but it is not.

So no, you don’t get to decide that this thing goes on the left and that thing goes on the right. The browser gets that choice.

But here’s something I learned yesterday: put each thing in a div, and give those divs display: inline-block. Then the browser has the option of putting them next to each other, if that fits with those constraints that only it knows.
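
Here is a minimal sketch of that trick, written as browser-side TypeScript just to keep to one language; a plain stylesheet rule on two divs does the same thing, and all the names and text here are placeholders.

const left = document.createElement("div");
left.textContent = "this thing";
left.style.display = "inline-block";

const right = document.createElement("div");
right.textContent = "that other thing";
right.style.display = "inline-block";

// The browser, not me, decides whether these end up side by side.
document.body.append(left, right);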

Implementing all the interfaces

Humans are magic because we are components of many systems at once. We don’t just build into systems one level higher, we participate in systems many levels higher and everywhere in between.

In code, a method is part of a class which is part of a library which is part of a service which is part of a distributed system — there is a hierarchy, and each piece fits where it does.

An atom is part of one molecule, which combines into one protein, which functions in one cell in one tissue in one organ, if it’s lucky enough to be part of something exciting like a person.

But as a person, I am an individual and a mother and a team member and an employee and a citizen (of town, state, country) and a human animal. I am myself, and I participate in systems from relationship to family to community to culture. We function at all these levels, and often they load us with conflicting goals.

Gregory Bateson describes traditional Balinese culture: each full citizen participates in the village council. Outside of village council meetings, they speak for themselves. In the council, they speak in the interests of I Desa (literally, Mr. Village).

Stewart Brand lists these levels of pace and size in a civilization:

  • Fashion/art (changes fastest, most experimental)
  • Commerce
  • Infrastructure
  • Governance
  • Culture
  • Nature (changes slowest, moderates everything else)

Each of these works at a different timescale. Each of us participates in each of them.

We each look out for our own interests (what is the fashionable coding platform of the day) and our family and company’s economic interest (what can we deliver and charge for this quarter) and infrastructure (what will let us keep operating and delivering long-term) and so on.

Often these are in conflict. The interests of commerce can conflict with the interests of nature. My personal finances conflict with the city building infrastructure. My nation might be in opposition to the needs of the human race. Yet, my nation can’t continue to exist without the stability of our natural world. My job won’t exist without an economic system, which depends on stable governance.

If we were Java classes, we’d implement twenty different interfaces, none of them perfectly, all of them evolving at different rates, and we’re single-threaded with very long GC pauses.
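
A playful TypeScript sketch of that metaphor, with every name invented:

interface Employee { attendStandup(): void; }
interface Parent { packLunches(): void; }
interface Citizen { vote(): void; }

// One person, many interfaces, none implemented perfectly,
// each one evolving on its own schedule.
class Human implements Employee, Parent, Citizen {
  attendStandup(): void { /* late, because of packLunches() */ }
  packLunches(): void { /* rushed, because of attendStandup() */ }
  vote(): void { /* rescheduled twice this year */ }
}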

Tough stuff, being human.

These are not the only options. Wineglass edition

Today I found myself in the kitchen, near the fridge with the wine (it’s an excellent Chardonnay from Sonoma, thanks XYZ Beverages in Tybee Island, you exceed my expectations although you don’t have a website to link to). My empty glass was out on the screened porch.

Do I go outside for the glass? Or take the wine bottle to the glass, and then return it to the fridge?

These are not the only options. I snag another wineglass from the cupboard, fill it with wine, and take that out to the porch.

Now I have two dirty wineglasses, but who cares? The dishwasher washes them all at the same rate.

This is garbage collection in action. The dishwasher acts as a garbage collector for dirty dishes. It adds the capability of “do not worry about how many dishes you dirty. They will all be cleaned for the same fixed cost that you have already incurred.”

This removes one consideration that I need to think about in my actions. I’m free to optimize for my higher-level objectives (“be on the porch, with wine in a glass”) while ignoring the accumulation of resources (dirty wineglasses). It takes some adjustment to benefit from this condition.

It takes some adjustment in a brain to move from scarcity (“Dishes are a resource with a cost”) to abundance (“dirty dishes, meh, not a problem anymore”). Once adjusted, the options open to me widen, enough that a clearly better path appears.

Now pardon me while I finish this delicious glass of wine and fetch another, from the nice cold bottle still in the fridge.

For cleaner code, write ugly code

We want to write only clean code, right?

Wrong. I want to write eventually-clean code. The code starts out as an exploration of a space, and then I refine it to be cleaner and more suited to its purpose. Usually, that purpose becomes clearer through writing, reading, and using the code.

That process of refining or tidying up can feel tedious, compared to implementing more features. It can be tempting to leave off error handling. I have a strategy for that: meaningful ugliness.

When I’m prototyping, I make the code so ugly that it will be satisfying to clean up. No attempt at appearing clean. I put in a bunch of casts, maybe some bad casing, random names instead of plausible-but-inaccurate ones. No null-checking.

for (const gerald of allKindsOfStuff.fingerprints) {
    (gerald as any).displayName =
         allKindsOfStuff.feature.convertinate(gerald.name);            }

(exaggerated to show detail)

When cleaning, I often start by making the code uglier. To move from an OO style toward functional, start by replacing all global or class-level variables with parameters. Twelve parameters on a function?! That’s hideous! Yes. Yes. Let ogres look like ogres.
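
A hypothetical before-and-after, just to show that first honest-but-ugly step (the names are made up):

// Before: the method leans on class-level state.
class InvoicePrinter {
  constructor(private taxRate: number, private currency: string, private locale: string) {}

  print(amount: number): string {
    return new Intl.NumberFormat(this.locale, { style: "currency", currency: this.currency })
      .format(amount * (1 + this.taxRate));
  }
}

// After step one: every class-level variable becomes an explicit parameter.
// More parameters, more honesty about what the function really depends on.
function printInvoice(amount: number, taxRate: number, currency: string, locale: string): string {
  return new Intl.NumberFormat(locale, { style: "currency", currency })
    .format(amount * (1 + taxRate));
}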

This lets me feel productive when I come back to the code and clean it up. Later, I know more about what this code is for, what might be null and what won’t, what dependencies I can eliminate and which are meaningful. This is a better time to clean.

Disguising rough code in socially-acceptable clothing prevents cleaning. Appearance-of-good is the enemy of better.

When the db is the interface

There are two huge sources of inertia in software: data, and interfaces.

When two systems connect to the same database, you have coupled them into one. Ow.

When some other system is doing reporting on my database, I can’t change my table structure. That severely limits any internal reorganizations I might do. Especially when I don’t know how they use it, so I am afraid to touch it.

Then what if it’s a schemaless database? This means the schema resides in the application. When the schema resides in two applications at once, designing change is an exercise in hope.

Sometimes two systems access the same database because the database is the interface. We have an example of that at Atomist currently: one system (called squirrel, I don’t know why) populates a database, and another system (I call it org-viz) uses that data to make visualizations. How can that be okay?

Database as interface is not horrifying when:

  1. The database is not the system of record. We could repopulate all or part of it from elsewhere, so it’s somewhat disposable.
  2. The database has a schema. Interfaces must be well-defined.
  3. One system updates, the other reads.

In our case, we use PostgreSQL, so we have a schema. Some of our fields are JSON, which gives us extensibility, while the essential fields are guaranteed. The populating system (squirrel) operates from a queue, and we can replay the queue as needed, or play it into multiple databases. There are options for designing change.
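
As a sketch of what that looks like from the reading side (these names are illustrative, not our actual schema): the guaranteed columns get precise types, and the JSON column stays open-ended.

// Columns the PostgreSQL schema guarantees get real types;
// the JSON column is the extensible part, so read it defensively.
interface RepoRow {
  repoId: string;
  org: string;
  updatedAt: Date;
  extra: Record<string, unknown>; // JSON column: the writing system can add fields without breaking readers
}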

Database as an interface is never going to be the cleanest decoupling, but it is not unreasonable when it is carefully designed and the teams don’t mind talking to each other. When the database is accidentally an interface, then you’re horked.

Feature interaction

Why is legacy software so much harder to work in?

Why does development velocity slow down as a system grows?

Lots of reasons, sure, but I suspect the biggest one is oft unspoken.

Every new feature comes with the invisible requirement: “and everything else still works.”

Every existing feature, and even past bugs, makes every new feature harder. Every user with expectations is a drag on change.

Feature interactions are hard! And they surprise you. My favorite examples come from The Sims release notes:

  • Fish are no longer duplicated in the fridge when moving homes.
  • Televisions no longer play video after they are burned or broken.
  • Sims will no longer walk on water to view paintings placed on swimming pool walls.
  • Pianists will no longer continue playing pianos that have been detonated.
  • Sims will no longer receive a wish to “Skinny Dip” with Mummies.
  • Sims who are on fire will no longer be forced to attend graduation before they can put themselves out.
  • Sims can no longer “Try for Baby” with the Grim Reaper.

Your feature interactions are probably not this much fun.

Software gets harder to change as it becomes part of a larger system, with more users and more uses. Development slows down because it’s useful, and we want it to stay useful. This also makes each change more valuable, because it’s helping more people in more ways.

Don’t be discouraged. Do be diligent, and be okay with being slower. This is the price of success.

Teams are like bread

Maybe when companies make you do “team-building” activities, what they’re looking for is a phase transition into a gelled team. Because it is a sudden, magical thing, right? When a group of people turns into a team.

Once you get there, to that feeling of team, it’s self-reinforcing. You trust each other, so you don’t take miscommunications personally; you work to restore communication, and so trust increases. You understand each other, so it’s easy to build further understanding. Working together gets smoother and smoother.

But how do you get there? pfft. How do you make sourdough bread? You grow it from sourdough starter, which you got from … someone else’s starter. Or from putting sugar water out on a windowsill and hoping some yeast lands in it. Seriously.

Will Larson suggests that when you have a gelled team, keep it. If you need to adjust how many people are helping with which pieces of software, then shift responsibilities from one team to another, not people.

When you have a gelled team, you can grow it gradually. Let the team reform with the new member incorporated before adding another.

Will suggests that when you want another team, gradually grow a gelled team up to 8-10 (max size) and then fork it into two teams of 4-5 (minimum size). It’s kind of like the sourdough starter: grow it, divide it, make the bread. Keep it alive the whole time.

If you have one team where the magic is flourishing, don’t kill it. Feed it, grow it, and let it be a source of further strong teams. No rushing.

Otherwise – if you take the group to paintball, or get them to mob program, or put them in a team room with sugar and water, maybe the yeast will blow in?


Develop before define

First the loose thinking and the building up of a structure on unsound foundations and then the correction to stricter thinking and the substitution of a new underpinning beneath the already constructed mass.

Gregory Bateson on the advance of science. (From Steps to an Ecology of Mind)

This expresses a process I have observed in developers. We can develop something faster than we can define it.

That loose thinking includes the construction of loose code. We think with our fingers and eyes, keyboards and screens, editors and runtimes as well as with our brains. We try things, we draw them out or code them up. This eliminates a lot of impossible paths.

Then afterward, we shore up the useful ones. We put an API around it, error handling within, types throughout. We describe its interface and action in documentation.

Bateson grants permission to code loosely as an extension to thinking loosely, with the responsibility to return with rigor before we rope in other teams.

So do this: play in code the way we play in thought.

Then please realize that putting the foundations under it, defining the functionality so others can use it, is 10-100 times more time-consuming than your happy-path sketch.

When time gets tight, we make it tighter.

What do you think are the personality traits that contribute to being a good mathematician?

Flexibility. A willingness to change course when you see that things should go a different way. The ability to backtrack, and go forward and follow different paths and then come back to where you were. 

Dr Amie Wilkinson

This happens in code, too. When we explore a solution, we need to try one path, then come back and try another. This makes git incredibly valuable, with local branches and reverts.

I see Rod doing this often, when he works on exploring the design space of an API. (Rod is famous for creating the Spring framework for Java.)

If we are rushed, backing up can feel like wasting time. We push forward in directions that are slower. Worse, once someone else is using the API, it takes coordination to change it. That slows all of us down forever.

Yesterday Llewellyn Falco talked about how we tend to be prudent with money, leaving ourselves slack and valuing more-money-in-the-future more than less-money-now.

The multiplier that says how much future money is worth to us right now is called the future discount. If you don’t think you’ll be alive in a month, or that you’ll really get any money at all then, your future discount is zero.

Under pressure, under scarcity, there is a psychological effect that reduces the future discount to zero. Saving money feels utterly pointless. Current needs are too pressing.
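
As a tiny illustration (the numbers are invented), the future discount acts like a multiplier on anything we might get later:

// How much is value-later worth to me right now?
function presentValue(futureAmount: number, futureDiscount: number): number {
  return futureAmount * futureDiscount;
}

presentValue(100, 0.9); // 90: future-me still counts for something
presentValue(100, 0);   //  0: under pressure, the future counts for nothing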

Llewellyn pointed out that while developers are prudent with money (there, we believe in the future), we often aren’t prudent with time. We would not budget every dollar we have for a trip and leave no slack for surprises. But we do exactly that with sprints. Then we get under pressure, our future discount invisibly drops to zero, and we can’t even think about future us, or the future whole-company stuck with the first design we thought of, because backing up is just not an option.

Plow forward slower and slower, because we don’t believe in the future. Or step back and try a few things. Take a breath, take a walk, and maybe you’ll spot a smoother path.