Capturing the World in Software

TL;DR – we can get a complete, consistent model of a small piece of the world using Event Sourcing. This is powerful but expensive.

Today on Twitter, Jimmy Bogard asked about the tradeoffs of Event Sourcing:

If event sourcing is not scalable, faster, or simpler, why use it?

Event Sourcing gives you a complete, consistent model of the slice of the world modeled by your software. That’s pretty attractive.

We want to model the real world in software.

You can think about the present world as a sum of everything that happened before. Looking around my room, I can say that my bookshelf is the sum of various purchases, some moving around, a set of decisions about what to read and what to keep.

my bookshelf has philosophy, math, visualization, and a hippo

I can think of myself as the sum of everything that has happened, plus the stories I told myself about that. My next action is an outcome of this, plus my present surroundings, plus incoming events. That action itself is an event in the world.

In life, in biology, we don’t get to see all these inputs. We don’t get to change the response algorithm and try again. But in software, we can!

Of course we want perfect modeling and traceability of decisions! This way we can always answer “why,” and we can improve our understanding and decision-making strategies as we learn.

This is what Event Sourcing offers.

We want our model to be complete and consistent.

It’s impossible to model the entire world. Completeness and consistency are in conflict, sadly. Still, if we limit “complete” to a business domain, and to the boundaries of our company, this is possible. Theoretically.

Event Sourcing offers a way to do that.

In event sourcing, every piece of input is an event. Someone requests a counseling appointment, event. Provider signs up for available hours, event. Appointment scheduled, event. Customer notified, event. Customer shows up, event. Session report filed, event.

We can sum past events to get the current state

Skim the timeline of all events for the relevant ones. Sum these up (“sum” here means any way of folding events together, not just adding numbers). From this we calculate the state of the world.

From appointment-was-scheduled events, we construct a provider’s calendar for the day.
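
Here is a minimal sketch of that fold in TypeScript. The event shapes are hypothetical, not a schema from any particular framework; note that a cancellation is just another event the fold takes into account.

// Hypothetical event shapes for the counseling-center example.
type AppointmentEvent =
    | { type: "AppointmentScheduled"; appointmentId: string; providerId: string; slot: string }
    | { type: "AppointmentCancelled"; appointmentId: string };

// Fold (“sum”) the events into one provider’s calendar for the day.
function providerCalendar(events: AppointmentEvent[], providerId: string): Map<string, string> {
    const calendar = new Map<string, string>();
    for (const event of events) {
        if (event.type === "AppointmentScheduled" && event.providerId === providerId) {
            calendar.set(event.appointmentId, event.slot);
        } else if (event.type === "AppointmentCancelled") {
            calendar.delete(event.appointmentId); // a later event corrects an earlier one
        }
    }
    return calendar;
}

The Map that comes out is the current state. Throw it away, replay the events, and you get the same state back.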

At the end of the month, we construct reports on customers served and provider utilization. Based on that, we might seek more providers or have a talk with the less active ones. Headquarters ranks the performance of our office compared with others.

We need to allow corrections

To accurately model the real world, we need to allow for all the stuff that happens in the real world.

Appointments are cancelled. Customers don’t show up. Session reports are filed late. (“Where’s that session report from last week?” “Oh right, they were too late, because the gate to the parking lot malfunctioned. Don’t charge them for it.”)

Data is late or lost. If you insist that this doesn’t happen (“Every provider must enter the session reports by the end of the day”) then your model is blind to reality. The weather turns bad, people go home. There’s a bomb threat, or an active shooter. Reality intrudes.

Events outside your careful model will happen. Accommodate corrections, incorporate events that arrive late, accept partial data. The more of reality you allow into your model, the more accurate it can be.

We can evaluate past decisions based on the information available at the time

When data arrives late, reports change after they are printed. An event sourced system handles this.

As new data comes in about past days, it gets summed in with the data about those days. Reports get more accurate.

A friend of mine works at a counseling center, and he gets calls from headquarters like “Why is your utilization so low for December?” and he’s like “What? It was fine” and then he runs the report again and sure enough, it’s different. After he ran the report, more data about December came in, and now the totals are different. He can’t reproduce the reports he saw, which makes it hard to explain his actions to HQ.

If their software used event sourcing, he could say, “Please run the report as of January 2, and you’ll see why I didn’t take any action.”

Each event records a received timestamp, for when we learned about it, and an effective timestamp, for the real-world happening it represents. Then the software can sum only the events received before January 2 to reproduce the report as it was seen that day.
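
A sketch of that query, with illustrative field names: keep only the events received before the as-of date, then sum those whose effective date falls in the month being reported on.

// Sketch: reproduce a report “as of” a given day. Field names are illustrative.
interface TimestampedEvent {
    receivedAt: Date;       // when our system learned about it
    effectiveAt: Date;      // when it happened in the world
    hoursDelivered: number;
}

function utilizationAsOf(events: TimestampedEvent[], asOf: Date, monthStart: Date, monthEnd: Date): number {
    return events
        .filter(e => e.receivedAt <= asOf) // only what we knew then
        .filter(e => e.effectiveAt >= monthStart && e.effectiveAt <= monthEnd)
        .reduce((total, e) => total + e.hoursDelivered, 0);
}

Running it with an as-of date of January 2 reproduces the number my friend saw, no matter how much December data arrived afterward.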

We can re-evaluate the world with new logic

Not only can an event-sourced system reproduce the same report as on an earlier day, we can ask: what if we changed the report logic? Then what would it look like?

Maybe we want to report unreported appointments as “possibly cancelled” to reflect uncertainty. We can run the new logic against the same events and compare it to the old results.

This means we can run tests against the event stream and detect behavior changes.
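
As a sketch, that comparison is just replaying the same immutable stream through two versions of the logic and diffing the results. The Projection type here is my own illustration, not anyone’s API.

// Sketch: replay the same events through old and new logic, report what changed.
type Report = Map<string, string>;
type Projection<E> = (events: E[]) => Report;

function behaviorDiff<E>(events: E[], oldLogic: Projection<E>, newLogic: Projection<E>): string[] {
    const before = oldLogic(events);
    const after = newLogic(events);
    const changes: string[] = [];
    after.forEach((value, key) => {
        if (before.get(key) !== value) {
            changes.push(`${key}: was "${before.get(key)}", now "${value}"`);
        }
    });
    return changes;
}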

We need to record externally-visible decisions for consistency

When we change the software, we endanger consistency.

If we update the report logic in February, then when HQ runs the report “as of January 2” they’ll see something different than my friend saw when he ran it on that date. For consistency, both the data and code need to match what existed on January 2.

Or, we can model the report itself as an event. “On January 2, I said this about December.” Then we can incorporate that into the reporting logic.

Anything our system does that is visible to the outside world is itself an event, because it changes the way external people and software act. To reproduce our behavior consistently, our system can either record its own behavior, or retain all the data and the code that went into choosing it.
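
A minimal sketch of what a report-as-event might carry (the names are illustrative):

// Sketch: the act of reporting, recorded as an event in its own right.
interface ReportProduced {
    type: "ReportProduced";
    reportName: string;  // e.g. "december-utilization"
    producedOn: Date;    // the January 2 run
    body: string;        // exactly what the reader saw that day
}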

So far, this is nice and deterministic. But the real world isn’t.

Reproducing behavior is possible in an event-sourced system, if that behavior is deterministic. In human behavior, we don’t get that luxury. Our choices come from many influences, some of them contradictory. One tweet inspired me to write this article. Thousands of other tweets distract me from it.

Conflicting information comes in from real life.

Event sourcing gets tricky when the real world we are modeling is inconsistent, according to the events that come in.

Now say we’re a shipping company. We model the movement of goods in containers as they travel across the world. It is an event when a container is loaded on a ship, and an event when it is unloaded. An event when a ship’s itinerary is scheduled, and when it arrives at each port.

One event says that container 1418 was loaded onto the vessel Enceladus in Auckland. Another event says that Enceladus is scheduled for its next stop in Beijing. Another event says that container 1418 was unloaded in San Francisco. Another says that container 1418 was emptied in Beijing. Which do you believe?

This example comes from a real story. Weird things happen. Does your system let people report reality? Is there a fallback for “Ask a person to go look for that container. Is it really 1418?”

Decisions made in ambiguity are events

Whatever decision the system makes, it needs to record that as an event. Perhaps that shows up as a footnote in reports about Enceladus, Beijing, and San Francisco. Does anybody hear about it in Auckland?
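
A sketch of how such a decision might be recorded, with hypothetical field names:

// Sketch: when the system picks one of several conflicting stories,
// that choice is itself an event in the stream.
interface ConflictResolved {
    type: "ConflictResolved";
    containerId: string;           // e.g. "1418"
    conflictingEventIds: string[]; // the San Francisco unload vs. the Beijing emptying
    resolution: string;            // "sent a person to verify the container number"
    decidedAt: Date;
}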

We can see the provenance of each report and decision

If some report comes out wrong, and that feeds back to the development team as a bug, then event sourcing gives us excellent tools for tracking it down.

Each “I made this decision” or “I produced this report” event can record the set of events that were input, and the version of code that ran to produce the output. You can have complete provenance.
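
A sketch of that wiring, with illustrative names. CODE_VERSION stands in for however you identify the deployed logic, perhaps a git commit SHA baked in at build time.

// Sketch: wrap report generation so every report carries its provenance.
const CODE_VERSION = "4f3a2b1"; // hypothetical build-time stamp

interface EventRecord { id: string; payload: unknown }

interface ReportWithProvenance {
    body: string;
    inputEventIds: string[]; // every event that went into this report
    codeVersion: string;     // the logic that produced it
}

function produceReport(events: EventRecord[], logic: (events: EventRecord[]) => string): ReportWithProvenance {
    return {
        body: logic(events),
        inputEventIds: events.map(e => e.id),
        codeVersion: CODE_VERSION,
    };
}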

This kind of software is accountable. It can tell the story of its decisions, what it did and why. What its world was like at that time.

This is a beautiful property. With full provenance, we can understand what happened. We can tell the story to each other. With replayability, we can change the code and see whether we can improve it for next time.

Recording everything gets ridiculous

Yet, data about provenance gets big very quickly. Each report consumed thousands of events. Each decision that was based on a current-state sum of events now has a dependency on all of those past events, plus the code that defines the current state, plus all the other states it took input from, plus their code and set of events.

Meanwhile some of those events are old, and no longer fit the format expected by the latest code. Meanwhile, we’re still ignoring everything that happened outside the system, so we’re completely blind to a lot of causality. “A person clicked this button.” Why? What information did they see on the screen as input to their decision to click “Container 1418 is in San Francisco”?

In real life, most information is lost. History will never be fully written; the writing is itself history. We’re always generating new actions. The system could theoretically report on all the reports it has reported. It never ends.

Completeness is limited to very small systems. Be careful where you invest this effort. Consciously select the boundaries, outside of which you don’t know what happened. You don’t know what really happened in the shipyard, or in a person’s head, or in the software that another company runs. The slice of the world we see is tiny.

Provenance is precious but difficult. Then again, it is at least as hard to do well in designs other than event sourcing. The painful realities that make event sourcing hard are painful in other models, too.

There are reasons we don’t model the whole world.

Event sourcing makes a best effort to model the world in its fullness. We try to remember everything significant that happens, sum that up into our own current-state world in the software, make decisions and act.

But events come in out of order. Events are lost. Events contradict each other. Events have partial data, or old data formats. Logic changes. We can’t remember everything.

Sometimes it pays to think about what you would do in an event-sourced system, and then implement just enough of that. Keep copies of produced reports, so that people can retrieve them without re-generating them. Record difficult decisions in a place that lives longer than logs.

Event sourcing is powerful. But it is not easy. Expect to think really hard about edge cases you didn’t want to handle. Expect to deal with storage and speed and up-to-dateness tradeoffs. Allow a human to enter corrections, because the real world will always surprise you.

In the real world, we don’t have all the information, and that’s OK. We can’t model everything in our heads, because our heads are inside everything. This keeps it interesting.

Rebase on the World: personal shell choice

“Why use bash when you have PowerShell?” <– words I did not expect to hear from my own mouth.

Over the past few weeks I’ve begun learning PowerShell, and it’s an improvement over the UNIX-family shells: bash, ksh, and the rest.

PowerShell is newer. It builds on what those shells did right, and then gets it more right.

The UNIX-y shell commands have this magical power of composition: you can pipe the output of one to the input of another, and chain these to build tiny programs right on your command line. One sends text to STDOUT, the next reads it on STDIN.

PowerShell does this piping, except with objects instead of lines of text. The data is structured and interrogable (I can ask it what fields it has). In bash, you output your data as text, and the next program parses that text.

Here’s an example from Chapter 1 of PowerShell in Action. In bash, the sort command parses the piped text; you have to tell it which column to sort on, and that the sort is numeric.

ls -l | sort -k 5 -r -n

In PowerShell, the sort command works on named properties of the piped object. And it knows the type of the property, so nobody has to tell it what’s a number.

ls | sort -Property length -Descending

More: PowerShell standardizes parsing of command-line arguments. This gives me consistency when I use the command, and saves painful work when I write a command.

More: PowerShell gives me multiple output paths, one for data (to the next program) and another to the user who typed the command. In bash, commands like git abuse STDERR to send info back to the user.

When PowerShell was only for Windows, by far the most powerful command-line shell for me was bash (or fish or zsh, pick your favorite). Bash worked in the worlds I lived in, and that matters most. Now that PowerShell runs on Mac and Linux, it is the most powerful command-line shell for me … except for one thing.

The biggest thing bash has on PowerShell is: I already know bash. I can type stuff there and it just works. With PowerShell I have to look things up all the time.

But every time I look something up, I increase my abilities. PowerShell raises the floor I build on, compared to bash.

It is always a tradeoff between sharpening the tools we have and trading them in for a better model. In this case, I’ll take some slowness in the beginning for a faster top speed.

By switching, I gain advantages that the software world has created since I started programming twenty years ago. I rebase myself on the latest version of the world.

Development aesthetic: experiments

Software development is a neverending string of “What happens if I…?” Each new runtime, language, or tool is a new world with its own laws of nature. We get to explore each one with experiments.

Objective

Today I added another alias to my PowerShell $profile:

echo "Good morning!"
# ...

Function ListFilesWithMostRecentAtBottom {
    Get-ChildItem | Sort-Object -Property LastWriteTime
}
Set-Alias ll ListFilesWithMostRecentAtBottom

To use that alias in my current shell, I need to source the profile again. I googled how to do this. The first useful page said:

& $profile

So I typed that. It echoed “Good morning!” but the alias did not work.

Hmm, did it not save?

I can test that. I changed the echo to “Good morning yo!” and tried again.

It printed the new text, but still didn’t get the alias.

Hmm, is something wrong with the alias?

I opened a new shell window to test it.

The new alias works in the new window. Therefore, it’s the & $profile command that is not doing what I want.

Investigation

I could ignore the problem and continue work in the new window. My alias is working there. But dang it, I want to understand this. I want to know how to reload my $profile in the future.

Time for more googling. The next post had a new suggestion:

. $profile

I typed that, and it worked. yay!

But wait, was that the old window or the new window? What if it only worked because I was in the new window?

I want to be certain that running . $profile brings in any new aliases I just added. For a proper experiment, I need to see the difference.

Experiment

I add a new alias to my $profile, and also change the echo so that I’ll be sure it’s running the new version.

echo "Good morning yo tyler!"
# ...

Function ListFilesWithMostRecentAtBottom {
    Get-ChildItem | Sort-Object -Property LastWriteTime
}
Set-Alias ll ListFilesWithMostRecentAtBottom
Set-Alias tyler ListFilesWithMostRecentAtBottom

In my terminal, I run tyler as a test case (it should fail, because this shell hasn’t loaded the new profile yet), then the command I’m investigating (. $profile), then the test case tyler again.

Now I can see the before and after, and they’re different. I can tell that . $profile has the desired effect. Now I have learned something about PowerShell.

Epilogue

I remove the extra tyler stuff from $profile.

As far as I can tell, & runs the script in its own child scope, and . (dot-sourcing) runs the contents of the script in the current scope. The . command works like this in bash too, so it’s easy for me to remember.

Today I took a few extra minutes and several extra steps to make an experiment and figure out what PowerShell was doing. Now I know how to reload my $profile. Now you know how to run a tiny experiment to ascertain that what just happened, happened for the reason you think it did.

This verbosity makes me happy

Today I learned how to create aliases in PowerShell. I’m switching from Mac to Windows, and I want the terminal in VS Code to do what I want.

No terminal will work for me until it interprets gs as git status. I type that compulsively.

In bash, setting that up looks like this:

alias gs='git status'

But in PowerShell, an alias can only point to a single command name. No arguments baked in. Wat.

You can make a function with the whole command in it, and then set an alias to that function.

Function GitStatus { git status }
Set-Alias gs GitStatus

The first time I did this it felt kinda silly. But then the second time …

Function CommitDangit { 
    git add .
    git commit -m "temp" 
}
Set-Alias c CommitDangit

This alias c makes a crappy commit as quickly as possible. I use it when live coding, to make insta-savepoints when stuff works. (I’m a bit compulsive about committing, too. Just commit, dangit!)

The PowerShell syntax requires a long name for my command before I give it a short one. This is more expressive than the bash equivalent:

alias c='git add . && git commit -m "temp"'

My CommitDangit function is named for readability, plus a tiny alias for fast typing.

This is a win. I like it more than the bash syntax. PowerShell is a more modern scripting language, and it shows.

Bonus: in bash, I put those aliases in a file like .bashrc or .bash_profile or sometimes another one; it depends. In PowerShell, I put the aliases in the file referenced by $profile. Edit it with code $profile: no figuring out which file it is.

Next: reload the $profile in an existing window with . $profile

Rebase on the World

We build our software in a particular world, a world of technologies that we link together. We choose a programming system (language, runtime, framework), libraries, and environment. We integrate components: databases, logging, and many different services.

Perhaps we built it on Java 8 running on VMs in our datacenter, connecting to a proprietary queuing service we bought years ago. We start with what is available and stable at the time.

But do we stay there?

The outside world moves capabilities toward commodity.

At some point, new businesses start building cloud applications instead of racking their servers.

An opportunity appears, and our enterprise can get out of the infrastructure business. When we shift our application onto AWS, there are whole areas of expertise we don’t need in-house. There are layers of infrastructure that Amazon maintains and upgrades, and we rarely even notice.

At some point, we integrate with new systems. They don’t speak our proprietary queuing protocol, so we move to Kafka, something that people and programs everywhere can understand. And at some point, new businesses don’t run Kafka; they rent it as a service.

When we move to SaaS, there’s a layer of expertise we don’t need to retain, pages we don’t have to answer, and upgrades we don’t have to manage. Or even better, maybe our needs have changed, or SQS has improved until it’s good enough. We get free integration with other AWS services and billing.

Is our software simpler? I don’t know, but it’s thinner. The layer we maintain is closer to the business logic, with integration code to link in SaaS solutions that other companies support.

All code is technical debt.

Every line of code written is in a context. Those contexts change, and expectations rise. New tools appear, and integrating them gives us unique abilities. Security vulnerabilities get noticed.

For the software we operate, we are responsible for upgrades. It is our job to keep libraries up to date, shift to modern infrastructure every few years, and add the features that everyone now expects.

What you get for operating custom software — you control the pace of change.  
What you pay  — you are responsible for the pace of change.

Maybe it’s authorization, or network configuration, or caching, or eventing. You wrote it back when your needs were exceptional, and now it’s your baby, and you’re changing its diapers. It takes effort to shift to anything else.

Incorporate the modern world into our software’s world.

When capabilities become commodities, it becomes cheaper to rent them than to babysit them. It’s probably monetarily less expensive, and it’s certainly less costly in knowledge. People and teams are limited by how much experience we can hold. We can only have current expertise in so many things.

On a development team, we can increase our impact by overseeing more and more business capabilities, but we can only operate so much software. If we thin that software by shifting our underpinnings to SaaS offerings, then we can keep up more of the software that matters to our particular business.

All code is technical debt. Let it be someone else’s technical debt. Move it off your balance sheet, to a company that specializes in this capability.

Rebase on the world

In git, sometimes I add some features in a branch, while other people improve the production branch. When I rebase, I put my changes on top of theirs, and remove any duplicate changes.

I want to do this with software infrastructure and capabilities. The outside world is the production branch. When I rebase my custom software on top of it, it takes work to reconcile similar capabilities. But it’s worth it.

When we rebase our software on the world, we get everything the world has improved since we started, we get integrations into other systems and tools, and we get learnings from experts in those capabilities. SaaS, in particular, has a bonus — we keep getting these things, for no extra work!

If we don’t rebase on the world, a startup will.

How can a scrappy little company defeat a powerful incumbent?

Every piece of software and infrastructure that the big company called a capital investment, that they value because they put money into it, that they keep using because it still technically works — all of this weight slows them down.

A startup builds on the latest that the whole world offers. They write minimum code on top of that to serve their customers. The less code they have, the faster they can change it.

In the 1990s, we built a big stack of custom work on top of a solid base. In the 2010s, we build less custom software to get the same business capabilities (with more reliability) because we’re building on various AWS services and many other tools and services.

This is not the only advantage a startup has, but it is a big one.

Software is never “done.”

Software is not bought, it is rented. (Regardless of how the accounting works.) It gives us capabilities as long as it keeps running, keeps meeting expectations, keeps fitting in with other elements of the world that need to integrate with it.

Keep evolving the software, infrastructure, and architecture. It is never going to be perfect, but we can keep it moving.

When I’m coding a feature, I rebase on the production branch every few hours. For software systems, try to rebase on the world every few months, bit by bit.

In an enterprise with a lot of code, this is an extra challenge. Change at that scale is always an evolution.

If you find yourself thinking, “we have so much code. How could we ever bring it all up to date?” then please check out Atomist’s Drift Management. Get visibility into what you have, and even automatic rebasing (of code, at least). There’s a service for this too.

Acknowledgment
A large amount of this information came out of a conversation with Zack Kanter, CEO of Stedi.

Progress in Learning

Back in the day, artists mixed their own paints. They bought minerals, ground them up, and mixed them with binder. Then in the 1800s, metal paint tubes became a thing, and people bought smooth, bright paint in tubes.

Do they still teach mixing your own pigments in art school? Maybe, in one class. I took one class in Assembler when studying Computer Science. That gave me some useful perspective and some vocabulary, but the baby computer designs we looked at were nothing like the computers we use.

I’m betting that art schools don’t return to pigment-grinding for months out of every year of school. There are new styles to learn, higher-level methods of expression that build on top of paint tubes.

Why do my kids spend months out of every year learning to do arithmetic by hand? They could be learning set theory, new ways of thinking that come from higher-level math.

Why are they learning cursive instead of coding? We can express ourselves at higher levels through computers.

Human culture advances through products. Paint in a tube is a product that lets artists focus on painting. It lets them work in new locations and at new speeds. Monet and the other Impressionists painted as they did because they could finally paint outside, thanks to tubes of paint!

I don’t want new computer programmers to learn the basics the way I had to learn them. They don’t need to program in C, except maybe one class for perspective, to learn the vocabulary of memory overflow. Learn queueing theory, that’s a useful way of thinking. Don’t implement a bunch of queues, except as extra credit.

Some artists still mix their own pigments.

Viewing a painting produced entirely with hand-ground mineral pigments is a completely different experience than looking at one made with modern chemical paints. The minerals scintillate and their vibrations seem to extend from the canvas.

Laura Santi

We need a few specialists to implement data structure libraries and programming languages. There are contests for mental arithmetic, if you enjoy that game. Calligraphy is a great hobby. When I sit down to learn, I want to learn new ways to think.

When younger people skip the underpinnings and learn higher-level concepts, that’s called progress.

Why is CSS a thing?

All I want is a web page. I want this one thing on the left and this other thing on the right — why is this so hard?? Can I just make a table in HTML like I used to do in the nineties? Why do I have to worry about stylesheets? and, why are they so hard?

As a backend developer, I’m used to giving the computer instructions. Like “put this on the left and this on the right.” But that is not how web development works. For good reason!

As the author of a web page, I do not have enough information to decide how that page should be laid out. I don’t know who is using it, on what device, in what program, on what screen, in what window, with what font sizes.

You know who does know that stuff? The user agent. That’s a technical term for an application that presents documents to people. The browser is a user agent. The user agent could also create printed documents, or it could speak the document to a person whose eyes are unavailable.

The user agent runs on a particular device. Computer, phone, TV, whatever. It knows the limitations of the hardware. It can be configured by the user. The user agent can conform to various CSS specifications.

CSS is not a programming language. It is a syntax for rules, rules which give the browser (that user agent) clues about how to display the document. The browser combines that information with what it knows about the world to come up with a format to display (or speak) the document.

It turns out that rule-based programming is hard. It sounds like it should be easier than imperative code, but it is not.

So no, you don’t get to decide that this thing goes on the left and that thing goes on the right. The browser gets that choice.

But here’s something I learned yesterday: put each thing in a div, and give those divs display: inline-block. Then the browser has the option of putting them next to each other, if that fits with the constraints that only it knows.

Implementing all the interfaces

Humans are magic because we are components of many systems at once. We don’t just build into systems one level higher, we participate in systems many levels higher and everywhere in between.

In code, a method is part of a class which is part of a library which is part of a service which is part of a distributed system — there is a hierarchy, and each piece fits where it does.

An atom is part of one molecule, which combines into one protein which functions in one cell in one tissue in one organ, if it’s lucky enough to be part of something exciting like a person.

But as a person, I am an individual and a mother and a team member and an employee and a citizen (of town, state, country) and a human animal. I am myself, and I participate in systems from relationship to family to community to culture. We function at all these levels, and often they load us with conflicting goals.

Gregory Bateson (PDF) describes native Bali culture: each full citizen participates in the village council. Outside of village council meetings, they speak for themselves. In the council, they speak in the interests of I Desa (literally, Mr. Village).

Stewart Brand lists these levels of pace and size in a civilization:

  • Fashion/art (changes fastest, most experimental)
  • Commerce
  • Infrastructure
  • Governance
  • Culture
  • Nature (changes slowest, moderates everything else)

Each of these works at a different timescale. Each of us participates in each of them.

We each look out for our own interests (what is the fashionable coding platform of the day) and our family and company’s economic interest (what can we deliver and charge for this quarter) and infrastructure (what will let us keep operating and delivering long-term) and so on.

Often these are in conflict. The interests of commerce can conflict with the interests of nature. My personal finances conflict with the city building infrastructure. My nation might be in opposition to the needs of the human race. Yet, my nation can’t continue to exist without the stability of our natural world. My job won’t exist without an economic system, which depends on stable governance.

If we were Java classes, we’d implement twenty different interfaces, none of them perfectly, all of them evolving at different rates, and we’re single-threaded with very long GC pauses.

Tough stuff, being human.

These are not the only options. Wineglass edition

Today I found myself in the kitchen, near the fridge with the wine (it’s an excellent Chardonnay from Sonoma, thanks XYZ Beverages in Tybee Island, you exceed my expectations although you don’t have a website to link to). My empty glass was out on the screened porch.

Do I go outside for the glass? Or take the wine bottle to the glass, and then return it to the fridge?

These are not the only options. I snag another wineglass from the cupboard, fill it with wine, and take that out to the porch.

Now I have two dirty wineglasses, but who cares? The dishwasher washes them all at the same rate.

This is garbage collection in action. The dishwasher acts as a garbage collector for dirty dishes. It adds the capability of “do not worry about how many dishes you dirty. They will all be cleaned for the same fixed cost that you have already incurred.”

This removes one consideration that I need to think about in my actions. I’m free to optimize for my higher-level objectives (“be on the porch, with wine in a glass”) while ignoring the accumulation of resources (dirty wineglasses).

It takes some adjustment in a brain to move from scarcity (“dishes are a resource with a cost”) to abundance (“dirty dishes? meh, not a problem anymore”). Once adjusted, my options widen, and a clearly better path opens up.

Now pardon me while I finish this delicious glass of wine and fetch another, from the nice cold bottle still in the fridge.

For cleaner code, write ugly code

We want to write only clean code, right?

Wrong. I want to write eventually-clean code. It starts out exploring a space, and then I refine it to be cleaner and more suited to its purpose. Usually, that purpose becomes clearer through writing, reading, and using the code.

That process of refining or tidying up can feel tedious, compared to implementing more features. It can be tempting to leave off error handling. I have a strategy for that: meaningful ugliness.

When I’m prototyping, I make the code so ugly that it will be satisfying to clean up. No attempt at appearing clean. I put a bunch of casts, bad casing maybe, random names instead of plausible-but-inaccurate ones. No null-checking.

for (const gerald of allKindsOfStuff.fingerprints) {
    (gerald as any).displayName =
         allKindsOfStuff.feature.convertinate(gerald.name);            }

(exaggerated to show detail)

When cleaning, I often start by making the code uglier. To move from an OO style toward functional, start by replacing all global or class-level variables with parameters. Twelve parameters on a function?! That’s hideous! Yes. Yes. Let ogres look like ogres.

This lets me feel productive when I come back to the code and clean it up. Later, I know more about what this code is for, what might be null and what won’t, what dependencies I can eliminate and which are meaningful. This is a better time to clean.

Disguising rough code in socially-acceptable clothing prevents cleaning. Appearance-of-good is the enemy of better.