Monday, January 16, 2017


Dependency management.
Nobody wants to think about it. We just want this stuff to work.
It is one of the nasty sneaky unsolved problems in software.

Each language system says, we've got a package manager and a build tool. This is how dependencies work. Some are better than others (I <3 Elm; npm OMG) but none of them are complete. We avert our eyes.

Dependencies are important. They're the edges in the software graph, and edges are always where the meaning lies. Edges are also harder to focus on than nodes.

They can be relatively explicit, declared in a pom.xml or package.json.
They can be hard to discover, like HTTP calls to URLs constructed from configuration + code + input.

Dependencies describe how we hook things together. Which means, they also determine our options for breaking things apart. And breaking things apart is how we scale a software system -- scale in our heads, that is; scale in terms of complexity, not volume.

If we look carefully at them, stop wishing they would stop being such a bother, maybe we can get this closer to right. There's a lot more to this topic than is covered in this post, but it's a start.

Libraries vs. Services

The biggest distinction. Definitions:

Libraries are compiled in. They're separate modules, in different repositories (or directories in a giant repository of doom (aka monorepo)). They are probably maintained by different companies or at least teams. Code re-use is achieved by compiling the same code into multiple applications (aka services). [I'm talking about compile-scope libraries here. Provided-scope, and things like .dll's (what is that even called) are another thing that should probably be a separate category in this post but isn't included.]

Services: one application calls another over the network (or sockets on the same machine); the code runs in different processes. There's some rigmarole in how to find each other: service discovery is totally a problem of its own, with DNS as the most common solution.



Libraries are declared explicitly, although not always specifically. Something physically brings their code into my code, whether as a jar or as source copied in.

Service dependencies are declared informally if at all. They may be discovered in logging. They may be discernible from security groups, if you're picky about which applications are allowed to access which other ones.


Here's a crucial difference, IMO. Libraries: you can release a new version and ask people to upgrade. If your library is internal, you may even upgrade the version in other teams' code. But it's the users of your library who decide when that new version goes into production. Your new code goes live only when your users choose to deploy it.

Services: You choose when it's upgraded. You deploy that new code, you turn off the old code, and that's it. Everyone who uses your service is using the new code. Bam. You have the power.
This also means you can choose how many of them are running at a time. This is the independent-scalability thing people get excited about.

If your library/service has data backing it, controlling code deployment means a lot for the format of the data. If your database is accessed only by your service, then you can build any necessary translations into the code. If your database is accessed by a library that other people incorporate, you'd better keep that schema compatible.


There's a lovely Rich Hickey talk, my notes here, about versioning libraries. Much of it also applies to services.

If you change the interface to a library, what you have is a different library. If you name it the same and call it a new version, then what you have is a different library that refuses to compile with the other one and will fight over what gets in. Then you get into the whole question of version conflicts, which different language systems resolve in different ways. Version conflicts occur when the application declares dependencies on two libraries, each of which declares a dependency on the same-name fourth library. In JavaScript, whatever, we can compile in both of them, it's just code copied in anyway. In Java, thou mayest have only one definition of each class name in a given ClassLoader, so the tools choose the newest version and hope everyone can cope.
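That diamond-shaped conflict can be sketched in a few lines. This is just an illustration (Python as pseudo-package-manager; all the library names and versions are invented): walk the dependency graph and report any library requested at more than one version.

```python
# A minimal sketch of the "diamond" version conflict: the application
# pulls in two libraries, each of which depends on a different version
# of the same fourth library. All names here are invented.

deps = {
    "app":   [("lib-a", "1.0"), ("lib-b", "2.1")],
    "lib-a": [("lib-common", "1.2")],
    "lib-b": [("lib-common", "2.0")],
}

def resolve(root):
    """Walk the dependency graph; report any library requested at more than one version."""
    requested = {}              # library name -> set of requested versions
    frontier, seen = [root], set()
    while frontier:
        node = frontier.pop()
        if node in seen:
            continue
        seen.add(node)
        for name, version in deps.get(node, []):
            requested.setdefault(name, set()).add(version)
            frontier.append(name)
    return {name: vs for name, vs in requested.items() if len(vs) > 1}

conflicts = resolve("app")      # lib-common is requested at both 1.2 and 2.0
```

From here the language systems diverge: JavaScript happily bundles both versions, while the JVM has to pick one per ClassLoader.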

Services, you can get complicated and do some sort of routing by version; you can run multiple versions of a service in production at the same time. See? You call it two versions of the same service, but it's actually two different services. Same as the libraries. Or, you can support multiple versions of the API within the same code. Backwards compatibility, it's all the pain for you and all the actual-working-software for your users.

API Changes and Backwards Compatibility

So you want to change the way users interact with your code. There's an important distinction here: changing your code (refactoring, bug fix, complete rewrite) is very different from requiring users to change their code in order to keep working with yours. That's a serious impact.

Services: who uses it? Maybe it's an internal service and you have some hope of grepping all company code for your URL. You have the option of personally coordinating with those teams to change the usage of your service.
Or it's a public-facing service. DON'T CHANGE IT. You can never know who is using it. I mean maybe you don't care about your users, and you're OK with breaking their code. Sad day. Otherwise, you need permanent backwards-compatibility forever, and yes, your code will be ugly.

Libraries: if your package manager is respectable (meaning: immutable; if it ever provides a certain library-version, it will continue to provide the same download forever), then your old versions are still around; they can stay in production. You can't take that code away. However, you can break any users who aren't ultra-specific about their version numbers. That's where semantic versioning comes in; it's rude to change the API in anything short of a major version, and people are supposed to be careful about picking up a new major version of your library.
But if you're nice you could name it something different, instead of pretending it's a different number of the same thing.
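The semver contract above boils down to a tiny predicate. A sketch (the version strings are invented): only a major-version bump is allowed to carry an API change, so that's the only upgrade a consumer must treat as potentially breaking.

```python
# Sketch of the semantic-versioning contract: a major-version bump
# signals a possible API change; minor and patch bumps promise
# backwards compatibility.

def breaking_upgrade(old, new):
    """True if semver says the new version may have a changed API."""
    return int(new.split(".")[0]) > int(old.split(".")[0])

breaking_upgrade("1.4.2", "1.5.0")   # False: minor bump, API promised compatible
breaking_upgrade("1.4.2", "2.0.0")   # True: major bump, expect breakage
```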


A tricky thing about libraries: it's way harder to know "what is an API change?"
With services it's clear; we recognize certain requests, and provide certain responses.
With libraries, there's all the public methods and public classes and packages and ... well, as a Java/Scala coder, I've never been especially careful about what I expose publicly. But library authors need to be if they're ever going to safely change anything inside the library.

Services are isolated: you can't depend on my internals because you physically can't access them. In order to expose anything to external use I have to make an explicit decision. This is much stronger modularity. It also means you can write them in different languages. That's a bonus.

There are a few companies that sell libraries. Those are some serious professionals, there. They have to test versions from way-back, on every OS they could run on. They have to be super aware of what is exposed, and test the new versions against a lot of scenarios. Services are a lot more practical to throw out there - even though backwards compatibility is a huge pain, at least you know where it is.


Libraries: it fails, your code fails. It runs out of memory, goodbye process. Failures are communicated synchronously, and if it fails, the app knows it.

Services: it fails, or it doesn't respond, you don't really know that it fails ... ouch. Partial failures, indeterminate failures, are way harder. Even on the same machine coordinating over a socket, we can't guarantee the response time or whether responses are delivered at all. This is all ouch, a major cost of using this modularization mechanism.
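That indeterminacy shows up directly in code. A sketch of a raw socket call (host, port, and wire format are all invented for illustration): a refused connection is a determinate failure, but a timeout leaves the caller unable to tell "the server never acted" from "the response got lost."

```python
# Sketch of the partial-failure problem with services: three outcomes,
# and only two of them tell you what actually happened on the far side.
import socket

def call_service(host, port, request, timeout_seconds=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout_seconds) as s:
            s.sendall(request)
            return s.recv(4096)          # determinate: we got an answer
    except socket.timeout:
        return None                      # indeterminate: did the server act or not?
    except ConnectionRefusedError:
        raise                            # determinate: the service is down
```

A library call has only the first kind of outcome: it returns or it raises, synchronously, in-process.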


I think the biggest consideration in choosing whether to use libraries or services for distribution of effort / modularization is that choice of who decides when it deploys. Who controls which code is in production at a given time.

Libraries are more efficient. That's a thing: in-process communication is faster, failures are much easier to handle, and consistency is possible.

Services are actual decoupling. They let a team be responsible for their own software, writing it and operating it. They let a team choose what is in production at a given time -- which means there's hope of ever changing data sources or schemas. Generally, I think the inertia present in data, data which has a lot of value, is underemphasized in discussion of software architecture. If you have a solid service interface guarding access to your data, you can (with a lot of painful work) move it into a different database or format. Without that, data migrations may be impossible.

Decoupling of time-of-deployment is essential for maintaining forward momentum as an organization grows from one team to many. Decoupling of features and of language systems, versions, tools helps too. To require everyone use the same tools (or heaven forbid, repository) is to couple every team to another in ways that are avoidable. I want my teams and applications coupled (integrated) in ways that streamline the customer's experience. I don't need them coupled in ways that streamline the development manager's experience.

Overall: libraries are faster until coordination is the bottleneck. Services add more openings to your bottle. That can make your bottle harder to understand.

There's a lot more to the problems of dependency management. This is one crucial distinction. All choices are valid, when made consciously in context. Try to focus through your tears.

Friday, January 13, 2017

Today's Rug: maven executable jar

I like being a polyglot developer, even though it's painful sometimes. I use lots of languages, and in every one I have to look stuff up. That costs me time and concentration.

Yesterday I wanted to promote my locally-useful project from "I can run it in the IDE" to "I can run it at the command line." It's a Scala project built in maven, so I need an executable jar. I've looked this up and figured this out at least twice before. There's a maven plugin you have to add, and then I have to remember how to run an executable jar, and put that in a script. All this feels like busywork.

What's more satisfying than cut-and-pasting into my pom.xml and writing another script? Automating these! So I wrote a Rug editor. Rug editors are code that changes code. There's a Pom type in Rug already, with a method for adding a build plugin, so I cut and paste the example from the internet into my Rug. Then I fill in the main class; that's the only thing that changes from project to project so it's a parameter to my editor. Then I make a script that calls the jar. (The script isn't executable. I submitted an issue in Rug to add that function.) The editor prints out little instructions for me, too.

$ rug edit -lC ~/code/scala/org-dep-graph MakeExecutableJar main_class=com.jessitron.jessMakesAPicture.MakeAPicture

Resolving dependencies for jessitron:scattered-rugs:0.1.0 ← local completed
Loading jessitron:scattered-rugs:0.1.0 ← local into runtime completed
run `mvn package` to create an executable jar
Find a run script in your project's bin directory. You'll have to make it executable yourself, sorry.
Running editor MakeExecutableJar of jessitron:scattered-rugs:0.1.0 ← local completed

→ Project
  ~/code/scala/org-dep-graph/ (8 mb in 252 files)

→ Changes
  ├── pom.xml updated 2 kb
  ├── pom.xml updated 2 kb
  ├── bin/run created 570 bytes
  └── .atomist.yml created 702 bytes

Successfully edited project org-dep-graph

It took a few iterations to get it working, probably half an hour more than doing the task manually.
It feels better to do something permanently than to do it again.

Encoded in this editor is knowledge:
* what is that maven plugin that makes an executable jar? [1]
* how do I add it to the pom? [2]
* what's the maven command to build it? [3]
* how do I get it to name the jar something consistent? [4]
* how do I run an executable jar? [5]
* how do I find the jar in a relative directory from the script? [6]
* how do I get that right even when I call the script from a symlink? [7]

It's like saving my work, except it's saving the work instead of the results of the work. This is going to make my brain scale to more languages and build tools.

below the fold: the annotated editor. source here, instructions here in case you want to use it -> or better, change it -> or even better, make your own.

@description "teaches a maven project how to make an executable jar"
@tag "maven"
editor MakeExecutableJar

@displayName "Main Class"
@description "Fully qualified Java classname"
@minLength 1
@maxLength 100
param main_class: ^.*$

let pluginContents = """<plugin>
    <configuration>
        <finalName>executable</finalName> [4]
        <transformers>
            <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                <mainClass>__I_AM_THE_MAIN__</mainClass>
            </transformer>
        </transformers>
    </configuration>
</plugin>
""" [2]

let runScript = """#!/bin/bash

SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
  DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
  SOURCE="$(readlink "$SOURCE")"
  [[ $SOURCE != /* ]] && SOURCE="$DIR/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
done [7]
DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )" [6]

java -jar $DIR/../target/executable.jar "$@" [5]
"""

with Pom p
  do addOrReplaceBuildPlugin "org.apache.maven.plugins" "maven-shade-plugin" pluginContents [1]

with File f when path = "pom.xml" begin
  do replace "__I_AM_THE_MAIN__" main_class
  do eval { print("run `mvn package` to create an executable jar") } [3]
end

with Project p begin
  do eval { print("Find a run script in your project's bin directory. You'll have to make it executable yourself, sorry") }
  do addFile "bin/run" runScript
end

Monday, January 9, 2017

It's Atomist Time!

I'm hella excited to get to work on Atomist full-time starting now (January 2017). Why? What do they do? Oh let me tell you!

I love developing software, not least because we (as an industry) have not yet figured out how to software. We know it's powerful, but not yet how powerful. Software is like engineering except that the constraints aren't in what we can build, but what we can design and specify. Atomist is expanding this capacity.
Atomist builds tooling to smooth some bumps in software development. There are three components that I'm excited about, three components that open new options in how we develop software.

Component 1: Code that changes code

First, there are code editors, called Rugs. On the surface, these automate the typing part. Like code generators, except they continue to work with the code after you modify it. Like refactorings in an IDE, except they appear as a pull request, and then you can continue development on that branch. If you have some consistent code structure (and if you use a framework, you do), Rugs can perform common feature-adding or upgrading or refactoring operations. Use standard Rugs to, say, add graph database support to an existing Spring Boot project. Customize Rugs to set up your Travis build uniformly in your projects. Create your own Rugs to implement metrics integration according to your company's standards -- and to upgrade existing code when those standards change.

On the surface this is an incremental improvement over existing code generation and IDE refactoring tools. Yet, I see it as something more. I see it as a whole new answer to the question of "indirection or repetition?" in code. Take for instance: adding a field to a Rails app makes us change the controller, the model, and four other places. Or creating a new service means changing deployment configuration, provisioning, and service discovery. Whenever a single conceptual change requires code changes in multiple spots, we complain about the work and we make mistakes. Then we start to get clever with it: we introduce some form of indirection that localizes that change to one place. Configuration files get generated in the build, Ruby metaprogramming introduces syntax that I can't even figure out how it's executable -- magic happens. The code gets less explicit, so that we can enforce consistency and make changing it ... well, I'm not gonna say "easier" because learning to cast the spell is tricky, but it is less typing.

Atomist introduces a third alternative: express that single intention ("create a new service" or "add this field") as a Rug editor. This makes writing it one step, and then the editor makes all those code changes in a single commit in a branch. From there, customize your field or your new service; each commit that you make shows how your feature is special. The code remains explicit, without additional magic. When I come back and read it, I have some hope of understanding what it's doing. When I realize that I forgot something ("oops! I also need to add that service to the list of log sources") then I fix it once, in the NewService.rug editor. Now I never forget, and I never have to remember.

I love this about developing with Rugs: as I code, I'm asking myself, "how could I automate this?" and then what I learn is encoded in the Rug, for the benefit of future-me and (if I publish it) of future-everyone-else. That is when I feel productive.

Component 2: Coordination between projects

Editors are cute when applied to one project. When applied across an organization, they start to look seriously useful. Imagine: A library released a security update, and we need to upgrade it across the organization. Atomist creates a pull request on every project that uses that library. The build runs, maybe we even auto-merge it when the build passes. Or perhaps there are breaking changes; the editor can sometimes be taught how to make those changes in our code.

And if a Rug can change the way we use a library, then it can change the way we use ours. This is cross-repository refactoring: I publish an internal library, and I want to rename this function in the next version. Here's my game: I publish not only the new version of my library, but an editor - and then I ask Atomist to create pull requests across the organization. Now it is a quick code review and "accept" for teams to upgrade to the new version.

Atomist coordinates with teams in GitHub and in Slack. Ask Atomist in Slack to start that new feature for you, or to check all repositories in the organization and create pull requests. Atomist can also coordinate with continuous integration. It ties these pieces together across repositories, and including humans. It can react to issues, to build results, to merges; and it can ping you in Slack if it needs more information to act appropriately. I have plans to use this functionality to link libraries to the services that use them: when the build passes on my branch, go build the app that uses my library with this new version, and tell me whether those tests pass.

This is cross-repository refactoring and cross-repository build coordination. This gives companies an alternative to the monorepo, to loading all their libraries and services into one giant repository in order to test them together. The monorepo is a lie: our deployments are heterogeneous, so while the monorepo is like "look at this lovely snapshot of a bunch of code that works together" the production environment is something different. The monorepo is also painful because git gets slow when the repository gets large; because it's hard to tell which commits affect which deployed units; and because application owners lose control over when library upgrades are integrated. Atomist will provide a layer on top of many repositories, letting us coordinate change while our repositories reflect production realities.

Atomist tooling will let multirepo development scale with our codebases.

Component 3: is still a secret

I'm not sure I can talk about the third piece of possibility-expanding tooling yet. So have this instead:

Automated coordination among systems and people who interact with code -- this is useful everywhere, but it's a lot of work to create our own bots for this. Some companies put the resources into creating enough automation for their own needs. No one business-software-building organization has a reason to develop, refine, and publish a general solution for this kind of development-process automation. Atomist does.

When it becomes easy for any developer to script this coordination and the reactions just happen -- "Tell me when an issue I reported was closed" "Create a new issue for this commit and then mark it closed as soon as this branch is merged" -- then we can all find breakages earlier and we can all keep good records. This automates my work at a higher level than coding. This way whenever I feel annoyed by writing a status report, or when I forget to update the version in one place to match the version in another, my job is not to add an item to a checklist. My job is to create an Atomist handler script to make that happen with no attention from me.

My secret

I love shaving yaks. Shaving them deeply, tenderly, finding the hidden wisdom under their hair. I love adding a useful feature, and then asking "How could that be easier?" and then "How could making that easier be easier?" This is Atomist's level of meta: We are making software to make it easier for you to make your work easier, as you work to make software to make your customers' lives easier.

I think we're doing this in depths and ways other development tools don't approach. At this level of meta (software for building software for building software for doing work), there's a lot of leverage, a lot of potential. This level of meta is where orders-of-magnitude changes happen. Software changes the world. I want to be part of changing the software world again, so we can change the real world even faster.

With Atomist, I get to design and specify my own reality, the reality of my team's work. (Atomist does the operations bit.) Without spending tons of time on it! Well, I get to spend tons of time on it because I get to work for Atomist, because that's my thing. But you don't have to spend tons of time on it! You get to specify what you want to happen, in the simplest language we can devise.
We're looking for teams to work with us on alpha-testing, if you're interested now. (join our slack, or email me) Let's learn together the next level of productivity and focus in software development.

Monday, December 5, 2016

Using Rug with Elm

At elm-conf and CodeMesh and YOW! Australia this year, I did live demos using automated code modification with Atomist Rug.

Rug is now officially open source, and the Rug CLI is available so that you can try (and change! and improve!) these editors on your own Elm code. This blog post tells you how.

I usually start a new Elm project as a static page, make it look like something; then turn it into a beginner program, add some interactivity; then turn it into an advanced program and add subscriptions. I like how this flow lets me start super-simple, and then add the pieces for access to the world as I need it.

Now you can do this too!

Watch out: these editors (and the parser behind them) work for the code I've tried them on. As you try them, you'll find cases I didn't cover. Please file an issue when you do, or find me on Atomist-Community slack.

Install Rug

The local version of the Rug runtime is the Rug CLI. Complete installation instructions are here.

TL;DR for Mac:
brew tap atomist/tap
brew install rug-cli

Generate a project

This will create a directory containing a new static Elm app, with a build script etc. It puts a project named banana under your current directory, makes it a git repo, and makes an initial commit:
rug generate -R jessitron:elm-rugs:StaticPage banana
Inside banana, edit src/Main.elm. Put something in that empty div.
Run ./build
Open target/index.html to see the results.

Upgrade it to a beginner program

After your banana looks OK, make it interactive. Run this inside your project directory:
rug edit jessitron:elm-rugs:UpgradeToBeginnerProgram
Now your src/Main.elm contains the beginnings of a beginner program. The model is empty and the only message is Noop, which does nothing. This is the beginner program template from the Elm tutorial, except that the view function is populated based on your main from the static page.

You could add a button:
rug edit jessitron:elm-rugs:AddButton button_text="Push Me" button_message=ButtonPushed
Now your src/Main.elm contains a new message type, ButtonPushed. Your update function handles it, but does nothing interesting.
type Msg
    = Noop
    | ButtonPushed

update : Msg -> Model -> Model
update msg model =
    case msg of
        Noop ->
            model

        ButtonPushed ->
            model
Find a new function hanging out at the end of the file, buttonPushedButton. Incorporate that into your view to display the button. Run ./build and refresh target/index.html; push the button and see the message in the debugger.

Try adding a text input in a similar way, with
rug edit jessitron:elm-rugs:AddTextInput input_name=favoriteColor
This adds a function, a message, and a field to the model so that you'll have access to the content of the text input.

Try passing -R to rug, and it'll make a commit for you after the editor completes. You have to make a commit yourself right before running rug, or it'll complain.

For further edit operations, see my elm-rugs repo. You can upgrade to a full program, and add subscriptions to clicks and window size.

Change these editors! Add more!

The best part of running locally is running local versions.
Clone my repository: git clone https://github.com/jessitron/elm-rugs
Now, go to the secret directory holding the editors: cd elm-rugs/.atomist/editors
Here, you can see the scripts that work on the code, like AddButton.rug.

To run the local versions, be in that elm-rugs directory, and point rug at your project directory with -C:
rug -l -C /path/to/my/projects/banana edit AddButton button_message=Yay button_text="Say hooray"
I don't have to qualify the editor name with jessitron:elm-rugs when it's local.

There's more information in the Atomist docs on how rug works. TL;DR is, the files in the top level of elm-rugs/ are the starting point for a newly generated project. NewStaticPage.rug, as a generator, starts from those and then changes the project name. The editors all start from whatever project they're invoked on, and they can change files in place, or create new ones from templates in the elm-rugs/.atomist/templates directory. (Most of my templates are straight files, with a .vm suffix to make Rug's merge function work.)

Questions very welcome on either the elm-lang slack in the #atomist channel, or the Atomist community slack on the #rug-elm channel.

Pull requests are even more welcome. Issues, too. These rugs work for the narrow cases I've tested them on. It'll be a community effort to refine and expand them!

Thursday, November 10, 2016

Areas of responsibility

"And the Delivery team is in charge of puppet...." said our new manager.

"Wait we're in charge of WHAT?" - me

"Well I thought that it fits in with your other responsibilities."

"That's true. But we're not working on it, we're working on these other things. You can put whatever you want in our yellow circle, but that's it."

"The yellow circle?"

See, I model our team's areas of responsibility as three circles. The yellow circle is everything we're responsible for -- all the legacy systems and infrastructure have to belong to some team, and these are carved out for us. Some we know little about.
Inside the yellow circle is an orange circle: the systems we plan to improve. These appear on our backlog in JIRA epics. We talk about them sometimes.
Inside the orange circle, a red circle: active work. These systems are currently under development by our team. We talk about them every day, we add features and tests, we garden them.

That yellow circle holds a lot of risks: when something there breaks we'll stop our active work and learn until we can stand it back up. Management may add items here, as they recognize the schedule risk. We sometimes spend bits of time researching these, to reduce our fear of pager duty.

The orange circle holds a lot of hope, our ambitions for the year and the quarter. We choose these in negotiation with management.

The red circle is ours. We decide what to work on each day, based on plans, problems, and pain. Pushing anything directly into active work is super rude and disruptive.

"OK, it's in the yellow circle, cool. I'll work on hiring more people, so we can expand the orange and red circles too."

Sunday, September 25, 2016

Provenance and causality in distributed systems

Can you take a piece of data in your system and say what version of code put it in there, based on what messages from other systems? and what information a human viewed before triggering an action?

Me neither.

Why is this acceptable? (because we're used to it.)
We could make this possible. We could trace the provenance of data. And at the same time, mostly-solve one of the challenges of distributed systems.

Speaking of distributed systems...

In a distributed system (such as a web app), we can't say for sure what events happened before others. We run into relativity-of-simultaneity complications even at short distances, because information travels through networks at unpredictable speeds. This means there is no one such thing as time, no single sequence of events that says what happened before what. There is time-at-each-point, and inventing a convenient fiction to reconcile them is a pain in the butt.

We usually deal with this by funneling every event through a single point: a transactional database. Transactions prevent simultaneity. Transactions are a crutch.

Some systems choose to apply an ordering after the fact, so that no clients have to wait their turn in order to write events into the system. We can construct a total ordering, like the one that the transactional database is constructing in realtime, as a batch process. Then we have one timeline, and we can use this to think about what events might have caused which others. Still: putting all events in one single ordering is a crutch. Sometimes, simultaneity is legit.

When two different customers purchase two different items from two different warehouses, it does not matter which happened first. When they purchase the same item, it still doesn't matter - unless we only find one in inventory. And even then: what matters more, that Justyna pushed "Buy" ten seconds before Edith did, or that Edith upgraded to 1-day shipping? Edith is in a bigger hurry. Prioritizing these orders is a business decision. If we raise the time-ordering operation to the business level, we can optimize that decision. At the same time, we stop requiring the underlying system to order every event with respect to every other event.

On the other hand, there are events that we definitely care happened in a specific sequence. If Justyna cancels her purchase, that was predicated on her making it. Don't mix those up. Each customer saw a specific set of prices, a tax amount, and an estimated ship date. These decisions made by the system caused (in part) the customer's purchase. They must be recorded either as part of the purchase event, or as events that happened before the purchase.

Traditionally we record prices and estimated ship date as displayed to the customer inside the purchase. What if instead, we thought of the pricing decision and the ship date decision as events that happened before the purchase? and the purchase recorded that those events definitely happened before the purchase event?

We would be working toward establishing a different kind of event ordering. Did Justyna's purchase happen before Edith's? We can't really say; they were at different locations, and neither influenced the other. That pricing decision though, that did influence Justyna's purchase, so the price decision happened before the purchase.

This allows us to construct a more flexible ordering, something wider than a line.

Causal ordering

Consider a git history. By default, git log prints a line of commits as if they happened in that order -- a total ordering.

But that's not reality. Some commits happen before others: each commit I make is based on its parent, and every parent of that parent commit, transitively. So the parent happened before mine. Meanwhile, you might commit to a different branch. Whether my commit happened before yours is irrelevant. The merge commit brings them together; both my commit and yours happen before the merge commit, and after the parent commit. There's no need for a total ordering here. The graph expresses that.

This is a causal ordering. It doesn't care so much about clock time. It cares what commits I worked from when I made mine. I knew about the parent commit, I started from there, so it's causal. Whatever you were doing on your branch, I didn't know about it, it wasn't causal, so there is no "before" or "after" relationship to yours and mine.

We can see the causal ordering clearly, because git tracks it: each commit knows its parents. The cause of each commit is part of the data in the commit.
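To make the git analogy concrete, here is a minimal sketch (in Python, not real git internals): commits are nodes that know their parents, and "happened before" is simply ancestry in the graph. The commit names are illustrative.

```python
# A minimal sketch (not real git): commits as nodes that know their parents.
# "Happened before" is just ancestry in the graph.

parents = {
    "merge":  ["mine", "yours"],   # the merge commit brings the branches together
    "mine":   ["parent"],
    "yours":  ["parent"],
    "parent": [],
}

def happened_before(a: str, b: str) -> bool:
    """True if commit a is an ancestor of commit b (a causally precedes b)."""
    frontier = list(parents[b])
    while frontier:
        c = frontier.pop()
        if c == a:
            return True
        frontier.extend(parents[c])
    return False

# "parent" happened before "mine"; "mine" and "yours" are concurrent.
assert happened_before("parent", "mine")
assert not happened_before("mine", "yours") and not happened_before("yours", "mine")
```

Note that `happened_before` returns False in both directions for "mine" and "yours": the two branch commits are simply unordered with respect to each other, which is exactly the point.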

Back to our retail example. If we record each event along with the events that caused it, then we can make a graph with enough of a causal ordering.

There are two reasons we want an ordering here: external consistency and internal legibility.

External Consistency

External consistency means that Justyna's experience remains true. Some events are messages from our software system to Justyna (the price is $), and others are messages coming in (Confirm Purchase, Cancel Purchase). The sequence of these external interactions constrains any event ordering we choose. Messages crossing the system boundary must remain valid.

Here's a more constraining example of external consistency: when someone runs a report and sees a list of transactions for the day, that's an external message. That message is caused by all the transactions reported in it. If another transaction comes in late, it must be reported later as an amendment to that original report -- whereas, if no one had run the report for that day yet, it could be lumped in with the others. No one needs to know that it was slow, if no one had looked.

Have you ever run a report, sent the results up the chain, and then had the central office accuse you of fudging the numbers because they ran the same report (weeks later) and saw different totals? This happens in some organizations, and it's a violation of external consistency.

Internal Legibility

Other causal events are internal messages: we displayed this price because the pricing system sent us a particular message. The value of retaining causal information here is troubleshooting, and figuring out how our system works.

I'm using the word "legibility"[1] in the sense of "understandability": as people, we have visibility into the system's workings and can follow along with what it's doing, distinguish its features, locate problems, and change it.

If Justyna's purchase event is caused by a ship date decision, and the ship date decision ("today") tracked its causes ("the inventory system says we have one, with more arriving today"), then we can construct a causal ordering of events. If Edith's purchase event tracked a ship date decision ("today") which tracked its causes ("the inventory system says we have zero, with more arriving today"), then we can track a problem to its source. If in reality we only send one today, then it looks like the inventory system's shipment forecasts were inaccurate.

How would we even track all this?

The global solution to causal ordering is: for every message sent by a component in the system, record every message received before that. Causality at a point-in-time-at-a-point-in-space is limited to information received before that point in time, at that point in space. We can pass this causal chain along with the message.

"Every message received" is a lot of messages. Before Justyna confirmed that purchase, the client component received oodles of messages, from search results, from the catalog, from the ad optimizer, from the review system, from the similar-purchases system, from the categorizer, many more. The client received and displayed information about all kinds of items Justyna did not purchase. Generically saying "this happened before, therefore it can be causal, so we must record it ALL" is prohibitive.

This is where business logic comes in. We know which of these are definitely causal. Let's pass only those along with the message.

There are others that might be causal. The ad optimizer team probably does want to know which ads Justyna saw before her purchase. We can choose whether to include that with the purchase message, or to reconstruct an approximate timeline afterward based on clocks in the client or in the components that persist these events. For something as aggregated as ad optimization, approximate is probably good enough. This is a business tradeoff between accuracy and decoupling.
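The business-logic filter above can be sketched in a few lines. This is purely illustrative -- the message shapes, `received`, and `PURCHASE_CAUSES` are hypothetical names, not a real API -- but it shows the shape of the idea: of everything the client received, only the types we know to be causal get attached to the purchase event.

```python
# Hypothetical sketch: attach only the messages business logic deems causal.
# All names here (received, PURCHASE_CAUSES) are illustrative, not a real API.

received = [
    {"type": "price_decision",     "id": "p-17", "price": 11.99},
    {"type": "ship_date_decision", "id": "s-42", "date": "today"},
    {"type": "ad_impression",      "id": "a-03", "campaign": "summer"},
    {"type": "review_summary",     "id": "r-88", "stars": 4.5},
]

# Business logic: these message types definitely caused the purchase.
PURCHASE_CAUSES = {"price_decision", "ship_date_decision"}

purchase_event = {
    "type": "purchase",
    "item": "teapot",
    "caused_by": [m["id"] for m in received if m["type"] in PURCHASE_CAUSES],
}

assert purchase_event["caused_by"] == ["p-17", "s-42"]
```

The ad impression and the review summary are dropped from the causal record here; if the ad optimizer team wants them, they can reconstruct an approximate ordering from timestamps instead.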

Transitive causality

How deep is the causal chain passed along with a message?

We would like to track backward along this chain. When we don't like the result of Justyna and Edith's purchase fulfillment, we trace it back. Why did the inventory system say the ship date would be today in both cases? This decision is an event, with causes of "The current inventory is 1" and "Normal turnover for this item is less than 1 per day"; or "The current inventory is 0" and "a shipment is expected today" and "these shipments usually arrive in time to be picked the same day." From there we can ask whether the decision was valid, and trace further to learn whether each of these inputs was correct.

If every message comes with its causal events, then all of this data is part of the "Estimated ship date today" sent from the inventory system to the client. Then the client packs all of that into its "Justyna confirmed this purchase" event. Even with slimmed-down, business-logic-aware causal listings, messages get big fast.

Alternatively, the inventory system could record its decision and pass a key with the message to the client, and then the client only needs to retain that key. Recording every decision means a bunch of persistent storage, but it doesn't need to be fast-access. It'd be there for troubleshooting, and for aggregate analysis of system performance. Recording decisions along with the information available at the time lets us evaluate those decisions later, when outcomes are known.
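The key-passing option might look something like this sketch, assuming a durable decision log on the inventory system's side. Every name here is hypothetical; the point is that the message stays small while the full causal detail remains retrievable.

```python
import uuid

# Hypothetical sketch: the inventory system records each decision (with the
# inputs it had at the time) under a key, and sends only the key in the message.

decision_log = {}  # stand-in for slow, durable storage

def record_decision(output, inputs):
    key = str(uuid.uuid4())
    decision_log[key] = {"output": output, "inputs": inputs}
    return key

# The inventory system decides a ship date and records why.
key = record_decision(
    output={"estimated_ship_date": "today"},
    inputs={"current_inventory": 0, "shipment_expected": "today"},
)

# The message to the client carries only the key, not the full causal chain.
message = {"estimated_ship_date": "today", "decision_key": key}

# Later, troubleshooting looks the decision up by key.
assert decision_log[message["decision_key"]]["inputs"]["current_inventory"] == 0
```

The client then tucks `decision_key` into the purchase event it records, and the chain can be walked backward later without ever having shipped the inputs around at request time.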


A system component that chooses to retain causality in its events has two options: repeat causal inputs in the messages it sends outward; or record the causal inputs and pass a key in the messages it sends outward.

Not every system component has to participate. This is an idea that can be rolled out gradually. The client can include in the purchase event as much as it knows: the messages it received, decisions it made, and relevant messages sent outward before this incoming "Confirm Purchase" message was received from Justyna. That's useful by itself, even when the inventory system isn't yet retaining its causalities.

Or the inventory system could record its decisions, the code version that made them, and the inputs that contributed to them, even though the client doesn't retain the key it sends in the message. It isn't as easy to find the decision of interest without the key, but it could still be possible. And some aggregate decision evaluation can still happen. Then as other system components move toward the same architecture, more benefits are realized.

Conscious Causal Ordering

The benefits of a single, linear ordering of events are consistency, legibility, and visibility into what might be causal. A nonlinear causal ordering gives us more flexibility, consistency, a more accurate but less simplified legibility, and clearer visibility into what might be causal. Constructing causal ordering at the generic level of "all messages received cause all future messages sent" is expensive and also less meaningful than a business-logic-aware, conscious causal ordering. This conscious causal ordering gives us external consistency, accurate legibility, and visibility into what we know to be causal.

At the same time, we can have provenance for data displayed to the users or recorded in our databases. We can know why each piece of information is there, and we can figure out what went wrong, and we can trace all the data impacted by an incorrect past event.

I think this is something we could do, it's within our ability today. I haven't seen a system that does it, yet. Is it because we don't care enough -- that we're willing to say "yeah, I don't know why it did that, can't reproduce, won't fix"? Is it because we've never had it before -- if we once worked in a system with this kind of traceability, would we refuse to ever go back?

[1] This concept of "legibility" comes from the book Seeing Like a State.

Wednesday, August 17, 2016

Harder and better (slides)

At Scenic City Summit in Chattanooga last week, I gave a closing keynote about 3 ways our jobs are harder than they used to be, and how each of these makes our jobs better.

Annotated slides are on Dropbox.