Definition of DevOps

Step 1: On one team, put the people with the knowledge and control necessary to change the software, see the results, change it, see the results.

Step 2: Use automation to take extrinsic cognitive load off this team, so that it needs fewer people.

That’s it, that’s DevOps.

Step 1 describes the cultural change that leads to flow. Delivering change requires no handoffs or approvals from outside the team; the impact of change flows back to the team. Act and learn.

Step 2 is where tools come in. If all you do is improve your tooling, well, that helps a little, but it doesn’t get you the qualitative change in flow. That comes from Step 1. The serious value of automation is that it enables Step 1, a single team with all the relevant knowledge.

Our job as developers is making decisions. DevOps gets us the knowledge we need to make good decisions, the authority to implement them, and the feedback to make better ones in the future.

Taking care of code … more and more code

(This is a shorter version of my talk for DeliveryConf, January 2020. Slides)

Good software is still alive.

The other day, I asked my twelve-year-old daughter for recommendations of drawing programs. She told me about one (FireAlpaca?): “It’s free, and it updates pretty often.” She contrasted that with one that cost money “but isn’t as good. It never updates.”

The next generation appreciates that good software is updated regularly. Anything that doesn’t update is eventually terrible.

Software that doesn’t change falls behind. People’s standards rise, their needs change. At best, old software looks dumb. At worst, it doesn’t run on modern devices.

Software that doesn’t change is dead. You might say, if it still runs, it is not dead. Oh, sure, it’s moving around — but it’s a zombie. If it isn’t learning, it’ll eventually fall over, and it might eat your face.

I want to use software that’s alive. And when I make software, I want it to stay alive as long as it’s in use. I want it to be “done” only when it’s out of production.

Software is like people. The only “done” is death.

Alive software belongs to a team.

What’s the alternative? Keep learning to keep living. Software needs to keep improving, at least in small ways, for as long as it is running.

We have to be able to change it, easily. If Customer Service says, “Hey, this text is unclear, can you change it to this?” then pushing that out should be as easy as updating some text. It should not be harder than it was when the software was in constant iteration.

This requires automated delivery, of course. And you have to know that delivery works. So you have to have run it recently.

But it takes more than that. Someone has to know — or find out quickly — where that text lives. They have to know how to trigger the deployment and how to check whether it worked.

More than that, someone has to know what that text means. A developer needs to understand that application. Probably, this is a developer who was part of its implementation, or the last major set of changes.

For the software to be alive, it has to be alive in someone’s head.

And one head will never do; the unit of delivery is the team. That’s more resilient.

Alive software is owned and cared for by an active team. Some people keep learning, keep teaching the software, and the shared sociotechnical system keeps living. The team and software form a symmathesy.

How do we keep all our software alive, while still growing more?

Okay, but what if the software is good enough right now? How do we keep it alive when there’s no big initiative to change it?

Hmm. We can ask, what kind of code is easy to change?

Code needs to be clean and modern.

Well, it’s consistent. It is up-to-date with the language versions and frameworks and libraries that we currently use for development.

It is “readable” by our current selves. It uses familiar styles and idioms.

What you don’t want is to come at the “simple” (from the outside perspective) task of updating some text and find that you need to install a bunch of old tools. Oh wait, there are security patches that need to happen before this will pass pre-deployment checks. Oh, now we have to upgrade more libraries to modern versions before anything works together. You don’t want to have to resuscitate the software before you can breathe new life into it.

If changing the software isn’t easy enough, we won’t do it. And then it gets terrible.

So all those tool upgrades, security patches, library updates gotta have been done already, in the regular course of business.

Keeping those up to date gives us an excuse to change the code, trigger a release, and then notice any problems in the deployment pipeline. We keep confidence that we can deploy it, because we deploy it every week whether we need to or not.

People need to be on stable teams with customer contact.

More interesting than properties of the code: what are some properties of people who can keep code alive?

The team is stable. There’s continuity of knowledge.

The team understands the reason the software exists. The business significance of that text and everything else.

And we still care. We have contact with people who use this software, so we can check in on whether this text change works for them. We continue to learn.

Code belongs to one team.

More interesting still: what kind of relationship does the alive-keeping team have with the still-alive code?

Ownership. The code is under the care of a single team.

Good communication. We can teach the code (by changing it), so we have good deployment automation and we understand the programming language, etc. And the code can teach us — it has good tests, so we know when we broke something. It is accountable to us, in the sense that it can tell us the story of what happens. This means observability. With this, we can learn (or re-learn) how it works while it’s running. Keep the learning moving, keep the system living.

The team is a learning system, within a learning system.

Finally: what kind of environment can hold such a relationship?

(diagram of code, people, relationship, environment)

It’s connected; the teams are in touch with the people who use software, or with customer support. The culture accepts continued iteration as good, it doesn’t fear change. Learning flows into and out of the symmathesy.

It supports learning. Software is funded as a profit center, as operational cost, not as a capital expenditure where a project is “done” and gets depreciated over years. How the accounting works around development teams is a good indication of whether a company is powered by software or subject to software.

Then there’s the tricky one: the team doesn’t have too much else on their plate.

How do we keep adding code to our responsibilities?

The team that owns this code also owns other code. We don’t want to update libraries all day across various systems we’ve written before. We want to do new work.

It’s like a garden; we want to keep the flowers we planted years ago healthy, and we also want to plant new flowers. How do we increase the number of plants we can care for?

And, at a higher level — how can we, as people who think about DevOps, make every team in our organization able to keep code alive?

Teams are limited by cognitive load.

This is not: how do we increase the amount of work that we do. If all we did was type the same stuff all the time, we know what to do — we automate it.

Our work is not typing; it’s making decisions. Our limitation is not what we can do, it is what we can know.

In Team Topologies, Manuel Pais and Matthew Skelton emphasize: the unit of delivery is the team, and the limitation of a team is cognitive load.

We have to know what that software is about, and what the next software we’re working on is about, and the programming languages they’re in, and how to deploy them, and how to fill out our timesheets, and which kitchen has the best bubbly water selection, and who just had a baby, and — it takes a lot of knowledge to do our work well.

Team Topologies lists three categories of cognitive load.

The germane cognitive load, we want that.

Germane cognitive load is the business domain. It is why our software exists. We want complexity here, because the more complex work our software does, the less the people who use it have to bother with. Maximize the percentage of our cognitive load taken up by this category.

So which software systems a team owns matters; group by business domain.

Intrinsic cognitive load increases if we let code get out of date.

Intrinsic cognitive load is essential to the task. This is our programming language and frameworks and libraries. It is the quirks of the systems we integrate with. How to write a healthy database query. How the runtime works: browser behavior, or sometimes the garbage collector.

The fewer languages we have to know, the better. I used to be all about “the best language for the problem.” Now I recommend “the language your team knows best, as long as it’s good enough.”

And “fewer” includes versions of the language, so again, consistency in the code matters.

Extrinsic cognitive load is a property of the work environment. Work on this.

Finally, extrinsic cognitive load is everything else. It’s the timesheet system. The health insurance forms. It’s our build tools. It’s Kubernetes. It’s how to get credentials to the database to test those queries. It’s who has to review a pull request, and when it’s OK to merge.

This is not the stuff we want to spend our brain on. The less extrinsic cognitive load on the team, the more we have room for the business and systems knowledge, the more responsibility we can take on.

And this is a place where carefully integrated tools can help.

DevOps is about moving system boundaries to work better. How can we do that?

We can move knowledge within the team, and we can move knowledge out to a different team.

We can move work below the line.

Within the team, we can move knowledge from the social side to the technical side of the symmathesy. We can package up our personal knowledge into code that can be shared.

Automations encapsulate knowledge of how to do something

Automate bits of our work. I do this with scripts.

The trick is, can we make sharing it with the team almost as easy as writing it for ourselves?

Especially automate anything we want to remain consistent.

For instance, when I worked on the docs at Atomist, I wrote the deployment automation for them. I made a glossary, and I wanted it in alphabetical order. I didn’t want to put it in alphabetical order; I wanted it to constantly be alphabetical. This is a case for automation.

I wrote a function to alphabetize the markdown sections, and told it to run with every build and push the changes back to the repository.
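
A minimal sketch of that kind of function, in TypeScript (not the real one; it assumes each glossary entry starts with a level-two markdown heading):

// Sketch: sort the "## Term" sections of a markdown glossary alphabetically.
// Everything before the first entry (the intro) stays where it is.
function alphabetizeGlossary(markdown: string): string {
    const lines = markdown.split("\n");
    const firstEntry = lines.findIndex(line => line.startsWith("## "));
    if (firstEntry < 0) {
        return markdown; // no entries, nothing to sort
    }
    const intro = lines.slice(0, firstEntry);
    const sections: string[][] = [];
    for (const line of lines.slice(firstEntry)) {
        if (line.startsWith("## ")) {
            sections.push([line]);
        } else {
            sections[sections.length - 1].push(line);
        }
    }
    sections.sort((a, b) => a[0].localeCompare(b[0]));
    return [...intro, ...sections.flat()].join("\n");
}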

Autofixes like this also keep the third-party licenses up to date (all the npm dependencies and their licenses). That’s a legal requirement that no human is going to keep up with. Another one puts the standard license header on any code that’s committed without it, so I never copied the headers; I just let the automation do that. Formatting and linting, same thing.

If you care about consistency, put it in code. Please don’t nag a human.

Some of that knowledge can help with keeping code alive

Then there’s all that drudgery of updating versions and code styles etc. — weeding the section of the garden we planted last year and earlier. How much of that can we automate?

We can write code to do some of our coding for us. To find the inconsistencies, and then fix some of them.

Encapsulate knowledge about -when- to do something

Often the work is more than knowledge of -how- to do something. It is also -when-, and that requires attentiveness. Very expensive for humans. When my pull request has been approved, then I need to press merge. Then I need to wait for a build, and then I need to use that new artifact in some other repository.

Can we make a computer wait, instead of a person?

This is where you need an event stream to run automations in response to.
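
A hedged sketch of the shape of that, using Node’s EventEmitter to stand in for whatever event hub you have; the event names, payloads, and actions are invented for illustration:

import { EventEmitter } from "events";

// Hypothetical event hub and actions, for illustration only.
const events = new EventEmitter();

async function mergePullRequest(repo: string, prNumber: number): Promise<void> {
    console.log(`merging PR #${prNumber} in ${repo}`);
}

async function useNewArtifact(repo: string, artifact: string): Promise<void> {
    console.log(`updating ${repo} to use ${artifact}`);
}

// A person approved it; the computer does the waiting and the clicking.
events.on("pullRequestApproved", async (pr: { repo: string; number: number }) => {
    await mergePullRequest(pr.repo, pr.number);
});

// When the build finishes, carry the new artifact to the repository that consumes it.
events.on("buildSucceeded", async (build: { artifact: string }) => {
    await useNewArtifact("downstream-repo", build.artifact);
});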

Galo Navarro has an excellent description of how this helped smooth the development experience at Adevinta. They created an event hub for software development and operations related activities, called Devhose. (This is what Atomist works to let everyone do, without implementing the event hub themselves.)

We can move some of that to a platform team.

Yet, every automation we build is code that we need to keep alive.

We can move knowledge across team boundaries, with a platform team. I want my team’s breadth of responsibility to increase, as we keep more software alive, so I want its depth to be reduced.

Team Topologies describes this structure. The business software teams are called “stream aligned” because they’re working in a particular value stream, keeping software alive for someone else. We want to thin out their extrinsic cognitive load.

Move some of it to a platform team. That team can take responsibility for a lot of those automations, and for deep knowledge of delivery and operational tooling. Keep the human judgement of what to deploy when in the stream-aligned teams, and a lot of the “how” and “some common things to watch out for” in the platform team.

Some things a platform team can do:

  • onboarding
  • onboarding of code (delivery setup)
  • delivery
  • checks every team needs, like licenses

And then, all of this needs to stay alive, too. Your delivery process needs to keep updating for every repository. If delivery is event-based, and the latest delivery logic responds to every push (instead of what the repo was last configured for), then this keeps happening.
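
Here’s a rough sketch of what “the latest delivery logic responds to every push” can look like; the push fields and goal names are invented for illustration. The point is that delivery is a function evaluated fresh against each push, so every repository gets the current logic, not whatever pipeline it was configured with last year:

// Sketch: delivery goals decided per push by the current logic, not per-repo config.
interface Push {
    repo: string;
    branch: string;
    changedFiles: string[];
}

type Goal = "build" | "dockerBuild" | "deployToStaging" | "updateLicenses";

function goalsForPush(push: Push): Goal[] {
    const goals: Goal[] = ["build", "updateLicenses"];
    if (push.changedFiles.includes("Dockerfile")) {
        goals.push("dockerBuild");
    }
    if (push.branch === "main") {
        goals.push("deployToStaging");
    }
    return goals;
}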

But keep thinning our platforms.

Platforms are not business value, though. We don’t really want more and more software there, in the platform.

We do want to keep adding services and automation that helps the team. But growing the platform team is not a goal. Instead, we need to make our platforms thinner.

There is such a thing as “done”

The best way to thin our software is outsourcing to another company. Not the development work, not the decisions. But software as a service, IaaS, logging, tooling of all sorts — hire a professional. Software someone else runs is tech debt you don’t have.

So maybe Galo could move Devhose on top of Atomist and retire some code.

Because we do want any code that isn’t describing business complexity to die. As soon as we can move onto someone else’s service, win. Kill it, take it out of production. Then, finally, it’s done.

So yeah. There is such a thing as done. “Done” is death. You don’t want it for your value-producing code. You do want it for all other code you run.

Don’t do boring work.

If keeping software alive sounds boring, then let’s change that. Go up a level of abstraction and ask, how much of this can we automate?

Writing code to change code is hard. Automating is hard.

That will challenge your knowledge of your own job, as you try to encode it into a computer. Best case, you get the computer doing the boring bits for you. Worst case, you learn that your job really is hard, and you feel smart.

Keep learning to keep living. Works for software, and it works for us.

Fun with Docker: "Release file… is not valid yet"

Today my Docker build failed on Windows because apt-get update failed because some release files were not valid yet. It said they’d be valid in about 3.5 hours. WAT.

I don’t care about your release files! Do not exit with code 100! This is not what I want to think about right now!

Spoiler: restarting my computer fixed it. 😤

This turned out to be a problem with the system time. The Ubuntu docker containers thought it was 19:30 UTC, which is like 8 hours ago. Probably five hours ago, someone updated the release files wherever apt-get calls home to. My Docker container considered that time THE FUTURE. The scary future.

Windows had the time right, 21:30 CST (which is 6 hours earlier than UTC). Ubuntu in WSL was closer; it thought it was 19:30 CST. But Docker containers were way off. This included Docker on Windows and Docker on Ubuntu.

Entertainingly, the Docker build worked on Ubuntu in WSL. I’m pretty sure that’s because I ran this same build there long ago, and Docker had the layers cached. Each line in the Dockerfile results in a layer, so Docker starts the build operation at the first line that has changed. So it didn’t even run the apt-get update.

This is one of the ways that Docker builds are not reproducible. apt-get calls out to the world, so it doesn’t do the same thing every time. When files were updated matters, and (now I know) what time your computer thinks it is matters.

Something on the internet suggested restarting the VM that Docker uses. It seems likely that Docker on WSL and Docker on Windows (in linux-container mode) are using the same VM under the hood somewhere. I don’t know how to restart that explicitly, so I restarted the computer. Now all the clocks are right (Windows, Ubuntu in WSL, and Ubuntu containers from both Docker daemons). Now the build works fine.

I’m not currently worried about running containers in production. (I just want to develop this website without installing python’s package manager locally. This is our world.) Still working in Docker challenges me to understand more about operating systems, their package managers, networking, system clocks, etc.

Docker: it puts the Ops in DevOps. That’s my day.

Increase capacity, move slower

If the highways are crowded, and they build more lanes, the highways get more crowded.

If development is slow, and you add resources, development gets slower.

Adding people to a project increases the capacity for activity. Activity doesn’t translate to outcomes.

In these cases, you’re adding capacity to the system for cars, or for work, but those aren’t what makes the system run faster. Instead, adding capacity for traffic or for activity leads the system to change in ways that generate more traffic or activity. Which gets in the way of flow.

(lots more examples in this article)

What you want instead is to make flow easier. Add trains, intersperse commerce and residential. Add continuous delivery, add support structures to make progress easier. Don’t add more capacity for work! Doing work isn’t the point! Make the path shorter, instead.

Stick with “good enough,” until it isn’t

In business, we want to focus on our core domain, and let everything else be “good enough.” We need accounting, payroll, travel. But we don’t need those to be special if our core business is software for hospitals.

As developers, we want to focus on changing our software, because that is our core work. We want other stuff, such as video conferencing, email, and blog platforms to be “good enough.” It should just work, and get out of our way.

The thing is: “good enough” doesn’t stay good enough. Who wants to use Concur for booking travel? No one. It’s incredibly painful and way behind the modern web applications we use for personal travel. Forcing your people into an outdated travel booking system holds them back and makes recruiting a little harder.

When we rent software as a service, then it can keep improving. I shuddered the last time I got invited to a WebEx, but it’s better than it used to be. WebEx is not as slick as Zoom, but it was fine.

There is a lot of value in continuing with the same product that your other systems and people integrate with, and having it improve underneath you. Switching is expensive, especially in the focus it takes. But it beats keeping the anachronism.

DevOps says, “If it hurts, do it more.” This drives you to improve processes that are no longer good enough. Now and then you can turn a drag into a competitive advantage. Now and then, like with deployment, you find out that what you thought was your core business (writing code) is not core after all. (Operating useful software is.)

Limiting what you focus on is important. Let everything else be “good enough,” but check it every once in a while to make sure it still is. Ask the new employee, “What around here seems out of date compared to other places you’ve worked?” Or try a full week of mob programming, and notice when it gets embarrassing to have six people in the same drudgery.

You might learn something important.

Rebase on the World

We build our software in a particular world, a world of technologies that we link together. We choose a programming system (language, runtime, framework), libraries, and environment. We integrate components: databases, logging, and many different services.

Perhaps we built it on Java 8 running on VMs in our datacenter, connecting to a proprietary queuing service we bought years ago. We start with what is available and stable at the time.

But do we stay there?

The outside world moves capabilities toward commodity.

At some point, new businesses start building cloud applications instead of racking their servers.

An opportunity appears, and our enterprise can get out of the infrastructure business. When we shift our application onto AWS, there are whole areas of expertise we don’t need in-house. There are layers of infrastructure that Amazon maintains and upgrades, and we rarely even notice.

At some point, we integrate with new systems. They don’t speak our proprietary queuing protocol, so we move to Kafka, something that people and programs everywhere can understand. And at some point, new businesses don’t run Kafka; they rent it as a service.

When we move to SaaS, there’s a layer of expertise we don’t need to retain, pages we don’t have to answer, and upgrades we don’t have to manage. Or even better, maybe our needs have changed, or SQS has improved until it’s good enough. We get free integration with other AWS services and billing.

Is our software simpler? I don’t know, but it’s thinner. The layer we maintain is closer to the business logic, with integration code to link in SaaS solutions that other companies support.

All code is technical debt.

Every line of code written is in a context. Those contexts change, and expectations rise. New tools appear, and integrating them gives us unique abilities. Security vulnerabilities get noticed.

For the software we operate, we are responsible for upgrades. It is our job to keep libraries up to date, shift to modern infrastructure every few years, and add the features that everyone now expects.

What you get for operating custom software — you control the pace of change.
What you pay — you are responsible for the pace of change.

Maybe it’s authorization, or network configuration, or caching, or eventing. You wrote it back when your needs were exceptional, and now it’s your baby, and you’re changing its diapers. It takes effort to shift to anything else.

Incorporate the modern world into our software’s world.

When capabilities become commodities, it becomes cheaper to rent than to babysit them. It’s probably monetarily less expensive, and indeed, it’s less costly in knowledge. People and teams are limited by how much experience we can hold. We can only have current expertise in so many things.

On a development team, we can increase our impact by overseeing more and more business capabilities, but we can only operate so much software. If we thin that software by shifting our underpinnings to SaaS offerings, then we can keep up more of the software that matters to our particular business.

All code is technical debt. Let it be someone else’s technical debt. Move it off your balance sheet, to a company that specializes in this capability.

Rebase on the world

In git, sometimes I add some features in a branch, while other people improve the production branch. When I rebase, I put my changes on top of theirs, and remove any duplicate changes.

I want to do this with software infrastructure and capabilities. The outside world is the production branch. When I rebase my custom software on top of it, it takes work to reconcile similar capabilities. But it’s worth it.

When we rebase our software on the world, we get everything the world has improved since we started, we get integrations into other systems and tools, and we get learnings from experts in those capabilities. SaaS, in particular, has a bonus — we keep getting these things, for no extra work!

If we don’t rebase on the world, a startup will.

How can a scrappy little company defeat a powerful incumbent?

Every piece of software and infrastructure that the big company called a capital investment, that they value because they put money into it, that they keep using because it still technically works — all of this weight slows them down.

A startup builds on the latest that the whole world offers. They write minimum code on top of that to serve their customers. The less code they have, the faster they can change it.

In the 1990s, we built a big stack of custom work on top of a solid base. In the 2010s, we build less custom software to get the same business capabilities (with more reliability) because we’re building on various AWS services and many other tools and services.

This is not the only advantage a startup has, but it is a big one.

Software is never “done.”

Software is not bought, it is rented. (Regardless of how the accounting works.) It gives us capabilities as long as it keeps running, keeps meeting expectations, keeps fitting in with other elements of the world that need to integrate with it.

Keep evolving the software, infrastructure, and architecture. It is never going to be perfect, but we can keep it moving.

When I’m coding a feature, I rebase on the production branch every few hours. For software systems, try to rebase on the world every few months, bit by bit.

In an enterprise with a lot of code, this is an extra challenge. Change at that scale is always an evolution.

If you find yourself thinking, “we have so much code. How could we ever bring it all up to date?” then please check out Atomist’s Drift Management. Get visibility into what you have, and even automatic rebasing (of code, at least). There’s a service for this too.

Acknowledgment
A large amount of this information came out of a conversation with Zack Kanter, CEO of Stedi.

Tools -> capabilities -> acclaim

Here’s a lovely graphic from the folks at Intercom. It describes the difference between what companies sell and what people buy.

Even though customers buy this [skateboard parts]… they really want this [cool skateboard trick].

We don’t want the tool, we want what we can do with the tool. Take it further – maybe what that skateboarder really wants is: the high fives at the end.

skateboarder getting high fives from the crowd

Our accomplishments feel real when people around us appreciate them. If the skateboarder’s peers react with “Why would you do that?? You could die! You look like a dumbass,” titanium hardware doesn’t shine so bright.

It reminds me of the DevOps community. Back when coworkers said, “Why are you writing scripts for that? Do you want to put us out of a job?” automation wasn’t so cool. Now there’s a group of peers, at least online, ready to give you high fives for that. The people you meet at conferences, or get familiar with on podcasts, or collaborate on open source tools with — these take automation from laziness into accomplishment.

Giving developers polyurethane wheels and hollow trucks won’t let them do tricks. Move to a culture of “our job is to automate ourselves out of better and better jobs.” Give us the high fives (and let us pick our tools), and developers will invent new tricks.

Symmathecist (n)

A quick definition, without the narrative

Symmathecist: (sim-MATH-uh-sist) an active participant in a symmathesy.

A symmathesy (sim-MATH-uh-see, coined by Nora Bateson) is a learning system made of learning parts. Software teams are each a symmathesy, composed of the people on the team, the running software, and all their tools.

The people on the team learn from each other and from the running software (exceptions it throws, data it saves). The software learns from us, because we change it. Our tools learn from us as we implement them or build in them (queries, dashboards, scripts, automations).


This flow of mutual learning means the system is never the same. It is always changing, and its participants are always changing.

An aggregate is the sum of its parts. 
A system is also a product of its relationships.
A symmathesy is also powered by every past interaction.

I aim to be conscious of these interactions. I work to maximize the flow of learning within the system, and between the system and its environment (the rest of the organization, and the people or systems who benefit from our software). Software is not the point: it is a means, a material that I manipulate for the betterment of the world and for the future of my team.

I am a symmathecist, in the medium of software.


Advancing the bridge

Software Development has moved forward a lot recently. Both on the code side and the runtime side, we’ve had huge advances.

we’ve seen more advances in writing code and in running it, than in the bridge from written to running

We have way better languages and frameworks for writing applications now. For instance, JavaScript, jQuery, and modern frameworks like React were all big steps. Or you could look at Java, Spring, and now Spring Boot. For source control: CVS, then git, and now GitHub changed the way we work.

Then on the runtime side, we don’t have to deploy to hardware anymore. Virtual machines were a step, and then the cloud, and now Kubernetes is a big deal. Kubernetes is a big deal because it’s a higher level of abstraction, where developers can work in the language of the domain of software. But that’s not all: Kubernetes also offers an API, which means we can work with it using code.

We can do more with code now on both sides, thanks to expressive frameworks and to an API for hardware. But there’s something in the middle lagging behind.

A scary chasm: the team wants to get code across to the magical land of production. The rope bridge is beset by lightning and sea monsters.

The bridge from source code to running software is Delivery. Delivery has made advances: we went from Bash and make to pipelines. But pipelines have been around for a decade, and since then we’ve got … more pipelines. It’s time for the next revolutionary step.

This is Atomist’s mission: to change the way we deliver software. Not to make it incrementally better, but to rethink delivery the way we’ve rethought application architecture and runtime infrastructure.

The way forward is not more pipelines. Nor is it even more bash scripts or configuration that wishes it were code. The way forward is not something updated separately for each delivered application.

The way forward is to do more with code. In a real programming language. The way forward responds to events in the domain of software delivery: to code pushes, and also tickets, pull requests, deploys, builds, and messages in chat. It responds in context: when our delivery machine decides (in code!) what to do with a particular change, it does so with awareness of what the code says, of who changed it, in response to what ticket. It responds in communication: when in doubt, contact the responsible human and ask them whether they’d like to restart the build, submit the pull request, or deploy to production.

As we succeed, our systems increase in complexity. The systems to control them need to be at least as smart. We need more power in our delivery than any GUI screen or YAML file can give us. We have that power, when we craft our delivery in code on a strong framework with a domain-specific API.

Atomist as the next sea change in delivery: event hub, API for software, delivery in code.

Every company is in the software delivery business now. Let’s take it seriously, the same way we do our code and our production runtime.

Get started today with the open-source Software Delivery Machine.

Align the stars (programmatically)

Yesterday I was ready to get some changes into master, so I merged in the latest and opened a PR. But NO, the build on my pull request broke.
The error was:

ERROR: (jsdoc-format) /long/path/to/file.ts[52, 1]: asterisks in jsdoc must be aligned
ERROR: (jsdoc-format) /long/path/to/file.ts[53, 1]: asterisks in jsdoc must be aligned
ERROR: (jsdoc-format) /long/path/to/file.ts[54, 1]: asterisks in jsdoc must be aligned
ERROR: (jsdoc-format) /long/path/to/file.ts[55, 1]: asterisks in jsdoc must be aligned
… fifty more like that …

gah! someone added more tslint rules, and this one doesn’t have an automatic fix. Maybe someone upgraded tslint, its “recommended” ruleset changed, and bam, my build is broken.

For measly formatting trivia.

 /**
* oh no! This JSDoc comment is not aligned perfectly!
* The stars are supposed to have one more space before them
* so they all line up under the first top one
*
* The world will end! Break the build!
*
* @param likeItMatters a computer could fix this grrr
* @deprecated
*/
whatever(likeItMatters: Things): Stuff;

Look, I’m all for consistent code formatting. But I refuse to spend my time adding a space before fifty different asterisks. Yes I know I can do this with fewer keystrokes using editor acrobatics. I refuse to even open the file.

You know who’s good at consistency? Computers, that’s who. You want consistent formatting? Make a robot do it.

tslint robot: “Your Code Is Not Acceptable!” / me: “I have better things to do :-(” / Atomist: “I will save you!”

So I made a robot do it. In our Software Delivery Machine, we have various autofixes: transformations that run on code after every commit. If an autofix makes a change, the robot makes a commit like a polite person would. No build errors, just fix it thanks.

I wrote this function, which does a transformation on the code. The framework takes care of triggering, cloning the repository, and committing the result.

const alignAsterisksInProject: CodeTransform = (project: Project) =>
    doWithFiles(project, "**/*.ts", async f => {
        // look at every TypeScript file in the cloned project
        const content = await f.getContent();
        if (hasUnalignedAsterisks(content)) {
            // rewrite the file only when some JSDoc stars are out of line
            await f.setContent(alignStars(content));
        }
    });
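
The two helpers it calls aren’t shown here. Here is one plausible, purely string-based sketch of what they might do (hypothetical implementations; it assumes the body stars should sit one column to the right of the opening /**):

// Hypothetical helper: realign the "*" of JSDoc body lines under the opening "/**".
function alignStars(content: string): string {
    const lines = content.split("\n");
    let starColumn: number | undefined;
    return lines
        .map(line => {
            const open = /^(\s*)\/\*\*/.exec(line);
            if (open) {
                // stars below should line up under the first "*" of "/**";
                // a one-line comment needs no alignment
                starColumn = /\*\//.test(line) ? undefined : open[1].length + 1;
                return line;
            }
            if (starColumn !== undefined) {
                const star = /^\s*\*/.exec(line);
                if (star) {
                    const rest = line.slice(star[0].length);
                    const aligned = " ".repeat(starColumn) + "*" + rest;
                    if (/\*\//.test(line)) {
                        starColumn = undefined; // end of the comment
                    }
                    return aligned;
                }
                starColumn = undefined; // not a JSDoc body line after all
            }
            return line;
        })
        .join("\n");
}

// Hypothetical helper: anything to fix?
function hasUnalignedAsterisks(content: string): boolean {
    return alignStars(content) !== content;
}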

You can do this too. You can do it on your local machine with the fully open-source Local Software Delivery Machine. Fix your code on demand, or in response to each commit. Write your own functions to keep your code looking the way you like. Never be embarrassed by the same mistake again!

To help you try it out, I added my autofix to an otherwise-empty Software Delivery Machine.

Align your own stars

Try it yourself (on Mac or Linux, currently):

  • Install the atomist command line: npm install -g @atomist/cli@next
  • By default, atomist works on projects in ~/atomist/projects. If you prefer a different directory, create one and set ATOMIST_ROOT=/path/to/that/directory
  • Bring down the code for the Software Delivery Machine (SDM): atomist clone https://github.com/atomist-blogs/align-stars-sdm 
    (This does git clone, plus sets up git hooks for magical autofixes.)
  • Go to that location: cd $ATOMIST_ROOT/atomist-blogs/align-stars-sdm
  • Now for the slow part: npm install
  • Start up your SDM: atomist start --local
    The SDM is a process that hangs out on your computer waiting to help. It swings into action when triggered by the atomist command line, or by commits inside $ATOMIST_ROOT.
  • Optional: in another terminal, run atomist feed. This will give you a running summary of what your SDM is up to.
  • Now screw up some formatting. I’ve left some nice jsdoc comments in lib/autofix/alignStars.ts; move those stars over a little.
  • Save and make a commit: git commit -am "Oh no, misalignment"
  • Check the output in your atomist feed, and you should see that an Autofix has run. (You can also type atomist align the stars to do this specific transform, in a repository not wired up to trigger your SDM.)
  • Check your commit history: git log. Did Atomist make a commit? Check its contents: git show and you should see the stars moved into alignment.
  • (if not, please leave me an issue or ping me (jessitron) in community slack)

But wait! That’s not all.

OK, that was a lot just to format some comments. The important part is that we can write functions to realize policy. If a person complained about this formatting I’d tell them to f*** off or fix it themselves — in fact I did curse extensively at tslint when it gave me these errors. Tslint didn’t care. Programs aren’t offended.

If I want my teammates’ code to conform to my standards, it’s rude to ask them to be vigilant about details. Aligning asterisks — I mean, some people like it, and that’s fine, I kinda enjoy folding laundry — but as a developer that’s a crummy use of my attention. Computers, though! Computers are great at being vigilant. They love it. My Atomist SDM sits there eagerly awaiting commits, just to dig around in those asterisks in the hope of fixing some.

Please make my star-alignment into a code transform that’s useful to you. I went with plain string parsing here, in a very functional style for my own entertainment. We also have clever libraries for working with the compiled AST and more (watch this space).

There’s more: an SDM running in the cloud listens to GitHub (GitLab, BitBucket, GHE) and applies autofixes to everyone’s commits. And code reviews. And runs or triggers builds (but only when it’s worthwhile; it looks at what’s changed). And initiates deploys (except it asks us first in Slack). There’s no setting up a pipeline for a new repository or branch; our SDM knows what to do with a code push based on the code in it.

There’s more: an SDM in the cloud also listens to issues, pull requests, builds, deploys, and other events in the software domain. It can react to all of them by talking to people in Slack, or running any other program. Whatever we do that is boring, we can automate.

This is our power as software developers. We don’t need someone to write a GUI we can click in. We don’t need to configure in YAML. We can specify what needs to happen declaratively in code.

All we needed was a framework to do the common glue-y bits for us, like triggering on events, cloning repositories, passing our functions the data they need and carrying out our actions.

The autofix in this example triggers on commit, clones the repository, passes a Project object in for a code transform function to act on, commits those changes and pushes them back to where I’m working. The framework of the SDM lets me define my own policies and implement them in code. Then the machinery of the SDM keeps them running, locally (open source) or team- or organization-wide (using Atomist’s API).

Aligning the stars is only the beginning.