Wednesday, April 15, 2015

Looking for work

This Prezi movie expresses my reflections on the process of choosing the next place to work.
The embedded app below doesn't work for me - does it for you?

This does: click the play button in the lower-left of the presentation window here at Prezi.



Meta: About the Medium

I love Prezi as a tool for building presentations, because the presentation itself is a mind-map. While a typical slideshow is one-dimensional and unidirectional (forward to the next slide), Prezi offers three dimensions: two surface dimensions, plus depth.
That depth, the ability to zoom in and in and in, makes Prezi my favorite medium for seamaps. Seamaps are kinda like roadmaps in that they show milestones on the way to a goal, except seamaps recognize that there are many routes toward the goal, and that the straight path is rarely optimal. We will find obstacles, back up, and try other paths. We prefer running software at many increments to implementing everything at once. The zooming format helps me draw the big picture of the project and the detailed picture of the iteration and the day, in the same map.
However, Prezi isn't perfect for seamaps. The editor is infuriating in many cases. Sometimes it's difficult, even impossible, to select things; you have to get the scale just right first. Clicking gets me a text prompt, and that's rarely what I want. (Omnigraffle nailed this one: push T and then click.) Click-and-drag sometimes moves me back and forth in the map, and other times moves the image in the background out from under everything. There are a fair number of bugs, too.
All of these problems also apply to using Prezi for little movies like these, plus more. I need frames in order to make a path for presenting, but then frames take ownership of the objects inside them, and I can't resize the frame without warping parts of my picture. Those frames make objects under them unclickable in certain circumstances. Using Prezi takes patience and systems. Click off "edit path," select the objects, group them, move them to the front. Edit path, click the animation button, select the objects, escape. Click off "edit path," select the group, send them to the back. Repeat.
As a movie-creation medium, Prezi has limitations, too: each frame lasts as long as the audio attached to it, or 4 seconds if there's no audio. Animations ("appear" is the only animation) occur at 4-second intervals, period. On one hand this is limiting, but on the other it leaves fewer decisions for me. The above production took about 3 hours, including recording the audio in Camtasia, editing it, and exporting each frame as a separate audio clip, in between Camtasia crashing, as it does.
Prezi's strengths are unique and crucial to some forms of expression. Its limitations can be freeing, once they're accepted. The editor quirks drive me away for a while sometimes, but I keep coming back. Overall, I do recommend Prezi-with-voiceover as a short movie creation medium. Now if only the "embed" option worked on this blog.

Saturday, April 11, 2015

Philly ETE, Design in the small

Every conference has little themes for me. Here's one from Philly ETE this week.

The opening keynote from Tom Igoe, founder of Arduino, displayed examples of unique and original projects. Many of them were built for one person - a device to tell her mother whether she's taken this pill already today, or to help his father play guitar again after a stroke. Others were created for one place: this staircase becomes a piano. Arduino enables design in the small: design completely customized for a context and an audience. A product that meets these local needs, that works; no other requirements.

Diana Larsen does something similar for agile retrospectives. She pointed out that repeating the same retrospective format shows diminishing returns. Before a 90-minute retro, Diana spends 1-3 hours preparing: gathering data, choosing a theme and an activity. She has high-scale models of how people learn, yet each learning opportunity is custom-designed for the context and people. This has great returns: when a team gains the skill of thoughtful retrospectives, all team meetings are transformed.

Individualization appeared in smaller ways on the second day: Erik mentioned the compiler flags available in the Typelevel Scala compiler, allowing people to choose their language features. Aaron Bedra customized security features to the attack at hand: a good defense is intentional, as thoughtfully built as product features. Every one different.

Finally, Rebecca Wirfs-Brock debunked the agile aversion to architecture. Architecture tasks can be scaled and timeboxed based on the size and cruciality of the project. Architecture deliverables meet current needs, work for the team, and nothing else. It's almost like designing each design process to suit the context.

This is the magic that agile development affords us: suiting each process, each technology, each step to the local context. That helps us do our best work. The tricky bit is keeping that context in mind at all levels. From my talk: the best developers are reasoning at all scales, from the code details of this function to the program to the software system to the business of why are we doing this at all. This helps us do the right work. Maintaining the connection from what we're doing to why we're doing it, we can make better decisions. And as Tom said in the closing sentence of the opening keynote, we can remember: "The things we make are less important than the relationships they enable."

This was one of the themes I took from Philly ETE. Other takeaways were tweeted. There were many great talks, and InfoQ filmed them, so I'll link them in this post later. This conference is thoughtfully put together. Recommend.

Saturday, April 4, 2015

Before balance, separation.

We don't program in a language, these days: we program in systems. I may write Clojure, with ring and schema and clj-http and clj-time and test.check and lein and jetty and many more inclusions. I may write Scala, with akka OR with scalaz and Shapeless, or a weird combination.

We never programmed in a language: we always programmed in systems. Dick Gabriel discusses the subtle paradigm shift when researchers separated the two, when the science and engineering of software parted ways in the 1990s. Before that, "language" and "system" were used interchangeably in some papers.

This is a valuable separation: now we can optimize each separately. In particular, languages are designed for extensibility, for building systems on. Both Scala and Clojure aim for this, in different ways. Let's have all the existing components on the JVM available to us, and make it clean for people to build new ones. Languages are seeds of systems, and language designers seed communities.

Take Node.js -- it is not a new language, but it might as well be: it's a new system.  The creators of Node and npm seeded a system: they left obvious necessities unimplemented, and made it easy to implement them. A community formed, and built something bigger than one small team could sustain.

When you choose a programming language, recognize that what you're choosing is a whole ecosystem and the group of people who work on it, growing and evolving that system. Do you want that to be stewarded by a strong company? or full of individual discoveries and advances, in many open source communities? to hold you to a good style or offer free combinations of multiple styles? to be welcoming to beginners, or full of serious thinkers?

[Game: Match each of these generalizations with a community: Clojure, Scala, Haskell, .NET, JVM, Ruby.]

I'm glad that people separated languages from systems, because we can build better languages and better systems from the decoupling. Both are important, and need each other. I thought about this today while my child separated her S'Mores Goldfish into marshmallow, chocolate, and graham. She used this to produce a perfectly balanced bite.



And then she ate all the marshmallows.

Friday, March 20, 2015

Gaining new superpowers

When I first understood git, after dedicating some hours to watching a video and reading long articles, it was like I finally had power over time. I can find out who changed what, and when. I can move branches to point right where I want. I can rewrite history!

Understanding a tool well enough that using it is a joy, not a pain, is like gaining a new superpower. Like I'm Batman, and I just added something new to my toolbelt. I am ready to track down latent bug-villains with git bisect! Merge problems, I will defeat you with frequent commits and regular rebasing - you are no match for me now!

What if Spiderman posted his rope spinner design online, and you downloaded the plans for your 3D printer, and suddenly you could shoot magic sticky rope at any time? You'd find a lot more uses for rope. Not like now, when it's down in the basement and all awkward to use. Use it for everyday, not-flashy things like grabbing a pencil that's out of reach, or rolling up your laptop power cable, or reaching for your coffee - ok not that! spilled everywhere. Live and learn.

Git was like that for me. Now I solve problems I didn't know I had, like "which files in this repository haven't been touched since our team took over maintenance?" or "when was this derelict function last used?" or "who would know why this test matters?"
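
For instance, here are sketches of the kinds of commands involved (the dates, tags, and file names are made up):

# When was this derelict function last changed? Pickaxe: search diffs for its name.
git log -S "derelict-function" --oneline

# Who would know why this test matters?
git blame test/core_test.clj

# Which files haven't been touched since we took over in mid-2014?
comm -23 <(git ls-files | sort) \
         <(git log --since=2014-06-01 --name-only --format= | sort -u)

# Track down a latent bug-villain: binary-search history for the bad commit.
git bisect start
git bisect bad HEAD
git bisect good v1.2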

Every new tool that I master is a new superpower. On the Mac or Linux, command-line utilities like grep and cut and uniq give me power over file manipulation - they're like the swingy grabby rope-shooter-outers. For more power, Roopa engages Splunk, which is like the Batmobile of log parsing: flashy and fast, doesn't fit in small spaces. On Windows, PowerShell is at your fingertips, after you've put some time in at the dojo. Learn what it can do, and how to look it up - superpowers expand on demand! 

Other days I'm Superman. When I grasp a new concept, or practice a new style of coding until the flow of it sinks in, then I can fly. Learning new mathy concepts, or how and when to use types or loops versus recursion or objects versus functions -- these aren't in my toolbelt. They flow from my brain to my fingertips. Like X-ray vision, I can see through this imperative task to the monad at its core.

Sometimes company policy says, "You may not download vim" or "you must use this coding style." It's like they handed me a piece of Kryptonite. 

For whatever problem I'm solving, I have choices. I can kick it down, punch it >POW!< and run away before it wakes up. Or, I can determine what superpower would best defeat it, acquire that superpower, and then WHAM! defeat it forever. Find its vulnerability, so that problems of its ilk will never trouble me again. Sometimes this means learning a tool or technique. Sometimes it means writing the tool. If I publish the tool and teach the technique, then everyone can gain the same superpower! for less work than it took me. Teamwork!

We have the ultimate superpower: gaining superpowers. The only hard part is, which ones to gain? and sometimes, how to explain this to mortals: no, I'm not going to kick this door down, I'm going to build a portal gun, and then we won't even need doors anymore.

Those hours spent learning git may have been the most productive of my life. Or maybe it was learning my first functional language. Or SQL. Or regular expressions. The combination of all of them makes my unique superhero fighting style. I can do a lot more than kick.

Tuesday, March 17, 2015

Estimates and Our Brain

Why is it so hard to estimate how long a piece of work will take?

When I estimate how long to add a feature, I break it down into tasks. Maybe I'll need to create a table in the database, add a drop-down in the GUI, connect the two with a few changes to the service calls and service back-end. I picture myself adding a table to the database. That should take about a day, including testing and deployment. And so on for the other tasks.

Maybe it works out like this:

  Create Table     = 1 day
  Service back-end = 2 days
  New drop-down    = 2 days
+ Service call     = 1 day
-------------------------
  New feature      = 6 days

It almost never happens that way, does it? The estimate above is the happy path of feature development. Each component estimate, on its own, is plausible. If there's a 70% chance that each of the four tasks works as expected, then the chance of the feature being completed on time is (0.7^4) ≈ 24%. Those aren't very good odds.

It's worse than that. Take the first task: create table. Maybe there's a 70% chance of no surprises when we get to the details of schema design. And a 70% chance the tests work, nothing bites us. And a 70% chance of no problems in deployment. Then there's only a 34% chance that Create Table will take a day. Break each of the others into three 70% pieces, and our chance of completing the feature on time is 1%. Yikes! No wonder we never get this right!
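
We can check that arithmetic at the REPL; here's a quick sketch in plain Clojure (nothing here comes from any real project):

;; chance that n independent steps all go well, each with probability p
(defn all-succeed [p n] (Math/pow p n))

(all-succeed 0.7 4)   ;=> 0.2401 -- four 70% tasks: ~24%
(all-succeed 0.7 3)   ;=> 0.343  -- three 70% subtasks: ~34% for Create Table
(all-succeed 0.7 12)  ;=> ~0.014 -- twelve 70% subtasks: ~1% for the feature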

We can picture the happy path of development. It's much harder to incorporate failure paths - how can we? We can't expect the deployment to fail because some library upgrade was incompatible with the version of Ruby in production (or whatever). The chance of each failure path is very low, so our brains approximate it to zero. For one likely happy path, there are hundreds of low-probability failure paths. All those different failures add up -- and then multiply -- until our best predictions are useless. The most likely single scenario is still the happy path and 6 days, but millions of different possible scenarios each take longer.

It's kinda like distributed computing. 99% reliability doesn't cut it when we need twenty service calls to work for the web page to load - our page will fail one attempt out of five. The more steps in our task, the more technologies involved, the worse our best estimates get.
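
The same helper from the sketch above tells the story:

(all-succeed 0.99 20) ;=> ~0.818 -- the page loads only four times out of five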

Now I don't feel bad for being wrong all the time.

What can we do about this?

1. Smooth out incidental complexity: some tasks crop up in every feature, so making them very likely to succeed helps every estimate. Continuous integration and continuous deployment spot problems early, so we can deal with them outside of any feature task. Move these ubiquitous subtasks closer to 99%.

2. Flush out essential complexity: the serious delays are usually here. When we write the schema, we notice tricky relationships with other tables. Or the data doesn't fit well in standard datatypes, or it is going to grow exponentially. The drop-down turns out to require multiple selection, but only sometimes. Sensitive data needs to be encrypted and stored in the token service -- any number of bats could fly out of this feature when we dig into it. To cope: look for these problems early. Make an initial estimate very broad, work on finding out which surprises lurk in this feature, then make a more accurate estimate.

Say, for instance, we once hit a feature a lot like this one that took 4 weeks, thanks to hidden essential complexity. Then my initial estimate is 1-4 weeks. ("What? That's too vague!" says the business.) The range establishes uncertainty. To reduce it, spend the first day designing the schema and getting the details of the user interface, and then re-estimate. Maybe the drop-down takes some detail work, but the rest looks okay: the new estimate is 8-12 days, allowing for we-don't-know-which minor snafus.

Our brains don't cope well with low-probability events. The scenario we can predict is the happy path, so that's what we estimate. Reality is almost never so predictable. Next time you make an estimate, try to think about the possible error states in the development path. When your head starts to hurt, share the pain by giving a nice, broad range.

Sunday, February 15, 2015

Microservices, Microbusinesses

To avoid duplication of effort, we can build software uniformly. Everything in one monolithic system or (if that gets unwieldy) in services that follow the same conventions, in the same language, developed with uniform processes, released at the same time, connecting to the same database.

That may avoid duplication, but it doesn't make great software. Excellence in architecture these days involves specialization -- not by task, but by purpose. Microservices divide a software system into fully independent pieces. Instead of limiting which pieces talk to which, they specify clear APIs and let both the interactions and the innards vary as needed.

A good agile team is like a microservice: centered around a common purpose. In each retro, the team looks for ways to optimize their workings, ways particular to that team's members, context, objectives.

When a service has a single purpose[1], it can focus on the problems important to that purpose. Authentication? Availability is important, and so are consistency and security. Search? Speed is crucial; repeatability is not. Each microservice can use the database suited to its priorities, and change it out when growth exceeds capacity. The business logic at the core of each service is optimized for efficient and clear execution.

Independence has a cost. Each service is a citizen in the ecosystem. That means accepting requests that come in, with backwards compatibility. It means sending requests and output in the format other services require, not overloading services it depends on, and handling failure of any downstream service. Basically, everybody is a responsible adult.

That's a lot of overhead and glue code. Every service has to do translation from input to its internal format, and then to whatever output format someone else requires. Error handling, caching or throttling, failover and load balancing and monitoring, contract testing, maintaining multiple interface versions, database interaction details. Most of the code is glue, layers of glue protecting a small core of business logic. These strong boundaries allow healthy relationships with other services, including new interactions that weren't designed into the architecture from the beginning. For all this work, we get freedom on the inside. We get the opportunity to exceed expectations, rather than straining standardized languages and tools to meet requirements.
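
Here's a tiny sketch of that shape in Clojure (every name is hypothetical, not from any real service):

;; glue: translate input, enforce the contract, call the core, translate output
(defn parse-order     [raw]   (select-keys raw [:id :quantity]))
(defn valid-order?    [order] (pos? (:quantity order 0)))
(defn price-order     [order] (assoc order :total (* 5 (:quantity order)))) ; the business core
(defn render-response [order] {:status 200 :body order})
(defn render-error    [msg]   {:status 400 :body {:error msg}})

(defn handle-order-request [raw]
  (let [order (parse-order raw)]
    (if (valid-order? order)
      (render-response (price-order order))
      (render-error "invalid order"))))

One function of business logic, five functions of glue.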

Do the teams in your company have the opportunity to optimize in the same way?

I've worked multiple places where management decreed that all teams would track work in JIRA, with umpteen required fields, because they like how progress tracking rolls up. They can see high-level numbers or drill down into each team's work. This is great for legibility.[3] All the work fits into nice little boxes that add together. However, what suits the organizational hierarchy might not suit the work of an individual team.

Managers love to have a standard programming language, standard testing tools with reports that roll up, standard practices. This gives them the feeling that they can move people from team to team with less impact to performance. If that feeling is accurate, it's only because the level of performance is constrained everywhere.

Like software components, teams have their own problems to solve. Communication between individuals matters most, so optimize for that[2]. Given the freedom to vary practices and tools, a healthy agile team gets better and better. The outcome of a given day's work is not only a task completed: it includes ideas about how to do this better next time.

Tom Marsh says, "Make everyone in your organisation believe that they are working in a business unit about 10 people big." A team can learn from decisions and make better ones and spiral upward into exceptional performance (plus innovation), when internal consensus is enough to implement a change. Like a microbusiness.

Still, a team exists as a citizen inside a larger organization. There are interfaces to fulfill. Management really does need to know about progress. Outward collaboration is essential. We can do this the same way our code does: with glue. Glue made of people. One team member, taking the responsibility of translating cards on the wall into JIRA, can free the team to optimize communication while filling management's strongest needs.

Management defines an API. Encapsulate the inner workings of the team, and expose an interface that makes sense outside. By all means, provide a reference implementation: "Other teams use Target Process to track work." Have default recommendations: "We use Clojure here unless there's some better solution. SQL Server is our first choice database for these reasons." Give teams a strong process and technology stack to start from, and let them innovate from there.

On a healthy team, people are accomplishing something together, and that's motivating. When we feel agency to share and try out ideas, when the external organization is only encouraging, then a team can gel, excel and innovate. This cohesive team culture (plus pairing) brings new members up to speed faster than any familiarity with tooling.

As in microservices, it takes work to fulfill external obligations and allow internal optimization. This duplication is not waste. Sometimes a little overhead unleashes all the potential.


[1] The "micro" in microservices is a misnomer: the only size restriction on a microservice is Conway's Law: each is maintained by a single team, 10 or fewer people. A team may maintain one or more system components, and one team takes full responsibility for the functioning of the piece of software.

[2] Teams work best when each member connects with each other member (NYTimes)

[3] Seeing Like a State points out how legibility benefits the people in charge, usually at the detriment of the people on the ground. Sacrificing local utility for upward legibility is ... well, it's efficiency over innovation.

Tuesday, February 10, 2015

Fun with Optional Typing: cheap mocking


For unit tests, it's handy to mock out side-effecting functions so they don't slow down tests.[1] Clojure has an easy way to do this: use with-redefs to override function definitions, and then any code within the with-redefs block uses those definitions instead.
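
A minimal sketch of the mechanism (the function here is hypothetical):

(defn slow-lookup [id] (Thread/sleep 5000) {:id id})        ; imagine this hits a database

(with-redefs [slow-lookup (fn [id] {:id id :stubbed true})] ; fast fake, for tests only
  (slow-lookup 42))
;=> {:id 42, :stubbed true}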

To verify the input of the side-effecting function, I can override it with something that throws an exception if the input is wrong.[2] A quick way to do that is to check the input against a schema.

That turns out to be kinda pretty. For instance, if I need to override this function fetch-orders, I can enforce that it receives exactly the starting-date I expect, and a second argument that is not specified precisely, but still meets a certain condition.

;; assumes (:require [schema.core :as s]) from prismatic/schema
(with-redefs [fetch-orders (s/fn [s :- (s/eq starting-date)
                                  e :- AtLeastAnHourAgo]
                             [order])]
  ... )

Here, the s/fn macro creates a function that (when validation is activated[3]) checks its input against the schemas specified after the bird-face operator. The "equals" schema-creating function is built-in; the other I created myself with a descriptive name. The overriding function is declarative, no conditionals or explicit throwing or saving mutable state for later.
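
A schema like AtLeastAnHourAgo is straightforward to build with s/pred; here's one possible definition using clj-time (a sketch, not necessarily the original):

(require '[schema.core :as s]
         '[clj-time.core :as time])

(def AtLeastAnHourAgo
  (s/pred (fn [t] (time/before? t (time/minus (time/now) (time/hours 1))))
          'at-least-an-hour-ago))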

If I have a bug that switches the order of the inputs, this test fails. The exception that comes out isn't pretty.
expected: (= expected-result (apply function-under-test input))
  actual: clojure.lang.ExceptionInfo: Input to fn3181 does not match schema: [(named (not (= #<DateTime 2014-12-31T23:55:00.000Z> a-org.joda.time.DateTime)) s) nil]
Schema isn't there yet on pretty errors. But hey, my test reads cleanly, it was simple to write, and I didn't bring in a mocking framework.

See the full code (in the literate-test sort of style I'm experimenting with) on github.




[1] for the record, I much prefer writing code that's a pipeline, so that I only have to unit-test data-in, data-out functions. Then side-effecting functions are only tested in integration tests, not mocked at all. But this was someone else's code I was adding tests around.

[2] Another way to check the output is to have the override put its input into an atom, then check what happened during the assertion portion of the test. Sometimes that is cleaner.
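
In that style, the assertion might look like this (a sketch; expected-end-time is hypothetical, and is comes from clojure.test):

(let [calls (atom [])]
  (with-redefs [fetch-orders (fn [& args] (swap! calls conj (vec args)) [order])]
    (function-under-test input))
  (is (= [[starting-date expected-end-time]] @calls)))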

[3] Don't forget to (use-fixtures :once schema.test/validate-schemas)