Friday, September 19, 2014

Talk: Concurrency Options on the JVM

or, "Everything You Never Wanted to Know About java.util.concurrent.ExecutorService But Needed To"

Here's the prezi, from StrangeLoop, 18 Sep 2014:


A Clojure library for choosing your own threadpools:

Thursday, September 4, 2014

TDD with generative testing: an example in Ruby

Say I'm in retail, and the marketing team has an app that helps them evaluate the sales of various items. I'm working on a web service that, given an item, tells them how many purchases were influenced by various advertising channels: the mobile app, web ads, and spam email.

The service will look up item purchase records from one system, then access the various marketing systems that know about ad clicks, mobile app usage, and email sends. It returns how many purchases were influenced by each, and uses some magic formula to calculate the relevance of each channel to this item's sales.

My goal is to test this thoroughly at the API level. I can totally write an example-based test for this, with a nice happy-path input and hard-coded expected output. And then, I need to test edge cases and error cases. When no channels have impact; when they all have the same impact; when one fails; when they time out; et cetera et cetera.

Instead, I want to write a few generative tests. What might they look like?

When I'm using test-driven development with generative testing, I start at the outside. What can I say about the output of this service? For each channel, the number of influenced purchases can't be bigger than the total purchases. And the relevance number should be between 0 and 100, inclusive. I can assert that in rspec.

expect(influenced_purchases).to be <= total_purchases
expect(relevance).to be >= 0
expect(relevance).to be <= 100

These kinds of assertions are called "properties". Here, "property" has NOTHING TO DO with a field on a class. In this context, a property is something that is always true for specified circumstances. The generated input will specify the circumstances.

To test this, I'll need to run some input through my service and then make these checks on each output. The service needs some way to query the purchase and marketing systems, and I'm not going to make real calls a hundred times. Therefore my service will use adapters to access the outside world, and test adapters will serve up data.

result =, ...)

result.channels.each do |(channel, influence)|
  expect(influence.influenced_purchases).to be <= total_purchases
  expect(influence.relevance).to be >= 0
  expect(influence.relevance).to be <= 100
end
(relatively complete code sample here.)
To do this, I need purchases, events on each channel, and an item. My test needs to generate these 100 times, and then do the assertions 100 times. I can use rantly for this. The test looks like this:

it "returns a reasonable amount of influence" do
  property_of {
    ... return an array [purchases, channel_events, item] ...
  }.check do |(purchases, channel_events, item)|
    total_purchases = purchases.size
    result =, ...)

    result.channels.each do |(channel, influence)|
      expect(influence.influenced_purchases).to be <= total_purchases
      expect(influence.relevance).to be >= 0
      expect(influence.relevance).to be <= 100
    end
  end
end
(Writing generators needs a post of its own.)
Rantly will call the property_of block, and pass its result into the check block, 100 times or until it finds a failure. Its objective is to disprove the property (which we assert is true for all input) by finding an input value that makes the assertions fail. If it does, it prints out that input value, so you can figure out what's failing.

It does more than that, actually: it attempts to find the simplest input that makes the property fail. This makes finding the problem easier. It also helps me with TDD, because it boils this general test into the simplest case, the same place I might have started with traditional TDD.
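The generate-check-shrink loop that rantly performs can be sketched in a few lines of plain Ruby. This toy runner is an illustrative assumption: it only handles integer inputs and uses a naive step-toward-zero shrink.

```ruby
# Minimal property runner: generate random inputs, check the property,
# and on failure shrink the input toward the simplest failing case.
def check_property(tries: 100)
  tries.times do
    input = rand(-1000..1000)
    next if yield(input)          # property held; try another input
    # Shrink: step the failing input toward zero while it still fails.
    candidate = input
    loop do
      smaller = candidate - (candidate <=> 0)
      break if smaller == candidate || yield(smaller)
      candidate = smaller
    end
    return candidate              # the simplest failing input found
  end
  nil                             # no counterexample in 100 tries
end

check_property { |n| n + 0 == n }  # => nil, the property holds
check_property { |n| n < 50 }      # shrinks any failing input down to 50
```

Whatever failing input the runner stumbles on, it reports the boundary case (here, 50), which is exactly the simple case traditional TDD would have started from.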

In my TDD cycle, I make this test compile. Then it fails, and rantly reports the simplest case: no purchases, no events, any old item. After I make that test pass, rantly reports another simple input case that fails. Make that work. Repeat.

Once this test passes, all I have is a stub implementation. Now what? It's time to add properties gradually. Usually at this point I sit back and think for a while. Compared to example-based TDD, generative testing is a lot more thinking and less typing. How can I shrink the boundaries?

It's time for another post on relative properties.

TDD is Dead! Long Live TDD!

Imagine that you're writing a web service. It is implemented with a bunch of classes. Pretend this circle represents your service, and the shapes inside it are classes.

The way I learned test-driven development[1], we wrote itty-bitty tests around every itty-bitty method in each class. Then maybe a few acceptance tests around the outside. This was supposed to help us drive design, and it was supposed to give us safety in refactoring. These automated tests would give us assurance, and make changing the code easier.

It doesn't work out that way. Tests don't enable change. Tests prevent change! In particular, when I want to refactor the internals of my service, any class I change means umpteen test changes. And all these tests include expected == actual, and I've gotta figure out the new magic values that should pass. No fun! These method- or class-level tests are like bars in a cage preventing refactoring.

Tests prevent change, and there's a place I want to prevent unintentional change: it's at the service API level. At the outside, where other systems interact with this service, where a change in behavior could be a nasty surprise for some other team. Ideally, that's where I want to put my automated tests.

Whoa, that is an ugly cage. At the service level, there are often many possible input scenarios. Testing every single one of them is painful. We probably can't even think of every relevant combination and all the various edge cases. Much easier to zoom in to the class level and test one edge case at a time. Besides, even if we did write the dozens of tests to cover all the possibilities, what happens when the requirements change? Then we have great big tests with long expected == actual assertions, and we have to rework all of those. Bars in a cage, indeed.

Is TDD dead? Maybe it's time to crown a new TDD. There's a style of testing that addresses both of the difficulties in API-level testing: it finds all the scenarios and tames the profusion of hard-coded expectations. It's called generative testing.[2]

Generative testing says, "I'm not gonna think of all the possible scenarios. I'm gonna write code that does it for me." We write generators, which are objects that know how to produce random valid instances of various input types. The testing framework uses these to produce a hundred different random input scenarios, and runs all of them through the test.

Generative testing says, "I'm not gonna hard-code the output. I'm gonna make sure whatever comes out is good enough." We can't hard-code the output when we don't know what the input is going to be. Instead, assertions are based on the relationship between the output and input. Sometimes we can't be perfectly specific because we refuse to duplicate the code under test. In these cases we can establish boundaries around the output. Maybe, it should be between these values. It should go down as this input value goes up. It should never return more items than requested, that kind of thing.

With these, a few tests can cover many scenarios. Fortify with a few hard-coded examples if needed, and now half a dozen tests at the API level cover all the combinations of all the edge cases, as well as the happy paths.

This doesn't preclude small tests that drive our class design. Use them, and then delete them. This doesn't preclude example tests for documentation. Example-based, expected == actual tests, are stories, and people think in stories. Give them what they want, and give the computer what it wants: lots of juicy tests in one.

There are obstacles to TDD in this style. It's way harder: there's more thinking and less typing, and most of that thinking goes into finding the assertions that draw a boundary around the acceptable output. That's the hardest part, and it's also the best part, because the real benefit of TDD is that it stops you from coding a solution to a problem you don't understand.

Look for more posts on this topic, to go along with my talks on it. See also my video about Property Based Testing in Scala.

[1] The TDD I learned, at the itty-bitty level with mock all the things, was wrong. It isn't what Kent Beck espoused. But it's the easiest.
[2] Or property-based testing, but that has NOTHING to do with properties on a class, so that name confuses people. Aside from that confusion, I prefer "property-based", which speaks about WHY we do this testing, over "generative", which speaks about how.

Sunday, August 31, 2014

The power of embedded developers

Meet my friend Sean.

Sean with a big smile

Sean is one of the most powerful developers I know. Not best, not brilliantest, but most powerful because: he provides more business value than some 50-person teams.

What is the job title of such a powerful developer? It isn't "ninja" (although maybe it should be). It's "Accounting Manager." WAT.

Program: runs on the author's machine when the author runs it.
Software: runs on any machine at any time, serving many people's purposes.

Here's the thing: Sean can code -- he was a top developer when we worked together fifteen years ago -- but he no longer writes software. He writes programs instead.

Dual wielding

When it comes to business-value power, domain knowledge adds potential; technical knowledge adds potential; and the combination is drastically better than each.
business knowledge, short bar. Technical knowledge, short bar. Both, very tall bar. Y axis, Value.
Why? In Sean's words, "because you can actually question what you're doing." While breaking a process down in order to encode it as a program, his knowledge of accounting and the business needs lets him distinguish between what's important or not, and how relevant a particular case is.

Typically, when software impacts a business process flow, it's because the designers and developers didn't understand. It's a negative impact. When the designer/developer is on the business team, process changes emerge from the program design. Codifying well-understood activities reveals opportunities for synergy between businesspeople and the program. It's a positive impact.

Sean can spot problems, observe trends, and improve processes. Software created from static requirements can only ossify processes.

How he does it

When Sean does reporting, that means: finding out what needs done and why, learning all the databases that have the information and acquiring it, joining millions of rows and performing whatever accounting business logic, putting this into a useful Excel spreadsheet, and distributing that to people - in a couple hours per month.

Those few hours of "doing reporting" each month are possible because the rest of his time is spent thinking, investigating, analyzing, troubleshooting, and programming. Given this freedom, Sean adds more business value every month.

Gardening: gradual improvement of a program. Adding error handling where errors occur, visibility where it's needed, performance improvements where it counts. Gardening addresses the pain points after they're felt.
When Sean does reporting, that means: automating every repetitive step, scheduling his programs, monitoring them, and gardening them. What he already does becomes trivial, and then he's free to do more and more useful work. He and his programs are not separable; they're both part of getting the job done. And when that job changes, he's ready.

a person and a box inside a wall

This is an example of the code+coder ecosystem. Sean's programs aren't useful without him around to run them and change them, and Sean wouldn't get much done without the stockpile of code he's written over the years. The benefit of this system is: flexibility! When the real accountants need more or different information, Sean makes a change in 5 minutes to 3 days.  Compare this to weeks when a small feature is requested of the IT department or software vendor.

There are good reasons the software vendor takes so long. Software should be bulletproof. It's backed by fixed, negotiated, contracted requirements. It has security, authorization, failover, automated testing. Coordination, contracts, APIs. The difficulty multiplies, and software is 1000x more work to write than a program.
software system: several boxes inside many walls, and lots of people, all inside another wall

Sean doesn't need all that, because he is the monitoring, the security, the failover. He tests with his eyeballs and his business knowledge as the results appear. As a single point of responsibility, every problem that Sean hits is a learning opportunity, and he has the ability and desire to implement the next program, or the next change, better.

The advantage of software over programs is: stability. The disadvantage is: stability. When a particular program is broadly useful and has hardly changed since Sean implemented it, that's a good candidate to transfer over to IT as a feature request.

Who can do this?

A lot of people can be this powerful. Sean is a developer who learned the business. I know other business-types who have learned to program. It's wicked effective when it happens. It takes free time, a lot of learning, great management, and a contact network for support.

Having started as a developer with the major software vendor, Sean can talk to DBAs and sysadmins and programmers. His management had to push to get him access to some of these people, and now those communication channels are super valuable both ways. When there's a problem, the DBAs call him to find out about the significance of various data.

the code-coder ecosystem, underneath an umbrella with a person in it to represent the manager

Sean's management values his work, values him, and they work to cut through the red tape. Accounting Managers don't usually have access to unix machines, production Oracle databases, and automated emailers. Clever people do clever things, and sometimes that surprises the corporate security departments.

The breadth of knowledge, through the business domain to finance to programs to databases to the specific software his programs interact with, doesn't come for free. And it doesn't stay for free. There are an infinite number of tools that might help; one person can't know them all, but it helps to know what's possible. When Sean took over some other person's Access databases, it didn't take him long to knock them down from several gigs to 250MB, because he knew what was possible. Knowing the questions to ask is more important than knowing the answers.

This knowledge doesn't stay current for free. Most of Sean's time is spent in meetings, keeping up on what is changing in the many systems he and his programs interact with. This is the limiting factor in growing his power.

One thing you can't do to people like Sean: budget their time. When a person is production support and development and maintenance and design and interface to many departments, their time is unpredictable. Sometimes Sean will implement a last minute change to compensate for a bug introduced in some other software that can't be fixed for a few weeks, and he'll work crazy hours for a couple days. Other times, he can chip away at niggling errors, gardening his programs so that bugs like that will be caught earlier or easier to fix in the future. It isn't predictable. Sean is flexible, and his management recognizes this. Slack gives us freedom to get better at our jobs.

What can stop us?

Motivation comes for free when people feel valued, but it's easy to kill. One way to kill it is cramming our time so full that we don't have time to improve how we work. Another is red tape: jumping through hoops to get access to data, to get programs installed, to get questions answered. These say, "your ideas are not valued; now shut up and do the same thing every day for the rest of your life."

There's HR rules, which say Sean can't get promoted or get a raise because Finance doesn't have a job title for "Powerful Accountant-Programmer Combination."

There's budgets: Sean could double his power if he had a person to train. Someone to split the meetings with. And someone to carry on his legacy when he moves on to another amazing thing. Right now he's too essential to move, and that's another de-motivation.

The Big Problem

And when Sean does move on? In his case, the code-coder ecosystem would collapse. No one else knows his code, so they can't re-use it. Maybe they could run it (maybe), but they couldn't change it. They couldn't monitor it, support it, or recognize problems when they occur sneakily. Sean is working in a dangerous silo. And he just bought a fast car.

Sean's new Honda S2000

He doesn't want to be in a silo; a team with one or two more people would be ideal. This kind of ninja doesn't have to work alone. There is a balance between programs and software. For many business departments, that balance may be far closer to the "program" side than software developers like me care to admit.

Monday, August 25, 2014

May I please have inlining for small dependencies?

Franklin Webber[1] went on a quest to find out whether utility code was better or worse than core code in open source projects. Utility code is defined as anything necessary but not directly related to the application's mission. Franklin found that overall quality of utility code was higher, perhaps because this code changes less. He also found a lot of repetition among projects, many people doing the same thing: a function to find the home directory in several possible env vars, for instance, or HashWithIndifferentAccess, which doesn't care whether you pass the key as string or symbol (keyword in Clojure). He asks, why not break these out into tiny little gems?

Dependency management is an unsolved problem in our industry. We want single-responsibility, and we want re-use. Reuse means separately released units. But then when we compile those into one unit, transitive dependencies conflict. Only one version of a particular library can make it into the final executable. In the example below, the Pasta library might not be compatible with v2.1.0 of Stir, but that's what it'll be calling after it's compiled into Dinner.

Programs like WebLogic went to crazy lengths to provide different classloaders, so that they could run apps in the same process without this dependency conflict. That's a mess!

We could say all libraries should be backwards compatible. Compatibility is a great thing, but it's also limiting: it prevents API improvements, and makes code ugly. There's more than API concerns as well: if the home-directory-finding function adds a new place to look, its behavior can change, surprising existing systems. We want stability and progress, in different places.

With these itty-bitty libraries that Franklin proposes we break out, compatibility is a problem. So instead of dependency hell, we have chosen duplication, copying them into various projects.

What if, for little ones, we could inline the dependency? Like #include in C: copy in the version we want, and link whatever calls it to that specific implementation. Then other code could depend on other versions, inlining those separately. These transitive dependencies would then not be available at the top level.
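One way to approximate this inlining in Ruby today is to vendor the utility under each consumer's own namespace, so each caller links to its own copy and neither version is visible at the top level. The module names below borrow the Dinner/Pasta/Stir example; the code is purely illustrative.

```ruby
# Each consumer copies the microlibrary into its own namespace,
# so two versions coexist without conflicting.

module Pasta
  module Vendored
    # Inlined copy of a hypothetical Stir utility, v1-era behavior
    module Stir
      def self.stir(ingredients)
        ingredients.join(" + ")
      end
    end
  end
end

module Dinner
  module Vendored
    # Inlined copy of Stir with later, incompatible behavior
    module Stir
      def self.stir(ingredients)
        ingredients.join(" & ")
      end
    end
  end
end

Pasta::Vendored::Stir.stir(%w[noodles sauce])  # => "noodles + sauce"
Dinner::Vendored::Stir.stir(%w[salad bread])   # => "salad & bread"
```

Some gems do exactly this by hand today; the wish in this post is for the packaging tool to automate it.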

Seems like this could help with dependency hell. We would then be free to incorporate tiny utility libraries, individually, without concern about conflicting versions later. I wonder if it's something that could be built into Bundler, or Leiningen?

This isn't the right solution for all our dependency problems: not large libraries, and not external-connection clients. Yet if we had this choice, if a library could flag itself as suitable for inlining, then microlibraries become practical, and that could be good practice.

[1] Talk from Steel City Ruby 2014

Friday, August 22, 2014

Quick reference: monads and test.check generators

Combine monads with test.check generators to build them up out of smaller generators with dependencies:

(require '[clojure.test.check.generators :as gen])
(require '[clojure.algo.monads :as m])
(m/defmonad gen-m 
  [m-bind gen/bind 
   m-result gen/return])

(def vector-and-elem
  (m/domonad gen-m
    [n (gen/choose 1 10)
     v (gen/vector gen/int n)
     e (gen/elements v)]
    [v, e]))

(gen/sample vector-and-elem)
;; ([[0 0] 0]
;;  [[0 -1 1 0 -1 0 -1 1] 0]
;;  [[1 1 3 3 3 -1 0 -2 2] 3]
;;  [[8 4] 8] ...)

The generator here chooses a vector length, uses that to generate a vector, uses that to pick an element inside the vector, and then returns a tuple of the vector and the element. The syntax is cleaner than a lot of gen/bind and gen/fmap calls. It looks a lot like ScalaCheck.

I suspect we could define m-zero and m-plus in the monad to get :when conditions as well.

I'm working on a longer post that explains what's going on here and why we would do it.

Tuesday, August 5, 2014

STLCodeCast, and a bit of Functional Programming in Java 8

In this live video interview with screencasting, I show a bit of functional programming style in Java 8. I also rant about whether math is necessary to be a good programmer, and predict how organizational structures are changing to support learning.

STLCodeCast Episode 19

My Pluralsight course on FP with Java