Monday, October 13, 2014

Repeating commands in bash: per line, per word, and xargs

In bash (the default shell on Mac), today we wanted to execute a command over each line of some output, separately. We wanted to grep (aka search) for some text in a file, then count the characters in each matching line. For future reference...

Perform a command repeatedly, once per line of input:

grep "the" main.log | while read line; do echo $line | wc -c ; done

Here, grep searches the file for lines containing the search phrase, and each line is piped into the while loop, stored each time in variable line. Inside the loop, I used echo to print each line, in order to pipe it as input to word-count. The -c option says "print the number of characters in the input." Processing each line separately like this prints a series of numbers; each is a count of characters in a line. (here's my input file in case you want to try it)
      41
      22
      38
      24
      39
      25
      23
      46
      25
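A side note on those numbers: wc -c counts every byte it receives, and echo appends a newline to its output, so each count is the visible line length plus one. A quick check:

```shell
# echo adds a trailing newline; printf does not.
printf 'hello' | wc -c    # 5 bytes
echo 'hello' | wc -c      # 6 bytes: 5 characters plus the newline
```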
That's good for a histogram, otherwise very boring. Bonus: print the number of characters and then the line, for added context:

grep "the" main.log | while read line; do echo $(echo $line | wc -c) $line ; done

Here, the $(...) construct says "execute this stuff in here and make its output be part of the command line." My command line starts with another echo, and ends with $line, so that the number of characters becomes just part of the output. 
41 All the breath and the bloom of the year
22 In the bag of one bee
38 All the wonder and wealth of the mine
24 In the heart of one gem
39 In the core of one pearl all the shade
25 And the shine of the sea
23 And how far above them
46 Brightest truth, purest trust in the universe
25 In the kiss of one girl.
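Those per-line counts really do feed a histogram: sort them, then tally duplicates with uniq -c. Here's a self-contained sketch, with a few sample lines standing in for main.log:

```shell
# Count characters per line, then tally how often each count appears.
# uniq -c only groups adjacent lines, hence the sort -n first.
printf 'In the bag of one bee\nAnd the shine of the sea\nAnd how far above them\n' |
  while read line; do echo "$line" | wc -c; done |
  sort -n | uniq -c
```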
This while loop strategy contrasts with other ways of repeating a command at a bash prompt. If I wanted to count the characters in every word, I'd use for.

Perform a command repeatedly, once per whitespace-separated word of input:

for word in $(grep "the" main.log); do echo -n $word | wc -c; done

Here, I named the loop variable word. The $(...) construct executes the grep, and all the lines in main.log containing "the" become input to the for loop. This input gets broken up at every whitespace character, so the loop variable is populated with each word. Then each word is printed by echo; the -n option says "don't put a newline at the end" (because echo does that by default), and the output of echo gets piped into word-count.
This prints a lot of numbers, which are character counts of each word. I can ask, what's the longest word in the file?

for word in $(grep "the" main.log); do echo $(echo -n $word | wc -c) $word; done | sort -n | tail -1

Here, I've used the echo-within-echo trick again to print both the character count and the word. Then I took all the output of the for loop and sent it to sort. This puts it in numeric order, not alphabetical, because I passed the -n flag. Finally, tail -1 suppresses everything but the last line. Since each line starts with its character count, the last line in numeric order shows the longest word.

9 Brightest
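The -n on sort matters here. The default sort is lexicographic, where "10" comes before "9", so a plain sort would hand tail -1 the wrong line. A quick demonstration with bare numbers:

```shell
printf '10\n9\n2\n' | sort | tail -1     # prints 9  (lexicographic order: 10, 2, 9)
printf '10\n9\n2\n' | sort -n | tail -1  # prints 10 (numeric order: 2, 9, 10)
```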

If that's scary, well, take a moment to appreciate the care modern programming language authors put into usability. Then reflect that this one line integrates six completely separate programs.

These loops, which provide one line of input to each command execution, contrast with executing a command repeatedly with different arguments. For that, it's xargs.

Perform a command repeatedly, once per line, as an argument:

Previously I've counted characters in piped input. Word-count can also take a filename as an argument, and then it counts the contents of the file. If what I have are filenames, I can pass them to word-count one at a time.

Count the characters in each of the three smallest files in the current directory, one at a time:

ls -Srp | grep -v '/$' | head -3 | xargs -I WORD wc -c WORD

Here, ls gives me filenames, all the ones in my current directory -- including directories, which word-count won't like. The -p option says "print a / at the end of the name of each directory." Then grep eliminates the directories from the list, because I told it to exclude (that's the -v flag) lines that end in slash: in the regular expression '/$', the slash is itself (no special meaning) and $ means "end of the line." Meanwhile, ls sorts the files by size because I passed it -S. Normally it sorts them biggest-first, but -r says "reverse that order." Now the smallest files are first. That's useful because head -3 lets only the first three of those lines through. In my world, the three smallest files are main.log, carrot2.txt, and carrot.txt.
Take those three names, and pipe them to xargs. The purpose of xargs is to take input and stick it on the end of a command line. But -I tells it to repeat the command for each line in the input, separately. And -I also gives xargs (essentially) a loop variable; -I WORD declares WORD as the loop variable, and its value gets substituted in the command.

In effect, this does:
wc -c main.log
wc -c carrot2.txt
wc -c carrot.txt

My output is:
      14 main.log
      98 carrot2.txt
     394 carrot.txt
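If you want to try this without matching my directory, here's a disposable sandbox; the file names and sizes are made up to mirror the output above:

```shell
# Build a scratch directory with three small files and one subdirectory.
dir=$(mktemp -d)
cd "$dir"
head -c 14  /dev/zero > main.log      # 14 bytes
head -c 98  /dev/zero > carrot2.txt   # 98 bytes
head -c 394 /dev/zero > carrot.txt    # 394 bytes
mkdir subdir                          # filtered out by grep -v '/$'
ls -Srp | grep -v '/$' | head -3 | xargs -I WORD wc -c WORD
```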
This style contrasts with using xargs to execute a command once, with all of the piped input as arguments. Word-count can also accept a list of filenames in its arguments, and then it counts the characters in each. The previous task is then simpler:

ls -Srp | grep -v '/$' | head -3 | xargs wc -c

      14 main.log
      98 carrot2.txt
     394 carrot.txt
     506 total

As a bonus, word-count gives us the total characters in all counted files. This is the same as typing
wc -c main.log carrot2.txt carrot.txt

Remember that xargs likes to execute a command once, but you can make it run the thing repeatedly using -I.
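The contrast in one place, with throwaway input: without -I, xargs runs the command once with everything as arguments; with -I, once per line.

```shell
# One invocation, all input as arguments:
printf 'a\nbb\nccc\n' | xargs echo            # prints: a bb ccc
# One invocation per input line:
printf 'a\nbb\nccc\n' | xargs -I X echo "[X]"
# prints: [a]
#         [bb]
#         [ccc]
```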

This ends today's edition of Random Unix Tricks. Tonight you can dream about the differences between iterating over lines of input vs words of input vs arguments. And you can wake up knowing that long long ago, in a galaxy still within reach, integration of many small components was fun (and cryptic).

Monday, October 6, 2014

A monadically built generator

Today, I wanted to write a post about code that sorts a vector of maps. But I can't write that without a test, now can I? And not just any test -- a property-based test! I want to be sure my function works all the time, for all valid input. Also, I don't want to come up with representative examples - that's too much work.[1]

The function under test is a custom-sort function, which accepts a bunch of rows (represented as a sequence of hashmaps) and a sequence of instructions: "sort by the value of A, descending; then the value of B, ascending."

To test with all valid input, I must write code to generate all valid input. I need a vector of maps. The maps should have all the same keys. Some of those keys will be sort instructions. The values in the map can be anything Comparable: strings and ints, for instance. Each instruction also includes a direction, ascending or descending. That's a lot to put together.

For property-based (or "generative") tests in Clojure, I'll use test.check. To test a property, I must write a generator that produces input. How do I even start to create a generator this complicated?

Bit by bit! Start with the keys for the maps. Test.check has a generator for them:

(require '[clojure.test.check.generators :as gen])
gen/keyword ;; any valid clojure keyword.

The zeroth secret: I dug around in the source to find useful generators. If it seems like I'm pulling these out of my butt, well, this is what I ate.

Next I need multiple keywords, so add in gen/vector. It's a function that takes a generator as an argument, and uses that repeatedly to create each element, producing a vector.

(gen/vector gen/keyword) ;; between 0 and some keywords

The first secret: generator composition. Put two together, get a better one out.

Since I want a set of keys, not a vector, it's time for gen/fmap ("functor map," as opposed to hashmap). That takes a function to run on each produced value before giving it to me, and its source generator.

(gen/fmap set (gen/vector gen/keyword)) ;; set of 0 or more keywords

It wouldn't do for that set to be empty; my function requires at least 1 instruction, which means at least one keyword. gen/such-that narrows the possible output of the generator. It takes a predicate and a source generator:

(gen/such-that seq (gen/fmap set (gen/vector gen/keyword)))

If you're not a seasoned Clojure dev: seq is idiomatic for "not empty." Historical reasons.

This is enough to give me a set of keys, but it's confusing, so I'm going to pull some of it out into a named function.

(defn non-empty-set [elem-g]
  (gen/such-that seq (gen/fmap set (gen/vector elem-g))))

Here's the generator so far:
(def maps-and-sort-instructions
  (let [set-of-keys (non-empty-set gen/keyword)]
    set-of-keys))

See what it gives me:
=> (gen/sample maps-and-sort-instructions)
   ;; sample makes the generator produce ten values
(#{:Os} #{:? :f_Q_:_kpY:+:518} #{:? :-kZ:9_:_?Ok:JS?F} ....)

Ew. Nasty keywords I never would have come up with. But hey, they're sets and they're not empty.

To get maps, I need gen/hash-map. It wants keys, plus generators that produce values; from these it produces maps with a consistent structure, just like I want. It looks like:

(gen/hash-map :one-key gen-of-value :two-key gen-of-this-other-value ...)

The value for each key could be anything Comparable really; I'll settle for strings or ints. Later I can add more to this list. There's gen/string and gen/int for those; I can choose among them with gen/elements.

(gen/elements [gen/string gen/int]) ;; one of the values in the input vector

I have now created a generator of generators. gen/elements is good for selecting randomly among a known sequence of values. I need a quantity of these value generators, the same quantity as I have keys.

(gen/vector (gen/elements [gen/string gen/int]) (count #??#)) 
  ;; gen/vector accepts an optional length

Well, crap. Now I have a dependency on what I already generated. Test.check alone doesn't make this easy - you can do it, with some ugly use of gen/bind. Monads to the rescue! With a little plumbing, I can bring in algo.monad, and make the value produced from each generator available to the ones declared after it.

The second secret: monads let generators depend on each others' output.

(require '[clojure.algo.monads :as m])
(m/defmonad gen-m
    [m-bind gen/bind
     m-result gen/return])

(def maps-and-sort-instructions
 (m/domonad gen-m
   [set-of-keys (non-empty-set gen/keyword)
    set-of-value-gens (gen/vector  
                       (gen/elements [gen/string gen/int]) 
                       (count set-of-keys))]
   [set-of-keys, set-of-value-gens]))

I don't recommend sampling this; generators don't have nice toStrings. It's time to put those keys and value-generators together, and pass them to gen/hash-map:

(apply gen/hash-map (mapcat vector set-of-keys set-of-value-gens))
  ;; intersperse keys and value-gens, then pass them to gen/hash-map

That's a generator of maps. We need 0 or more maps, so here comes gen/vector again:

(def maps-and-sort-instructions
 (m/domonad gen-m
  [set-of-keys (non-empty-set gen/keyword)
   set-of-value-gens (gen/vector  
                      (gen/elements [gen/string gen/int]) 
                      (count set-of-keys))
   some-maps (gen/vector 
              (apply gen/hash-map 
               (mapcat vector set-of-keys set-of-value-gens)))]
  some-maps))

This is worth sampling a few times:
=> (gen/sample maps-and-sort-instructions 3) ;; produce 3 values
([] [] [{:!6!:t4 "à$", :*B 2, :K0:R*Hw:g:4!? ""}])

It randomly produced two empty vectors first, which is fine. It's valid to sort 0 maps. If I run that sample more, I'll see vectors with more maps in them.
Halfway there! Now for the instructions. Start with a subset of the map keys - there's no subset generator, but I can build one using the non-empty-set defined earlier. I want a non-empty-set of elements from my set-of-keys.

(non-empty-set (gen/elements set-of-keys)) 
  ;; some-keys: 1 or more keys. 

To pair these instruction keys with directions, I'll generate the right number of directions. Generating a direction means choosing between :ascending or :descending. This is a smaller generator that I can define outside:

(def mygen-direction-of-sort 
      (gen/elements [:ascending :descending])) 

and then to get a specific-length vector of these:

(gen/vector mygen-direction-of-sort (count some-keys)) 
   ;; some-directions

I'll put the instruction keys with the directions together after the generation is all complete, and assemble the output:

(def maps-and-sort-instructions
 (m/domonad gen-m
  [set-of-keys (non-empty-set gen/keyword)
   set-of-value-gens (gen/vector  
                      (gen/elements [gen/string gen/int]) 
                      (count set-of-keys))
   some-maps (gen/vector 
              (apply gen/hash-map 
               (mapcat vector set-of-keys set-of-value-gens)))
   some-keys (non-empty-set (gen/elements set-of-keys)) 
   some-directions (gen/vector mygen-direction-of-sort 
                               (count some-keys))]
   (let [instructions (map vector some-keys some-directions)] 
                           ;; pair keys with directions
    [some-maps instructions]))) ;; return maps and instructions

There it is, one giant generator, built of at least 11 small ones. That's a lot of Clojure code... way too much to trust without a test. I need a property for my generator!

What is important about the output of this generator? Every instruction is a pair, every direction is either :ascending or :descending, and every key in the sort instructions is present in every map. I could also specify that the values for each key are all Comparable with each other, but I haven't yet. This is close enough:

(require '[clojure.test.check.properties :as prop])

(def sort-instructions-are-compatible-with-maps
  (prop/for-all [[rows instructions] maps-and-sort-instructions]
    (every? identity (for [[k direction] instructions]
                          ;; break instructions into parts
              (and (#{:ascending :descending} direction)
                    ;; Clojure looks so weird
                   (every? k rows))))))
                          ;; will be false if the key is absent

(require '[clojure.test.check :as tc])
(tc/quick-check 50 sort-instructions-are-compatible-with-maps)
;; {:result true, :num-tests 50, :seed 1412659276160}

Hurray, my property is true. My generator works. Now I can write a test... then maybe the code... then someday the post that I wanted to write tonight.

You might roll your eyes at me for going to these lengths to test code that's only going to be used in a blog post. But I want code that works, not just two or three times but all the time. (Write enough concurrent code, and you notice the difference between "I saw it work" and "it works.") Since I'm working in Clojure, I can't lean on the compiler to test the skeleton of my program. It's all on me. And "I saw it work once in the REPL" isn't satisfying.

Blake Meike points out on Twitter, "Nearly the entire Internet revolution... is based on works-so-far code." So true! It's that way at work. Maybe my free-time coding is the only coding I get to do right. Maybe that's why open-source software has the potential to be more correct than commercial software. Maybe it's the late-night principles of a few hungry-for-correctness programmers that move technology forward.


But it does feel good to write a solid property-based test.

[1] Coming up with examples is "work," as opposed to "programming."

Code for this post:

Friday, September 19, 2014

Talk: Concurrency Options on the JVM

or, "Everything You Never Wanted to Know About java.util.concurrent.ExecutorService But Needed To"

 Here's the prezi:
from StrangeLoop, 18 Sep 2014


A clojure library for choosing your own threadpools:

Thursday, September 4, 2014

TDD with generative testing: an example in Ruby

Say I'm in retail, and the marketing team has an app that helps them evaluate the sales of various items. I'm working on a web service that, given an item, tells them how many purchases were influenced by various advertising channels: the mobile app, web ads, and spam email.

The service will look up item purchase records from one system, then access the various marketing systems that know about ad clicks, mobile app usage, and email sends. It returns how many purchases were influenced by each, and uses some magic formula to calculate the relevance of each channel to this item's sales.

My goal is to test this thoroughly at the API level. I can totally write an example-based test for this, with a nice happy-path input and hard-coded expected output. And then, I need to test edge cases and error cases. When no channels have impact; when they all have the same impact; when one fails; when they timeout; et cetera et cetera.

Instead, I want to write a few generative tests. What might they look like?

When I'm using test-driven development with generative testing, I start at the outside. What can I say about the output of this service? For each channel, the number of influenced purchases can't be bigger than the total purchases. And the relevance number should be between 0 and 100, inclusive. I can assert that in rspec.

expect(influenced_purchases).to be <= total_purchases
expect(relevance).to be >= 0
expect(relevance).to be <= 100

These kinds of assertions are called "properties". Here, "property" has NOTHING TO DO with a field on a class. In this context, a property is something that is always true for specified circumstances. The generated input will specify the circumstances.

To test this, I'll need to run some input through my service and then make these checks for each output. It needs some way to query the purchase and marketing services, and I'm not going to make real calls a hundred times. Therefore my service will use adapters to access the outside world, and test adapters will serve up data.

result =, channel_events, item)

result.channels.each do |(channel, influence)|
  expect(influence.influenced_purchases).to be <= total_purchases
  expect(influence.relevance).to be >= 0
  expect(influence.relevance).to be <= 100
end

(relatively complete code sample here.)
To do this, I need purchases, events on each channel, and an item. My test needs to generate these 100 times, and then do the assertions 100 times. I can use rantly for this. The test looks like this:

it "returns a reasonable amount of influence" do
 property_of {
  ... return an array [purchases, channel_events, item] ...
 }.check do |(purchases, channel_events, item)|
  total_purchases = purchases.size
  result =, channel_events, item)

  result.channels.each do |(channel, influence)|
   expect(influence.influenced_purchases).to be <= total_purchases
   expect(influence.relevance).to be >= 0
   expect(influence.relevance).to be <= 100
  end
 end
end

(Writing generators needs a post of its own.)
Rantly will call the property_of block, and pass its result into the check block, 100 times or until it finds a failure. Its objective is to disprove the property (which we assert is true for all input) by finding an input value that makes the assertions fail. If it does, it prints out that input value, so you can figure out what's failing.

It does more than that, actually: it attempts to find the simplest input that makes the property fail. This makes finding the problem easier. It also helps me with TDD, because it boils this general test into the simplest case, the same place I might have started with traditional TDD.

In my TDD cycle, I make this test compile. Then it fails, and rantly reports the simplest case: no purchases, no events, any old item. After I make that test pass, rantly reports another simple input case that fails. Make that work. Repeat.

Once this test passes, all I have is a stub implementation. Now what? It's time to add properties gradually. Usually at this point I sit back and think for a while. Compared to example-based TDD, generative testing is a lot more thinking and less typing. How can I shrink the boundaries?

It's time for another post on relative properties.

TDD is Dead! Long Live TDD!

Imagine that you're writing a web service. It is implemented with a bunch of classes. Pretend this circle represents your service, and the shapes inside it are classes.

The way I learned test-driven development[1], we wrote itty-bitty tests around every itty-bitty method in each class. Then maybe a few acceptance tests around the outside. This was supposed to help us drive design, and it was supposed to give us safety in refactoring. These automated tests would give us assurance, and make changing the code easier.

It doesn't work out that way. Tests don't enable change. Tests prevent change! In particular, when I want to refactor the internals of my service, any class I change means umpteen test changes. And all these tests include example == actual, and I've gotta figure out the new magic values that should pass. No fun! These method- or class-level tests are like bars in a cage preventing refactoring.

Tests prevent change, and there's a place I want to prevent unintentional change: it's at the service API level. At the outside, where other systems interact with this service, where a change in behavior could be a nasty surprise for some other team. Ideally, that's where I want to put my automated tests.

Whoa, that is an ugly cage. At the service level, there are often many possible input scenarios. Testing every single one of them is painful. We probably can't even think of every relevant combination and all the various edge cases. Much easier to zoom in to the class level and test one edge case at a time. Besides, even if we did write the dozens of tests to cover all the possibilities, what happens when the requirements change? Then we have great big tests with long expected == actual assertions, and we have to rework all of those. Bars in a cage, indeed.

Is TDD dead? Maybe it's time to crown a new TDD. There's a style of testing that addresses both of the difficulties in API-level testing: it finds all the scenarios and tames the profusion of hard-coded expectations. It's called generative testing.[2]

Generative testing says, "I'm not gonna think of all the possible scenarios. I'm gonna write code that does it for me." We write generators, which are objects that know how to produce random valid instances of various input types. The testing framework uses these to produce a hundred different random input scenarios, and runs all of them through the test.

Generative testing says, "I'm not gonna hard-code the output. I'm gonna make sure whatever comes out is good enough." We can't hard-code the output when we don't know what the input is going to be. Instead, assertions are based on the relationship between the output and input. Sometimes we can't be perfectly specific because we refuse to duplicate the code under test. In these cases we can establish boundaries around the output. Maybe, it should be between these values. It should go down as this input value goes up. It should never return more items than requested, that kind of thing.

With these, a few tests can cover many scenarios. Fortify with a few hard-coded examples if needed, and now half a dozen tests at the API level cover all the combinations of all the edge cases, as well as the happy paths.

This doesn't preclude small tests that drive our class design. Use them, and then delete them. This doesn't preclude example tests for documentation. Example-based, expected == actual tests, are stories, and people think in stories. Give them what they want, and give the computer what it wants: lots of juicy tests in one.

There are obstacles to TDD in this style. It's way harder: more thinking, less typing. The tough part is finding the assertions that draw a boundary around the acceptable output. That's the hardest part, and it's also the best part, because the real benefit of TDD is that it stops you from coding a solution to a problem you don't understand.

Look for more posts on this topic, to go along with my talks on it. See also my video about Property-Based Testing in Scala.

[1] The TDD I learned, at the itty-bitty level with mock all the things, was wrong. It isn't what Kent Beck espoused. But it's the easiest. [2] Or property-based testing, but that has NOTHING to do with properties on a class, so that name confuses people. Aside from that confusion I prefer "property-based", which speaks about WHY we do this testing, over "generative", which speaks about how.

Sunday, August 31, 2014

The power of embedded developers

Meet my friend Sean.

Sean with a big smile

Sean is one of the most powerful developers I know. Not best, not brilliantest, but most powerful because: he provides more business value than some 50-person teams.

What is the job title of such a powerful developer? It isn't "ninja" (although maybe it should be). It's "Accounting Manager." WAT.

Program: runs on the author's machine when the author runs it.
Software: runs on any machine at any time, serving many people's purposes.

Here's the thing: Sean can code -- he was a top developer when we worked together fifteen years ago -- but he no longer writes software. He writes programs instead.

Dual wielding

When it comes to business-value power, domain knowledge adds potential; technical knowledge adds potential; and the combination is drastically better than each.
[Chart: business knowledge, short bar. Technical knowledge, short bar. Both, very tall bar. Y axis: value.]
Why? In Sean's words, "because you can actually question what you're doing." While breaking a process down in order to encode it as a program, his knowledge of accounting and the business needs lets him distinguish between what's important or not, and how relevant a particular case is.

Typically, when software impacts a business process flow, it's because the designers and developers didn't understand. It's a negative impact. When the designer/developer is on the business team, process changes emerge from the program design. Codifying well-understood activities reveals opportunities for synergy between businesspeople and the program. It's a positive impact.

Sean can spot problems, observe trends, and improve processes. Software created from static requirements can only ossify processes.

How he does it

When Sean does reporting, that means: finding out what needs done and why, learning all the databases that have the information and acquiring it, joining millions of rows and performing whatever accounting business logic, putting this into a useful Excel spreadsheet, and distributing that to people - in a couple hours per month.

Those few hours of "doing reporting" each month are possible because the rest of his time is spent thinking, investigating, analyzing, troubleshooting, and programming. Given this freedom, Sean adds more business value every month.

Gardening: gradual improvement of a program. Adding error handling where errors occur, visibility where it's needed, performance improvements where it counts. Gardening addresses the pain points after they're felt.
When Sean does reporting, that means: automating every repetitive step, scheduling his programs, monitoring them, and gardening them. What he already does becomes trivial, and then he's free to do more and more useful work. He and his programs are not separable; they're both part of getting the job done. And when that job changes, he's ready.

a person and a box inside a wall

This is an example of the code+coder ecosystem. Sean's programs aren't useful without him around to run them and change them, and Sean wouldn't get much done without the stockpile of code he's written over the years. The benefit of this system is: flexibility! When the real accountants need more or different information, Sean makes a change in 5 minutes to 3 days.  Compare this to weeks when a small feature is requested of the IT department or software vendor.

There are good reasons the software vendor takes so long. Software should be bulletproof. It's backed by fixed, negotiated, contracted requirements. It has security, authorization, failover, automated testing. Coordination, contracts, APIs. The difficulty multiplies, and software is 1000x more work to write than a program.
software system: several boxes inside many walls, and lots of people, all inside another wall

Sean doesn't need all that, because he is the monitoring, the security, the failover. He tests with his eyeballs and his business knowledge as the results appear. As a single point of responsibility, every problem that Sean hits is a learning opportunity, and he has the ability and desire to implement the next program, or the next change, better.

The advantage of software over programs is: stability. The disadvantage is: stability. When a particular program is broadly useful and has hardly changed since Sean implemented it, that's a good candidate to transfer over to IT as a feature request.

Who can do this?

A lot of people can be this powerful. Sean is a developer who learned the business. I know other business-types who have learned to program. It's wicked effective when it happens. It takes free time, a lot of learning, great management, and a contact network for support.

Having started as a developer with the major software vendor, Sean can talk to DBAs and sysadmins and programmers. His management had to push to get him access to some of these people, and now those communication channels are super valuable both ways. When there's a problem, the DBAs call him to find out about the significance of various data.

[Diagram: the code-coder ecosystem, underneath an umbrella with a person in it to represent the manager.]

Sean's management values his work, values him, and they work to cut through the red tape. Accounting Managers don't usually have access to unix machines, production Oracle databases, and automated emailers. Clever people do clever things, and sometimes that surprises the corporate security departments.

The breadth of knowledge, through the business domain to finance to programs to databases to the specific software his programs interact with, doesn't come for free. And it doesn't stay for free. There are an infinite number of tools that might help; one person can't know them all, but it helps to know what's possible. When Sean took over some other person's Access databases, it didn't take him long to knock them down from several gigs to 250MB, because he knew what was possible. Knowing the questions to ask is more important than knowing the answers.

This knowledge doesn't stay current for free. Most of Sean's time is spent in meetings, keeping up on what is changing in the many systems he and his programs interact with. This is the limiting factor in growing his power.

One thing you can't do to people like Sean: budget their time. When a person is production support and development and maintenance and design and interface to many departments, their time is unpredictable. Sometimes Sean will implement a last minute change to compensate for a bug introduced in some other software that can't be fixed for a few weeks, and he'll work crazy hours for a couple days. Other times, he can chip away at niggling errors, gardening his programs so that bugs like that will be caught earlier or easier to fix in the future. It isn't predictable. Sean is flexible, and his management recognizes this. Slack gives us freedom to get better at our jobs.

What can stop us?

Motivation comes for free when people feel valued, but it's easy to kill. One killer is cramming our time so full that we don't have time to improve how we work. Another is red tape: jumping through hoops to get access to data, to get programs installed, to get questions answered. These say, "your ideas are not valued, so shut up and do the same thing every day for the rest of your life."

There are HR rules, which say Sean can't get promoted or get a raise because Finance doesn't have a job title for "Powerful Accountant-Programmer Combination."

There are budgets: Sean could double his power if he had a person to train. Someone to split the meetings with. And someone to carry on his legacy when he moves on to another amazing thing. Right now he's too essential to move, and that's another de-motivation.

The Big Problem

And when Sean does move on? In his case, the code-coder ecosystem would collapse. No one else knows his code, so they can't re-use it. Maybe they could run it (maybe), but they couldn't change it. They couldn't monitor it, support it, or recognize problems when they occur sneakily. Sean is working in a dangerous silo. And he just bought a fast car.

[Photo: Sean's new Honda S2000]

He doesn't want to be in a silo; a team with one or two more people would be ideal. This kind of ninja doesn't have to work alone. There is a balance between programs and software. For many business departments, that balance may be far closer to the "program" side than software developers like me care to admit.

Monday, August 25, 2014

May I please have inlining for small dependencies?

Franklin Webber[1] went on a quest to find out whether utility code was better or worse than core code in open source projects. Utility code is defined as anything necessary but not directly related to the application's mission. Franklin found that the overall quality of utility code was higher, perhaps because this code changes less. He also found a lot of repetition among projects, many people doing the same thing: a function to find the home directory in several possible env vars, for instance, or HashWithIndifferentAccess, which doesn't care whether you pass the key as a string or a symbol (keyword in Clojure). He asks, why not break these out into tiny little gems?

Dependency management is an unsolved problem in our industry. We want single responsibility, and we want re-use. Re-use means separately released units. But when we compile those into one unit, transitive dependencies conflict: only one version of a particular library can make it into the final executable. In the example below, the Pasta library might not be compatible with version 2.1.0 of Stir, but that's what it'll be calling after it's compiled into Dinner.
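You can watch that conflict happen with RubyGems' own version-requirement classes. In this hypothetical setup, Dinner wants the new 2.x API of Stir while Pasta was only ever tested against 1.x, and flat (Bundler-style) resolution must pick a single Stir version for the whole executable:

```ruby
require "rubygems" # for Gem::Requirement / Gem::Version (ships with Ruby)

# Hypothetical constraints on Stir from two gems in the same bundle.
constraints = {
  "Dinner" => "~> 2.1", # wants the new API
  "Pasta"  => "~> 1.4"  # only tested against 1.x
}
available = ["1.4.2", "2.1.0"]

# A single Stir version must satisfy every constraint at once.
compatible = available.select do |version|
  constraints.values.all? do |req|
    Gem::Requirement.new(req).satisfied_by?(Gem::Version.new(version))
  end
end

compatible # => [] -- no one version of Stir satisfies both gems
```

The empty result is the point: whichever version the resolver forces through, somebody ends up calling an API they were never tested against.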

Programs like WebLogic went to crazy lengths to provide different classloaders, so that they could run apps in the same process without this dependency conflict. That's a mess!

We could say all libraries should be backwards compatible. Compatibility is a great thing, but it's also limiting: it prevents API improvements, and makes code ugly. There's more than API concerns as well: if the home-directory-finding function adds a new place to look, its behavior can change, surprising existing systems. We want stability and progress, in different places.

With these itty-bitty libraries that Franklin proposes we break out, compatibility is a problem. So instead of dependency hell, we have chosen duplication, copying them into various projects.

What if, for little ones, we could inline the dependency? Like #include in C: copy in the version we want, and link whatever calls it to that specific implementation. Then other code could depend on other versions, inlining those separately. These transitive dependencies would then not be available at the top level.
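One way to picture this inlining (some gems already do it by hand, under the name "vendoring") is to copy the utility under each host's own namespace, so two versions can coexist in one process. All the names here are hypothetical, continuing the Pasta/Dinner example:

```ruby
# Hypothetical: Pasta inlined v1 of a home-dir helper; Dinner inlined v2.
# Each copy lives under its host's namespace, so neither sees the other
# and neither is available at the top level.
module Pasta
  module Util # Pasta's private copy (v1 behavior)
    def self.home_dir
      ENV["HOME"] || ENV["USERPROFILE"]
    end
  end
end

module Dinner
  module Util # Dinner's private copy (v2 behavior: one more place to look)
    def self.home_dir
      ENV["HOME"] || ENV["USERPROFILE"] || ENV["HOMEPATH"]
    end
  end
end

# Both versions load side by side; each caller links to its own copy.
Pasta::Util.home_dir
Dinner::Util.home_dir
```

Doing this by hand is tedious and error-prone, which is why it would be nicer to have the tooling do the copy-and-rename for us.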

Seems like this could help with dependency hell. We would then be free to incorporate tiny utility libraries, individually, without concern about conflicting versions later. I wonder if it's something that could be built into Bundler, or Leiningen?

This isn't the right solution for all our dependency problems: not large libraries, and not external-connection clients. Yet if we had this choice, if a library could flag itself as suitable for inlining, then microlibraries become practical, and that could be good practice.

[1] Talk from Steel City Ruby 2014