Friday, August 22, 2014

Quick reference: monads and test.check generators

Combine clojure.algo.monads with test.check to build generators out of smaller generators that depend on each other:

(require '[clojure.test.check.generators :as gen])
(require '[clojure.algo.monads :as m])
(m/defmonad gen-m 
  [m-bind gen/bind 
   m-result gen/return])

(def vector-and-elem
  (m/domonad gen-m
    [n (gen/choose 1 10)
     v (gen/vector gen/int n)
     e (gen/elements v)]
    [v e]))

(gen/sample vector-and-elem)
;; ([[0 0] 0]
;;  [[0 -1 1 0 -1 0 -1 1] 0]
;;  [[1 1 3 3 3 -1 0 -2 2] 3]
;;  [[8 4] 8] ...)

The generator here chooses a vector length, uses that length to generate a vector, uses that vector to pick an element inside it, and then returns a tuple of the vector and the element. The syntax is cleaner than chaining a lot of gen/bind and gen/fmap calls by hand, and it reads a lot like a ScalaCheck for-comprehension.
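
For comparison, here's the same generator written out with raw gen/bind and gen/fmap -- a sketch of roughly what the domonad version expands into (vector-and-elem-raw is my own name for it):

;; each nested fn sees the values generated so far
(def vector-and-elem-raw
  (gen/bind (gen/choose 1 10)
            (fn [n]
              (gen/bind (gen/vector gen/int n)
                        (fn [v]
                          (gen/fmap (fn [e] [v e])
                                    (gen/elements v)))))))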

I suspect we could define m-zero and m-plus in the monad to get :when conditions as well.
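
In the meantime, the built-in way to express a condition in test.check is gen/such-that, which retries a generator until the predicate holds -- for example:

;; keeps sampling gen/int until the predicate passes
;; (nonzero-int is just an illustrative name)
(def nonzero-int
  (gen/such-that (complement zero?) gen/int))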

I'm working on a longer post that explains what's going on here and why we would do it.

Tuesday, August 5, 2014

STLCodeCast, and a bit of Functional Programming in Java 8

In this live video interview, complete with screencasting, I show a bit of functional programming style in Java 8. I also rant about whether math is necessary to be a good programmer, and predict how organizational structures are changing to support learning.

STLCodeCast Episode 19

My Pluralsight course on FP with Java

Property Based Testing for Better Code

Why replace our carefully chosen, exactingly specified example-based tests with properties, when properties are harder to write and less specific?


link to YouTube

Video from Midwest.io 2014

Related code: https://github.com/jessitron/scalacheck-prisoners-dilemma

Monday, August 4, 2014

Stacking responsibilities

TL;DR - Support decisions with automation and information; give people breadth of responsibility; let them learn from the results of their choices.

When I started writing software in 1999, the software development cycle was divided into stages, ruled over by project management.

Business people decided what to build to support the customers. Developers coded it. Testers tested it. System Administrators deployed and monitored it. Eventually the customer got it, and then, did anyone check whether the features did any good?

These days Agile has shortened the cycle, and put business, development, and QA in the same room. Meanwhile, with all the tools and libraries and higher-level languages, feature development is a lot quicker, so development broadens into automating the verification step and the deployment. Shorter cycles mean we ask the customer for feedback regularly.

Now developers are implementing, verifying, deploying, and monitoring. The number of tools and environments we use for all these tasks becomes staggering. Prioritization becomes a struggle: when the only externally visible deliverable is features, who will improve tests, deployment, and monitoring? We automate the customer's work; when do we automate ours?

The next trend in development process helps with these: it divides responsibilities without splitting goals. Business works with customers, developers automate for business, and a slice of developers automate our work. Netflix calls this team Engineering Tools; at Outpace we call it Platform. Instead of handoffs, we have frequent delivery of complete products from each team.

Meanwhile, when developers own the features past production, another task emerges: evaluation of results. Automate that too! What is success for a feature? It isn't deployment: it's whether our customers find value in it. Gleaning that means building affordances into the feature implementation, making information easy to see, and then checking it out. We're responsible for a feature until its retirement. Combine authority with information, and people rise to the occasion.[1]


Learning happens when one person sees the full cycle and its effects, and that person influences the next cycle. Experiments happen, our capabilities grow.

In this division of responsibilities, no one delegates decisions. Everyone shares a goal, and supports the rest of the organization in reaching that goal. The platform team doesn't do deployments. It creates tools that abstract away the dirty details, supplying all information needed for developers to make decisions and evaluate the results. At Outpace, the Platform team is composed of people from the other development teams, so they share background and know each other's pain. The difference is: the platform team has a mandate to go meta, to improve developer productivity. Someone is automating the automation, and not every developer has to be an expert in every layer.

The old way was like a framework: project managers take the requirements from the business, then the code from the developers, and pass them into the next step in the process. The new way is like libraries: the platform team provides what the developers need, who provide what the business needs, who provide what the customer needs. Details are abstracted away, and decisions are not.

When a developer's responsibilities end with code that conforms to a document, it's super hard to get incentives aligned to the larger needs. Once everyone is responsible for the whole cycle, we don't need to align incentives. Goals align, and that's more powerful. Remove misaligned incentives, give us a shared goal to work for, and people achieve. Give us some slack to experiment and improve, and we'll also innovate.

--------------------------------------
[1] via David Marquet

Tuesday, July 29, 2014

Printing the HTTP status code in curl

This should be easier, but here it is for reference:

curl -w "status: %{http_code}\n" <URL>
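
For example, adding -s to hide the progress meter and -o /dev/null to discard the body leaves just the status line (the output shown assumes the URL answers with a 200):

curl -s -o /dev/null -w "status: %{http_code}\n" https://example.com
status: 200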

Monday, July 21, 2014

Testing akka actor termination

When testing Akka code, I want to make sure a particular actor gets shut down within a time limit. I used to do it like this:

Thread.sleep(2.seconds.toMillis)
assertTrue(actorRef.isTerminated)

That isTerminated method is deprecated since Akka 2.2, and good thing too, since my test was wasting everyone's time. Today I'm doing this instead:

import akka.actor.Terminated
import akka.testkit.TestProbe
import scala.concurrent.duration._

val probe = new TestProbe(actorSystem)
probe.watch(actorRef)
probe.expectMsgPF(2.seconds) { case Terminated(`actorRef`) => true }

This says: set up a TestProbe actor, and have it watch the actorRef of interest. Wait for the TestProbe to receive notification that the actor of interest has been terminated. If actorRef has already terminated, that message will come right away. My test doesn't have to wait the maximum allowed time.[1]

This works in any old test method with access to the actorSystem -- I don't have to extend akka.testkit.TestKit to use the TestProbe.

BONUS: In a property-based test, I don't want to throw an exception, but rather return a result, a property with a nice label. In that case my function gets a little weirder:

import akka.actor.{ActorRef, ActorSystem, Terminated}
import akka.testkit.TestProbe
import org.scalacheck.Prop
import org.scalacheck.Prop.propBoolean
import scala.concurrent.duration._

def shutsDown(actorSystem: ActorSystem,
              actorRef: ActorRef): Prop = {
  val maxWait = 2.seconds
  val probe = new TestProbe(actorSystem)
  probe.watch(actorRef)
  try {
    // passes as soon as the Terminated message arrives
    probe.expectMsgPF(maxWait) { case Terminated(`actorRef`) => true }
  } catch {
    case ae: AssertionError =>
      false :| s"actor not terminated within $maxWait"
  }
}

-----------
[1] This is still blocking the thread until the Terminated message is received or the timeout expires. I eagerly await the day when test methods can return a Future[TestResult].

Friday, July 11, 2014

A suggestion for testing style in Clojure

Getting a test to fail is one thing; getting it to express why it failed is another.
clojure.test provides an assertion macro, is:

(deftest "my-function" 
  (testing "some requirement"
    (is (something-that-evaluates-to-bool (arg1) (arg2)) 
    "Message to print when it's false")))

When this assertion fails, in addition to the message, "expected" and "actual" results print. The is macro tries to be a little clever.

If the expression passed to is is an S-expression, and is recognizes its first element as a function, then is prints that first symbol directly, evaluates each argument to the function, and prints the results. For instance:

expected: (function-name (arg1) (arg2))
actual: (not (function-name "1st arg value" "2nd arg value"))

However, if is does not recognize that first element as a function, the whole expression passed to is is evaluated for the actual, and you get:

expected: (something-that-evaluates-to-bool (arg1) (arg2))
actual: false

To get the more communicative output, write a function with a name that describes the property you're testing. A top-level defn will be recognized by is as a function, so declare your test function that way.

For instance, if I'm testing whether a vector of numbers is sorted, I could use built-in functions:

(let [v [1 3 2 4]]
  (is (apply <= v) "not sorted"))

Then I see[1]:

expected: (apply <= v)
actual: (not
           (apply
            #<core$_LT__EQ_ clojure.core$_LT__EQ_@50055df8>
            [1 3 2 4]))

Printing a function is never pretty. If, instead, I give the desired property a name:

(defn ascending? [v] (apply <= v))

(let [v [1 3 2 4]]
  (is (ascending? v) "not sorted"))

Then I get something much more communicative:

expected: (ascending? v)
actual: (not (ascending? [1 3 2 4]))

There it is -- declarative test output through creation of simple functions.

Bonus: If you want to make your failure output even better, define special functionality just for your function, as @pjstadig did for =.
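
For reference, is dispatches through the clojure.test/assert-expr multimethod, so custom reporting hangs off the symbol at the head of the assertion. Here's a minimal sketch for ascending? -- untested, and the :actual string is my own invention:

(require '[clojure.test :as t])

(defmethod t/assert-expr 'ascending? [msg form]
  ;; form is the whole (ascending? v) expression as written in the test
  `(let [v#      ~(second form)
         result# (ascending? v#)]
     (t/do-report
       (if result#
         {:type :pass, :message ~msg, :expected '~form, :actual result#}
         {:type :fail, :message ~msg, :expected '~form,
          :actual (str "out of order: " (pr-str v#))}))
     result#))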

-----------------
[1] I'm using humane-test-output to get the "actual" pretty-printed.
