Saturday, May 7, 2016

Tradeoffs in Coordination Among Teams

The other day in Budapest, Jez Humble and I wondered, what is the CAP theorem for teams? In distributed database systems, the CAP theorem says: choose two of Consistency, Availability, and Partition tolerance — and since networks fail, you must choose Partition tolerance.
Consider a system for building software together. Unless the software is built by exactly one person, we have to choose Partition tolerance. We can’t meld minds, and talking is slow.
In databases we choose between Consistency (the data is the same everywhere) and Availability (we can always get the data). As teams grow, we choose between Consensus (doing things for the same reasons in the same way) and Actually-getting-things-done.
Or, letting go of the CAP acronym: we balance Moving Together against Moving Forward.

Moving Together


A group of 1 is the trivial case. Decision-making is the same as consensus. All work is forward work, but output is very limited, and when one person is sick everything stops.

A group of 2-7 is ideal: the communication comes with an interplay of ideas, and the new insights that emerge from dialogue make up for the time cost of talking to each other. It is still feasible for everyone in the group to have a mental model of each other person, to know what that person needs to know. Consensus is easy to reach when every stakeholder is friends with every other stakeholder.

Beyond one team, the tradeoffs begin. Take one team of 2-7 people working closely together. Represent their potential output with this tall, hollow arrow pointing up.


This team is building software to run an antique store. Look at them go, full forward motion. (picture: tall, filled arrow.)

Next we add more to the web site while continuing development on the point-of-sale tools for the register. We break into two teams. We’re still working with the same database of items, and building the same brand, so we coordinate closely. We leverage each other’s tools. More people means more coordination overhead, but we all like each other, so it’s not much of a burden. We are a community, after all.
A green arrow and a red arrow, each connected by many lines of communication, are filled about halfway up with work.

Now the store is doing well. The web site attracts more retail business, the neighboring antique stores want to advertise their items on our site, everything is succeeding and we add more people. A team for partnerships, which means we need externally-facing reports, which means we need a data pipeline.
A purple arrow and a blue arrow join the red and green ones. Lines crisscross between them, a snarly web. The arrows are filled only a little because of these coordination costs. The purple arrow is less connected, and a bit more full, but it's pointed thirty degrees to the left.

The same level of consensus and coordination isn’t practical anymore. Coordination costs weigh heavily. New people coming in don’t get to build a mental model of everyone who already works there. They don’t know what other people know, or which other people need to know something. If the partnerships team touches the database, it might break point of sale or the web site, so they are hamstrung. Everyone needs to check everything, so the slowest-to-release team sets the pace. The purple team here is spending less time on coordination, so the data pipeline is getting built, but without any ties to the green team, it’s going in a direction that won’t work for point of sale.

This mess scales up in the direction of mess. How do we scale forward progress instead?

Moving Forward


The other extreme is decoupling. Boundaries. A very clear API between the data pipeline, point of sale, and web. Separate databases, duplicating data when necessary. This is a different kind of overhead: more technical, less personal. Break the back-end coupling at the database; break the front-end (API) coupling with backwards compatibility. Teams operate on their own schedules, releases are not coordinated. This is represented by wider arrows, because backwards compatibility and graceful degradation are expensive. 
Four arrows, each wide and short. A few lines connect them. They're filled, but the work went to width (solidness) rather than height (forward progress).

These teams are getting about as far as the communication-burdened teams. The difference is: this does scale out. We can add more teams before coordination becomes a limitation again.

Amazon is an extreme example of this: backwards compatible all the things. Each team Moving Forward in full armor. Everything is fully separate, and no team can predict what other teams depend on, so everything must stay backwards compatible. This made the AWS products possible. However, this is a ton of technical overhead, and maybe also not the kindest culture to work in.

Google takes another extreme. Their monorepo allows more coupling between teams. Libraries are shared. They make up for this with extreme tooling. Tests, refactoring tools, custom version control and build systems — even whole programming languages. Thousands of engineers work on infrastructure at Google, so that they can Move Together using technical overhead.

Balance


For the rest of us, in companies with 7-1000 engineers, we can’t afford one extreme or the other. We have to ask: where is consensus important? and where is consensus holding us back?

Consensus is crucial in objectives and direction. We are building the same business. The business results we are aiming for had better be the same. We all need to agree on “Which way is up?”

Consensus is crippling at the back end. When we require any coordination of releases. When I can’t upgrade a library without impacting other teams in ways I can’t predict. When my database change could break a system more production-critical than mine. This is when we are paralyzed. Don’t make teams share databases or libraries.

What about leveraging shared tools and expertise? If every team runs its own database, those arrows get really wide really fast, unless they skimp on monitoring and redundancy — so they will skimp, and the system will be fragile. We don’t want to reinvent everything in every team.

The answer is to have a few wide arrows. Shared tools are great when they’re maintained as internal services, by teams with internal customers. Make the data pipeline serve the partnership and reporting teams. Make a database team to supply well-supported database instances to the other teams. (They’re still separate databases, but now we have shared tools to work with them, and hey look, a data pipeline for syncing between them.)


The green, red, and blue arrows are narrow and tall, and mostly full of work, with some lines connecting them. The purple arrow and a new black arrow are wide and short and full of work. The wide arrows (internal services) are connected to the tall arrows (product teams) through their tips.

Re-use helps only when there is a solid API, when there is no coupling of schedules, and when the providing team focuses on customer service.

Conclusions


Avoid shared code libraries, unless you’re Google and have perfect test coverage everywhere, or you’re Amazon and have whole teams supporting those libraries with backwards compatibility.
Avoid shared database instances, but build internal teams around supporting common database tools.

Encourage shared ideas. Random communication among people across an organization has huge potential. Find out what other teams are doing, and that can refine your own direction and speed your development — as long as everything you hear is information, not obligation.

Reach consensus on what we want to achieve, why we are doing it, and how (at a high level) we might achieve it. Point in the same direction, move independently.

Every organization is a distributed system, even when we sit right next to each other. Coordination makes joint activity possible, but not free. Be conscious of the tradeoffs as your organization grows, as consensus becomes less useful and more expensive. Recognize that new people entering the organization experience higher coordination costs than you do. Let teams move forward in their own way, as long as we move together in a common direction. Distributed systems are hard, and we can do it.







Bonus material 

Here is a picture of Jez in Budapest.



And here is a paper about coordination costs:
Common Ground and Coordination in Joint Activity

Saturday, April 16, 2016

Property Testing in Elm

Elm is perfectly suited to property testing, with its delightful data-in, data-out functions. Testing in Elm should be super easy.

The tooling isn't there yet, though. This post documents what was necessary today to get a property to run in Elm.

Step 1: elm-test

This includes an Elm library and a node module for a command-line runner. The library alone will let you create a web page of test results and look at it, but I want to run them in my build script and see results in my terminal.

Installation in an existing project:
elm package install deadfoxygrandpa/elm-test
npm install -g elm-test
The node module offers an "elm test init" functionality to put some test files in the current directory: TestRunner.elm (which is the Main module for test runs[1]) and Tests.elm (which holds the actual tests). Personally, I found the following steps necessary as well.

  • create a test directory (I don't want tests in my project home), and move the TestRunner.elm and Tests.elm files there.
  • add that test directory to the source directories in elm-package.json
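For example, if the project sources live in src and the tests live in test (directory names assumed), the entry in elm-package.json ends up looking like this:
    "source-directories": [
        "src",
        "test"
    ],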

Step 2: elm-check


The first thing to know is: which elm-check to install. You need the one from NoRedInk:
elm package install NoRedInk/elm-check
The next thing is: what to import. Where do all those methods used in the README live?

Here is a full program that lets elm-test execute the properties from the elm-check readme.
TL;DR: You need to import stuff from Check and Check.Producer for all properties; for the runner program, you also need ElmTest, Check.Test, Signal, Console, and Task.

Name it test/Properties.elm and run it with
elm test test/Properties.elm
The output looks like
Successfully compiled test/Properties.elm
Running tests...
  1 suites run, containing 2 tests
  All tests passed
Here's the full text just in case.
module Main (..) where
import ElmTest
import Check exposing (Evidence, Claim, that, is, for)
import Check.Test
import Check.Producer as Producer
import List
import Signal exposing (Signal)
import Console exposing (IO)
import Task

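-- turn the elm-check Evidence into elm-test tests and run them in the console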
console : IO ()
console =
  ElmTest.consoleRunner (Check.Test.evidenceToTest evidence)

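-- elm-test's console runner does its work through this port of Tasks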
port runner : Signal (Task.Task x ())
port runner =
  Console.run console

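-- the claims (properties) under test, gathered into a suite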
myClaims : Claim
myClaims =
  Check.suite
    "List Reverse"
    [ Check.claim
        "Reversing a list twice yields the original list"
        `that` (\list -> List.reverse (List.reverse list))
        `is` identity
        `for` Producer.list Producer.int
    , Check.claim
        "Reversing a list does not modify its length"
        `that` (\list -> List.length (List.reverse list))
        `is` (\list -> List.length list)
        `for` Producer.list Producer.int
    ]

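-- quickCheck checks each claim against many randomly generated inputs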
evidence : Evidence
evidence =
  Check.quickCheck myClaims
How to write properties is a post for another day. For now, at least this will get something running.

See also: a helpful post for running elm-check in phantom.js


[1] How does that even work? I thought modules needed the same name as their file name. Apparently this is not true of Main. You must name the module Main. You do not have to have a 'main' function in there (as of this writing). The command-line runner needs the 'console' function instead.

Thursday, March 31, 2016

Scaling Intelligence

You can watch the full keynote from Scala eXchange 2015 (account creation required, but free). The talk includes examples and details; this post is a summary of one thread.

Scala is a scalable language, from small abstractions to large ones. This helps with the one scaling problem every software system has: scaling the feature set while still fitting it in our heads. Scaling our own intelligence.

Scala offers complicated powerful language features built from combinations of simpler language features. The aim is a staircase of learning: gradually learn features as you need them. The staircase starts in the green grass of toy programs, moves through the blue sky of useful business software, and finally into the outer space of abstract libraries and frameworks. (That dark blob is supposed to represent outer space.)


This is not how people experience the language.

The green grass is great: Odersky's Coursera courses, Atomic Scala. Next, we want to write something useful for work: the blue sky. It is time to use libraries and frameworks. I want a web app, so I bring in Spray. Suddenly I need to understand typeclasses and the magnet pattern. The magnet pattern? The docs link to a post on this. It's five thousand words long. I'm shooting into outer space -- I don't want to be an astronaut yet!

The middle of the staircase is missing.


Who can repair this? Not the astronauts, the compiler and library authors: they write posts around programming language theory, defining one feature in terms of a bunch of other concepts I don't understand yet. I need explanations by people who share my objectives, people a little bit ahead of me in the blue sky, who recently learned how to use Spray themselves. I don't need research papers, I need StackOverflow. Blog posts, not textbooks.

This is where we need each other. As a community, we can fill this staircase. At a macro level, we scale intelligence with teaching.

Scala as a language is not enough. We don't work in languages, especially not in the blue sky. We work in language systems, including all the libraries and tooling and all the people. The resources we create, and the personal interactions in real life and online. When we teach each other, we scale our collective intelligence, we scale our community.

Scaling the community is important, because only a large, diverse group can answer two crucial questions. To make the language and libraries great, we need to know about each feature: is this useful? And to make this staircase solid, we need to know about each source and document: is this clear?

Useful isn't determined by the library author, but by its users. Clear isn't determined by the writer, but by the reader. If you read the explanation of Futures on the official Scala site and you don't get it, if you feel stupid, that is not your fault. When documentation is not clear to you, its maintainers fail. Teaching means entering the context of the learner, and starting there. It means reaching for the person a step or two down, and pulling them up to where you are.

Michael Bernstein described his three years of learning Haskell. "I tried over and over again to turn my self doubt into a pure functional program, and eventually, it clicked."
Ouch. Not everyone has this tenacity. Not everyone has three years to spend becoming an astronaut. Teaching makes the language accessible to more people. At the same time, it makes everyone's life easier -- what might Mr Bernstein have accomplished during those three years?

Scala, the language system, does not belong to Martin Odersky.  It belongs to everyone who makes Scala useful. We can each be part of this.

Ask and answer questions on StackOverflow. Blog about what you learned, especially about why it was useful.[1] Request more detail -- if something is not clear to you, then it is not clear. Speak at your local user group.[2] The less type theory you understand, the more people you can help!

Publish your useful Scala code. We need examples from the blue sky. If you do, tweet about it with #blueSkyScala.

It is up to all of us to teach each other, to scale our intelligence. Then we can make use of those abstractions that Scala builds up. Then it will be a scalable language.




[1] example: Remco Beckers's post on Option and Either and Try.
[2] example: Heather Miller's talk compensates for bad documentation around Scala Futures.

Thursday, December 31, 2015

Key signatures in piano music: the underlying technology

Today, sheet music (at least, the major keys) went from magic to technology for me. Magic is spellcasting, is memorization. Technology is understanding, is derivation. Technology takes up so much less space in my head!

If you can read music enough to pick out a simple song but wonder why chords and their weird names seem so obvious to some people, this post is for you.

Those markings at the beginning of the line that show which notes are played as sharps (called a key signature) -- I have been trying to memorize their names. This one is called D Major, and it means that all Fs and Cs are to be played as sharps: hit the black key to the right instead of the white key.

D Major signature
I know how to play music with C# and F#, but why on earth is this called D Major? And why is it a thing?

Today I read a few pages in a book about scales, and now I get it. It takes some history to understand.

See, a long time ago the Greeks made a lyre with four strings, and they tuned the strings into a tetrachord, four notes that sound good. On a piano, a tetrachord is made out of four notes with the following separations: a whole step, then a whole step, then a half step. A whole step goes two keys, counting white and black keys; a half step is one key. Like walking -- right foot, left foot makes a whole step. One easy tetrachord starts with middle C:
From C to D is a whole step because there is a black key in between. From E to F is a half-step because there is no key in between. The C tetrachord is C, D, E, F.

The same formula makes a tetrachord starting from any key. The tetrachord starting with D goes D, E, F#, G.
A whole step from D is easy, skip the black key and hit E, a white key. A whole step from E means skipping the next key (F, a white key) and hitting the key after that (F#, a black key). Then a half step means the very next key (G, a white key). This is where the F# in the key signature is coming from.

But wait! There's more!

Put two tetrachords together to get an Ionian scale, also called a major scale. The tetrachords are separated by a whole step. In C, the first tetrachord is C, D, E, F. Take a whole step to G and start the next tetrachord. It goes G, A, B, C.


Eight notes, the last one the same as the first (one octave higher), make a major scale. The keys have this pattern of separations between them, made up of the Ionian scale and tetrachord patterns. Each scale uses a little over half the keys on the keyboard, and ignores the rest. Songs in C major use all white keys and none of the black keys. You want anything else, gotta put a sharp or flat symbol in front of the note.

What does this mean for D? The D major scale starts with the D tetrachord and adds a second tetrachord: D, E, F#, G; A, B, C#, D.


C# and F#! There they are, the two black keys with numbers on them! The "normal" keys to play in the D scale include C# and F# (black keys), but never C or F (white keys). Putting the D Major key signature in front of the music means that all the keys in the D scale look like ordinary notes.
With the C# and F# handled by the key signature, any special marking (sharp, flat, or natural) points out a note that is unexpected, that does not fit in with the rest.

The same pattern works for other key signatures; try constructing the Ionian scale for G out of two tetrachords separated by a whole step. You'll find that the only black key used is F#, so this is G major:
G major key signature
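The derivation is mechanical enough to write as code. Here is a sketch in Elm (the language used elsewhere on this blog); numbering the twelve keys from 0 = C is my own convention:

noteNames : List String
noteNames =
  [ "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B" ]

-- the name of key number n, counting white and black keys up from C = 0
nameAt : Int -> String
nameAt n =
  List.drop (n % 12) noteNames |> List.head |> Maybe.withDefault "?"

-- a major scale: two tetrachords (whole, whole, half) joined by a whole step
majorScale : Int -> List String
majorScale tonic =
  List.scanl (+) tonic [ 2, 2, 1, 2, 2, 2, 1 ]
    |> List.map nameAt

-- majorScale 2 gives [ "D", "E", "F#", "G", "A", "B", "C#", "D" ]: the two sharps of D major
-- majorScale 7 gives [ "G", "A", "B", "C", "D", "E", "F#", "G" ]: only F# is sharp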
These are historical explanations for the structure of major scales: seven different notes (plus the same notes in other octaves) that sound good together. There are scientific explanations too, even ratios of wavelengths.

On the piano, this means only 7/12 of the keys are used in any song, ordinarily. Why have the other keys? They bring equality to the different notes: any key can start a scale, because the whole-steps and half-steps are there no matter where you start. C is not special, just convenient. The circle is complete. Actually 12 circles for the 12 keys in an octave. So many patterns are visible to me now!

Now I can name the key signatures and say why they have those names. I don't have to memorize them, because I can derive them. Next I can learn chords and why certain ones like C and F and G7 appear together frequently. All of this from two pieces of vocabulary and some counting. Goodbye magic, hello math.

Sunday, November 22, 2015

Getting off the ground in Elm: project setup


If you have tried Elm and want to make a client bigger than a sample app, this post can help you get set up. Here, find what goes into each of my Elm repositories and why. This template creates a fullscreen Elm application with the potential for server calls and interactivity. This post is up-to-date as of Elm 0.16.0 on 11/21/2015, on a Mac.
TL;DR: Clone my sample repo; change CarrotPotato to your project name everywhere it appears (elm-package.json; src/CarrotPotato.elm; index.html); replace origin with your remote repository.

Step 0: Install Elm (once)

Run the latest installer.
To check that this worked, run `elm` at a terminal prompt. You should see a long usage message, starting with `Elm Platform 0.16.0 - a way to run all Elm tools`

Bonus: getting Elm text highlighting in your favorite text editor is a good idea. That's outside the scope of this post, because it was hard. I use Sublime 2 and this Elm plugin.

Step 1: Establish version control (every project)

Step 1A: create a directory and a repository. 

Make a directory on your computer and initialize a git repository inside it.
mkdir CarrotPotato
cd CarrotPotato
git init

 Step 1B: configure version control

In every project, I use the first commit to establish which files do not belong under version control.
I'm going to have the Elm compiler write its output to a directory called target. I want to save the source code I write, not stuff that's generated from it, so git should not save the compiler output. Git ignores any files or directories whose names are in a file called .gitignore, so I put target in there.
The Elm package manager uses a directory called elm-stuff for its work. That doesn't belong in our repository, so put it in .gitignore too. I recommend making .gitignore the first file committed in any new repository.
echo "target" >> .gitignore
echo "elm-stuff" >> .gitignore
git add .gitignore
git commit -m "New Elm project"

Step 2: Bring in core dependencies

The Elm package manager will install everything you need, including the core language, including the configuration it needs. To bring in any dependency, use `elm package install <dependency>`, where <dependency> is specified as github-user/repo-name. Most of the packages come from github users elm-lang or evancz (Evan Czaplicki is the author of Elm). All the packages that elm-package knows about are listed on package.elm-lang.org.

In keeping with the Elm Architecture, I use StartApp as the basis for all my projects. Bring it in:
elm package install evancz/start-app
elm-package is very polite: it looks at your project, decides what it needs to do, and asks nicely for permission before doing anything. It will add the dependency to elm-package.json (creating the file if it doesn't exist), then install the package you requested (along with anything that package depends on) in a directory called elm-stuff.

Here's a gotcha: the StartApp install downloads its dependencies, but you can't use them directly until they are declared as a direct dependency of your project. And you can't actually use StartApp without also using Effects and Html. So install them too:
elm package install evancz/elm-html
elm package install evancz/elm-effects
Note: This step won't work without internet access. Elm's package manager doesn't cache things locally; everything is copied into elm-stuff within each project. On the upside, you can dig around in elm-stuff to look at the code (and embedded documentation) of any of your project's dependencies.

Step 3: Improve project configuration

3A: Welcome to elm-package.json

You now have an elm-package.json file in your project directory. Open it in your text editor.
{
    "version": "1.0.0",
    "summary": "helpful summary of your project, less than 80 characters",
    "repository": "https://github.com/user/project.git",
    "license": "BSD3",
    "source-directories": [
        "."
    ],
    "exposed-modules": [],
    "dependencies": {
        "elm-lang/core": "3.0.0 <= v < 4.0.0",
        "evancz/start-app": "2.0.2 <= v < 3.0.0"
    },
    "elm-version": "0.16.0 <= v < 0.17.0"
}
The project version, summary, etc. become crucial when you publish a new library to the central Elm package list. Until then, you can update them if you feel like it.

Note: the project's dependencies are specified as ranges. Elm is super specific about semantic versioning. It is impossible for one of the libraries you use to introduce a compilation-breaking change without going up a major version (the first section in the version number), so Elm knows that (for instance) any version of StartApp that's at least as high as its current one "2.0.2" and less than the next major version "3.0.0" is acceptable. This matters if you publish your project as a library for other people to use. For now it's just cool.

3B: Establish a source directory

With the default configuration, Elm looks for project sources in "." (the current directory; the project root). I want to put them in their own directory, so I change the entry in "source-directories" to "src". Then I create a directory called `src` in my project root.
mkdir src
[editor] elm-package.json
and set:
    "source-directories": [
        "src"
    ],

Step 4: Create the main module

4A: Bring in "hello world" code

Create a file src/CarrotPotato.elm (if the name of your project is CarrotPotato), and open it in your text editor.
touch src/CarrotPotato.elm
[editor] src/CarrotPotato.elm
Every StartApp application starts about the same. I cut and paste most of this out of the StartApp docs, then added everything necessary to make it compile. It had to do something, so it outputs Hello World in an HTML text element.

Copy from this file, or this gist.
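In case those links are unavailable, here is a minimal sketch of what that starting point looks like, following the standard StartApp pattern (the placeholder Model and Action types are mine; adjust to taste):

module CarrotPotato (..) where

import StartApp
import Html exposing (Html, text)
import Effects exposing (Effects, Never)
import Signal exposing (Signal)
import Task

type alias Model =
  ()

type Action
  = NoOp

init : ( Model, Effects Action )
init =
  ( (), Effects.none )

update : Action -> Model -> ( Model, Effects Action )
update action model =
  ( model, Effects.none )

-- the "Hello World": render a single HTML text element
view : Signal.Address Action -> Model -> Html
view address model =
  text "Hello World"

app : StartApp.App Model
app =
  StartApp.start { init = init, update = update, view = view, inputs = [] }

main : Signal Html
main =
  app.html

-- StartApp performs its effects through this port
port tasks : Signal (Task.Task Never ())
port tasks =
  app.tasks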

To understand this code, do the Elm Architecture Tutorial. (It's a lot. But it's the place to go to understand Elm.)

4B: compile the main module

I want this compiled into a JavaScript file in my `target` directory, so this is my build command:
elm make --output target/elm.js src/CarrotPotato.elm
When this works, a target/elm.js file should exist.

Note: by default, elm-make (v0.16) creates an index.html file instead of elm.js. That's fine for playing around, but in any real project I want control over the surrounding HTML.
Note: I ask elm-make to build the top-level module of my project. Once I add more source files, elm-make will compile all the ones that my top-level module brings in.

To remind myself of how to do this correctly, I put it in a script:
echo "elm make --output target/elm.js src/CarrotPotato.elm" >> build
chmod u+x build
Then every time I want to compile:
./build

Step 5: Run the program in a web page

Elm runs inside a web page. Let's call that page index.html because that's the default name for these things. Create that file and put something like this into it:
touch index.html
[editor] index.html
put this in:
<!DOCTYPE HTML>
<html>
<head>
  <meta charset="UTF-8">
  <title>CarrotPotato</title>
  <script type="text/javascript" src="target/elm.js"></script>
</head>
<body>
</body>

<script type="text/javascript">
  var app = Elm.fullscreen(Elm.CarrotPotato, {});
</script>

</html>
The important parts here are:
  • in the header, set the page's title
  • in the header, bring in the compiler output; this matches the file I told elm-make to write to
  • in the header, you're free to bring in CSS
  • the body is empty
  • the script tag at the end activates my Elm module.
Save this file, and open it in your default browser:
open index.html
You should see "Hello World". Quick, make a commit!

Note: opening index.html as a file doesn't always work smoothly. If the browser gives you trouble, try running an http server in that directory instead. There's a very easy one available from npm.
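For example, with the http-server package from npm (one option among many):
npm install -g http-server
http-server
It serves the current directory; open the address it prints (http://localhost:8080 by default).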

Step 6: Go forth and Elminate

The foundation is set for an Elm project. From here, I can start building an application. Here are some things I often do next:
  • change the view function to show something more interesting. See elm-html for what it can return.
  • create a remote repository and push my project to it; update the repository field in elm-package.json, and create a README.md
  • create a gh-pages branch to serve my project on the web (blog post on this coming soon, I hope)
  • break out my project's functionality into more modules, by creating files like src/CarrotPotato/View.elm and importing them from my main module
You can get everything up to this point without doing it yourself by cloning my elm-sample repo.
I do this:
git clone git@github.com:jessitron/elm-sample.git carrot-potato
< create repo at github called my-user/carrot-potato; copy its git url>
cd carrot-potato
git remote set-url origin git@github.com:my-user/carrot-potato.git
 
Comments and suggestions welcome! I'm sure this isn't the best possible setup.



Tuesday, October 13, 2015

The Emperor has no clothes: Bad actors in tech


Maybe you are interested in a language, or an open source project, but you feel like the community is unwelcoming: some big voices are rude, downright hostile to newcomers and to anyone who disagrees with them. Let’s not get involved.


Or in your workplace: influential people in the organization aren’t nearly as helpful, or as smart, as your teammates. Yet everyone else follows their opinions and tolerates their bad behavior; you feel bullied into decisions, you accept their little abuses. Ultimately the people with the most freedom move away, until the workplace is a surreal island where time stands still.


There’s this old Hans Christian Andersen tale, The Emperor’s New Clothes. Some weavers sold the emperor a suit that supposedly could not be seen by anyone incompetent or unfit for their position. There was really no suit: they gave him nothing, and the emperor was running around nude. Nobody said a thing; each assumed they were the one in the wrong. Only when a child cried out “he has no clothes!” did everyone else realize that they were not alone in pretending.


While the Emperor’s situation is humorous and extreme, the open source and workplace situations are not. They are real, and they have mathematical underpinnings! A paper called “The Majority Illusion in Social Networks” studies this very phenomenon. The Washington Post describes how public opinion appears to change rapidly, when sometimes it’s really that public opinion suddenly became known.


Those whose opinions are the most visible control what opinions are seen as acceptable and polite. The moment enough of the social network sees their own opinion as acceptable, bam! a major change in sentiment appears. That feeling was already there, but it was masked by a few local celebrities who didn’t share the values of the majority.


When we dislike bad behavior, we feel alone, but we are not special. We all see it, we all dislike it. We all wish we were using the right tools for the job. We all wish the mailing list contained only polite, helpful responses. But social norms -- set by the few who are also the loudest -- make us not care enough, not invest enough, to communicate our opinions with such volume. Shame and fear of rejection freeze us. Or we lack the hunger for conflict. None of us says the emperor has no clothes, and the bad behavior persists.


The math says the evident culture is not always the predominant culture. A few confident, inconsiderate people in key positions intimidate an entire department. A few derogatory voices fence off a language, leaving erstwhile contributors to chew on rocks outside.


If you care about your organization, work to make everyone’s voice heard.
If you care about your open source community, speak out against behavior that masks the majority attitude. Watch for a red flag in your head: “I think his opinion is wrong, but I’m not going to say anything because arguing with him is not worth it.” You’re not the only one who sees through that clothing.






Friday, October 2, 2015

ElixirConf keynote: Elixir for the world

Video

http://confreaks.tv/videos/elixirconf2015-keynote-elixir-should-take-over-the-world

Slides with notes

Big PDF with notes (15M)

Slides only (on speakerdeck)

References


Camille Fournier on distributed systems: video
Caitie McCaffrey on stateful services: video 
Denise Jacobs, creativity: video
Marty Cagan on what's better than agile: video
How to Measure Anything: book
Brené Brown on vulnerability, Grounded Theory: video, book
Property testing: video
Property testing: QuickCheck CI 
Elm: Richard Feldman's talk: video
My talk about React and Elm: video
Structure of Scientific Revolutions: about the book