In defense of rationality and dynamic programming

Karl Popper defines rationality as: basing beliefs on logical arguments and evidence. Irrationality is everything else.

He also defines comprehensive rationality as: only logical arguments and evidence are a valid basis for belief. But this belief itself can only be accepted by choice or faith, so comprehensive rationality is self-contradictory. It also excludes a lot of useful knowledge. This, he says, is worse than irrationality.

It reminds me of some arguments for typed functional programming. We must have proof our programs are correct! We must make incorrect programs impossible to represent in compiling code! But this excludes a lot of useful programs. And do we even know what ‘correct’ is? Not in UI development, that’s for sure.

Pure functional programming eschews side effects (like printing output or writing data). Yet it is these side effects that make programs useful, that let them impact the world. Therefore, exclusively pure functional programming is worse than irrationality (all dynamic, side-effecting programming).

Popper argues instead for critical rationality: Start with tentative faith in premises, and then apply logical argument and evidence to see if they hold up. Consider new possible tenets, and see whether they hold up better. This kind of rationality accepts that there is no absolute truth, no perfect knowledge, only better. Knowledge evolves through critical argument. We can’t get to Truth, but we can free ourselves from some falsehoods.

This works in programming too. Sometimes we do know what ‘correct’ is. In those cases, it’s extremely valuable to have code that you know is solid. You know what it does, and even better, you know what it won’t do. Local guarantees of no mutation and no side effects are beautiful. Instead of ‘we don’t mutate parameters around here, so I’ll be surprised if that happens’ we get ‘it is impossible for that to happen so I put it out of my mind.’ This is what functional programmers mean when they talk about reasoning about code. There is no probably about it.

Where we can use logical arguments and evidence, let’s. For everything else, there’s JavaScript.

Everything is a big circle. Inside are some blobs of nonsense and a nice rectangle of purely functional programs. Some of the area between FP and nonsense is useful.

Growth Loops: circular causality is real

A few hundred years ago, we decided that circular causality was a logical fallacy. All causes are linear. If something moves, then something else pushed it. If you want to see why, you have to look smaller; all forces derive from microscopic forces.

Yet as a human, I see circular causality everywhere.

  • autocatalytic loops in biology
  • self-reinforcing systems (few women enter or stay in tech because there are few women in tech)
  • romantic infatuation (he likes me, that feels good, I like him, that feels good to him, he likes me 🔄)
  • self-fulfilling prophecies (I expect to fail, so I probably will 🔄)
  • self-regulating systems (such as language)
  • cohesive groups (we help out family, then we feel more bonded, then we help out 🔄)
  • downward spirals (he’s on drugs, so we don’t talk to him, so he is isolated, so he does drugs 🔄)
  • virtuous cycles (it’s easier to make money when you have money 🔄)

These are not illusory. Circular causalities are strong forces in our lives. The systems we live in are self-perpetuating — otherwise they wouldn’t stay around. The circle may be initiated by some historical accident (like biological advantage in the first days of agriculture), but the reason it stays true is circular.

Family therapy recognizes this. (In the “soft” sciences you’re allowed to talk about what’s right in front of you but can’t be derived from atomic forces 😛.) When I have a bad day and snip at my spouse, he withdraws a bit; this hurts our connection, so I’m more likely to snip at him 🔄. When I wonder if my kid is hiding something, I snoop around; she sees this as an invasion of privacy, she starts hiding things 🔄. When my partner tells me something that bothers him, and I say “thank you for expressing that” and then change it, next time he’ll tell me earlier, and it’ll be easier to hear and fix 🔄.

Note that nobody is forcing anyone to do anything. This kind of causality acts on propensities, on likelihoods. We still have free will, but it takes more work to overcome tendencies that are built into us by these interactions.

Or at work: When I think a coworker doesn’t respect me, I make sarcastic remarks; each time he respects me less 🔄. As a team, when we learn together and accomplish something, we feel a sense of belonging; this leads us to feel safe, which makes it easier to learn together and accomplish more 🔄.

Some of these cycles are merely self-sustaining. Many spiral us further and further in a particular direction. These are growth loops, which Kent Beck describes: “the more it grows, the easier it is to grow more.” There is power for us in setting up, nourishing, or thwarting our own cycles. Growth loops are more powerful than individual, discrete incentives. The most supportive families, the most productive teams have them.

Because a growth loop moves a system in a particular direction, it’s more of a spiral than a circle. I want to draw a z-axis through it. Like snipping at my spouse:

a spiral toward disconnection, of me getting snippy and him getting sarcastic

As my spouse and I get snippy with each other, we spiral toward disconnection. When we talk early and welcome gentle feedback, we spiral toward connection. Whereas, when my team bonds and accomplishes things, we spiral toward belonging — with a side effect of accomplishments.

a spiral toward connection, of a team learning together and accomplishing stuff and bonding

I like to use this to explain why JavaScript is the most important programming language. It might be an inferior language by objective standards, but “objective standards” are like linear causality: limited. Reality is richer.

JavaScript started as the first language with a runtime built into the browser. This made it useful enough for people to learn it, and that’s key: a language is only useful if the know-how of using it is alive inside people. Then all these people create tools, resources, and businesses that make JavaScript more useful 🔄.

a spiral toward usefulness, from the first runtime to people learning and language improvements and tooling

Call this “the network effect” if you want to; network effects are one form of growth loop.

In our situation, JavaScript is the most useful language and it’s only getting more useful. It may not have the best syntax, or the best guarantees. It does have the runtime available to the most humans, the broadest community and a huge ecosystem. Thanks to all the web-based learning resources, it is also the most accessible language to learn.

When I start looking for these growth loops, I see them everywhere, even inside my head. I’m low on sleep, so I’m tired, so I don’t exercise, so I sleep poorly, so I’m tired 🔄. I’m not sleeping, so I get mad at myself for being awake, which is agitating and makes it harder to sleep 🔄. Once I recognize these, I can intervene. I notice that what I feel like doing and what will make me happy are not the same things. I step back from my emotions and feel compassion for myself and then let them go instead of feeding them. I stop negative loops and nourish positive ones.

Circular causality is real, and it is powerful. In biology, in our lives, and in our teams, it feels stronger than linear causality; it can override individual competition and incentives. It forms ecosystems and symmathesies and cultures. Linear causality is valuable to understand, because its consequences are universal, while every loop is contextual. But can we stop talking like the only legitimate explanations start from atoms? We influence each other at every level of systems. Circular causality is familiar to us. I want to get better at seeing it and harvesting it in my work.

Systems and context at THAT Conference

It’s all that

THAT Conference is not THOSE conferences. It’s about the developer as more than a single unit: this year, in multiple ways.

I talked about our team as a system — more than a system, a symmathesy. Cory House said that if you want to change your life, change your systems. As humans, our greatest power lies in changing ourselves by changing our environment. It’s more effective than willpower.

Cory and his family on the grassy stage

Many developers brought family with them; THAT conference includes sessions for kids and partners. It takes place in the Kalahari resort in Wisconsin Dells. My kids spent most of their time in the water parks. Socialization at this conference was different: even though fewer than half of attendees brought family with them, it changes the atmosphere. There’s a reminder that we are more than individuals, and the world will go on after we are gone.

my friend Emma, daughter Linda, and me at the outdoor water park

Technical sessions broaden perspective. Joe Morgan put JavaScript coding styles in perspective: yes, they’re evolving and we have more options to craft readable code. But what is “readable” depends on the people and culture of your team. There are no absolutes, and it is not all about me.

Brandon Minnick told us the nitty-gritty of async/await in C#, and how to do things right. I learned that by default, all the code in an async function (except calls with await) runs on the same thread. This is not the case in Node, which messes with the thread-local variables we use for logging. But in C# it’s easy to lose exceptions entirely; the generated code swallows them. This makes me appreciate UnhandledPromiseException.

Ryan Niemeyer gave us 35 tips and tricks for VSCode. I love this IDE because it is useful right away, and sweetly customizable when you’re ready. Since this session, I’ve got FiraCode set up, added some custom snippets for common imports, enabled GitLens for subtle in-line attributions, and changed several lines in a file simultaneously using multicursor. And now I can “add suggested import” without clicking the little light bulb: it’s cmd-. to bring up the “code actions” menu for arrow keys. Configuring my IDE is a tiny example of setting up my system to direct me toward better work.

Then there was the part where my kids and I goaded each other into the scary water slides. They start vertical. They count down “3, 2, 1, Launch,” the floor drops out from under you, and you fall into the whooshy tube of water. I am proud to have lived through this.

From personalizing your IDE, to knowing your programming language, to agreeing with your team on a shared style, our environment has a big effect on us.

McLuhan’s Law says: We shape our tools, and our tools shape us. This is nowhere more true than in programming, where our tools are programs and therefore malleable.

But our tools aren’t everything: we also shape our environment in whom we hang around, whom we listen to. Conferences are a great tool for broadening this. THAT Conference is an unusually wholesome general-programming conference, and I’m very happy to have spoken there. My daughters are also ready to go back (but not to do that scary water slide again).

Do Things Right with npm install

Lately I’ve been wrestling with npm. Here are some rules I’ve learned:

Use `npm ci` rather than `npm install`

`npm ci` will bring down exactly the dependencies specified in package-lock.json. `npm install` does more than that: it also tries to update some libraries to a more recent version. Sometimes it updates URLs or other noise in package.json so that my `git status` is dirty. Sometimes it dedupes. Sometimes it sticks with the version you have lying around. I haven’t figured it out; it seems to depend on the current circumstances on my file system.

Now I only use `npm install` if I specifically want to change the dependencies in my filesystem.

Use `npm install --save-exact`

Especially for snapshots. Semver does not work for snapshots or branches or anything but releases, and npm only works with semver. If you are not using a release (if you publish with build tags or branch tags or anything like that), do not give npm any sort of flexibility. It will not work. Specify a precise version, or it will give you nasty surprises, like deciding some alphabetically-later branch is better than the master-branch version you specified.
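As a sketch of the difference in package.json (the package and version here are just examples), `--save-exact` writes an entry with no range prefix:

```json
{
  "dependencies": {
    "chalk": "2.4.1"
  }
}
```

A plain `npm install --save` would instead write `"chalk": "^2.4.1"`, and the caret gives npm permission to pick any 2.x version at or above 2.4.1. The exact version gives it no flexibility, and no chance to surprise you.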

Use `npm view` to check the status of a library

This is quite useful. Try `npm view <package>` and it brings to your command line the info you can get from the npm website. You can ask it for specific fields. To get the latest version of chalk:

$ npm view chalk dist-tags.latest
2.4.1

If you want to do anything programmatic with this info, the “do things right” flag for `npm view` is `--json`.

Try `npm ls` but then dig around on the filesystem

`npm ls` is really cool for exploring the dependency tree; it shows the whole thing to you. You can see where you’re getting a specific library with `npm ls <package>`, except that it doesn’t always work. In the end, I dig around in my node_modules directory, using something like `find . -name <package>` to look for the real thing.

Other times I use my little node-dependency dungeon explorer game to see what version of stuff is where. 

These are just a few of the nasty surprises I’ve found moving from Java to TypeScript, from maven dependencies to npm. Dependency management is an unsolved problem, and the people working on npm have made huge improvements in the last few years. I look forward to more.

Understanding Promises in JavaScript (with yarn and Legos)

TL;DR: creating a Promise splits off another path of execution, and the Promise object represents the end of that path. Calling .then adds code to the end of that path.

You can think of your program’s execution as following a piece of yarn. This video illustrates the difference between a synchronous program and the same program using Promises:

Promises let you be explicit about what needs to happen after what, while giving you more flexibility than “each of these things happens one at a time in this order” (the default flow of a simple synchronous program).

The negative is that when you want to specify “do this after that Promise,” you have to package up that code and pass it to .then(). The Promise object holds the end of the yarn representing its path of execution; .then() ties more code onto the end and returns the new end.

See this in the readConfig function, which reads a file and parses its contents. The synchronous version executes on the program’s usual path of execution: readFileSync retrieves some bits, and then JSON.parse turns them into a useful object.

synchronous: one piece of yarn proceeds straight down the code.

In the version with promises, readConfig returns immediately, but what it returns is the end of a piece of string. It’s a piece of string that includes readFile, which fetches some bits; tied on by .then() is JSON.parse, which turns those bits into a useful object.

promises: the end of a string is returned, an orange piece tied on to a dark blue piece

The useful object will be available at the end of the orange string to whatever code gets tied on to it later.

Promises beat callbacks in this respect: when you start up the asynchronous task, you don’t have to provide alllll the code that needs to execute after it. You can add more later, as long as you keep hold of the end of the string.

Don’t lose the end of the string! If you don’t need it to add any more code, tie the string off neatly with .catch() — otherwise an error might come out of a stray end and mess up your program. (I could do another video on that.)

Promises don’t beat callbacks in that you still have to wrap subsequent code up into a function. It gets messy when you have .then() calls within .then() calls. But wait! Don’t get discouraged!

In TypeScript and ES2017, we can write asynchronous code in the same simple format using async and await. While the code looks almost the same as the synchronous version, the paths of execution are more like the Promises one.

async function: just as many pieces of yarn as the Promises, with no calls to .then()

The async function returns immediately — don’t be fooled by that return statement way at the end. It splits off a path of execution, which does work (here, reading the file) until it hits the await keyword. The rest of the code (parsing) becomes another piece of string. await ties the strings together just like .then() (except way prettier). At the end of an async function is a return statement, which supplies the value that will come out the end of the string. An async function always returns a Promise.

Promises give you more control, so they give you more to think about. This means they’ll always be more complicated than synchronous code. With async and await we get both control and clarity: what Avdi calls “straight line code that just thunks itself onto the work queue whenever it gets stuck.” Don’t fear Promises, do use TypeScript, and do keep hold of the ends of your strings.

Dictionary Objects in JavaScript and TypeScript

TL;DR: when using an object as a dictionary in TypeScript/ES6, iterate through it using `Object.keys()`.

Coming from statically typed languages, I keep looking for a Map or Dict type in TypeScript (or JavaScript). People use objects for this, though. Objects have key-value pairs in them, and you can add them and delete them and declare them statically and it looks nice, especially in literals.

const dictionary = {
   impute: "attribute to",
   weft: "the holding-still strings on a loom",
}

But using a JavaScript object as a dictionary has its quirks.

There are three ways to iterate through the values in the object. (This is TypeScript syntax.)

Flip through the enumerable keys defined on an object:

for (const key of Object.keys(dictionary)) {
   const value = dictionary[key]
   console.log(`${key} -> ${value}`)
}

This one makes the most sense to me; it’s what I expect from a dictionary. It flips through the values in an array of the keys defined on that object directly, no prototypal inheritance considered. This is how JSON.stringify() prints your object.

Flip through the enumerable keys defined on that object and its prototype chain:

for (const key in dictionary) {
   const value = dictionary[key]
   console.log(`${key} -> ${value}`)
}

This is the easiest one to write. It flips through the keys defined on the object and its prototype chain. If you’re using an ordinary object for this, and no one has done anything bizarre like add an enumerable property to Object, it’s fine. tslint hates it though; it bugs me about “for(…in…) statements must be filtered with an if statement.” tslint is like “OMG, you do not know what is on that thing’s prototype chain it could be ANYTHING”

I find it backwards that for(…in…) flips through property names of an object, while for(…of…) flips through values in an array. This confuses me daily in TypeScript. If you accidentally use for(…of…) instead of for(…in…) on an object, then you’ll see 0 iterations of your loop. Very sneaky. If you accidentally use for(…in…) on an array, you get the indices instead of the values, also confusing. TypeScript and tslint don’t warn about either error, as far as I can tell. 😦
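A quick demonstration of the array mistake, in plain JavaScript (the array contents are made up):

```javascript
const arr = ["carrot", "eggs"];

// for...in on an array gives the indices, as strings
const inKeys = [];
for (const k in arr) {
  inKeys.push(k);
}
// inKeys is ["0", "1"], not the values

// for...of on an array gives the values
const ofValues = [];
for (const v of arr) {
  ofValues.push(v);
}
// ofValues is ["carrot", "eggs"]
```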

Flip through the enumerable and non-enumerable keys defined on that object:

for (const key of Object.getOwnPropertyNames(dictionary)) {
   const value = dictionary[key]
   console.log(`${key} -> ${value}`)
}

This one flips through only keys on the object, not its prototype chain, and also gives you the names of non-enumerable properties. You probably don’t want those.

What are non-enumerable properties?

Conceptually, they’re properties that don’t make sense to flip through, that we don’t want JSON.stringify() to look at. They’re hidden from for(…in…) iteration and from Object.keys(). You can still access them on the object. For instance, constructors of TypeScript classes are non-enumerable properties. Methods on built-in types like Array and object are non-enumerable. They stay out of the way.

When would we want to flip through them, like in Object.getOwnPropertyNames()?
I don’t know, maybe for debugging.

Why make a non-enumerable property?
I hit a use case for this today: serializing an instance of a class with recursive fields. JSON.stringify() can’t print recursive structures.

Side quest: Making recursive objects printable

In TypeScript, every field in a class instance (including any inherited field) is an enumerable property, returned from Object.keys() and printed by JSON.stringify(). See this TreeNode class that tracks its children, and its children track it:

class TreeNode {
  private _parent: TreeNode;
  public children: TreeNode[] = [];

  public constructor(public readonly value: string) {
  }
}

Printing an instance of TreeNode gives me: `TypeError: Converting circular structure to JSON` waah 😦

Here’s a tricky way to say “Hey JavaScript, don’t print that _parent field”. Explicitly override its enumerable-ness in the constructor.


class TreeNode {
  private _parent: TreeNode;
  public children: TreeNode[] = [];

  public constructor(public readonly value: string) {
    Object.defineProperty(this, "_parent", { enumerable: false });
  }
}

We can get tricky with TypeScript class fields. After all, they get tricky with us.

Properties on a class instance

In TypeScript, class instance fields show up in Object.keys() and accessor-based properties (like with getters and setters) do not. They’re properties on the prototype, which is the class itself. So if you want to see accessor properties, use for(…in…) on class instances. That gets the enumerable properties from the prototype chain. Watch out: these include methods.

Why iterate through the properties on a class? I don’t know, maybe for debugging again. If you do it, I suggest skipping methods. This makes tslint happy because it’s an if statement:

for (const propertyName in classInstance) {
  if (typeof classInstance[propertyName] !== "function") {
    console.log(`${propertyName}=${classInstance[propertyName]}`);
  }
}

Recommendations

If you have a class instance, access its properties with dot notation, like treeNode.children. That way TypeScript can help you avoid mistakes. If you have a dictionary object, access its properties with index notation, like dictionary["impute"] (and turn off the angry tslint rule). Class instances have specific types; dictionary objects are type object. Access the contents of a dictionary using Object.keys().

Functional principles come together in GraphQL at React Rally

Sometimes multiple pieces of the industry converge on an idea from
different directions. This is a sign the idea is important.

Yesterday I spoke at React Rally (video coming later) about the
confluence of React and Flux in the front end with functional
programming in the back end. Both embody principles of composition, declarative
style, isolation, and unidirectional flow of data.

In particular, multiple separate solutions focused on:

  •   components declare what data they need
  •   these small queries compose into one large query, one GET request
  •   the backend gathers everything and responds with data in the requested format
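The first two steps can be sketched in plain JavaScript (this is an illustration, not GraphQL’s actual client API; the fragment shapes and the compose helper are made up):

```javascript
// Two components declare the fields they need as plain objects.
const eventInfo  = { type: true, datetime: true, actor: { name: true } };
const avatarInfo = { actor: { image_uri: true } };

// A naive deep merge composes the small queries into one large one.
function compose(a, b) {
  const merged = { ...a };
  for (const key of Object.keys(b)) {
    merged[key] =
      typeof merged[key] === "object" && typeof b[key] === "object"
        ? compose(merged[key], b[key]) // merge nested field sets
        : b[key];
  }
  return merged;
}

// compose(eventInfo, avatarInfo) asks for type, datetime, actor.name,
// and actor.image_uri in one structure: one query, one GET request.
```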

This process is followed by Falcor from Netflix (talk by Brian Hunt) and GraphQL from Facebook (talks by Lee Byron and Nick Schrock, videos later). Falcor adds caching on the client, with cache
invalidation defined by the server (smart; since the server owns
the data, it should own the caching policy). GraphQL adds an IDE for
queries, called GraphiQL (sounds like “graphical”), released as
open source for the occasion! The GraphQL server provides introspection
into the types supported by its query language. GraphiQL uses this to let the developer
work with live, dynamically fetched queries. This lets us explore the available
data. It kicks butt.

Here’s an example of GraphQL in action. One React component in a GitHub client might specify that it needs
certain information about each event (syntax is approximate):

{
  event {
    type,
    datetime,
    actor {
      name
    }
  }
}

and another component might ask for different information:

{
  event {
    actor {
      image_uri
    }
  }
}

The parent component assembles these and adds context, including
selection criteria:

{
  repository(owner: "org", name: "gameotron") {
    event(first: 30) {
      type,
      datetime,
      actor {
        name,
        image_url
      }
    }
  }
}

Behind the scenes, the server might make one call to retrieve the repository,
another to retrieve the events, and another to retrieve each actor’s
data. Both GraphQL and Falcor see the query server as an abstraction
layer over existing code. GraphQL can stand in front of a REST
interface, for instance. Each piece of data can be
fetched with a separate call to a separate microservice, executed in
parallel and assembled into the structure the client wants. One GraphQL
server can support many versions of many applications, since the
structure of returned data is controlled by the client.
The GraphQL server assembles all the
results into a response that parallels the structure of the client’s
query:

{
  "repository" : {
    "events" : [{
      "type" : "PushEvent",
      "datetime" : "2015-08-25Z23:24:15",
      "actor" : {
        "name" : "jessitron",
        "image_url" : "https://some_cute_pic"
      }
    }
    …]
  }
}

It’s like this:

The query is built as a composition of the queries from all the components. It goes to the server. The query server spreads out into as many other calls as needed to retrieve exactly the data requested.

The query is composed like a fan-in of all the components’
desires. On the server this fans out to as many back-end calls as
needed. The response is isomorphic to the query. The client then spreads
the response back out to the components. This architecture supports
composition in the client and modularity on the server.

The server takes responses from whatever other services it had to call, assembles that into the data structure specified in the query, and returns that to the client. The client disseminates the data through the component tree.

This happens to minimize network traffic between the client and server.
That’s nice, but what excites me are these lovely declarative queries that
compose, the data flowing from the parent component into all the
children, and the isolation of data requests to one place. The exchange
of data is clear. I also love the query server as an abstraction over
existing services; store the data bits in the way that’s most convenient
for each part. Assembly sold separately.

Seeing similar architecture in Falcor and GraphQL, as well as in
ClojureScript and Om[1] earlier in the year, demonstrates that this is
important in a general case. And it’s totally compatible with
microservices! After React Rally, I’m excited about where front ends are
headed.


[1] David Nolen spoke about this process in ClojureScript at Craft Conf
earlier this year. [LINK]

An Opening Example of Elm: building HTML by parsing parameters

I never enjoyed front-end development, until I found Elm. JavaScript with its `undefined`, its untyped functions, its widely scoped mutable variables. It’s like Play-Doh, it’s so malleable. And when I try to make a sculpture, the arms fall off. It takes a lot of skill to make Play-Doh look good.

Then Richard talked me into trying Elm. Elm is more like Lego Technics. Fifteen years ago, I bought and built a Lego Technics space shuttle, and twelve years ago I gave up on getting that thing apart. It’s still in my attic. Getting those pieces to fit together takes some work, but once you get there, they’re solid. You’ll never get “method not found on `undefined`” from your Elm code.

Elm is a front-end, typed functional language; it compiles to JavaScript for use in the browser. It’s a young language (as of 2015), full of opportunity and surprises. My biggest surprise so far: I do like front-end programming!

To guarantee that you never get `undefined` and never call a method that doesn’t exist, all Elm functions are Data in, Data out. All data is immutable. All calls to the outside world are isolated. Want to hit the server? Want to call a JavaScript library? That happens through a port. Ports are declared in the program’s main module, so they can never hide deep in the bowels of components. Logic is in one place (Elm), interactions in another.

one section (Elm) has business logic and is data-in, data-out. It has little ports to another section (JavaScript) that can read input, write files, draw UI. That section blurs into the whole world, including the user.


This post describes a static Elm program with one tiny port to the outside world. It illustrates the structure of a static page in Elm. Code is here, and you can see the page in action here. The program parses the parameters in the URL’s query string and displays them in an HTML table.[1]

All web pages start with the HTML source:


<html>
  <head>
    <title>URL Parameters in Elm</title>
    <!-- stylesheet link elided -->
    <script src="elm.js" type="text/javascript"></script>
  </head>
  <body>
    <script type="text/javascript">
      var app = Elm.fullscreen(Elm.UrlParams,
                               { windowLocationSearch:
                                   window.location.search
                               });
    </script>
  </body>
</html>

This brings in my compiled Elm program and some CSS. Then it calls Elm’s function to start the app, giving it the name of my module which contains main, and extra parameters, using JavaScript’s access to the URL search string.

Elm looks for the main function in my module. The output of this function can be a few different types, and this program uses the simplest one: Html. This type is Elm’s representation of HTML output, its virtual DOM.

module UrlParams where

import ParameterTable exposing (view, init)
import Html exposing (Html)

main : Html
main = view (init windowLocationSearch)

port windowLocationSearch : String

The extra parameters passed from JavaScript arrive in the windowLocationSearch port. This is the simplest kind of port: input received once at startup. Its type is simply String. This program uses one custom Elm component, ParameterTable. The main function uses the component’s view function to render, and passes it a model constructed by the component’s init method.

Somewhere inside the JavaScript call to Elm.fullscreen, Elm calls the main function in UrlParams, converts the Html output into real DOM elements, and renders that in the browser. Since this is a static application, this happens once. More interesting Elm apps have different return types from main, but that’s another post.

From here, the data flow of this Elm program looks like this:

The three layers are: a main module, a component, and a library of functions.
The main module has one input port for the params.  That String is transformed by init into a Model, which is transformed by View into Html. The Html is returned by main and rendered in the browser. This is the smallest useful form of the Elm Architecture that I came up with.

Here’s a piece of the ParameterTable module:

module ParameterTable(view, init) where

import Html exposing (Html)
import UrlParameterParser exposing (ParseResult(..), parseSearchString)

-- MODEL
type alias Model = { tableData: ParseResult }

init: String -> Model
init windowLocationSearch =
  { tableData = parseSearchString windowLocationSearch }

-- VIEW
view : Model -> Html
view model =
  Html.div …

The rest of the code has supporting functions and details of the view. These pieces (Model, init, and view) occur over and over in Elm. Often the Model of one component is composed from the Models of subcomponents, and the same with init and view functions.[2]

All the Elm files are transformed by elm-make into elm.js. Then index.html imports elm.js and calls its Elm.fullscreen function, passing UrlParams as the main module and window.location.search in the extra parameter. And so, a static (but not always the same) web page is created from data-in, data-out Elm functions. And I am a happy programmer.


[1] Apparently there’s not a built-in thing in JavaScript for parsing these. Which is shocking. I refused to write such a thing in JavaScript (where by “write” I mean “copy from StackOverflow”), so I wrote it in Elm.

[2] Ditto with update and Action, but that’s out of scope. This post is about a static page.

Left to right, top to bottom

TL;DR – Clojure’s threading macro keeps code in a legible order, and it’s more extensible than methods.

When we create methods in classes, we like that we’re grouping operations with related data. It’s a useful organizational scheme. There’s another reason to like methods: they put the code in an order that’s easy to read. In the old days it might read top-to-bottom, with subject and then verb and then the objects of the verb: sale.complete(99).

With a fluent interface that supports immutability, methods still give us a pleasing left-to-right ordering:
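As a sketch (the names here are hypothetical), a fluent, immutable sale API in JavaScript might read:

```javascript
// A fluent interface: each call returns a new value, so the code reads
// left to right, subject first, then verbs in the order they happen.
function makeSale(customer, items) {
  return {
    customer: customer,
    items: items,
    // Returns a fresh sale rather than mutating this one.
    addItems: function(more) { return makeSale(customer, items.concat(more)); },
    complete: function(num) {
      return "Sale " + num + ": selling " + items + " to " + customer;
    }
  };
}

var message = makeSale("Fred", ["carrot"]).addItems(["eggs"]).complete(99);
// message === "Sale 99: selling carrot,eggs to Fred"
```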
Methods look great, but it’s hard to add new ones. Maybe I sometimes want to add functionality for returns, or to print a gift receipt. With functions, there is no limit to this. The secret is: methods are the same thing as functions, except with an extra, secret parameter called this.
For example, consider JavaScript. (full gist) A method there can be any old function, and it can use properties of this.


var completeSale = function(num) {
  console.log("Sale " + num + ": selling "
    + this.items + " to " + this.customer);
};

Give that value to an object property, and poof, the property is a method:

var sale = {
  customer: "Fred",
  items: ["carrot","eggs"],
  complete: completeSale
};
sale.complete(99);
// Sale 99: selling carrot,eggs to Fred

Or, call the function directly, and the first argument plays the role of “this”:

completeSale.call(sale, 100);
// Sale 100: selling carrot,eggs to Fred

In Scala we can create methods or functions for any operation, and still organize them right along with the data. I can choose between a method in the class:
class Sale(…) {
   def complete(num: Int) {…}
}
or a function in the companion object:
object Sale {
   def complete(sale: Sale, num: Int) {…}
}
Here, the function in the companion object can even access private members of the class[1]. The latter style is more functional. I like writing functions instead of methods because (1) all input is explicit and (2) I can add more functions as needed, and only as needed, and without jumbling up the two styles. When I write functions about data, instead of attaching functions to data, I can import the functions I need and no more. Methods are always on a class, whether I like it or not.
There’s a serious disadvantage to the function-with-explicit-parameter choice, though. Instead of a nice left-to-right reading style, we get:
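A JavaScript sketch of the standalone-function style (assuming a sale is a plain object, with the addCustomer, addItems, and complete functions from above):

```javascript
// Standalone functions that take the sale as their first argument.
function addCustomer(sale, name) { return Object.assign({}, sale, { customer: name }); }
function addItems(sale, items) { return Object.assign({}, sale, { items: items }); }
function complete(sale, num) {
  return "Sale " + num + ": selling " + sale.items + " to " + sale.customer;
}

// The calls nest: complete happens last but appears first,
// and the arguments sit far from the functions they belong to.
var message = complete(addItems(addCustomer({}, "Fred"), ["carrot", "eggs"]), 99);
// message === "Sale 99: selling carrot,eggs to Fred"
```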

It’s all inside-out-looking! What happens first is in the middle, and the objects are separated from the verbs they serve. Blech! It sucks that function application reads inside-out, right-to-left. The code is hard to follow.

We want the output of addCustomer to go to addItems, and the output of addItems to go to complete. Can I do this in a readable order? I don’t want to stuff all my functions into the class as methods.
In Scala, I wind up with intermediate values: each step’s result gets a name, and the next function takes that name explicitly. That reads top-down, and the arguments aren’t spread out all over the place. But I still have to draw lines, mentally, between what goes where. And sometimes I screw it up.

Clojure has the ideal solution. It’s called the threading macro. It has a terrible name, because there’s no relation to threads, nothing asynchronous. Instead, it’s about cramming the output of one function into the first argument of the next. If addCustomer, addItems, and complete are all functions which take a sale as the first parameter, the threading macro says, “Start with this. Cram it into the first argument of the function call, and take that result and cram it into the first argument of the next function call, and so on.” The result of the last operation comes out. (full gist)


(-> {}
    (addCustomer "Fred")
    (addItems ["carrot" "eggs"])
    (complete 99))
;; Sale 99 : selling [carrot eggs] to Fred


This has a clear top-down ordering to it. It’s subject, verb, object. It’s a great substitute for methods. It’s kinda like stitching the data in where it belongs, weaving the operations together. Maybe that’s why it’s called the threading macro. (I would have called it cramming instead.)

Clojure’s prefix notation has a reputation for being harder to read, but this changes that. The threading macro pulls the subject out of the first function argument and puts it at the top, at the beginning of the sentence. I wish Scala had this!
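The same first-argument cramming can be sketched in JavaScript with a small helper (the thread function here is hypothetical, not a library API):

```javascript
// Hypothetical thread() helper: like Clojure's ->, it feeds the running
// value in as the first argument of each step [fn, ...extraArgs].
function thread(value, ...steps) {
  return steps.reduce(function(acc, step) {
    var fn = step[0];
    return fn.apply(null, [acc].concat(step.slice(1)));
  }, value);
}

function addCustomer(sale, name) { return Object.assign({}, sale, { customer: name }); }
function addItems(sale, items) { return Object.assign({}, sale, { items: items }); }
function complete(sale, num) {
  return "Sale " + num + ": selling " + sale.items + " to " + sale.customer;
}

// Reads top-down: start with a sale, then each verb in order.
var message = thread({},
  [addCustomer, "Fred"],
  [addItems, ["carrot", "eggs"]],
  [complete, 99]);
// message === "Sale 99: selling carrot,eggs to Fred"
```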
—————–
Encore:
In case you’re still interested, here’s a second example: list processing.

Methods in Scala look nice, chaining left to right, but they’re not extensible. If these were functions, the calls would nest inside-out again, which is hideous. So I wind up with intermediate variables, one per step. That is easy to mess up; I have to get the intermediate variables right.

In Haskell it’s function composition. That reads backwards, right-to-left, but it does keep the objects with the verbs.

Notice that in Haskell the map, filter, reduce functions take the data as their last parameter.[2] This is also the case in Clojure, so we can use the last-parameter threading macro. It has the cramming effect of shoving the previous result into the last parameter:
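The same data-last pipeline shape can be sketched in JavaScript (the pipe helper and the curried step functions are hypothetical names, not a library API):

```javascript
// Hypothetical pipe() helper: feed a value through a series of
// one-argument functions, left to right.
function pipe(value, ...fns) {
  return fns.reduce(function(acc, fn) { return fn(acc); }, value);
}

// Data-last steps, curried so the collection arrives as the final argument.
var filterWith = function(p) { return function(xs) { return xs.filter(p); }; };
var mapWith = function(f) { return function(xs) { return xs.map(f); }; };
var sumOf = function(xs) { return xs.reduce(function(a, b) { return a + b; }, 0); };

// Reads top-down: the data first, then each operation in order.
var total = pipe([1, 2, 3, 4],
  filterWith(function(n) { return n % 2 === 0; }),
  mapWith(function(n) { return n * 10; }),
  sumOf);
// total === 60
```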

Once again, Clojure gives us a top-down, subject-verb-object form. See? The Lisp is perfectly readable, once you know which paths to twist your brain down.

Update: As @ppog_penguin reminded me, F# has the best syntax of all. Its pipe operator acts a lot like the Unix pipe, and sends data into the last parameter.
F# is my favorite!
————
[1] technical detail: the companion object can’t see members that are private[this]
[2] technical detail: all functions in Haskell take one parameter; applying map to a predicate returns a function of one parameter that expects the list.

Return of the Stateful Client

We used to write thick clients, fat happy Swing apps that executed application logic and business logic on the user’s computer, and connected to the server mostly for data access. Then we moved toward service-oriented architecture and the thin, browser-based client. This solved all our deployment problems, and it moved the business logic back to the server.

Now that browsers are more powerful, more of the application logic is moving back into the front end. Instead of calling the client “thick,” we now call it “rich.” Where is this going? Toward stateful, browser-based apps. Forget maintaining a session on the server — store all the state in JavaScript.

Think about apps that run in a browser, but aren’t dependent on constant network access.

Now think about developing these apps. Think about reloading to bring in a JavaScript or CSS change… and starting over. With all the state in the JavaScript, reloading a page means restarting the app. More than that: reinstalling the app. That’s harder than reloading one thin page to see changes, or running that Swing app from your IDE. So what’s to be done?

Bring the IDE into the browser. With even more JavaScript, dynamically reload pieces of the application.

Last night at STLJS, Bill Edley from Technical Pursuits Inc. demonstrated his TIBET library, which makes the browser into the IDE. Change your application while it’s running. Type commands in the little in-app REPL. The library also handles asynchronously getting data back to the server, abstracting temporary disconnections (motto: “There is no server, there is no server, there is no server”). It provides a richer OO framework for JavaScript as well.
What’s the negative? Three megabytes. Yikes! And it’s not out yet, so you can’t play with it.

But hey, the thick rich client is back, still functional when the server is unavailable, and it is integrated into the IDE (Smalltalk lives!). Just think, no session data on the server — that smells like freedom. We’re bringing the code to the data again, where this time the data is user input. This bodes well for responsiveness.