"In cognitive science, we only use the lambda-calculus model of computation," says Dr Philipp Koralus (Phd, Philosophy & Neuroscience, on his way to be a lecturer at Oxford). "We want to talk about what the system is doing, and abstract the how."
Two models of computation: lambda calculus and the Turing machine. Teacher and student, Church and Turing.
Years later, engineering catches up and we start building computers.
John McCarthy favored an encoding of the lambda calculus. John von Neumann built a Turing machine, with a separation of processing and data. "Von Neumann won because he was smarter," says Barbara Liskov. Von Neumann's particular type of intelligence was a huge working memory: he could hold a crapton of state in his head and manipulate it. Practical computation took the Turing machine route, stateful and imperative.
With great effort, we could port some pieces of solutions to other problems.
Meanwhile, lambda calculus and functional languages fell to academia. Now they're coming back. Memory and processors are plentiful enough that we can afford to force a different computational model onto von Neumann's architecture. But why would we?
What's the difference that makes the functional model of computation more useful to theoretical cognitive scientists, who are modeling the reasoning method of the human mind? Philipp says it's the semantics and the systematicity. "You can't get arbitrary novelty and recombination without a formal system."
But what about reuse in imperative languages? We do manage this. "You can get similarity in templates," says Philipp. We can have design patterns, frameworks, and libraries. We can attend conferences and learn the tools people have crafted so we can recognize when our problem might mostly fit. But for arbitrary novelty, we have to get down to the language and code it ourselves. With our fingers.
I used to think that FP concepts like monads and monoids were like OO design patterns, only more abstract. Now I see that they are fundamentally different. If code can be represented in a formal symbolic notation, then any piece can be broken away and used differently in a different place. Like you can pull any subexpression out of a mathematical equation and think about it on its own.
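A small sketch of that extraction, in Python (my illustration, not Philipp's, with made-up function names): when code is built from pure expressions, any subexpression can be pulled out, named, and reused elsewhere without changing the meaning of the whole, exactly like factoring a term out of an equation.

```python
# A pure function: its result depends only on its inputs,
# so every subexpression inside it is independently meaningful.
def total_price(quantity, unit_price, tax_rate):
    return quantity * unit_price * (1 + tax_rate)

# Pull the tax-multiplier subexpression out and name it.
def tax_multiplier(tax_rate):
    return 1 + tax_rate

# Substituting the named subexpression back in changes nothing:
# the refactored function means exactly what the original meant.
def total_price_refactored(quantity, unit_price, tax_rate):
    return quantity * unit_price * tax_multiplier(tax_rate)

assert total_price(3, 10.0, 0.2) == total_price_refactored(3, 10.0, 0.2)
```

Try that with a method that mutates a field halfway through, and the extracted piece drags hidden state along with it; the subexpression is no longer free-standing.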
It's the difference between a neural network, which can learn to recognize faces, and a symbolic representation of faces that says, "This is a nose. This is an eye. This is the distance between the nose and eye." The symbolic system gives us meaningful dimensions we can use independently and learn from. The neural network tells us "You" or "Not you."
Then, composition. In OO, our idea of composition is a has-a relationship, and the owner object must be written to accommodate its components and pass messages through to them. Contrast this with functional composition, which works like this:
find . -name '*.txt' | xargs cat | grep 'ERROR' | cut -d ':' -f 2 | sort | uniq -c
Each symbol here ("find", "cat", "grep", "sort") has an independent meaning and is useful alone. Functional composition is a fits-together relationship. Neither part knows anything about the others. We piece them together to serve arbitrary purposes. Best of all, we don't have to be von Neumann to understand any given piece or conjunction.
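The same fits-together shape is easy to sketch in Python (my illustration; the function names mimic the shell commands above): small functions that know nothing about each other, pieced together for a purpose none of them anticipated.

```python
from collections import Counter

# Each function has an independent meaning and is useful alone.
def grep(pattern, lines):
    return [line for line in lines if pattern in line]

def cut(delim, field, lines):
    return [line.split(delim)[field] for line in lines]

def count(lines):
    return Counter(lines)

log = ["app:ERROR:disk full", "app:INFO:started", "db:ERROR:disk full"]

# Fits-together composition: no function was written to
# accommodate the others, yet they pipeline cleanly.
result = count(cut(":", 2, grep("ERROR", log)))
# result == Counter({"disk full": 2})
```

No "owner" object passes messages through; the pieces meet only at their inputs and outputs.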
Now that the field of software development recognizes the value of a more "tell us what it does" declarative and recombinable-without-template-matching computational model, some of us are struggling. We grew up learning to think like Von Neumann.
I used to say at interviews that my best skill was holding the whole system in my head. Now I recognize that this was a crappy way to reason. It doesn't scale, and I can't pass that understanding on to others as a whole. The wiser goal is to eliminate the need to load everything into one head. It's going to be a tough transition from the relatively informal structure I'm used to, but now I'm sure it's worth it. I only hope I can take some of you with me.