In defense of rationality and dynamic programming

Karl Popper defines rationality as: basing beliefs on logical arguments and evidence. Irrationality is everything else.

He also defines comprehensive rationality as: only logical arguments and evidence are a valid basis for belief. But this belief itself can only be accepted by choice or faith, so comprehensive rationality is self-contradictory. It also excludes a lot of useful knowledge. This, he says, is worse than irrationality.

It reminds me of some arguments for typed functional programming. We must have proof our programs are correct! We must make incorrect programs impossible to represent in code that compiles! But this excludes a lot of useful programs. And do we even know what ‘correct’ is? Not in UI development, that’s for sure.

Pure functional programming eschews side effects (like printing output or writing data). Yet it is these side effects that make programs useful, that let them impact the world. Therefore, exclusively pure functional programming is worse than irrationality (here, all dynamic, side-effecting programming).

Popper argues instead for critical rationality: Start with tentative faith in premises, and then apply logical argument and evidence to see if they hold up. Consider new possible tenets, and see whether they hold up better. This kind of rationality accepts that there is no absolute truth, no perfect knowledge, only better. Knowledge evolves through critical argument. We can’t get to Truth, but we can free ourselves from some falsehoods.

This works in programming too. Sometimes we do know what ‘correct’ is. In those cases, it’s extremely valuable to have code that you know is solid. You know what it does, and even better, you know what it won’t do. Local guarantees of no mutation and no side effects are beautiful. Instead of ‘we don’t mutate parameters around here, so I’ll be surprised if that happens’ we get ‘it is impossible for that to happen so I put it out of my mind.’ This is what functional programmers mean when they talk about reasoning about code. There is no probably about it.
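That guarantee is something a compiler can enforce. A minimal sketch in TypeScript (my illustration, not from the original post; the function and names are made up):

    // The ReadonlyArray type means this function cannot mutate the caller's
    // data. A reader doesn't have to trust a convention; the possibility is gone.
    function total(prices: ReadonlyArray<number>): number {
      return prices.reduce((sum, price) => sum + price, 0);
    }

    // total([1, 2, 3]) === 6, and the input array is provably untouched.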

Where we can use logical arguments and evidence, let’s. For everything else, there’s JavaScript.

[Image: Everything is a big circle. Inside are some blobs of nonsense and a nice rectangle of purely functional programs. Some of the area between FP and nonsense is useful.]

Direct aims, broad interests

You can want something, or you can just be interested in things.

On a dating site, you can decide what you want in a partner and filter people for that and message them. Or, you can find many things about people interesting, and look for any of these traits or hobbies, and ask about them.

In software, you can set an engagement metric and aim to move it. Or, you can aim to “be more useful,” think of many possible ways that could happen, and look for ones that you can try.

A danger of aiming for one metric is: in moving that needle, you may degrade essential properties. If your added information makes the page so busy that I can’t look at it, then your needle may move while the software becomes less useful.

In people, the world has more wonder in it than I can think to want. Wide interests invite widening surprise.

There is a place for purposive action (as Gregory Bateson calls it). For deliberately moving directly toward a goal. Maybe that place is limited to systems we can understand and predict.

I’ll be specific about my wants, in the small: I want to write this post. And open to whatever finds me, in the large: someone will subsume it in more interesting ideas.

Victory at life

In most (modern) board games, there’s a phase where you build an engine, and a phase where you use that engine to achieve victory. This is not explicit; it’s just that in the first part of the game you choose things that give you more power, while in the last few rounds you maximize victory points. And then you win!

For instance, in San Juan, you win with victory points. The core mechanic is: pay cards to build buildings. Some of the buildings give you lots of victory points, while others give you powers that help you get more cards.

At the start of the game, build only buildings that help you get more cards. Points don’t matter. Near the end of the game, build whatever gets you the most points. Cards are about to be useless. Only points matter.

In life, there are activities that build our engines, that grow ourselves or improve our circumstances. Some other activities are just winning.

Winning is looking at beautiful things, art or the faces of people. Winning is laughing while my children goof around together. It is playing music, dancing all-out, soaking up the sun.

Winning is also taking action to move the larger system, the country or the world, in a better direction. Donating money helps, but if we participate in a campaign, we get to experience the winning.

What gives you victory points at life? Those activities that give you the feeling, “Yeah. This is what we are here for.” For me, looking out a plane window during takeoff. Eating great food. Cuddling with my partner. Playing Beat Saber with great ardor.

We never know when our game will end. Cards will become useless. Victory is never useless, so collect some points every day.

Ascendency

What makes one system more organized than another? More developed, more … civilized? How can we measure advancement? In Ecology, the Ascendent Perspective, Robert Ulanowicz has an answer. Along the way, he answers even bigger questions, like: how do we reconcile the inexorable increase in entropy with the constant growth, learning, and making going on around us?

Concept: total system throughput

Take an ecosystem, or an economy. We can measure the size (magnitude) of the system by counting its activity. In the economy, this is GDP: how much money changes hands throughout the year? The same dollar might be spent over and over, and it counts toward GDP every time. In an ecosystem, we count carbon exchanges between species and stocks (like detritus on the ground). Or we count nitrogen, or phosphorus, or calories — whichever is the limiting factor for that system. If plankton gets eaten by a minnow which is eaten by a catfish, the carbon transferred counts every time.
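To make the counting concrete, here is a sketch in TypeScript (my own illustration; the flow matrix is hypothetical). Total system throughput is just the sum of every exchange:

    // flows[i][j] = amount (carbon, nitrogen, dollars...) transferred from
    // compartment i to compartment j over the period measured.
    // Every transfer counts, even when the same atom or dollar moves again.
    function totalSystemThroughput(flows: number[][]): number {
      return flows.flat().reduce((sum, t) => sum + t, 0);
    }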

The system grows in magnitude when more money/carbon/nitrogen enters it, or when more exchanges happen. But does this mean it’s more developed?

Concept: average mutual information

We count the transfers, the exchanges, to measure total system throughput. But are all transfers created equal? What about the patterns of movement?

Take this imaginary ecosystem that I just made up. There are some plants and some fish. Imagine the nitrogen flows from dirt to plants to fish to fish and back to dirt. I am not an ecologist.

[Image: At the bottom, dirt. Nitrogen goes into two plants, each of which goes into some minnows, which go to a fish, which goes to a bigger fish. Everybody contributes nitrogen back to the soil.]

There are some flows, and ecologists can measure them, with lots and lots of work.

Now imagine this same system, except all the flows are equally distributed. Every species is the same as any other; they all eat each other and poop equally.

[Image: The same components, but now each one is connected to every other by an arrow of the same thickness.]

The first picture looks more organized, right? In the second picture, it is maximally random where the nitrogen goes. The first ecosystem is more ordered. Nitrogen is moving through these pathways for reasons, not willy-nilly everywhere. Ulanowicz came up with a formula for quantifying the difference, and he calls it average mutual information (AMI). The part where the plants don’t eat the fish is informational: it says something about the system, about which pathways are more efficient.
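Roughly, AMI asks: knowing where a unit of nitrogen is now, how much does that tell you about where it goes next? Here is a sketch of the usual information-theoretic form, building on the throughput function above (my paraphrase of the math, not the book’s notation):

    // Average mutual information of the flow pattern, in bits.
    // rowSums[i] = everything leaving compartment i;
    // colSums[j] = everything entering compartment j.
    function averageMutualInformation(flows: number[][]): number {
      const T = totalSystemThroughput(flows);
      const rowSums = flows.map(row => row.reduce((a, b) => a + b, 0));
      const colSums = flows[0].map((_, j) =>
        flows.reduce((sum, row) => sum + row[j], 0));
      let ami = 0;
      for (let i = 0; i < flows.length; i++) {
        for (let j = 0; j < flows[i].length; j++) {
          const t = flows[i][j];
          if (t > 0) {
            // Flows concentrated along particular pathways add information;
            // flows spread evenly over every possible pathway add none.
            ami += (t / T) * Math.log2((t * T) / (rowSums[i] * colSums[j]));
          }
        }
      }
      return ami;
    }

With this formula, the evenly-connected second picture comes out to zero bits; the structured food web comes out positive.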

Concept: ascendency

Both the total system throughput and the un-randomness in the system (AMI) contribute to its significance, its organization. So Ulanowicz multiplies these (or something similar; the math is in the book at a high level and in his other works at a detailed level) and that’s ascendency. How much does the system do, and how interestingly does it do it?
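In the sketch from above, that multiplication is one line (again my illustration; the book leaves the detailed math to Ulanowicz’s papers):

    // Ascendency: how much the system does, times how interestingly it does it.
    function ascendency(flows: number[][]): number {
      return totalSystemThroughput(flows) * averageMutualInformation(flows);
    }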

Concept: efficiency and overhead

For a given system throughput, a maximally ascendent system would have like one path in a circle. Only the most efficient path would be used. There wouldn’t even be two plants.

Thinking about it this way, you can divide the system throughput into the part that contributes to ascendency, and the rest. The rest is overhead. It’s all those extra pathways, which, what is even the point?

The point is flexibility. Resilience. A perfectly efficient system is maximally fragile. One disruption, one disease kills off the single plant, and the whole thing is done, everybody’s dead. No more flows, no more fishes.

Overhead, unpredictability, extra pathways: these are what keep the ecosystem alive and flourishing under changing conditions. They let it change without dying out.
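Ulanowicz bounds ascendency by a development capacity, and overhead is the difference. Continuing the sketch from above (my reading of the standard formulation; treat the details as approximate):

    // Development capacity: throughput times the entropy of the flow pattern.
    // It is the ceiling that ascendency can never exceed.
    function developmentCapacity(flows: number[][]): number {
      const T = totalSystemThroughput(flows);
      return flows.flat()
        .filter(t => t > 0)
        .reduce((sum, t) => sum - t * Math.log2(t / T), 0);
    }

    // Overhead: the capacity not spent on organized, efficient flow.
    // Those redundant extra pathways are the system's slack, and its resilience.
    function overhead(flows: number[][]): number {
      return developmentCapacity(flows) - ascendency(flows);
    }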

Efficiency vs Resilience

This illustrates that there is a conflict between efficiency and resilience. Organization is important — it often leads to system growth — and also limiting.

The ascendency of the economy is higher if, for the same GDP, money flows toward fewer corporations. But is that the most resilient?

Our teams are more efficient if there is exactly one path of information flow. But human communication is always partial, and broad social pressures direct us more constructively than a single financial incentive.

The concept of ascendency is useful in many ways, as detailed in the book. It incorporates and raises broader philosophical points, which come up in my other post about this book. “The world as we perceive it is the outcome … of a balanced conflict — the opposition of propensities that build order arrayed against the inevitable tendency for structures to fall apart.”

Reading: Ecology, the Ascendent Perspective

Complexity in Ecological Systems series, by Robert E. Ulanowicz

It’s about ecology, complexity, and a considerable bit of philosophy. Very readable. Unfortunately only available in dead-tree.

“Life itself cannot exist in a wholly deterministic world!”

Concept: auto-catalytic loops

This book explains how evolution works. Evolution works by “try shit and see what happens” — perturb and test, quickly discard the not-better.

Competition is one method of testing. It is a weak one, and to me it is unsatisfying as an explanation of how life got so interesting so fast. And life keeps getting more and more interesting, faster than fits the time scale of generations.

Auto-catalytic loops are the next selection method, as described here. A water-plant’s leaves provide a surface for bacteria. The water-plant even encourages the bacteria to grow there. Why? Tiny zooplankton come to eat the bacteria, and are captured by the plant. The three form an auto-catalytic loop of helping each other; they can’t survive nearly as well alone. Once a loop like this forms, any organism that contributes to the loop contributes to itself. It is selected for on its own merit, which is increased by the other members of the loop: the loop advantages all its members. The loop becomes its own tiny system and emerges as a thing, its own evolutionary power. If another organism can serve the purpose of the zooplankton better, the loop will adopt that one and the zooplankton will miss out, maybe die out.

Certain dogs undergo strong selection pressures from the auto-catalytic loop with dog owners: the better they are at making humans happy (looking cute, a convenient size), the more humans take care of them. (That’s all me; the book doesn’t make such stretches.)

Everything interesting is a circle. It happens with our ideas too: religion supports unity which supports group advancement which strengthens need for inclusion in that group which strengthens religion. Beliefs turn into communities which reinforce the beliefs.

Or in programming, more frequent deploys lead to safer deploys, which lead to more frequent deploys. You can join the TDD virtuous cycle, where you build clearer mental models of the code, which are preserved by automated testing, which leads to better design and more TDD. Or the strongly-typed-functional-programming virtuous cycle: using strong (mathematical) abstraction leads to a solid mental model, which feels good, which leads to further abstraction (which leads you to a place where people outside this cycle can’t understand the code, but you do), and the code is very predictable, which leads you to propound FP and learn more of it.

Concept: propensities

Auto-catalytic loops introduce a new form of causality. Why are things as they are? Because there’s some internally-consistent loop that keeps them that way. (The random historical events that led to the setup of that loop are less useful to understand.) Why are women not in computer programming? Because they don’t think of themselves as the type, because they don’t see other women doing it. Which means men don’t think of women as programmers, which leads them to assume women they meet aren’t competent programmers, which drives women out of programming, which means younger women don’t see themselves in programming and don’t go into it. Or women aren’t seen as potential architects, and people push the strongest ones into management instead. It all perpetuates itself.

But that’s not causal, right? No one literally pushed a woman into management. They might have encouraged her, but she chose to make the switch. She has free will.

Causality as we are used to thinking of it, as forcing, as push, is a weak concept. It doesn’t explain human action. It doesn’t explain biology, it doesn’t explain squat about the systems we live in. Newton’s Laws are satisfying in their determinism, but they’re only an approximation, an edge case.

The bacteria doesn’t have to live on the water-plant. But it finds it useful, and then even its individual demise supports the species by enhancing the auto-catalytic loop. No one forces it to live on the leaf, and it doesn’t always, but more and more often, it does.

Human action is the sum of many influences, many of them random. Including our consciously formed intentions. Including how happy our dog made us this morning. Including the perceived expectations of others, the social pressure we feel from our communities. Including the incentives set up at work. Including what smells good right this moment.

All of these influences on us are not forces in the “push” sense. Yet they matter. They change our propensities, the likelihood of doing one thing or another. And propensities matter. Because we do have free will: the ability to consciously influence our own decisions. (Okay, now I’m totally into the content and message of Dynamics in Action, which is the book whose references led me to Ecology, the Ascendent Perspective). We don’t have control over our decisions, though; our environment and every higher-level system (auto-catalytic loop or community) we are part of also affects what we do. We are not self-sufficient; humans aren’t human without other humans, without our relationships and institutions and cultures, nor without changing our personal environment.

It helps to recognize that push-forces, deterministic forces that compel 100%, are a rare edge case, useful in analyzing collisions of solid objects. Statistical mechanics, useful in analyzing the aggregate behavior of many, many unintelligent particles, is another edge case. The messy middle, where everything interesting happens, works in propensities: a generalization of “force” which changes probabilities without setting them to 1. (Now I’m totally into Popper’s A World of Propensities, which I reached from the references in Ecology, the Ascendent Perspective, and which is now my favorite paper.)

Auto-catalytic loops are more powerful forces of evolution than individual selection. They’re hard for us to see because they work in propensities, not forces. And indirectly, by changing the propensities of other organisms in the loop.

Science as I learned it in school (especially in my Physics major) doesn’t explain real life. No universal, deterministic law can. “There appears to be no irrefutable claims by any discipline to encompass ecological phenomena,” much less human workings! If science insists that “all causes are material and mechanical in origin,” that we can explain politics based on particles if we just push hard enough, science is not believable. It leaves us to supplement it with religion, because it does not explain the world we experience.

If we widen our thinking from forces to propensities, and from individual selection to auto-catalytic loops, we get closer. And there’s math behind this now. This book doesn’t include all of Ulanowicz’s math behind the concept of ascendency, but he references papers that do. There is math behind propensities, rigorous thinking. Scientific thinking, in the broader sense of science, of models that we can test against the real world. We have to move “test” beyond the edge case of controlled, isolated experiments. We have to appreciate knowledge that is not universal, because each auto-catalytic loop creates its own rules. What is beneficial in TDD is different from what is well-fitted in strongly-typed FP. Programming really is a harder field for women; it has nothing to do with the women as individuals and everything to do with the self-reinforcing loops in the culture. Reality is richer than deterministic laws.

Oops, I forgot to explain ascendency. Instead, this book (and its friends) inspired a whole personal philosophy. Win?

Horizonal goals

Video version here

There’s this great, short book by John Kay called Obliquity. It’s about goals that you can’t achieve by aiming for them directly; you have to look for an oblique goal that will happen to get you there. Like, you can’t aim for “happiness;” you have to find something such that aiming for it makes you happy, like raising children or writing or helping people who are hurting.

This book gives a name to some parts of my seamaps. The star at the top is the “high-level objective,” the unquantifiable goal which can never be achieved. Aiming for it sends us in a direction which happens to obliquely fill a goal such as “happiness” or “profit.” Goals such as “change the way development is done,” “find the optimal combination of music and words,” or “address the observability needs of modern architectures” are horizonal goals; as we make progress, the state of the art moves. We can never reach the horizon, but aiming for it takes us interesting places.

The mountains in the seamap are milestones. They’re achievable, measurable goals that we work toward because they’re in the direction of our high-level objective. Periodically we climb up and look around, take stock of whether our current direction is still going toward our star, and if not, change our milestone goals.

There are many smaller milestones on the way to the bigger one. Each offers an opportunity to take stock and possibly shift direction. There are actions that we take to move toward these goals. This is us in the boat, rowing.

Obliquity adds another element: necessary states. A necessary state for moving toward the next feature is: tests are passing. A necessary state for teamwork is that we are getting along with each other. Many of the actions we take are aimed at maintaining or restoring necessary states. These are like the whirlpools in my seamap; we have to smooth them out before we can row in the direction of our choice.

For example, here is a seamap for my current activity:

[Image: a seamap. High-level objective: change the way people think about programming. Goal: explain Symmathecist. Subgoal: explain Horizonal. Necessary state: don’t be too drunk. Action: type this post before opening wine.]

I will now hit “publish” and go open a bottle of wine.