Gaining new superpowers

When I first understood git, after dedicating some hours to watching a video and reading long articles, it was like I finally had power over time. I can find out who changed what, and when. I can move branches to point right where I want. I can rewrite history!

Understanding a tool well enough that using it is a joy, not a pain, is like gaining a new superpower. Like I’m Batman, and I just added something new to my toolbelt. I am ready to track down latent bug-villains with git bisect! Merge problems, I will defeat you with frequent commits and regular rebasing – you are no match for me now!
What if Spider-Man posted his rope spinner design online, and you downloaded the plans for your 3D printer, and suddenly you could shoot magic sticky rope at any time? You’d find a lot more uses for rope. Not like now, when it’s down in the basement and all awkward to use. You’d use it for everyday, not-flashy things: grabbing a pencil that’s out of reach, or rolling up your laptop power cable, or reaching for your coffee – ok, not that! Spilled everywhere. Live and learn.
Git was like that for me. Now I solve problems I didn’t know I had, like “which files in this repository haven’t been touched since our team took over maintenance?” or “when was this derelict function last used?” or “who would know why this test matters?”
Every new tool that I master is a new superpower. On the Mac or Linux, command-line utilities like grep and cut and uniq give me power over file manipulation – they’re like the swingy grabby rope-shooter-outers. For more power, Roopa engages Splunk, which is like the Batmobile of log parsing: flashy and fast, doesn’t fit in small spaces. On Windows, PowerShell is at your fingertips, after you’ve put some time in at the dojo. Learn what it can do, and how to look it up – superpowers expand on demand!
Other days I’m Superman. When I grasp a new concept, or practice a new style of coding until the flow of it sinks in, then I can fly. Learning new mathy concepts, or how and when to use types or loops versus recursion or objects versus functions — these aren’t in my toolbelt. They flow from my brain to my fingertips. Like X-ray vision, I can see through this imperative task to the monad at its core.
Sometimes company policy says, “You may not download vim” or “you must use this coding style.” It’s like they handed me a piece of Kryptonite. 
For whatever problem I’m solving, I have choices. I can kick it down, punch it >POW!< and run away before it wakes up. Or, I can determine what superpower would best defeat it, acquire that superpower, and then WHAM! defeat it forever. Find its vulnerability, so that problems of its ilk will never trouble me again. Sometimes this means learning a tool or technique. Sometimes it means writing the tool. If I publish the tool and teach the technique, then everyone can gain the same superpower! for less work than it took me. Teamwork!
We have the ultimate superpower: gaining superpowers. The only hard part is, which ones to gain? and sometimes, how to explain this to mortals: no, I’m not going to kick this door down, I’m going to build a portal gun, and then we won’t even need doors anymore.
Those hours spent learning git may have been the most productive of my life. Or maybe it was learning my first functional language. Or SQL. Or regular expressions. The combination of all of them makes my unique superhero fighting style. I can do a lot more than kick.

Brains, computers, and problem solutions

Programmers tell the computer what to do.

The computer has a processor, it has some memory, and it follows a set of instructions. The instructions say what to load into memory, what to do with the values in memory, how to change that memory, and when to store it off somewhere else. There is an instruction pointer that shows the next instruction to execute. There are tests that can change where the instruction pointer goes next.

(approximation of a Harvard architecture)

Assembly language is the closest to pure computer programming, to telling the computer exactly what to do. Imperative languages like C are an abstraction above that. We still tell the computer what to do, in what order, and how to change the values in memory. Code is in one place, data in another. Object-oriented languages like Java offer a few more abstractions, organizing relevant code with data and hiding memory management. These languages are still imperative: we tell the processor what to do and in what order.

Now that we don’t program business software in assembler, do programmers still tell the computer what to do? We tell the compiler what to do, and that tells the runtime what to do, and that tells the computer what to do. What do we really do?

Programmers solve problems.

When we learn to code, we learn to think like the computer. But what if that is not the optimal way to think? What if, instead, we think about how the problem looks, and what the solution looks like?

(it’s abstract, OK? It’s supposed to represent a data flow.)

Programming in a declarative style means stating solutions, not steps. State what is true, or state what we’re doing – don’t describe how the computer goes about it. That’s an implementation detail for the compiler to optimize. (I’m talking business software here, not embedded systems.) Code in a language suited to the problem. This gives us more ways to solve problems, and cleaner ways to solve problems.
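To make the contrast concrete, here is a toy sketch in Haskell (the function name is my own invention, not from any library): it states what the answer is — the sum of the squares of the even numbers in a list — and leaves the traversal, the accumulation, and the memory management to the compiler.

```haskell
-- A declarative statement of "what", not "how": no index variable,
-- no mutable accumulator; the runtime decides how to walk the list.
sumEvenSquares :: [Int] -> Int
sumEvenSquares xs = sum [x * x | x <- xs, even x]
```

An imperative version of the same task would loop over indices and mutate a running total; here, those steps are an implementation detail.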

We’re not limited to what comes naturally to a computer, or what comes naturally to our brains.

Abstraction is key. The right abstractions make our code easy to read and faster to write and maintain. Don’t limit abstractions to the ones that fit the computer. Don’t limit them to objects that model business entities. This is why learning math is useful, and it’s why functional programming is useful: the more abstractions we know how to hold in our head — abstractions with no analog in either the real world or the computer world — the more tools we have for solving problems.

Finding Primes in Haskell

I’m currently distracted from life, immersed in Concepts of Modern Mathematics.

Did you know that you can find out whether a number is prime by multiplying all the integers before it, adding one, and then checking whether the result divides evenly by the number of interest? Me neither! It’s like this:

a number p is prime if and only if: (1*2*3*...*(p-1) + 1) mod p = 0

So, if the product of all positive integers less than p is one short of dividing evenly by p, then p is prime. Otherwise, not prime. (This is known as Wilson’s theorem.)

As the book points out, this is not a practical way to determine primeness, because if p is 17, that product is fourteen digits long. However, there’s a sneaky way to make this more practical.
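For the record, a direct transcription of the formula looks like this (my own naïve sketch — it uses Haskell’s arbitrary-precision Integer, so the giant product won’t overflow, but it will get enormous):

```haskell
-- Naive Wilson's-theorem test: compute the whole (p-1)! product first,
-- then check whether (p-1)! + 1 divides evenly by p. Correct for p >= 2,
-- but the intermediate product grows absurdly large.
isPrimeNaive :: Integer -> Bool
isPrimeNaive p = (product [1 .. p - 1] + 1) `mod` p == 0
```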

This formula came out of the concept of arithmetic modulo p, which is a strange sort of arithmetic that considers only the numbers 0 through p-1. Any higher or lower numbers circle around: any number above p-1 reduces to its remainder when divided by p. It’s a bit like when a two-year-old is counting: “One, two, three, four, one, two, three…”

Since this equation came out of arithmetic modulo p, we can use this property to shrink the product as we do the multiplication. While we multiply 1 times 2 times 3 times 4, etc., we can stop after each multiplication and mod by p. That way the product never gets crazy high — at most (p-1)*(p-1) — because at each step it drops back to something less than p.
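To see the shrinking in action, `scanl` shows every intermediate value of the running product for p = 7 (my own illustration, not from the book):

```haskell
-- Running product 1*2*3*4*5*6 for p = 7, reduced mod 7 after every step:
-- 1, 1*2=2, 2*3=6, 6*4=24 `mod` 7 = 3, 3*5=15 `mod` 7 = 1, 1*6=6
steps :: [Integer]
steps = scanl (\acc x -> acc * x `mod` 7) 1 [2 .. 6]
-- steps == [1,2,6,3,1,6]; no value ever reaches 7, and the final 6
-- means 6 + 1 = 7 = 0 mod 7 -- so 7 is prime
```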

To simplify this equation, we drop the “mod p”, because in arithmetic modulo p, “mod p” is free. Also, 0 and p are the same number. (Yes, that’s weird.)
With those rules we can knock a few steps off this procedure:

1*2*...*(p-1) + 1 = 0 = p          (the original statement; mod p is implicit)

1*2*...*(p-1) = p - 1              (subtract 1 from both sides)

1*2*...*(p-2) = (p-1)/(p-1) = 1    (divide both sides by p-1; legal when p is prime)

1*2*3*...*(p-2) = 1

In Haskell, we can write this as: for a list of numbers from 1 to p-2, accumulate a product, modding the product by p at each step; see whether the end result is 1.

let isPrime p = foldl (\acc x -> acc * x `mod` p) 1 [2..(p-2)] == 1

This says start with 1, accumulate the product-then-mod, and see whether we come out with 1.

Let’s check all the numbers up to 100 and see which ones are prime.

>filter isPrime [2..100]
[2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97]

That’s pretty much instantaneous. Finding every prime between 2 and 10000 — there are 1229 — takes about thirty seconds on Swirlybob in GHCi. It was twenty seconds on Carolyn, my work laptop.

A future goal: use monads to time the operation precisely. I wonder whether implementing it as a recursive function and explicitly checking for 0 in the accumulator will be faster. (This happens early in the process for all non-primes except perfect squares.)
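One way that recursive variant might look — my own guess at the idea, not a benchmarked implementation:

```haskell
-- Early-exit variant: once the accumulator hits 0, no further
-- multiplication can make it nonzero, so p must be composite
-- and we can stop walking the list immediately.
isPrimeEarly :: Integer -> Bool
isPrimeEarly p = go 1 [2 .. p - 2]
  where
    go 0 _          = False                  -- product divisible by p: composite
    go acc []       = acc == 1               -- the Wilson check at the end
    go acc (x : xs) = go (acc * x `mod` p) xs
```

Whether the early exit actually wins in practice would still need measuring, since the fold it replaces is already cheap per step.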