1936

Math, like recipes, has both ingredients and method. And just as a recipe would be useless if it omitted the method, we can’t understand what math is unless we talk about the way it is done, not just the things it studies. — Eugenia Cheng, How to Bake π

The Universal Digital Computer

What was the first computer for? There are two good answers:

  1. It was for mathematicians, to help them understand math.
  2. It wasn’t for anything. Like the rest of math it just was, as it is and always will be.

Alan Turing discovered the universal digital computer in 1936¹. It was — it is — a machine made of math that performs math. For Turing, it was a tool to help him do his work. His job was to demonstrate mathematical truths. Here is a mathematical truth:

1 + 1 = 2

That computation is not very challenging. A child of the right age will proudly demonstrate it on his fingers. He is confident in his understanding of “1”, “+”, and “=”. His confidence comes from the applause of adults who love him. Mathematicians are a tougher audience. From their perspective, you have not demonstrated the computation 1 + 1 = 2 until you have proven what 1, +, and = are. In 1910, mathematicians Bertrand Russell and Alfred North Whitehead earned great fame by publishing 300 pages of steps showing how to make 1, +, and =². Russell, who would live another 60 years, said he never really recovered from the strain of digging that deeply into math’s foundations³.

He didn’t reach the bottom. Consider this computation:

2 ÷ 0 = ?

Russell and Whitehead didn’t prove an answer to that problem because the answer is: “there is no answer.” Or: “don’t do that.” Replying “don’t do that” works fine if you are teaching children everyday division. But if you are trying to establish foundations for all of mathematics, “don’t do that” won’t do.

Turing set out to do better. He strove to draw a boundary between the computations that can be done and those that can’t. Children, scientists, you, I, and Turing all agree that:

  • 1 + 1 computes to 2
  • 2 ÷ 0 doesn’t compute at all

We know that 2 ÷ 0 has no answer, but we can’t demonstrate it. We just know, because a mathematical authority — our school teacher — told us. Professional mathematicians insist on formal proof, but that formal proof also relies on human authority: like a child bringing two fingers together, a professional mathematician brings a sequence of statements together for consideration by mathematical authority — professors at university. But professors argue. Russell and Whitehead felt it necessary to argue for 300 pages that 1 + 1 = 2. If we can’t take that simple computation for granted, how can we ever take enough mathematics for granted to know that our computations will be sound?

Math is about removing human judgment from things, so that everything proceeds just by logic. — Eugenia Cheng, How to Bake π

 

Turing proposed that we move the work of computation from imperfect, disputatious human minds to a tireless, precise computing machine. If we could construct a machine that computed correctly, and watch it perform each step in a calculation, then there would be no question of whether a person’s fallible reasoning had slipped up somewhere. And having built such a machine and observed what it could do, we could perhaps deduce what it couldn’t do, and consider whether the limits of our computing machine illuminated the boundaries of computation in general.

What would it look like to compute mechanically? Consider

(7 + 5) ÷ 2

Think of that combination of numbers and operations as a recipe. The ingredients are 7, 5, and 2. The instructions are add and divide. If math textbooks were written like cookbooks, it might read:

Add-divided Six for Home Mathematician
    Ingredients
      * 7
      * 5
      * 2
    Instructions
      1. Take the 7 and add it to the 5 to get the dividend.
      2. Divide the dividend by 2 to get 6.
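
If math were written like code instead of like a cookbook, the same two steps might look like the short sketch below (purely an illustration; the names dividend and result are mine, not anything from Turing):

    # (7 + 5) ÷ 2, broken into one step at a time
    dividend = 7 + 5        # Step 1: add 7 to 5 to get the dividend, 12
    result = dividend / 2   # Step 2: divide the dividend by 2 to get 6
    print(result)           # 6.0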

At the time Turing wrote, we already had machines that could perform individual instructions. Like an electric mixer, they could make one particularly tedious step simpler, but they were made for fingers directed by a mind. Turing imagined an approach that would eliminate the fingers and the mind: a machine that read the ingredients and the steps in one continuous stream from a paper tape. It looked more like this:

Add-divided Six for Machine
    START
      to
      7
      add
      5
      divide
      that
      by
      2
    STOP

This first trick of Turing’s removed human activity from computation. He replaced phrases crafted for readers of English with discrete operations: not “and add it to the”, but simply “add”.
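
Stripped of the English filler, the tape is just a bare sequence of tokens. In code, one might jot it down as a list (my own rendering, not Turing’s notation):

    tape = ["7", "add", "5", "divide", "2", "stop"]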

His second trick was to show how to translate complex operations such as “add” into a series of extremely simple instructions for reading and writing numbers and other marks on the tape. Consider adding 5 to 7 by counting on your fingers:

  1. Take 1 from 5, leaving 4 (fingers sticking up).
  2. Use that 1 to tick 7 up to 8.
  3. Return to the 4 and take 1 from it, leaving 3.
  4. Use that 1 to tick 8 up to 9.

Counting on fingers is slow — after four steps, it hasn’t completed half of a calculation that you can perform instantly — but each step is clear. Turing designed his machine around similarly simple, “atomic” operations — not counting on fingers, but reading or writing a 1, 0, or one of a few symbols representing actions to perform. This translation, like a child’s translation of adult arithmetic into counting on fingers, greatly increased the length of the recipe for each computation. But by replacing the mystery of mathematical actions such as “add” with a small set of mechanical instructions, it removed the human mind’s understanding from computation.
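
Here is a minimal sketch, in Python, of that finger-counting addition, assuming the only moves allowed are “take 1 from” and “tick up by 1” (an illustration of the idea, not Turing’s actual machine):

    def add_by_counting(a, b):
        # Add b to a using only single-unit steps, the way a child
        # counts on fingers: take 1 from b, use it to tick a up by 1.
        while b > 0:
            b -= 1   # take 1 from b ...
            a += 1   # ... and tick a up by 1
        return a

    print(add_by_counting(7, 5))  # 12, after ten tiny steps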

Turing then extended the second trick by replacing the instruction-symbols with numbers, laying the groundwork for his third and greatest trick. Consider a machine that interprets the first number on a tape as a value to work with, the second as an instruction to perform, the third as the next value, and so on. We’ll use the following table for looking up instructions from numbers (it holds human-level instructions such as “add”, but real Turing machine instructions work the same way):

Number   Instruction
4        +
8        ÷
0        stop!

If we pass the machine defined by that number-to-instruction table a tape reading “7 4 5 8 2 0”, here is what it will do⁴ (the code sketch after the table plays out the same steps):

Read      Action
(Start)   (Take …)
7         … 7 …
4         … add it to …
5         … 5, giving 12,
8         … divide that …
2         … by 2, giving 6,
0         … and the answer is 6.
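
That trace can be played out in a few lines of code. The sketch below is a toy machine, not a real Turing machine: it assumes, as the table does, that 4 means add, 8 means divide, and 0 means stop, and it reads the tape as value, instruction, value, instruction, and so on:

    def run(tape):
        # Interpret a tape of numbers: the first is a value to work with,
        # the second an instruction, the third the next value, and so on.
        value = tape[0]
        i = 1
        while True:
            instruction = tape[i]
            if instruction == 0:     # 0 means: stop! the answer is in hand
                return value
            operand = tape[i + 1]    # the next number on the tape
            if instruction == 4:     # 4 means: add
                value = value + operand
            elif instruction == 8:   # 8 means: divide
                value = value / operand
            i += 2

    print(run([7, 4, 5, 8, 2, 0]))   # (7 + 5) ÷ 2 = 6.0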

Turing showed that once recipes took the form of numbers, you no longer needed to use different machines for different computations. Instead, a single, universal digital computer could perform any possible computation. All it needed was a tape holding the computation and a table mapping instruction-numbers to the corresponding operations⁵. One such universal Turing machine is as good as another. You could (and should) instead say that there is only one universal Turing machine, just as any mathematical object has only one existence. Consider the number 4. I can produce 4 by counting the legs of a cow. At the same instant, halfway around the world you can produce 4 by finding the sum of 3 and 1. We are separated by distance and purpose but we touch a common, a universal mathematical object.
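
In miniature, that is what the toy run sketch above shows: one fixed machine, many tapes. Feed it a different string of numbers and it performs a different computation, without the machine itself changing (again using my made-up instruction numbers):

    print(run([7, 4, 5, 8, 2, 0]))   # (7 + 5) ÷ 2 = 6.0
    print(run([3, 4, 1, 4, 4, 0]))   # (3 + 1) + 4 = 8
    print(run([9, 8, 3, 0]))         # 9 ÷ 3 = 3.0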

Turing published his discovery in a paper titled:

On Computable Numbers, with an Application to the Entscheidungsproblem

which means, roughly:

About the math we can do, which is relevant to what we can and can’t prove.

A paper is a formal letter to the mathematical community that claims to demonstrate a new truth. Inside “On Computable Numbers” mathematicians found a description of Turing’s machine, demonstrating what computation looks like outside the human mind. To operate the machine, his colleagues read the paper very carefully, in effect moving it into their minds. Turing created a demonstration in human minds of a demonstration of how computation works outside human minds.

If you were a mathematician in 1936, that would be pretty exciting. If you are interested today, consult Charles Petzold’s The Annotated Turing (see note 5).

Turing’s machine is not more effective for doing arithmetic calculations than our usual methods with paper and pencil — in fact, it’s much less effective. But by showing that a single machine made of math could perform any theoretically possible computation, it inspired the building of electronic machines to perform a great variety of practically important calculations. That effort is our next subject.

Notes

1. Ada Lovelace arguably got there first in the 19th century, but she did so the way some European explorers reached icy wastes and impenetrable jungles — she left little trace and no direct successors.

2. “Twentieth Century Mathematics: Russell and Whitehead.”

3. “Bertrand Russell.”

4. Unlike my imagined example, a real Turing machine builds everything up from 0 and 1. The computation (7 + 5) ÷ 2 would require a much longer tape than “7 4 5 8 2 0”, holding many more instructions.

5. This sentence flies right past Turing’s intricate working-out of fundamental mathematics. See Charles Petzold’s The Annotated Turing.