Our society’s ability to accumulate information requires flows of energy, the physical storage of information in solid objects, and of course our collective ability to compute. — César Hidalgo, Why Information Grows
Circuits Solving Formulas
We care about the math scientists do because that math explains our world to us, for example by forecasting tomorrow’s weather. Soon after Alan Turing discovered universal digital computation, the people of the northern hemisphere fought World War II, killing one another by the millions in a struggle to determine who would rule. The United States government asked scientists to contribute to the war effort. One typical scientific task was to simulate a new design for artillery, so that the government would know whether it would help win battles. Scientists had determined a computational recipe describing the flight of the proposed artillery shell through the air, but the recipe had many, many steps. Human minds could not follow the recipe quickly enough to inform the decision to build the gun before it had to be made.
At that time, the scientific community understood how to build machines that worked much faster than people for the individual steps of a long computational recipe, but they did not have devices that could work through such a recipe without stopping. Imagine that given:
(7 + 5) ÷ 2
we could in effect ask a machine to add 7 and 5, and we could also ask a machine to divide the resulting 12 by 2, but it took a person to pass that 12 from the addition step to the division step. Alternatively, we could have built a system that wired together machinery for the specific calculations describing the flight of a shell — an artillery predictor, as it were — but we couldn’t have built the predictor much more easily than the artillery itself, and it would not have been good for predicting anything else. We needed our investment in electronic calculating machines to pay off in tools with the versatility to run many different computations:
- simulating artillery designs
- forecasting the weather
- understanding atomic nuclei
- modeling countless other systems
We needed a machine with the extraordinary flexibility of Turing’s mental construct, but one made tangible, practical, and most of all, fast.
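The hand-off problem described above can be sketched in modern terms. In this minimal Python sketch, the `add` and `divide` functions are hypothetical stand-ins for the era’s separate calculating machines; the point is that the program, not a person, carries the intermediate 12 from one step to the next:

```python
# A sketch of (7 + 5) / 2 as two machine steps. The function names are
# illustrative stand-ins for separate wartime calculating machines.
def add(a, b):
    return a + b

def divide(a, b):
    return a / b

intermediate = add(7, 5)          # the "addition machine" produces 12...
answer = divide(intermediate, 2)  # ...passed automatically to the "division machine"
print(answer)                     # → 6.0
```

What took a human courier in 1945 is here a single variable assignment — exactly the kind of uninterrupted chaining the engineers of the time lacked.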
By 1945, the scientific community had started multiple engineering efforts to create such a device. They faced a myriad of difficult design decisions and construction challenges. Mathematician John von Neumann wrote a key paper laying out a specific reimagining of Turing’s universal digital computer into a buildable design for a physical device. Von Neumann’s approach is just one of many that could work, but most computing devices are powered by circuits that follow it. We can call such circuitry a “Von Neumann computer”1.
Von Neumann computers performed computations at the speed of lightning. They traded the theoretical universality of Turing’s machine for the practical versatility needed to compute important real-world calculations. Using a Von Neumann computer, a scientist faced with life-and-death questions could get previously unattainable answers in hours. The first ones were made and operated by hand at great expense, each a unique and temperamental beast. We can call them bespoke contraptions.
Consider these pioneering computer operators practicing their craft.
Asked to write down a description of their work, they might have come up with this recipe:
1. Choose a specific calculation to perform, such as forecasting tomorrow’s weather.
2. Learn how to set your contraption with the number-instructions for your calculation.
3. Set it that way.
4. Tell it to start.
5. Wait for it to fail or finish.
6. If it failed (which it did often), return to steps 2 and 3 to figure out what you did wrong and fix it.
7. Otherwise, decode the answer it produced into numbers.
8. Interpret those numbers as a picture of the world, such as “rain likely in the afternoon”.
To follow this recipe, you must know both the arcane workings of your contraption and the many precise expressions of the calculation describing the answer you want. You write the expressions in a numerical “machine language”, a set of number-instructions equivalent to those laid out by Alan Turing. Machine language is extremely difficult to write, requiring you to look up and copy down long chains of arbitrary numbers without making a single mistake. You only bothered to use a bespoke contraption if the calculation you wanted to solve was literally a matter of national importance: too complicated to do by hand but important enough to justify the expense of encoding the formulas and running the contraption.
What were the first physical computers for? They were for describing the physical systems important to the world’s richest societies.
From Calculation To Commands
It is difficult to spot an error in a long series of mathematical statements, difficult to spot a miscoding in a long series of instruction-numbers, and vexingly difficult to perform both of those tasks perfectly, as you must in order to get your contraption to answer your question. Engineer Grace Hopper describes the burden of working with bespoke contraptions, and her insightful response:
There were two problems. One was you had to copy these darn [numerical recipes], and I sure found out fast that programmers can not copy things correctly. … Programmers could not add [correctly either]. There sat that beautiful big machine whose sole job was to copy things and do addition. Why not make the computer do it? That’s why I sat down and wrote the first compiler. … What I did was watch myself put together a program and make the computer do what I did. — “Oral History of Captain Grace Hopper”
Hopper’s compiler allowed us to replace steps 2 and 3 of operating a Von Neumann computer with much easier tasks:
2a. Write a recipe for your calculation by assembling commands from a set of instruction-words designed to make sense to both you and the compiler.
3a. Have the compiler translate the instruction-words into the equivalent instruction-numbers that the Von Neumann computer understands.
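Steps 2a and 3a can be illustrated with a toy sketch. This is in no way Hopper’s actual compiler; the opcode numbers and the tiny word-to-number dictionary are invented for the example, but the shape of the work — translate instruction-words into instruction-numbers, then let the machine run the numbers — is the one the steps describe:

```python
# A toy compiler sketch. OPCODES is a hypothetical dictionary mapping
# instruction-words to instruction-numbers; real machines differ.
OPCODES = {"ADD": 1, "DIV": 2}

def compile_words(word_program):
    """Step 3a: translate (word, operand) pairs into number pairs."""
    return [(OPCODES[op], arg) for op, arg in word_program]

def run(number_program):
    """A pretend Von Neumann computer: executes instruction-numbers."""
    accumulator = 0
    for opcode, arg in number_program:
        if opcode == 1:       # ADD: add the operand to the accumulator
            accumulator += arg
        elif opcode == 2:     # DIV: divide the accumulator by the operand
            accumulator /= arg
    return accumulator

# Step 2a: a recipe written in instruction-words, readable by people.
words = [("ADD", 7), ("ADD", 5), ("DIV", 2)]
numbers = compile_words(words)   # instruction-numbers for the machine
print(run(numbers))              # → 6.0
```

The operator thinks in `ADD` and `DIV`; only the machine ever sees the 1s and 2s.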
Thinking in instruction-words is much easier than thinking in instruction-numbers. From A Machine for Math, recall the number 745820.
That number could represent the weight of a teenage giraffe in grams, the population of San Francisco, or a high score in your favorite video game. In our history, it was the imagined encoding of a numerical recipe that would compute:
(7 + 5) ÷ 2
The first form makes sense only to the machine and human minds willing to force themselves to memorize the machine’s instruction-numbers. The second form makes sense to you, me, and millions of other people. Wouldn’t it be great if people could write in that familiar idiom and yet be understood by the computer? By inventing the compiler, Hopper made that understanding possible: “(7 + 5) ÷ 2” is a working expression in most of the world’s popular programming languages, once you replace the division symbol with the “/” character.
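Indeed, typed into a Python prompt, the expression runs exactly as written:

```python
# The familiar idiom, understood directly by the computer.
print((7 + 5) / 2)  # → 6.0
```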
Hopper realized that the unity of instruction and number inside a Von Neumann computer, which put the burden of speaking machine language on operators, held the key to its own solution. Because Von Neumann computers cook numbers, they can cook their own instructions. To speak to a computer in code, the operator writes a recipe in instruction-words. She then feeds the wordy recipe to the compiler, a numeric recipe which knows how to cook instruction-words into a corresponding recipe formed of instruction-numbers, in this case 745820. Finally she has the Von Neumann computer run 745820 directly, computing the answer 6. 745820 is first a result, then a recipe whose result is 6. Hopper understood the importance of her creation:
They had the common verbs for the things they were going to do. And the nouns, they’d just have to have a dictionary for things they were referring to … they could make a dictionary of common verbs and translate the program. … No problem, you’d have communication. — “Oral History of Captain Grace Hopper”
Because compilers listen to commands, we can construct the commands to listen to “verbs and nouns”: not mere numbers, but words “for things [we are] referring to”. Once the numeric recipes describing weather have been written and given corresponding commands in the compiler’s dictionary, we no longer have to think in numbers in order to make use of the astonishing speed and flexibility of digital computing. We can express our concerns directly:
If the humidity equals the dewpoint then forecast fog.
Code works on information. The information will be compiled into numbers, so it must be given a precise, logical structure. Call a working piece of code a logic recipe.
Logic recipes are much less obscure than numeric recipes such as 745820, but writing and reading them still demands extensive training2. For example, a “forecast fog” recipe written in real code would make less sense to untrained readers than the version above. In the 60 years since Hopper gave us code, thousands of clever computer operators have noticed the demands of code and tried to invent the next big step forward: some way to instruct computers that harnesses their full power but requires less mental overhead than code, just as code requires less mental overhead than machine language. Every one of those inventions has failed to replace code.
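To make the claim concrete, here is one possible Python sketch of the fog rule quoted above. The function and variable names are invented for the example, not drawn from any real forecasting system; notice how even this simple logic recipe demands notation (`def`, `==`, `return`) that the English sentence does not:

```python
# A hypothetical rendering of "If the humidity equals the dewpoint
# then forecast fog." Names are illustrative only.
def forecast(humidity, dewpoint):
    if humidity == dewpoint:
        return "fog"
    return "clear"

print(forecast(12.5, 12.5))  # → fog
```

An untrained reader can follow the English rule at a glance; the code version requires knowing what a function is, what `==` means, and why the second `return` runs only when the first does not.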
Sensors and Senders
Once we are writing code, setting switches makes no sense at all. In Hopper’s time society already had a machine for generating words as fast as your fingers could move: the typewriter. Engineers adapted the typewriter to output code. Instead of having each key produce its own shape on paper, a computer operator’s keyboard sends electrical signals corresponding to the key’s letter into the circuitry for compiling. Think of keyboards as sensors, and the displays upon which you see the weather forecast as senders.
Together, sensors and senders free computer operators from the labor of speaking machine language. A Von Neumann computer wrapped in Hopper’s compiler and the accompanying sensors and senders is a code runner.
What are code runners for? They are for transforming the many kinds of information people care about.
The devices in our pockets and on our desks today all contain code runners wrapped around Von Neumann computers. Their outsides bring us photographs, videos, and music. Via attached cameras and microphones they cast our presence thousands of miles to our friends. I say “Hello!” shaking the air between my vocal cords and my device. Its microphone converts the airy wave of my greeting into digits sent over electric circuits to its Von Neumann computer. Logic recipes route the digits over radio waves and wires to my friend’s device, where they instruct a speaker to vibrate with a wave of the same shape. That wave crosses the air to my friend’s ear, where tiny bones bring it to his nerves. Like the microphone, those nerves send an electric signal deeper into his brain for processing, and he hears “Hello!”3
We build computing devices in parallel with how we understand ourselves to operate. Recall how Hopper described her pivotal moment of creation: “what I did was watch myself put together a program and make the computer do what I did.” She wrote a numeric recipe that served in her own place as a writer of numeric recipes. Turing “grew up with the recognition that the body was a machine”4, which led him to the insight that the mystery of computation in the opaque machinery of the human brain could be elucidated by constructing equivalent machinery outside that brain — which he then asked his readers to operate inside their own brains. The close parallels between microphones and ears, speakers and vocal cords, are only one way in which human and machine information processing share a family likeness. Because machines made of circuits, compilers, sensors, and senders are for transforming information, they shed light on information machines made of nerves, experiences, senses, and fingers: on us. Because code runners lend the extraordinary flexibility of their Von Neumann circuitry to our logic recipes, we should call them
versatile information machines
Sadly we did not coin that name, or any others that might have properly conveyed the importance of what Hopper and her colleagues invented. Instead, we moved the old term “computer” out to the machinery in which she had wrapped the true, Von Neumann computers. Today we call Von Neumann computers “CPUs” or “processors.” Using “computer”, a part of the device, for the whole system is a synecdoche, like calling your car “my wheels.” Saying “my wheels” doesn’t confuse anyone, but when we decided to change the meaning of “computer” to the whole device, we lost track of the differences among digital computation, logic recipes, and information processing. Horace Dediu compressed this insight, indeed much of this entire essay, beautifully in “The Next 40”:
The word “computer” is already archaic. We stopped using computers to compute in the 40s. We used them to make decisions, keep track of things, speed things up and then to communicate and then to entertain.
Richard Hamming, an accomplished mathematician and computer pioneer contemporary with Von Neumann and Hopper, put the matter this way:
The purpose of computing is insight, not numbers5.
As we will see in our next chapter, the information processing that dominates PC use today was designed without resort to digital computation. Later engineers wrapped that design around code runners — and moved the old name out yet another layer. In so doing, they created information machines for millions of literate adults.
1. More often called the Von Neumann architecture, and more-or-less unfairly. Presper Eckert and John Mauchly among others deserve pride of place. George Dyson, Turing’s Cathedral, p.79.
Turing’s Cathedral is a work of primary scholarship, which respects the complexity of history and the difficulty of disentangling individual agency from the complex community interactions that led to technical advances. This essay takes an opposing tack, telling a series of simple narratives about heroic individuals. Turing’s Cathedral constrains the coherence of its narrative to what can be documented, restricting its audience to the scholarly. This essay constrains the inclusion of history to what supports a simple set of ideas for understanding what computers are for, and risks telling just-so stories in their support.
In addition to the diffuse agency of history researched by Dyson and the personalized agency of the heroic individual sketched here, there is the formal, documented agency of public institutions, on which this essay places some emphasis. You can hear me provoke a rejection of the heroic individual approach from Kevin Kelly here. For what it’s worth, the strong defense of public investment heard in his response does not seem to me like an adequate defense of his stance on public agency.
3. Vannevar Bush, whom we will meet in the next chapter, discussed the parallel between electricity in our brains and electricity used in displays in “As We May Think”:
We know that when the eye sees, all the consequent information is transmitted to the brain by means of electrical vibrations in the channel of the optic nerve. This is an exact analogy with the electrical vibrations which occur in the cable of a television set: they convey the picture from the photocells which see it to the radio transmitter from which it is broadcast. … The impulses which flow in the arm nerves of a typist convey to her fingers the translated information which reaches her eye or ear, in order that the fingers may be caused to strike the proper keys.
4. Charles Petzold, The Annotated Turing, p.49.
5. Richard Hamming, Numerical Methods for Scientists and Engineers, p.3. Recall that we cared about the numbers computed by the bespoke contraptions not for their own sake but for the information they gave us about the state of the world. I stumbled across Hamming’s comment perhaps five times without appreciating it as a fundamental truth. César Hidalgo enlightened me: “Information refers to the order embodied in codified sequences, such as those found in music or DNA, while knowledge and knowhow refer to the ability of a system to process information.” Why Information Grows, p.165.