The Strange Story of How Computers Were Born
Imagine you wanted to build a machine that could think. Not really think like a person—but calculate, remember, follow instructions, and even learn from its mistakes. Where would you even start? What would the machine look like? How would you tell it what to do?
This is the puzzle that a handful of people in the 19th and 20th centuries tried to solve. The answer they came up with—the stored-program computer—is so familiar to us now that we barely notice it. But when you look at how it happened, the story is full of surprises. People built machines out of brass gears. They used mercury-filled tubes for memory. They kept some of the most important computers completely secret for decades. And at the center of it all was a strange young man named Alan Turing, who dreamed of a machine that could learn from experience.
The Man Who Wanted to Eliminate Human Error
Before there were computers, there were human computers. The word “computer” originally meant a person, often a woman, whose job was to sit at a desk and calculate mathematical tables by hand. Tables of logarithms, tide tables, artillery aiming tables: these had to be correct, because lives depended on them. But humans make mistakes.
In the 1820s, a Cambridge professor named Charles Babbage decided to build a machine that would eliminate those mistakes. His Difference Engine was a purely mechanical device made of brass gear wheels, rods, and ratchets. Numbers were represented by the positions of toothed wheels mounted in columns. You turned a crank, and the machine would automatically produce mathematical tables.
Babbage never finished the full-scale Difference Engine. But he did build working fragments, and one of them, about one-seventh of the complete calculator, is on display in the London Science Museum. Babbage used it to do serious calculations. Then, in 1991, the Science Museum finally built a complete engine from Babbage’s original designs. It worked perfectly.
But Babbage had an even more ambitious idea. He imagined a machine called the Analytical Engine that would be general-purpose. It would have a memory store, a central processing unit, and the ability to make choices based on the results of its own calculations—what we now call conditional branching. The machine would be controlled by a program of instructions stored on punched cards, an idea Babbage borrowed from the Jacquard weaving loom, which used cards to control patterns in fabric.
Babbage never built the Analytical Engine either. But a woman named Ada Lovelace, who worked closely with Babbage, saw something remarkable. She realized the Engine might be used for more than just numbers. It could compose music, she suggested. It could work with symbols that weren’t mathematical at all. This was a genuinely new idea: that a computing machine might be general in a much deeper sense than anyone had realized.
The Secret Machines of World War II
The story of modern computers skips forward to the 1930s. Around this time, people began building machines that used electricity instead of gears. The earliest of these were electromechanical, meaning they contained small electric switches called relays. They worked, but the mechanical switching made them slow.
A German engineer named Konrad Zuse deserves the credit for building the first working general-purpose program-controlled digital computer, the Z3, in 1941. But the real breakthrough came when engineers started using vacuum tubes—glass bulbs with no moving parts except electrons. These tubes could switch on and off millions of times per second, far faster than any relay.
The first fully functioning electronic digital computer was called Colossus. It was built in secret by a British engineer named Thomas Flowers, and it went to work in February 1944 at a place called Bletchley Park. Bletchley Park was a country house in England that had been turned into a top-secret codebreaking center.
Here’s what was happening. During World War II, the Germans used a cipher machine, code-named Tunny by the British, to encrypt their highest-level messages, including communications from Hitler himself. The British had broken the code, but decoding the messages by hand took too long. The information would be useless by the time it was ready. So a mathematician named Max Newman proposed building an electronic machine to automate the process.
The problem was that the authorities gave Flowers’ plan little support. So Flowers, working at the Post Office Research Station in Dollis Hill where he was based, quietly built the world’s first large-scale programmable electronic digital computer anyway. Colossus took up a whole room. The first machine contained about 1,600 vacuum tubes; later models used 2,400. And it worked.
By the end of the war, ten Colossi were running around the clock at Bletchley Park. The official historian later estimated that the codebreaking operation at Bletchley Park—in which Colossus played a major role—shortened the war in Europe by at least two years.
Then something strange happened. After the war, most of the Colossi were destroyed. Those who knew about them were forbidden by the Official Secrets Act from telling anyone. Until the 1970s, hardly anyone knew that electronic computation had been successfully used during the war. For many years, credit went to a later American machine called ENIAC. Which leads to another complication in the story.
The Man Who Almost Got the Credit
In the United States, the first fully functioning electronic digital computer was ENIAC, built at the University of Pennsylvania in 1945. ENIAC was enormous—it used about 18,000 vacuum tubes—and it was designed to calculate artillery aiming tables. It was not a stored-program computer. To set it up for a new job, you had to physically reconfigure the machine by moving plugs and flipping switches, a process that could take days.
A mathematician named John von Neumann joined the ENIAC group in 1944. Von Neumann was already fascinated by the work of Alan Turing—more on him in a moment—and he became the most famous advocate for what’s called the stored-program concept. This is the idea that a computer’s program of instructions should be stored in the same high-speed memory as the data it works on. That way, the computer can treat its own program as data—it can modify its own instructions while running. This is what makes a computer truly flexible.
Von Neumann wrote a report describing this concept, and because he was a famous figure, it became known as the “von Neumann architecture.” For a long time, people called stored-program computers “von Neumann machines.”
But here’s the thing: the stored-program concept wasn’t really von Neumann’s idea. He was just the one who made it famous. The idea came from Alan Turing, and Turing had published it in 1936.
The Lonely Genius
Alan Turing was a British mathematician who, at age 23, published a paper that essentially invented the modern computer. Not as a physical machine, but as an abstract idea. He described an imaginary device—now called a universal Turing machine—that consisted of a limitless memory tape and a scanner that moved back and forth, reading symbols and writing new ones. The scanner’s actions were dictated by a program of instructions stored on the tape itself. That’s the stored-program concept, plain and simple.
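You can actually watch this idea work. Below is a minimal sketch of a Turing machine simulator in Python. Everything specific here is illustrative rather than taken from Turing’s paper: the dictionary standing in for the limitless tape, the transition-table format, and the little program that flips every bit it scans.

```python
# A minimal Turing machine simulator. The tape is a dictionary so it can
# grow without bound in either direction, echoing Turing's limitless tape.
def run_turing_machine(program, tape, state="start", head=0, max_steps=1000):
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")              # "_" marks a blank cell
        write, move, state = program[(state, symbol)]
        tape[head] = write                        # write a symbol...
        head += {"L": -1, "R": 1}[move]           # ...then move the scanner
    return tape

# An illustrative program: flip every 1 to 0 and every 0 to 1, then halt
# when the scanner reaches a blank.
program = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

tape = {i: s for i, s in enumerate("1011")}
result = run_turing_machine(program, tape)
print("".join(result[i] for i in sorted(result)))  # prints 0100_
```

One honest simplification: this sketch keeps the instruction table separate from the tape just to stay short. In Turing’s universal machine the table itself would be written on the tape, so one fixed machine could imitate any other machine you cared to describe.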
Turing’s paper was pure mathematics. He wasn’t trying to build anything. But at Bletchley Park, right from the start, he was interested in the possibility of actually building a computing machine like the one he’d described. He often talked about machines learning from experience, and about solving problems by searching through possible solutions, guided by rule-of-thumb principles—what we now call heuristics. He illustrated these ideas using chess.
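A toy example shows the flavor of that kind of search. The puzzle below (reach a target number from 1 using only “double” and “add one”) and its distance-to-the-target rule of thumb are invented for this sketch; Turing’s own illustrations used chess, where the searching is enormously harder.

```python
import heapq

# Rule-of-thumb search: reach a target number from a starting number using
# "double" and "add one", always exploring the most promising value first.
# The heuristic (distance to the target) doesn't guarantee the best answer;
# it just cuts down the amount of searching.
def heuristic_search(start, target):
    frontier = [(abs(target - start), start, [start])]  # (score, value, path)
    seen = set()
    while frontier:
        _, value, path = heapq.heappop(frontier)
        if value == target:
            return path
        if value in seen or value > target:
            continue                      # prune repeats and overshoots
        seen.add(value)
        for nxt in (value * 2, value + 1):
            heapq.heappush(frontier, (abs(target - nxt), nxt, path + [nxt]))
    return None

print(heuristic_search(1, 37))  # [1, 2, 4, 8, 16, 32, 33, 34, 35, 36, 37]
```

Without the rule of thumb, the search would fan out blindly in every direction; with it, the machine heads more or less straight for the target.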
After the war, Turing designed an electronic stored-program computer called the Automatic Computing Engine, or ACE. His design called for a high-speed memory of roughly the same capacity as an early Macintosh computer—enormous for the time. It would have been the fastest computer in the world if it had been built as planned. But the project ran slowly due to organizational problems at the National Physical Laboratory, and in 1948, a frustrated Turing left.
Meanwhile, at Manchester University, Max Newman had established the Royal Society Computing Machine Laboratory. There, engineers F.C. Williams and Tom Kilburn, with guidance from Newman and Turing, built the first working general-purpose stored-program electronic digital computer. It was called the Manchester Baby. On June 21, 1948, it performed its first calculation. The program was just seventeen instructions long, stored on the face of a cathode ray tube.
This was the real breakthrough. A much larger version of the machine, with a programming system designed by Turing, became the world’s first commercially available computer, the Ferranti Mark I.
What Makes a Computer a Computer
So what did people finally figure out? What is the essential idea that all these machines share?
First, a computer needs a memory store where numbers can live in “houses” with addresses. Second, it needs an arithmetic unit that can add, subtract, and so on. Third, it needs a way to select houses, connect them to the arithmetic unit, and send the results back to other houses—like a kind of automatic telephone exchange for numbers. Finally, it needs the ability to move control to a different instruction when a certain condition is satisfied, and to continue in sequence otherwise. This is conditional branching.
But most importantly, a computer needs to store its program in the same form as its data. This means the program can be changed by the computer itself, while it’s running. Turing put it this way: “What we want is a machine that can learn from experience. The possibility of letting the machine alter its own instructions provides the mechanism for this.”
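Here’s a tiny sketch in Python of what all of this looks like when you put it together. The machine below is invented purely for illustration (three made-up instructions, a handful of memory “houses”), but it has every ingredient on the list: addressed memory, arithmetic, conditional branching, and a program stored in the very same memory as its data.

```python
# A toy stored-program machine. Instructions and numbers share one memory,
# so the running program can branch on its data and even rewrite itself.
# The instruction set ("ADD", "JUMPIF", "SET") is invented for this sketch.
def run(memory):
    pc = 0                             # program counter: address of the next instruction
    while memory[pc][0] != "HALT":
        op, a, b = memory[pc]
        pc += 1
        if op == "ADD":
            memory[a] += memory[b]     # memory[a] = memory[a] + memory[b]
        elif op == "JUMPIF" and memory[a] != 0:
            pc = b                     # conditional branching: jump to address b
        elif op == "SET":
            memory[a] = b              # store b at address a (even over an instruction)
    return memory

# Houses 0-3 hold the program; houses 4-7 hold the numbers it works on.
memory = [
    ("ADD", 6, 4),       # 0: add x (house 4) into the running total (house 6)
    ("ADD", 5, 7),       # 1: add -1 (house 7) to the counter (house 5)
    ("JUMPIF", 5, 0),    # 2: if the counter isn't zero yet, go back to 0
    ("HALT", 0, 0),      # 3: stop
    3, 4, 0, -1,         # 4-7: x, counter, total, minus one
]
run(memory)
print(memory[6])         # 12: the machine computed 3 x 4 by repeated addition

# The punchline: instruction 0 overwrites instruction 1 before it ever runs.
patch = [
    ("SET", 1, ("HALT", 0, 0)),  # 0: replace the next instruction with HALT
    ("ADD", 2, 2),               # 1: never executes
    5,                           # 2: data
]
run(patch)
print(patch[2])          # still 5: the program altered its own instructions
```

That second little program is the whole stored-program idea in miniature: because an instruction is just another thing sitting in memory, the machine can change its own instructions while it runs, exactly as Turing described.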
This is the deep idea that Babbage glimpsed but couldn’t realize, that Turing made precise, and that von Neumann popularized. It’s what makes a computer not just a fancy calculator but something fundamentally new.
Still Puzzles
Nobody really agrees on who “invented” the computer. Was it Babbage, who had the idea but couldn’t build it? Turing, who described it mathematically? Flowers, who built the first working electronic one? The engineers at Manchester who put it all together? The machine itself is not a single invention. It’s more like an idea that different people had different pieces of, at different times, under different pressures.
And the question of whether a machine can really learn—or think—is still wide open. Turing thought so. He invented a test for machine intelligence that people still argue about today. If a computer can carry on a conversation so well that you can’t tell it’s a computer, does that mean it’s thinking? Or just pretending very well?
These questions don’t have clean answers. That’s part of why they’re worth thinking about.
Key Terms
| Term | What it does in this story |
|---|---|
| Stored-program concept | The idea that a computer’s program should be stored in its memory like data, so the machine can modify its own instructions |
| Conditional branching | The ability of a computer to choose between different actions based on the result of its own calculations |
| Vacuum tube | A glass bulb with no moving parts that can switch on and off millions of times per second, used in early electronic computers |
| General-purpose machine | A computer that can be programmed to do many different tasks, not just one specific job |
| Heuristic | A rule-of-thumb that cuts down the amount of searching needed to solve a problem |
Key People
- Charles Babbage (1791–1871) – A Cambridge professor who designed mechanical computing machines in the 1800s, including the Analytical Engine, which had the basic ideas of a modern computer but was never built.
- Ada Lovelace (1815–1852) – A mathematician who worked with Babbage and realized that his machine could work with symbols, not just numbers—a genuinely new idea about computing.
- Alan Turing (1912–1954) – A British mathematician who invented the stored-program concept in 1936, worked as a codebreaker during World War II, and later designed early electronic computers. He was also interested in whether machines could think.
- Max Newman (1897–1984) – A Cambridge mathematician who introduced Turing to the idea that led to the Turing machine, and who later founded the laboratory that built the first working stored-program computer.
- Thomas Flowers (1905–1998) – A British engineer who built Colossus, the first fully functioning electronic digital computer, in secret during World War II.
- John von Neumann (1903–1957) – A mathematician who popularized the stored-program concept and became so associated with it that people called computers “von Neumann machines,” even though the idea was Turing’s.
Things to Think About
- If a computer can modify its own program, is it really “following instructions” in the same way a calculator does? Or is it doing something different?
- The stored-program concept was invented as pure mathematics, years before anyone built a machine that could use it. Does that mean the idea “counts” as an invention, even though it wasn’t implemented?
- Colossus was kept secret for decades. If it had been known about sooner, how might the history of computing be different? Does secrecy change what “counts” as an invention?
- Turing thought machines could learn from experience. Do you think a machine that learns is “thinking”? What would count as evidence that a machine is actually thinking, rather than just pretending?
Where This Shows Up
- Every time you open a laptop or phone, you’re using the stored-program concept. The program that runs your operating system lives in the same memory as your files and photos, and the machine can modify it while running.
- The debate about whether machines can learn is happening right now, in discussions about artificial intelligence. Programs that play chess, compose music, or hold conversations all trace back to ideas Turing was thinking about in the 1940s.
- The question of who gets credit for an invention—and how secrecy or fame distorts the story—is still alive. You can see similar debates around who invented the light bulb, the telephone, or the internet.