What Even *Is* a Computer Program?
You’re using a computer right now. Or a phone. Or a tablet. There’s probably an app open, a game running, some piece of software doing something. But here’s a strange question: what is that software?
You can’t hold it. You can’t drop it on your foot. If you delete it, it’s gone, but you can download it again. If you take apart your phone, you won’t find little pieces of program inside. The program is somehow real, but not in the same way the screen or the battery is real.
Philosophers who study computer science have been arguing about this for decades. What kind of thing is a computer program? Is it more like a recipe (an abstract set of instructions) or more like a physical object (the actual pattern of electricity in the machine)? And what does it mean for a program to be correct? If your calculator app says 2+2=5, is the program wrong, or is the computer broken, or is something else going on?
These questions matter. They matter when you’re writing code for a video game, but they also matter when a self-driving car’s software makes a mistake, or when a hospital’s system loses records. Before you can decide whether a program is working properly, you need to know what a program is in the first place.
Where Does the Software End and the Hardware Begin?
Here’s an obvious way to think about it: there’s the software (the code, the app, the operating system) and there’s the hardware (the processor, the memory, the hard drive). Software is abstract. Hardware is physical. Easy, right?
Not so fast.
A philosopher named James Moor pointed out in the 1970s that this neat division is partly a myth. Think about it this way: a program doesn’t really exist unless it’s written down somewhere. Early programs were literally set by hand, as switch positions and plugboard wiring. Today, your program is stored as magnetic patterns on a hard drive or as electrical charges in memory chips. There’s no such thing as a “pure” program that exists without any physical form.
So maybe the difference between software and hardware isn’t about abstract versus physical. Maybe it’s about what role something plays. For a programmer writing code in Python, the software is the high-level instructions they write, and the hardware is the computer that runs it. For an engineer designing a microchip, the software might be the machine code instructions, and the hardware might be the physical circuits. The same thing can count as software to one person and hardware to another. It’s a practical distinction, not a deep philosophical one.
A philosopher named Nurbay Irmak took this further. He argued that software is what he calls an “abstract artifact.” It’s abstract because you can’t point to it in space—if you destroy your laptop, the program still exists, as long as there’s another copy somewhere. But it has temporal properties: it was created at a specific time (when someone wrote it), and it can cease to exist (if all copies are destroyed and nobody remembers it). So software starts and stops existing, but it doesn’t live in any particular place. That’s weird. Most things you can think of are either clearly abstract (like the number 7) or clearly physical (like a rock). Software seems to be a bit of both.
The Many Layers of a Computer
Another way to think about a computational system is to imagine it as a stack of layers, each one built on top of the one below. Different computer scientists have different lists, but here’s a common one:
- Intention: Someone (a user, a customer) wants the computer to do something. “I want an app that blocks spam calls.”
- Specification: The detailed plan for what the system must do. “When a call comes in, check the number against a list. If it’s on the list, don’t ring.”
- Algorithm: The step-by-step method for solving the problem. “For each incoming number, search the list. If found, reject.”
- High-level language (like Python or Java): The actual code that implements the algorithm.
- Assembly/machine code: The much simpler instructions the processor can actually understand.
- Execution: The physical process of the computer actually running the instructions.
At each layer, the thing above is a “specification” for the thing below. The high-level language code specifies what the machine code should do. The algorithm specifies what the high-level code should do. The intention specifies what the whole system should do.
This means that “implementation” isn’t just about turning software into hardware. An algorithm is an implementation of a specification. A high-level program is an implementation of an algorithm. Machine code is an implementation of the high-level program. Each layer “implements” the one above it.
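To see the layers in action, here’s a small Python sketch of the spam-blocker example. The details (the function name, the phone numbers) are invented for illustration; real phone software doesn’t look exactly like this.

```python
# The "high-level language" layer for the spam-blocker example.
# Everything here (names, numbers) is invented for illustration.

BLOCKED_NUMBERS = {"555-0100", "555-0199"}  # the list from the specification

def should_ring(incoming_number: str) -> bool:
    # The algorithm layer made concrete: search the list for the
    # incoming number; if it's found, reject the call.
    return incoming_number not in BLOCKED_NUMBERS

print(should_ring("555-0100"))  # False: on the list, so don't ring
print(should_ring("555-0147"))  # True: not on the list, so ring
```

Python (the interpreter) then implements this code as machine instructions, and the processor physically executes those: the two layers below, each one implementing the one above it.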
This layered view changes how we think about what a program is. A program isn’t just a set of instructions. It’s a kind of technical artifact, like a screwdriver or a bridge. Technical artifacts have a dual nature: they have a function (what they’re supposed to do) and a structure (the stuff they’re made of). For a screwdriver, the function is “screwing things in” and the structure is “metal rod + plastic handle.” For a program, the function is what it’s supposed to accomplish, and the structure is the code itself, written in some programming language. The interesting thing about programs, though, is that the “structure” of one layer (the code) becomes the “function” for the layer below (the machine must execute that code).
What Makes a Program “Correct”?
If you write a program that’s supposed to calculate the average of your test scores, and it gives you the wrong answer, is the program incorrect? Well, it depends on where the problem is.
Maybe the algorithm you chose is wrong for the task. That’s a “conceptual” error at the algorithm layer.
Maybe the algorithm is fine, but your Python code doesn’t implement it correctly. That’s a “material” error—a bug in the code.
Maybe the code is perfect, but there’s a physical problem with the memory chip. That’s an “operational malfunction,” like Turing’s “error of functioning.”
Maybe the code works perfectly, but the specification itself was wrong. You actually wanted the median test score, not the average, but you wrote “average” in the spec. The program does exactly what you said, but not what you wanted. That’s an “error of conclusion.”
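Here’s a toy Python version of the test-scores example, showing a material error and an error of conclusion side by side. The numbers are made up.

```python
scores = [50, 90, 95, 100]

# Material error: the algorithm (sum, then divide by the count)
# is fine, but the code mis-implements it with an off-by-one bug.
def average_buggy(xs):
    return sum(xs) / (len(xs) - 1)

# This one correctly implements what the spec asked for...
def average(xs):
    return sum(xs) / len(xs)

# ...but if you actually wanted the median, the spec itself was
# wrong: an error of conclusion. The code isn't to blame.
def median(xs):
    s = sorted(xs)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

print(average_buggy(scores))  # 111.66...: wrong, a bug in the code
print(average(scores))        # 83.75: exactly what the spec said
print(median(scores))         # 92.5: what you actually wanted
```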
So “correctness” turns out to be a relationship between layers. A program is correct relative to its specification. The specification is correct relative to the user’s intentions. The physical execution is correct relative to the machine code. If any of these relationships breaks, the system as a whole isn’t working properly.
But here’s a problem that philosophers argue about: can you ever prove that a program is correct? Some, like C.A.R. Hoare, thought you could—using mathematical proofs, the way you prove a theorem in geometry. If you can prove that a program always does what its specification says, then you know it’s correct.
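To give a flavor of Hoare’s idea, here’s a loose sketch written as Python comments (the real thing uses formal logical notation, not comments): every chunk of code gets a precondition (what must be true before it runs) and a postcondition (what must be true after), and you prove the code always carries you from one to the other.

```python
def absolute_value(x: int) -> int:
    # Precondition: x is any integer.
    if x < 0:
        result = -x   # in this branch x < 0, so -x > 0
    else:
        result = x    # in this branch x >= 0
    # Postcondition: result >= 0, and result is x or -x.
    # A Hoare-style proof shows this holds for EVERY integer x,
    # not just the inputs we happen to try.
    assert result >= 0 and (result == x or result == -x)
    return result
```

For a four-line function, that’s doable. Hoare’s hope was that the same method could scale up to whole programs.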
Others, like Richard De Millo and James Fetzer, argued that this is much harder than it sounds. Real proofs of correctness for real programs are incredibly long and complicated. They’re not like the elegant proofs you see in math class. They’re more like checking every single step of a billion-step calculation. And even if you do that, the proof itself is run on a physical machine (the computer doing the checking), which might itself have bugs. You can’t escape the physical world entirely.
Most software today isn’t verified by mathematical proof. It’s tested. We run the program on lots of different inputs and see if it produces the right outputs. But testing can only show that bugs exist, not that they don’t exist. You can test a million inputs and find no errors, but the million-and-first input might crash the program. This is a problem that computer scientists still struggle with, especially for “safety-critical” systems like airplane autopilots or medical devices, where mistakes can kill people.
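Here’s a deliberately rigged Python example of why testing isn’t proof. The bug is planted on exactly one input, so a million tests sail right past it.

```python
def double(n: int) -> int:
    if n == 1_000_001:   # a planted bug on one specific input
        return 0
    return 2 * n

# Test a million inputs: every single one passes.
for n in range(1, 1_000_001):
    assert double(n) == 2 * n
print("1,000,000 tests passed!")

# The million-and-first input exposes the bug anyway.
print(double(1_000_001))  # 0, not the 2,000,002 it should be
```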
Is Everything a Computer?
There’s one more twist. If we say that a computer is anything that implements an algorithm, then what counts as a computer? A toaster? The human brain? The entire universe?
A philosopher named John Searle famously argued that anything could be interpreted as a computer. You could look at the pattern of raindrops hitting a window and interpret them as calculations. You could look at moving shadows on a wall and call them a computer program. If that’s true, then calling something “a computer” doesn’t tell us anything interesting about it—it’s just a way of describing it. This view is called pancomputationalism (everything is a computer).
Other philosophers push back. Some, like Pat Hayes, say that real computers have a special property: they can change their own stored data. A piece of paper with writing on it doesn’t change the writing by itself. But a program in memory can modify itself. That’s special.
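Here’s a loose Python illustration of Hayes’s point (this is not his own example): the running program changes its own stored data, and even the rule it will follow next, in a way no piece of paper can.

```python
# The program's stored state, which the program itself rewrites.
memory = {"counter": 0, "limit": 3}

def step():
    memory["counter"] += 1      # modify our own stored data...
    if memory["counter"] == 2:
        memory["limit"] = 5     # ...and rewrite the stopping rule

while memory["counter"] < memory["limit"]:
    step()

print(memory)  # {'counter': 5, 'limit': 5}: it moved its own goalposts
```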
Others, like Gualtiero Piccinini, argue that a system is a computer if the best explanation of its behavior is to describe it as a computing mechanism. We don’t explain why a wall’s shadow moves by saying it’s running a program. But we do explain how a calculator works by describing the computing mechanism inside it. So being a computer isn’t just about having a pattern that could be interpreted as computation. It’s about really being a mechanism that actually performs computations.
So What?
All of this might seem very abstract. But think about it next time you download an app or write a program in class. You’re creating something that’s abstract but also real, that exists but doesn’t have a location, that can be perfectly correct in theory but buggy in practice. You’re building a thing that has a function given by a human (what you want it to do) and a structure given by a machine (the bits and circuits that do the work).
And when it breaks—when your code crashes or your game freezes—you now have some tools for thinking about what went wrong. Is it a bug in your code? A bad algorithm? A hardware problem? A misunderstanding of what the program was supposed to do in the first place?
Philosophers of computer science don’t have all the answers. They’re still arguing about what programs are, what correctness means, and where software ends and hardware begins. But they’ve given us a way to ask better questions about the stuff that runs our world.
Appendices
Key Terms
| Term | What it means in this debate |
|---|---|
| Abstraction | The process of hiding details at one level so you can work at a higher, simpler level |
| Algorithm | A step-by-step procedure for solving a problem |
| Artifact | A human-made object, defined by both its function (what it’s for) and its structure (what it’s made of) |
| Correctness | The relationship between something (like a program) and the specification it’s supposed to satisfy |
| Implementation | The relationship between an abstract description and the concrete thing that makes it real |
| Level of Abstraction | One layer in the hierarchy of a computational system, from intention down to physical execution |
| Specification | A detailed description of what a system must do; it “governs” whether the system is correct |
Key People
- James Moor – A philosopher who argued that the software/hardware divide is a practical myth, not a deep truth.
- Nurbay Irmak – A philosopher who argued that software is an “abstract artifact” — it has a beginning and end in time but no location in space.
- C.A.R. Hoare – A computer scientist who believed that program correctness could be proved mathematically, like a theorem.
- James Fetzer – A philosopher who argued that mathematical proofs of correctness can’t fully guarantee that a physical computer will work correctly.
- John Searle – A philosopher who argued that any physical system can be interpreted as a computer, leading to the “pancomputationalism” problem.
- Alan Turing – A mathematician and computer scientist who distinguished between “errors of functioning” (hardware problems) and “errors of conclusion” (the machine runs as built, but its design or specification leads it to the wrong result).
- Gualtiero Piccinini – A philosopher who argues a system is a computer if the best explanation of its behavior involves describing a computing mechanism.
Things to Think About
- If a program is an “abstract artifact” that has no location in space, where is it really? Is it in the mind of the programmer? In the hard drive? In the code on the screen?
- Think of a time your phone or computer did something unexpected. Was it a bug, a bad design, a misunderstanding of what you wanted, or a hardware problem? How would you tell the difference?
- If everything can be interpreted as a computer, does that mean we should treat raindrops, plants, or the weather as “computing” things? What’s lost or gained by calling something a computer?
- When is a program “good enough” to release to users? Perfect correctness might be impossible to prove, but total chaos is unacceptable. Where do you draw the line?
Where This Shows Up
- Bug reports and error messages. When you encounter a crash, you’re seeing correctness fail at some level of the system.
- Video game glitches. A character walking through a wall is a “material error” (the code doesn’t implement the game’s physics correctly).
- The “it works on my machine” problem. Two programmers run the same code on different hardware and get different results — a perfect example of the layers of implementation not matching up.
- Self-driving car accidents. Real-world arguments about whether the car’s software was “correct” or the hardware “malfunctioned” show up in courtrooms, not just philosophy classrooms.
- The “Is AI conscious?” debate. One version of this question is: does a large language model really compute, or is it just a pattern we interpret as computing? That’s the pancomputationalism problem in action.