Philosophy for Kids

Is Your Mind a Computer?

Imagine you’re playing chess against a friend. You look at the board, you think about your next move, you decide, and you move your piece. Now imagine a computer playing chess against your friend. It looks at the board (through a camera), it “thinks” (it runs a program), it decides, and it moves a piece. The computer wins. Your friend is impressed.

But here’s the strange question: was what the computer did really thinking? Or was it just pretending? And if it was just pretending, then what’s the difference between what the computer did and what you did when you thought about your move?

This is the heart of one of the biggest debates in philosophy of mind: Is the mind a kind of computer?

Not like, “oh, my brain is kind of like a computer.” The question is whether the mind literally is a computing system—a machine that processes information by following rules, just like a computer does. Philosophers have been arguing about this for decades, and they still haven’t agreed.


What Even Is a Computer?

Before we can ask whether the mind is a computer, we need to know what a computer is. Not your laptop or your phone—those are physical machines. Philosophers are interested in something more abstract.

In the 1930s, a mathematician named Alan Turing came up with a simple imaginary machine. It’s now called a Turing machine. Here’s how it works:

Imagine an infinitely long tape divided into squares. Each square can have a symbol written on it (like a 0 or a 1). Now imagine a little scanner that can move along the tape, one square at a time. The scanner can read what’s written on a square, erase it, or write a new symbol. And it can be in different “states”—like a mode that says “if you see a 0, do this; if you see a 1, do that.”

That’s basically it. A Turing machine is just a set of simple rules: “If you’re in state A and you see a 0, write a 1, move right, and go to state B.” There’s nothing magical about it—no creativity, no feelings, no understanding. It just follows the rules mechanically.
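To see how un-magical this is, here is a toy sketch of a Turing machine in Python (the state names, symbols, and rules here are invented for illustration, not Turing's original notation). This particular machine just flips every 0 to a 1 and every 1 to a 0, then stops when it reaches a blank square:

```python
# A toy Turing machine (states, symbols, and rules invented for illustration).
# Each rule says: (current state, symbol seen) -> (symbol to write, move, next state).

RULES = {
    ("A", "0"): ("1", +1, "A"),    # see a 0: write 1, move right, stay in state A
    ("A", "1"): ("0", +1, "A"),    # see a 1: write 0, move right, stay in state A
    ("A", "_"): ("_", 0, "HALT"),  # blank square: we're past the input, so stop
}

def run(tape):
    tape = list(tape) + ["_"]      # "_" marks the blank square after the input
    state, pos = "A", 0
    while state != "HALT":
        write, move, state = RULES[(state, tape[pos])]
        tape[pos] = write          # the scanner erases and rewrites the square
        pos += move                # ...then moves along the tape
    return "".join(tape).rstrip("_")

print(run("0110"))  # the machine mechanically flips each bit: 1001
```

Notice that nothing in the code "knows" what it is doing. It just looks up a rule and follows it, over and over.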

Here’s the amazing thing Turing figured out: this ridiculously simple machine can, in principle, do any calculation that any computer can do. Every program on your phone, every video game, every weather forecast—all of it can be reduced to the kind of simple symbol-shuffling that a Turing machine does. Turing machines are the theoretical foundation of all modern computing.


The Big Idea: The Computational Theory of Mind

In the 1960s, philosophers and cognitive scientists started to wonder: what if the human mind works the same way? What if thinking is just a kind of computation—a process of manipulating symbols according to rules?

This idea is called the Computational Theory of Mind (CTM for short). The basic claim is that the mind is a computational system. Your beliefs, your decisions, your reasoning—all of these are processes that involve shuffling mental symbols around according to mental rules.

A philosopher named Hilary Putnam pushed this idea hard. He argued that what makes something a mind isn’t what it’s made of (flesh vs. silicon) but how it’s organized. A mind is defined by its functional organization—the pattern of cause and effect between its parts. If you build a system that has the right functional organization, it will have a mind, no matter what it’s made of. This view is called functionalism.

Think about it this way: what makes something a heart is that it pumps blood. If you built a mechanical pump that did exactly what a heart does, it would be a heart—just an artificial one. The material doesn’t matter; the function does. Similarly, if you built a computer that did exactly what a mind does, it would be a mind.


The Language of Thought

But there’s a problem with the simple version of this idea. If you’ve ever tried to explain something to someone, you know that thoughts seem to be made of parts. “The cat is on the mat” is different from “the mat is on the cat,” even though they use the same pieces. Thoughts have structure.

A philosopher named Jerry Fodor argued that if the mind is a computer, then it must have a language of thought—sometimes called “Mentalese.” This isn’t English or Spanish or any language you speak. It’s a mental language that your brain uses to represent information. Just as a computer program manipulates symbols (0s and 1s), your mind manipulates symbols in Mentalese.

Here’s why this matters. Remember how a Turing machine shuffles symbols? Well, those symbols need to mean something. The symbol “CAT” in Mentalese stands for the concept of a cat. When you think “the cat is on the mat,” your mind is combining the Mentalese symbols for CAT, MAT, and the relationship ON in a specific structure. And when you reason, your mind is applying rules to these symbols—rules like “if something is on something else, then that something else is underneath it.”

This language of thought idea explains two important features of human thinking:

Productivity: You can think an infinite number of thoughts. You’ve never thought “the purple giraffe is dancing with the orange elephant” before (probably), but you just did. The language of thought lets you combine a finite number of concepts into an infinite number of thoughts.

Systematicity: If you can think “John loves Mary,” you can also think “Mary loves John.” The structure of your thoughts is systematic because the symbols can be rearranged according to rules.
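Both features can be sketched in a few lines of code (the symbol names and the single inference rule here are made up for illustration; real Mentalese, if it exists, would be far richer):

```python
# A toy "language of thought" (all symbol names invented for illustration).
# A handful of symbols plus combination rules yield many structured thoughts.

CONCEPTS = ["CAT", "MAT", "JOHN", "MARY"]
RELATIONS = ["ON", "LOVES"]

def thoughts():
    # Productivity: combine every relation, subject, and object into a
    # structured thought, like ("LOVES", "JOHN", "MARY").
    return [(rel, a, b)
            for rel in RELATIONS
            for a in CONCEPTS
            for b in CONCEPTS
            if a != b]

def infer(thought):
    # One reasoning rule: "if A is ON B, then B is UNDER A".
    rel, a, b = thought
    return ("UNDER", b, a) if rel == "ON" else None

print(len(thoughts()))               # few symbols, many thoughts
print(infer(("ON", "CAT", "MAT")))   # ('UNDER', 'MAT', 'CAT')
```

Systematicity falls out for free: because `("LOVES", "JOHN", "MARY")` is built by the same rule as `("LOVES", "MARY", "JOHN")`, a system that can form one can automatically form the other.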


But Wait: Brains Are Not Computers

Here’s where things get messy. Critics point out that brains and computers work very differently.

Computers are made of silicon chips that process information in discrete steps, one instruction at a time. Brains are made of neurons—cells that fire electrical signals in complex, parallel patterns. Neurons are slow compared to computer chips (about a million times slower), but there are billions of them working simultaneously.

In the 1980s, a movement called connectionism challenged the classical view. Connectionists build models called neural networks—systems of interconnected “nodes” that roughly mimic how neurons work. Instead of following explicit rules, neural networks learn from examples by adjusting the connections between nodes. When you show a neural network thousands of pictures of cats, it learns to recognize cats—not because someone programmed a rule like “if it has whiskers and pointy ears, it’s a cat,” but because the network discovered patterns on its own.
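The learning step can be shown in miniature with a single artificial "node" (this is a classic perceptron-style sketch, drastically simpler than a real neural network; the variable names and the AND task are chosen just for illustration). Nobody tells it the rule for AND; it nudges its connection weights whenever it guesses wrong:

```python
# A single artificial node learning from examples (a minimal perceptron-style
# sketch; everything here is simplified for illustration).

def predict(w0, w1, b, x0, x1):
    # "Fire" (output 1) only if the weighted inputs pass the threshold.
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0

def train(examples, epochs=20):
    w0, w1, b = 0, 0, 0                  # connection strengths start at zero
    for _ in range(epochs):
        for (x0, x1), target in examples:
            err = target - predict(w0, w1, b, x0, x1)
            w0 += err * x0               # nudge each connection toward the answer
            w1 += err * x1
            b += err
    return w0, w1, b

# Examples of the AND pattern: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train(data)
print([predict(w0, w1, b, x0, x1) for (x0, x1), _ in data])  # [0, 0, 0, 1]
```

No rule like "output 1 when both inputs are 1" appears anywhere in the code. The pattern ends up stored in the weights, discovered through trial and error.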

Today’s AI systems, like the ones that can generate realistic images or hold conversations, are built on these neural network principles. They’re not classical computers following step-by-step rules; they’re vast networks of connections shaped by training data.

So which is the better model for the mind? The classical Turing machine or the neural network?

Some philosophers and scientists say neural networks are better because they’re more biologically realistic. The brain is a network of neurons, not a Turing machine with a central processor and a tape. Others argue that neural networks are just implementing classical computation at a lower level—that the brain, despite looking like a neural network, still performs computations over symbols.


The Hard Problem: What About Consciousness?

Even if the mind is a computer, there’s something computers don’t seem to have: experience. When you bite into a chocolate bar, you don’t just process information about texture and flavor—you feel something. The chocolate tastes good (or bad). There’s something it’s like to be you.

Computers, as far as we can tell, don’t have experiences. They process information, but they don’t feel anything. When a chess computer wins, it doesn’t feel triumphant. When it loses, it doesn’t feel disappointed. Tell ChatGPT about your bad day and it will give you a sympathetic response, but it doesn’t actually feel sympathy.

This is a big problem for the computational theory of mind. Even if you could build a computer that perfectly mimics human behavior, would it really be thinking, or would it just be simulating thought? Would it have a mind, or would it just be a really fancy puppet?

Philosophers are deeply divided on this. Some say yes—if something behaves exactly like a thinking being, then it is a thinking being. Others say no—there’s something about consciousness that can’t be captured by computation alone.


Can Anything Compute?

Here’s another weird problem. If the mind is a computing system, then what isn’t? The philosopher Hilary Putnam (the same one who promoted functionalism) later argued that anything can be seen as implementing any computation. Your desk, the weather, a pile of sand—if you look hard enough, you can find patterns that match the rules of any program.

If that’s true, then saying “the mind is a computer” doesn’t tell you anything interesting. Everything is a computer, in a sense. The claim becomes trivial.

Most computationalists think they can avoid this problem by imposing stricter conditions on what counts as a real computation. A system doesn’t just have to happen to follow certain patterns; it has to actually use those patterns in the right way to process information.


So What’s the Answer?

Nobody really knows. Philosophers still argue about this, and the debate has only gotten more complicated as AI has advanced.

Here’s where things stand:

  • Classical computationalists think the mind is like a Turing machine, manipulating symbols according to rules.
  • Connectionists think the mind is more like a neural network, learning patterns through experience.
  • Skeptics think computation can’t capture consciousness or genuine understanding.
  • Pluralists think different models are useful for different purposes, and we shouldn’t pick just one.

What’s fascinating is that this isn’t just an abstract philosophical debate. It has real consequences for how we think about AI. If the mind really is a computer, then building a thinking machine is just a matter of getting the program right. If it isn’t, then we might need something radically different—something we haven’t discovered yet.

And here’s the weirdest thing of all: you are the one trying to figure this out. Your own mind is the thing in question. You’re using your brain to ask whether your brain is a computer. There’s something strangely self-referential about the whole project.

Maybe that’s the deepest mystery of all.


Key Terms

  • Turing machine: the simplest possible model of a computer—a symbol-shuffling machine that follows mechanical rules.
  • Computational Theory of Mind (CTM): the claim that the mind literally is a computing system, not just like one.
  • Language of thought (Mentalese): the idea that your brain uses an internal symbolic language to represent information and perform computations.
  • Functionalism: the view that mental states are defined by their functional roles (cause-and-effect patterns), not by what they’re made of.
  • Connectionism: an approach that models the mind using networks of simple units (like neurons) rather than rule-following symbol manipulation.
  • Neural network: a system of interconnected nodes that learns patterns by adjusting the strength of connections between them.
  • Productivity: the fact that you can think an unlimited number of thoughts using a finite set of mental symbols.
  • Systematicity: the fact that your ability to think certain thoughts is systematically connected to your ability to think other related thoughts.
  • Triviality argument: the objection that almost any physical system could be seen as performing any computation, making the theory meaningless.

Key People

  • Alan Turing (1912–1954): A British mathematician who invented the Turing machine and helped crack Nazi codes in WWII. He asked whether machines could think.
  • Hilary Putnam (1926–2016): An American philosopher who argued that the mind is a functional system (functionalism) but later argued that almost anything could be seen as a computer.
  • Jerry Fodor (1935–2017): An American philosopher who championed the language of thought idea and argued that thinking involves manipulating mental symbols.
  • John Searle (1932–): An American philosopher who argued that computers can’t truly think because they lack understanding—they just manipulate symbols without meaning (the Chinese Room argument).

Things to Think About

  1. If you had a perfect computer simulation of a human brain, would it be conscious? What if you ran the simulation at 1/100th speed? What if you ran it on a giant network of mechanical gears instead of silicon chips?

  2. When ChatGPT writes a poem, is it actually being creative? Or is it just rearranging patterns it learned from human writing? What’s the difference between that and what you do when you write a poem?

  3. If the mind is a computer, then your thoughts are just computational processes. Does that change how you think about free will? About responsibility? About what it means to be you?

  4. Why do you have feelings? From a computational perspective, what purpose does consciousness serve? Could a computer process information and make decisions just as well without feeling anything?

Where This Shows Up

  • Artificial intelligence: Every AI system—from chess programs to ChatGPT to self-driving cars—raises questions about what computation can and can’t do.
  • Video games: When you play a game with NPCs (non-player characters), are they “thinking” in any real sense? Game AI developers wrestle with these questions.
  • Neuroscience: Scientists are trying to figure out exactly what kind of computation the brain performs, and whether it’s anything like what computers do.
  • The law: Courts have to decide whether AI systems can be held responsible for their actions, or whether they’re just tools. This depends partly on whether they’re “thinking.”
  • Everyday life: When Google Maps finds you the fastest route, it’s doing something that looks a lot like reasoning. Is it?