How Thoughts Get Attached to Things: Causal Theories of Mental Content
Imagine this: you’re walking home from school, and you see a brown shape moving behind a bush. Your brain does something quick—it produces a thought. Maybe the thought is dog. Or maybe it’s fox. Or maybe it’s what was that?
Now here’s a strange question: what makes that thought about anything at all? What makes one of your inner mental events mean dog, rather than just being a random firing of neurons? How does anything in your head get to be about something outside your head?
This is the puzzle that philosophers call the problem of mental content. And one of the most influential attempts to solve it is called a causal theory of mental content. The basic idea sounds simple: a thought means what it does because the thing it’s about causes the thought. Dogs cause your “dog” thoughts, so those thoughts mean dog. Simple, right?
Well, not so fast.
The Disjunction Problem
Here’s why it’s not simple. Suppose you have a mental symbol—let’s call it “X”—that’s supposed to mean dog. Most of the time, dogs cause “X” to fire in your brain. But sometimes other things do too. A fox at dusk might trigger “X”. A question like “What kind of animal says ‘woof’?” might trigger “X.” A weird-shaped shadow might do it. And if you took LSD (not that you would), who knows what neural fireworks might happen.
So now we have a problem. If “X” is caused by dogs and by foxes, why say “X” means dog rather than dog-or-fox? There’s always this option: we could just keep adding things to the meaning. If a blow to the head sometimes makes you think of dogs, why not say “X” means dog-or-blow-to-the-head? If everything causes “X” under some weird condition, then “X” could mean anything—which is the same as meaning nothing.
Philosophers call this the disjunction problem, and it’s the central headache for causal theories. The challenge is: how do you separate the real content-determining causes from the accidental ones?
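The collapse into disjunction can be made concrete with a toy model (the names and triggers below are illustrative, not drawn from the philosophical literature): if we read a symbol's content straight off whatever causes it, every accidental cause becomes part of the "meaning."

```python
# Toy model of the disjunction problem. The triggers here are
# illustrative stand-ins for whatever has actually caused "X".

def fires(stimulus, triggers):
    """The symbol 'X' tokens whenever the stimulus is one of its triggers."""
    return stimulus in triggers

# Everything that has, in fact, caused "X" to fire at some point.
causes_of_X = {"dog", "fox at dusk", "woof question", "odd shadow"}

# A naive causal theory reads content straight off the causes:
naive_content = " or ".join(sorted(causes_of_X))
print(naive_content)  # a four-way disjunction, not simply "dog"

# And every new accidental cause lengthens the disjunction.
causes_of_X.add("blow to the head")
print(len(causes_of_X))  # now 5 disjuncts and counting
```

The model's point: nothing in the bare causal facts privileges "dog" over the whole disjunction, which is exactly the challenge the attempts below try to answer.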
Let’s look at the main attempts to solve this.
First Attempt: Normal Conditions
Here’s one idea: maybe “X” means dog because under normal conditions, only dogs cause “X”. Foxes cause “X” only when it’s dark or you’re far away. Hallucinogenic drugs cause “X” only under abnormal chemical conditions. So if we just specify what counts as normal—good lighting, clear view, no drugs—then the content-determining causes are the ones that operate under those conditions.
This sounds promising, but it runs into problems. First, what counts as “normal”? The weather? Your state of mind? The time of day? It’s hard to specify non-arbitrarily.
Second, even under normal conditions, there are still causal intermediaries. When you see a dog, light reflects off the dog, hits your retina, triggers your optic nerve, and eventually fires some neurons. Each step in this chain causes the next. So why does “X” mean dog rather than retinal-image-of-dog or pattern-of-neural-firing-in-the-visual-cortex? Normal conditions don’t help you pick which link in the causal chain determines the content.
Third, why should “X” keep meaning dog even when conditions aren’t normal? If “X” means dog only under perfect viewing conditions, then what happens when you see a dog in the dark? Does “X” suddenly mean something else? That doesn’t match our experience—we can still think about dogs in the dark.
Second Attempt: Evolutionary Functions
Maybe the answer lies in what our mental symbols are for. A thermometer has the function of measuring temperature, not air pressure, even though both affect it. The function determines what it means. Similarly, maybe your “dog” neurons have the function of detecting dogs—that’s why they’re there, that’s what evolution designed them to do.
Here’s how the story goes. Suppose some ancient rabbits developed a random mutation: a set of neurons that fired in the presence of dogs, and this firing triggered a freezing response. Those rabbits were less likely to be caught and eaten. They survived better, had more babies, and passed on the genes for those dog-detecting neurons. Eventually, most rabbits had them. So the function of those neurons is to detect dogs, because that’s what natural selection selected them for.
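The selection story above can be sketched as a tiny deterministic simulation. All the numbers here (population sizes, survival rates, offspring counts) are invented purely for illustration; the point is only that a small survival advantage compounds across generations.

```python
# Toy sketch of the selection story: rabbits with a dog-detector
# trait freeze and survive predators more often, so the trait spreads.
# Population sizes and survival rates are invented for illustration.

def next_generation(pop, survival):
    """Survivors of each type leave two offspring of the same type."""
    return {trait: count * survival[trait] * 2
            for trait, count in pop.items()}

# Start: the dog-detector mutation is rare.
pop = {"detector": 10.0, "no_detector": 990.0}
survival = {"detector": 0.9, "no_detector": 0.6}  # assumed rates

for _ in range(12):
    pop = next_generation(pop, survival)

detector_share = pop["detector"] / (pop["detector"] + pop["no_detector"])
print(round(detector_share, 3))
```

Starting at one percent of the population, detectors become the majority within a dozen generations, which is the sense in which natural selection "selected for" the detector.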
Functions give us a way to separate content-determining causes from accidental ones. It’s not the function of your dog-neurons to fire in response to LSD, questions, or foxes at dusk. Those are mistakes—misrepresentations. The real content is determined by what the system was supposed to detect.
This is an elegant idea, but it has problems too. First, what exactly did natural selection select for? Did it select for sensitivity to dogs, or for sensitivity to dog-shaped-things-that-might-be-predators? Those aren’t the same thing, and it’s not obvious how to tell which one was the actual target of selection.
Second, it seems unlikely that natural selection shaped the fine details of individual neurons. Evolution might make your brain bigger or give you more visual cortex, but does it really wire individual neurons to respond to dogs specifically? Dogs haven’t been around long enough in evolutionary terms for that to work. And what about things that are very recent—cars, computers, smartphones? Natural selection couldn’t have designed neurons for those, yet we can think about them.
Third Attempt: Learning and Development
Maybe functions aren’t inherited—maybe they’re learned. During childhood, you’re trained to associate certain mental symbols with certain things. Your parents and teachers correct you: “No, that’s a fox, not a dog.” Eventually, your “dog” symbol acquires the function of detecting dogs. After that, when a fox triggers “X”, it’s a mistake—a false tokening.
This approach avoids the problems with evolution. It doesn’t require evolution to have shaped your neurons for every concept you have. You learn new concepts throughout your life.
But it has its own problems. How do you know when learning is finished? Children use words incorrectly all the time—does that mean they’re still learning, or does it mean what they’ve learned has a different content than we thought? If a child sometimes calls foxes “dog”, maybe she hasn’t learned that “X” means dog yet. Or maybe she’s learned that “X” means dog-or-fox. There’s no non-arbitrary way to decide.
And then there’s the teacher problem. How did the first teacher learn? If all content comes from learning, someone had to learn it first, and that someone had no teacher. So this approach seems to push the problem back rather than solving it.
Fourth Attempt: Asymmetric Dependence
Jerry Fodor, a philosopher who thought about this for decades, came up with a different approach. Instead of appealing to normal conditions or functions, he focused on the dependence between different causal laws.
Here’s the idea. There might be a law-like connection between dogs and “X”: dogs cause “X”. There might also be a law-like connection between foxes and “X”: foxes cause “X”. But—and this is the key—the fox-“X” connection depends on the dog-“X” connection in a way that doesn’t go the other way around. Foxes cause “X” only because foxes are mistaken for dogs, and dogs cause “X”. If you somehow broke the dog-“X” connection, the fox-“X” connection would break too. But if you broke the fox-“X” connection, the dog-“X” connection would remain.
This is what Fodor calls asymmetric dependence. The content-determining cause is the one that the others depend on. “X” means dog because that’s the fundamental causal connection; all the other causes (foxes, questions, shadows) depend on it.
This gets some things right. Questions like “What animal says ‘woof’?” cause “dog” thoughts only because dogs themselves do—the question depends on the dog connection. Same with foxes: they cause “dog” thoughts only because they look similar to dogs.
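The asymmetry can be pictured with a toy counterfactual test (a sketch, not Fodor's actual formalism): the fox-to-“X” link is routed through the dog-to-“X” link, while the dog link stands on its own.

```python
# Toy model of asymmetric dependence (a sketch, not Fodor's formalism).
# Foxes trigger "X" only by being mistaken for dogs, so the fox law
# depends on the dog law; the dog law holds on its own.

def tokens_X(stimulus, dog_law_intact, fox_law_intact):
    """Does the symbol 'X' fire for this stimulus?"""
    if stimulus == "dog":
        return dog_law_intact  # direct connection
    if stimulus == "fox":
        # The fox route runs through the dog law as well as its own.
        return fox_law_intact and dog_law_intact
    return False

# Break the fox law: dogs still cause "X".
print(tokens_X("dog", dog_law_intact=True, fox_law_intact=False))   # True
# Break the dog law: foxes no longer cause "X" either.
print(tokens_X("fox", dog_law_intact=False, fox_law_intact=True))   # False
```

Breaking the dog law takes the fox law down with it, but not vice versa, and that asymmetry is what is supposed to crown "dog" as the content.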
But problems remain. What about indistinguishable substances? For centuries, people thought jadeite and nephrite were the same thing (jade). If “X” is caused by both, which one determines the content? Neither depends asymmetrically on the other—they cause “X” in exactly the same way, through identical appearances.
And there’s a deeper worry: this approach seems to assume what it’s trying to explain. Why would foxes cause “dog” thoughts only because dogs do? The obvious answer is: because “X” means dog, and foxes look like dogs. But that’s using meaning to explain meaning, which is cheating if you’re trying to explain meaning non-circularly.
The Bigger Questions
So where does this leave us? After decades of work, there’s no causal theory that everyone agrees works. Each attempt solves some problems but creates others. Philosophers still argue about this.
But the attempt to build a causal theory matters, even if it hasn’t fully succeeded. Here’s why.
First, it’s a bet about how minds work. If you think minds are physical systems—brains—then you need to explain how physical stuff can be about other stuff. How can a pattern of neurons firing mean something? Causal theories say: through causal connections with the world. That’s a naturalistic explanation—it doesn’t appeal to magic or souls or mysterious mental powers.
Second, it raises deep questions about what meaning even is. If you can’t reduce meaning to causation, what can you reduce it to? Maybe meaning is something that can’t be fully explained in non-mental terms. Maybe the mind is just different from everything else in the universe. Or maybe we need a totally different approach.
Third, the failures of causal theories teach us something. Each attempt highlights a feature that any theory of meaning must handle: the possibility of error (the disjunction problem), the role of the environment (the distality problem), the fact that we can think about things that don’t exist (unicorns, the planet Vulcan), and the fact that some thoughts are about themselves (“This thought is false”).
Why This Matters
This isn’t just abstract philosophy. If we ever want to build genuinely intelligent machines—robots that don’t just process symbols but actually think about the world—we’ll need to know how meaning gets into physical systems. The causal theory is one proposal for how to do that.
It also matters for understanding ourselves. You have thoughts right now. Some of them are about this article, some about what you’ll have for dinner, some about that weird thing your friend said yesterday. What makes those thoughts about those things? The answer isn’t obvious, and that’s part of what makes the puzzle so fascinating.
Nobody has fully solved it yet. But the attempt has forced philosophers to think very hard about what meaning is, how minds connect to worlds, and whether a purely physical system can truly be about anything at all.
Appendix: Key Terms
| Term | What it does in the debate |
|---|---|
| Mental content | What a thought is about — the thing it means or refers to |
| Causal theory of mental content | The idea that thoughts get their meaning from what causes them |
| Disjunction problem | The challenge of saying why “X” means dog rather than dog-or-fox-or-anything-else-that-causes-it |
| Content-determining causes | The causes that actually give a thought its meaning (as opposed to accidental causes) |
| Function (in this context) | What a mental symbol is supposed to detect — what it was designed or trained for |
| Asymmetric dependence | When one causal connection depends on another, but not vice versa — used to identify which cause sets the content |
| Naturalistic explanation | An explanation that doesn’t appeal to anything supernatural or mysterious — just physical causes and effects |
Appendix: Key People
- Dennis Stampe (20th century philosopher) — One of the first to develop a modern causal theory of mental content, drawing attention to the problem of distinguishing real content-determining causes from accidental ones.
- Fred Dretske (1932–2013) — Developed a causal theory based on information and function, arguing that mental symbols gain meaning through learning and the acquisition of functions.
- Jerry Fodor (1935–2017) — A major figure in philosophy of mind who wrestled with causal theories for decades, proposing the Asymmetric Dependency Theory as a solution to the disjunction problem.
- Ruth Millikan (born 1933) — Developed a teleosemantic theory (related to causal theories) that grounds mental content in evolutionary function.
Appendix: Things to Think About
- If thoughts get their meaning from what causes them, how do we think about things that don’t exist? I can think about unicorns, but no unicorn has ever caused any of my thoughts. Does this mean causal theories can’t handle imaginary things?
- Suppose you grew up in a place where there were no dogs, but people told you stories about them. When you finally see a dog for the first time, are you having a thought about dogs, or are you just connecting a word to an experience? When did your “dog” thought get its meaning?
- If the same neural firing pattern can be caused by many different things, and only some of them set its meaning, how do we know which ones matter for us? Could our “dog” thoughts really be about something different than we think, just because we’ve been confused in systematic ways?
- Fodor’s asymmetric dependency theory says that accidental causes depend on real causes. But is that always true? Could there be a case where a blow to the head causes a “dog” thought without depending on any prior dog-thought connection?
Appendix: Where This Shows Up
- Artificial intelligence: When programmers train neural networks, they face the same disjunction problem: how does the network learn what a “dog” is, rather than just memorizing specific dog pictures?
- Animal communication research: Scientists studying vervet monkey alarm calls debate whether the calls mean “eagle” or just trigger certain behaviors — the same kind of content question.
- Everyday arguments about intentions: When someone says “I didn’t mean it that way,” they’re drawing a line between what caused their behavior and what it actually meant — a small-scale version of the causal theorist’s problem.
- Eyewitness testimony: The fact that people can misidentify things (and that these misidentifications depend on real identities) is exactly the structure the asymmetric dependency theory tries to capture.