Philosophy for Kids

What Does It Mean for a Thought to Be *About* Something?

You are reading this sentence. Right now, words on a screen are about things—about philosophy, about minds, about you. That seems obvious. But if you stop and think about it, it’s actually pretty weird. How can a bunch of marks on paper, or vibrations in the air, or electrical signals in a brain, be about anything at all?

This is not a trick question. It’s a real puzzle that philosophers have been trying to solve for a long time. Your beliefs are about things: you believe that your backpack is on the floor, that pizza is delicious, that tomorrow is a school day. Your desires are about things too: you want to finish that video game, you want your friend to stop being mad. Even your perceptions are about things: you see a tree, not just a jumble of colors and shapes.

Philosophers call this property of being about something intentionality. It sounds fancy, but it just means “directedness toward the world.” And here’s the mystery: how does a lump of gray stuff inside your skull—a brain—manage to point at things in the world? How can a bunch of neurons be about a pizza that doesn’t even exist yet?

One major family of answers is called teleological theories of mental content. “Teleological” comes from a Greek word meaning “purpose” or “end.” These theories say: thoughts are about what they are supposed to be about. The key idea is that mental states have functions, just like hearts have the function of pumping blood, and those functions determine what the thoughts mean.


The Problem of Error

To see why anyone would think this, let’s start with the simplest possible theory: a thought means whatever causes it. If you have a belief that there’s a cat in the room, and that belief is usually caused by actual cats, then maybe the belief means “cat.”

The problem is obvious. What happens when you mistake a small fox for a cat? On this simple theory, your belief can't be wrong. If a small fox caused the belief, then the belief must mean "cat or small fox." Every time you make a mistake, the meaning of your thought just expands to swallow whatever caused it, so you can never be wrong. (Philosophers call this the disjunction problem.) But we are wrong all the time. So any decent theory has to explain how misrepresentation is possible—how a thought can be about one thing but fail to match reality.

This is where functions come in. Think about a smoke alarm. Its function is to detect smoke. But sometimes a piece of burnt toast will set it off. The alarm is supposed to respond to smoke, not toast. That “supposed to” is what makes it possible for the alarm to malfunction. Teleological theories say the same thing about your brain: your mental states have functions, and those functions determine what they’re supposed to represent. When they misfire, you make a mistake.
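The logic of this argument can be put in a tiny sketch. The code below is purely illustrative, a toy model rather than anything from the philosophical literature; the names `causal_content` and `teleological_content` are invented for the example.

```python
# Toy contrast between the simple causal theory and a teleological theory.
# (Illustrative only; these function names are invented for this sketch.)

def causal_content(firings):
    """Simple causal theory: a state means whatever actually causes it.
    The content just expands to cover every cause that occurs."""
    return set(firings)

def teleological_content():
    """Teleological theory: content is fixed by what the detector
    was selected to respond to, regardless of what triggers it today."""
    return {"cat"}

# The detector fires on real cats, but also (by mistake) on a small fox.
firings = ["cat", "cat", "small fox"]

# Causal theory: content swells to "cat or small fox",
# so no firing ever counts as an error.
errors_causal = [f for f in firings if f not in causal_content(firings)]
print(errors_causal)  # [] : error is impossible

# Teleological theory: content stays "cat",
# so the fox-triggered firing is a misrepresentation.
errors_teleo = [f for f in firings if f not in teleological_content()]
print(errors_teleo)  # ['small fox'] : error is possible
```

The design difference is the whole point: on the causal theory the content is computed from whatever actually happens, so nothing can count as a mistake, while the teleological theory fixes the content in advance by selection history, which is exactly what makes error possible.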


What Are Functions, Anyway?

Now we need to get clearer about this idea of “function.” Obviously, we’re not talking about the kind of function you learn in math class. We’re talking about biological functions—what something is for.

Most philosophers who defend teleological theories of content favor what’s called an etiological theory of functions. “Etiology” just means “story about origins.” On this view, a thing’s function depends on its history. The heart has the function of pumping blood because hearts were selected for pumping blood over millions of years of evolution. Hearts that pumped blood helped organisms survive and reproduce, so the genes for making hearts that pump blood got passed on. The function is what the thing was selected for.

This is important because it allows for malfunction. A heart that can’t pump blood still has the function of pumping blood—it just isn’t doing its job. Similarly, the part of your brain that detects danger has the function of detecting actual danger. If it fires when there’s no danger (like when you’re scared of a shadow), it’s malfunctioning. It’s misrepresenting.

But wait—there’s a complication. Not all functions come from evolution by natural selection. Some come from learning. When you learn that a certain sound means your mom is home, your brain develops a mechanism that has the function of indicating that specific event. Some functions come from cultural transmission, like when you learn that a thumbs-up means “good job.” The point is that functions can arise in different ways, as long as there’s some kind of selection process—some history of successful use that explains why the mechanism exists.


Three Major Versions of the Theory

Not everyone agrees on exactly how functions determine content. There are three main camps.

The Informational View

One early version, developed by Fred Dretske, says that mental states have the function of carrying information about the world. Your visual system, for example, has the function of indicating what’s in front of you. When light hits your retina and your brain creates a visual experience, that experience is supposed to carry information about actual objects.

This sounds good until you think about the details. Consider a simple bacterium that has a tiny magnet inside it. The magnet points north, which in the ocean where the bacterium lives also points downward, toward the oxygen-free sediment where the bacterium can survive. What does the magnet "represent"? Does it represent "north"? "Down"? "Safe sediment"? All of these line up in the bacterium's normal environment. But when a scientist holds a bar magnet nearby and the bacterium swims upward to its death, we want to say it made a mistake. What exactly was it mistaken about, though? On Dretske's theory, it's hard to say, because the magnet has the function of indicating all of these things at once. This is one face of the problem of distal content: the problem of figuring out which link in the causal chain (the magnetic field, the downward direction, the safe sediment out in the world) a representation is actually about.

The Consumer-Based View

Ruth Millikan takes a different approach. She points out that representations are always part of a system: there’s a producer that makes the representation, and a consumer that uses it. Think about a beaver slapping its tail on the water to warn other beavers of danger. The producer is the beaver that slaps. The consumers are the other beavers that hear the slap and dive for cover. The content of the signal—what it means—is determined by what the consumers need it to mean in order for their response to work properly. The signal means “danger” because when it actually corresponds to danger, the consumers survive; when it doesn’t, they waste energy diving for no reason.

Millikan applies this to the brain. Your visual system produces representations; your motor systems consume them to guide behavior. What a visual representation means depends on what would have to be true for the motor systems to do their jobs properly—for you to catch prey, avoid predators, find your way home. This is why, on Millikan’s view, a frog’s visual representation might mean “frog food” rather than “small dark moving thing,” because that’s what the frog’s prey-catching system needs to be true for the frog to eat successfully.

This view has a striking consequence. Remember the bacterium with the magnet? On Millikan's view, the indeterminacy disappears: the magnet's pull represents the direction of oxygen-free sediment, because that is the one condition that has to hold for the consumer, the bacterium's swimming mechanism, to do its job successfully. It does not mean "north" or "down," even though magnetic north is what the magnet actually detects. On the consumer-based view, content is fixed by what the users of a signal need to be true, not by what its producer can sense.

The Causal-Informational View

Karen Neander offers a third version that combines elements of both previous views. She focuses on the producer side of things. On her view, a sensory-perceptual system has the function of responding to certain causes. Your visual system has the function of responding to red things by producing a RED state. If it produces a RED state in response to something that isn’t red, that’s a malfunction—a misrepresentation.

Neander’s theory solves the problem of distal content in an elegant way. Your visual system was selected for responding to distal objects (like chairs) by way of responding to proximal stimuli (like patterns of light on your retina). But the selection was for the distal response, not the proximal one. The proximal response was just a means to the distal end. So the content of the visual representation is the distal object—the chair—not the light patterns or the retinal images.

This also handles the frog case differently. On Neander’s view, the frog’s visual system was selected for responding to small dark moving things (because those things were nutritious), but it wasn’t selected for responding to nutritiousness itself—because the frog’s visual system can’t detect nutritiousness directly. So the frog represents “small dark moving thing,” not “frog food.” Some philosophers think this is more plausible, especially if you’re trying to explain how the frog’s brain actually works in scientific terms.


Big Problems for All These Theories

None of these views is obviously correct, and philosophers have raised some serious objections.

The Swampman Problem

Imagine that lightning strikes a swamp in exactly the right way to create a perfect physical copy of you—every molecule, every neuron, exactly as they are right now. This "Swampman" (a thought experiment invented by the philosopher Donald Davidson) looks exactly like you, acts exactly like you, and would say everything you would say. But Swampman has no history. He was created by pure accident, not by evolution or development or learning.

On the teleological view, Swampman has no mental content. His brain states have no functions, because functions depend on history and selection. But that seems crazy. If you punched Swampman, he would feel pain and say “ow.” If you asked him what he had for breakfast, he would tell you. Intuitively, he has thoughts and feelings just like you do.

Proponents of teleological theories have various responses. Some say the intuition is wrong—we’re just fooled by the appearance of design, like we might be fooled by a perfectly detailed rock that looks like a watch. Others say that even if Swampman could have thoughts in some possible world, that doesn’t show anything about what thoughts actually are in our world. This is still a live debate.

The Problem of Fancy Concepts

Even if teleological theories can explain how we think about food, predators, and other things that matter for survival, can they explain how we think about democracy, quarks, or carburetors? These things don’t seem to have any direct connection to evolutionary fitness. Your ancestors didn’t need to think about carburetors to survive and reproduce.

Teleological theorists have two main responses. First, they point out that not all functions come from evolution. Learning and cultural transmission can create new functions on shorter timescales. Once you have basic concepts, you can combine them to think about new things. Second, they argue that this is just a hard problem that all naturalistic theories face—it’s not unique to teleological approaches. The fact that we don’t have a complete answer yet doesn’t mean the approach is doomed.

The Too-Liberal Problem

Some critics say teleological theories attribute thoughts to too many things. If having a function is enough for having content, then plants might have representations, and bacteria, and maybe even thermostats. Is that plausible? Do sunflowers think about the sun when they turn toward it?

Some teleological theorists embrace this conclusion. They say that representation comes in degrees, and simple organisms have simple representations. Others try to add extra requirements—like the need for a consumer system, or the need for perceptual constancy mechanisms—to draw a line between genuine representation and mere causal sensitivity. Nobody has found a completely satisfying line yet.


Why This Still Matters

Here’s a strange thing about this whole debate. You’ve been having thoughts your whole life, and you’ve never needed a philosophical theory to do it. The puzzle isn’t how to have thoughts—you’re doing it right now. The puzzle is how to explain what thoughts are in a way that fits with everything else we know about the world.

If you’re a physicalist—if you think everything is made of physical stuff, including your mind—then you need to explain how physical stuff can be about other physical stuff. Teleological theories are one of the best attempts to do this. They say that meaning arises from function, and function arises from history. Your thoughts mean what they do because of what your brain was designed (by evolution, by learning, by experience) to do.

The debate isn’t settled. Philosophers still argue about which version of the theory works best, whether any version can handle all the objections, and whether maybe the whole approach is wrong. But the question itself—how can a brain be about anything?—is one of the deepest puzzles there is. And it’s a puzzle you carry around with you every moment you’re awake.


Appendices

Key Terms

  • Intentionality: The property of being about something; what makes a thought a thought rather than just a brain event.
  • Teleological theory: Any theory that explains mental content by appealing to what mental states are supposed to do.
  • Etiological function: A function determined by history; what something was selected for.
  • Misrepresentation: The possibility of being wrong; a good theory of content must allow for this.
  • Distal content: What a representation is about out in the world (the object itself, not the light it reflects or the image on the retina); the problem of distal content is figuring out which link in the causal chain a representation picks out.
  • Consumer: In Millikan's theory, the system that uses a representation to guide behavior.
  • Producer: In Millikan's theory, the system that creates a representation.

Key People

  • Fred Dretske (1932–2013): An American philosopher who tried to combine the idea that thoughts carry information with the idea that they have functions.
  • Ruth Millikan (born 1933): An American philosopher who argued that what a thought means depends on how it’s used by other parts of the mind or brain.
  • Karen Neander (1954–2020): An Australian-born philosopher who argued that sensory representations mean whatever they were selected to respond to.
  • David Papineau (born 1947): A British philosopher who argued that desires come first—their content is what they’re supposed to bring about—and beliefs get their content from how they help satisfy desires.

Things to Think About

  1. If Swampman has no mental content because he has no history, does that mean you could never build a conscious robot? Or could a robot have a history (of training, of learning) that counts?

  2. The frog that snaps at everything small, dark, and moving—does it think it’s snapping at “food” or at “small dark moving thing”? How could you tell? Does the answer matter for how you’d study the frog’s brain scientifically?

  3. Plants turn toward sunlight. Bacteria swim away from toxins. Do they have representations? If not, where exactly do you draw the line between “mere response” and “genuine thought”?

  4. The teleological theory says content depends on history. But you can have a brand-new thought right now—about something you’ve never thought about before, like a purple giraffe wearing a top hat. How does history help explain that thought?

Where This Shows Up

  • Artificial intelligence: When people argue about whether AI systems “actually understand” language or just manipulate symbols, they’re arguing about a version of the content question.
  • Biology and neuroscience: Scientists who study animal behavior or brain function sometimes need to decide what an animal’s brain states “mean”—this is a practical version of the theoretical problem.
  • Everyday arguments about meaning: When you say someone “misunderstood” you, or that a sign “means” something different than it says, you’re dealing with the same basic puzzle about how things get to be about other things.