What Makes a Feeling Conscious? The Puzzle of Awareness Inside Your Head
Here’s a strange thing: you can see things without knowing you’re seeing them. You can feel pain without realizing you’re in pain. You can have desires you don’t know you have.
Most of us think of our minds as one big stage where everything that happens is lit up, present, felt. But that’s not true. Lots of mental activity happens in the dark. You’ve probably experienced this yourself: you’re driving somewhere familiar, your mind wanders, and suddenly you “come to” and realize you’ve been driving for five minutes without consciously noticing any of it. Yet you must have been seeing—you didn’t crash. So there were visual experiences happening in your brain, but they weren’t conscious visual experiences.
This leads to a puzzle that philosophers have been arguing about for decades: What makes the difference between a mental state that is conscious and one that isn’t? What turns a thought or perception from something that just happens inside you into something that feels like something to have?
The Core Idea: Awareness of Awareness
Here’s one natural answer, and it’s the answer that a whole family of theories (called “higher-order theories”) gives: a mental state becomes conscious when you are aware of that state. When you’re just seeing without knowing you’re seeing, your visual state is unconscious. But when you’re also aware that you’re seeing—when you have some kind of second-level awareness of your first-level seeing—then that seeing becomes conscious. It lights up.
This sounds simple, but it leads to deep disagreements. What kind of “awareness” are we talking about? Is it like a second sense, an “inner eye” that watches your own mind? Is it just a thought you have about your own mental states? Does the awareness have to actually happen, or just be possible? And can non-human animals have this kind of self-awareness?
The Transitivity Principle
The basic intuition behind higher-order theories is often called the Transitivity Principle. It says: a conscious mental state is one whose subject is, in some way, aware of being in it. Think about it this way: if you’re in a mental state and you have no clue you’re in it, that state is unconscious. If you are aware of it, it’s conscious. The higher-order theorist says this isn’t just a correlation—it’s what makes the state conscious.
This seems to match real cases. Consider “blindsight.” Some people who have damage to their visual cortex report being blind in part of their visual field. But if you ask them to guess what’s in that “blind” area—whether there’s a vertical or horizontal line, for example—they guess correctly far more often than chance. They’re seeing, but they don’t know they’re seeing. They have no awareness of their own visual states. And those states don’t feel like anything to them. They’re unconscious.
So what would it take to turn a blindsight perception into a conscious one? According to higher-order theorists, you’d need to add some kind of awareness of that perception—a second mental state that’s about the first one.
The Main Split: Inner Sense vs. Higher-Order Thought
Philosophers who agree on the higher-order approach disagree about what kind of higher-order representation does the job. The two main camps are:
Inner Sense Theory
This view, which goes back to the philosopher John Locke in the 1600s, says we have something like an “inner sense” or “inner eye” that scans our own mental states, the way our outer senses scan the world. Just as your eyes produce perceptions of colors out there, your inner sense produces perceptions of your own mental states in here. When a first-level visual state gets scanned by inner sense, it becomes conscious.
One advantage of this view: it explains why we can recognize our own experiences directly. You don’t need to figure out via reasoning that you’re seeing red—you just know. That’s because your inner sense gives you a kind of direct perception of your own seeing.
But the view has problems. For one thing, if there’s an inner sense, why don’t we notice any distinctive feel to its operation? When you pay attention to your experience of a red rose, you just find yourself focusing on the redness of the rose—not on any extra layer of awareness. This is sometimes called the “transparency” of experience. Also, an inner sense would need to be a complicated physical system in the brain, scanning other brain states. It’s hard to say how such a thing could have evolved.
Higher-Order Thought Theory
This view says the higher-order awareness isn’t a perception—it’s a thought. A conscious mental state is one that you have a higher-order thought about, a thought that says something like “I am now seeing red” or “I am feeling pain.”
The philosopher David Rosenthal has developed this view in detail. On his account, when you have a conscious experience of red, two things are happening: a first-order perception of red, and a separate higher-order thought about that perception—a thought that you are having that experience. Crucially, this higher-order thought is usually unconscious itself. It doesn’t need to be another conscious state on top, or you’d get an infinite regress of awareness-of-awareness-of-awareness.
This view has a clear advantage: we know humans have the capacity to think about their own mental states, and we can tell plausible stories about why that capacity evolved (for planning, reasoning, predicting others’ behavior, and so on). But there’s a tricky question: what about the richness of conscious experience? When you look out at a complex cityscape, you seem to be conscious of an enormous amount of detail simultaneously. Does that mean you have hundreds of higher-order thoughts happening at once, all unconsciously? That seems implausible.
A Third Option: Dispositional Theory
Some philosophers try to avoid this problem by saying that a conscious state isn’t one that actually gets thought about—it’s one that is available to be thought about. On this “dispositional” view, your rich conscious experience of the city is conscious because your perceptual states are all sitting there ready to trigger higher-order thoughts, even if those thoughts don’t actually occur. The philosopher Peter Carruthers defended this view for a while (though he later changed his mind).
This avoids the problem of needing a vast number of actual thoughts all at once. But it raises a new question: how can mere availability make a state feel like something? If a perception just sits there, not actually being thought about, what gives it the conscious “glow”?
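To make the structural contrast vivid, here is a toy sketch in Python. It is purely an analogy: the class names and the idea of “tagging” states are invented for illustration, and nothing here is meant to model real minds or settle which theory is right. It just shows the difference between a state that is actually the target of a higher-order representation (the actualist view) and one that is merely available to become one (the dispositionalist view).

```python
# Toy analogy only: first-order states plus "higher-order" records about them.
from dataclasses import dataclass, field

@dataclass
class FirstOrderState:
    content: str                       # e.g. "red patch at upper left"
    available_to_thought: bool = True  # poised where higher-order thoughts could reach it

@dataclass
class Mind:
    states: list = field(default_factory=list)
    higher_order_thoughts: set = field(default_factory=set)  # states actually thought about

    def token_higher_order_thought(self, state: FirstOrderState) -> None:
        """Actually form a thought of the form 'I am in a state with this content'."""
        self.higher_order_thoughts.add(id(state))

    def conscious_actualist(self, state: FirstOrderState) -> bool:
        # Actualist HOT: conscious only if a higher-order thought about it actually occurs.
        return id(state) in self.higher_order_thoughts

    def conscious_dispositionalist(self, state: FirstOrderState) -> bool:
        # Dispositionalist HOT: conscious if it is poised to trigger such a thought,
        # whether or not one actually occurs.
        return state.available_to_thought

# A rich visual scene: a thousand first-order states, only one actually thought about.
mind = Mind()
scene = [FirstOrderState(f"detail #{i}") for i in range(1000)]
mind.states = scene
mind.token_higher_order_thought(scene[0])

print(sum(mind.conscious_actualist(s) for s in scene))        # -> 1
print(sum(mind.conscious_dispositionalist(s) for s in scene)) # -> 1000
```

The two counts show why each view attracts its own objection: the actualist needs an actual higher-order thought for every conscious detail of a rich scene (the richness worry), while the dispositionalist needs only availability, which invites the question of how availability alone could make a state feel like anything.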
Self-Representational Theories: When a State Thinks About Itself
A more recent twist on higher-order theories says that the first-order state and the higher-order state aren’t really separate. Instead, a conscious state is one that represents itself. It doesn’t just represent red—it also represents itself as an experience of red. This idea goes back to the philosopher Franz Brentano in the 1800s.
Think of it like this: an experience of red might be a single mental state that has two jobs at once. It tells you about the world (there’s something red out there) and it tells you about itself (you’re having an experience of red). This avoids the worry that you need a whole separate mental process running alongside every conscious experience. The state does both jobs itself.
But it’s hard to explain how a single mental state could pull this off. How does it “point to” itself? Critics say this is mysterious without a detailed story about how brains could make it happen.
The Big Objections
Higher-order theories face some serious challenges. Here are three of the biggest.
The “Rock” Objection
Think about a rock. If you become aware of a rock—if you look at it or think about it—does the rock become conscious? Of course not. So why should becoming aware of a mental state make that state conscious? What’s so special?
Higher-order theorists reply: mental states are the kind of thing that can be conscious, whereas rocks aren’t. Your awareness of a rock doesn’t give the rock any special property because rocks aren’t the right sort of thing to have mental properties. But your awareness of your own perception of the rock does give that perception the property of being conscious, because perceptions are the right sort of thing.
Some critics find this reply unsatisfying. They want to know why mental states are the right sort of thing.
The Animal Consciousness Problem
This is a big one. If consciousness requires higher-order thoughts (or inner sense, or self-representation), then animals without those capacities might not be conscious. But most of us strongly believe that dogs, cats, horses, and probably many other animals have conscious experiences. There’s “something it’s like” to be a bat, as the philosopher Thomas Nagel famously put it.
Do cats have the concept of “experience”? Do they think thoughts like “I am now seeing a mouse”? If they don’t, then higher-order thought theory might say they’re not conscious. Many people find this implication deeply implausible—and also morally troubling, since it could affect how we think about animal suffering.
Higher-order theorists respond in various ways. Some argue that animals might have simpler forms of higher-order awareness that don’t require full-blown concepts. Others bite the bullet and say our intuition about animal consciousness might be wrong—we might just be imagining what it would be like for us to be a cat, and wrongly assuming the cat feels the same.
The Explanatory Gap
Finally, there’s the problem that even if higher-order theories are true, they might not really explain consciousness. You can describe all the higher-order thoughts and perceptions you want, but does that tell you why these processes feel like anything at all? Couldn’t there be a creature that has all the right higher-order representations—first-order perceptions getting targeted by higher-order thoughts—but still has no inner “feel” to its experiences?
Philosophers who press this objection think higher-order theories are missing something essential. They say there’s an “explanatory gap” between any functional or representational description and the actual subjective quality of experience. Higher-order theorists reply that this sets the bar for explanation too high. We don’t need to be able to imagine how the explanation works for it to be correct—we just need good evidence that the explaining properties actually constitute the explained ones.
Still an Open Question
After decades of debate, no version of higher-order theory has won universal acceptance. Each version handles some problems well and runs into others. The basic intuition—that consciousness involves some kind of awareness of our own mental states—remains powerful for many philosophers. But exactly how to cash that out, and whether it really solves the mystery of consciousness, is still very much up for grabs.
Part of what makes this debate fascinating is that it’s not just technical. It connects to questions about what it’s like to be a bat, about whether a computer could be conscious, about how we should treat animals, and about whether science can ever fully explain the most intimate fact about each of us: that it feels like something to be alive.
Key Terms
| Term | What it does in the debate |
|---|---|
| Higher-order theory | The family of views that say a mental state becomes conscious when it is the object of some kind of higher-order awareness |
| Transitivity Principle | The basic intuition that a conscious state is one whose subject is aware of being in it |
| Inner sense theory (HOP, for “higher-order perception”) | The view that higher-order awareness works like a perception—an “inner eye” scanning your own mental states |
| Higher-order thought theory (HOT) | The view that higher-order awareness is a thought, not a perception—a thought about your own mental state |
| Actualist HOT | The version saying a conscious state must actually be the target of a higher-order thought |
| Dispositionalist HOT | The version saying a conscious state just needs to be available to trigger a higher-order thought |
| Self-representational theory | The view that a conscious state represents itself, not just the world—it’s both first-order and higher-order at once |
| Blindsight | A condition where people can respond to visual stimuli without any conscious awareness of seeing them; used as evidence that perception can be unconscious |
| Explanatory gap | The sense that even a perfect description of brain processes or mental representations doesn’t fully explain why there should be subjective experience |
Key People
- John Locke (1632–1704): English philosopher who first proposed something like inner sense theory, calling it “reflection” — the mind’s perception of its own operations.
- David Rosenthal: Contemporary philosopher who developed the most detailed version of actualist higher-order thought theory, on which a mental state is conscious when it is the target of a (typically unconscious) higher-order thought that one is in it.
- Peter Carruthers: Philosopher who defended dispositionalist HOT theory for many years (arguing consciousness is about availability to higher-order thought) before switching to a first-order view.
- Thomas Nagel: Philosopher famous for arguing that consciousness is subjective and can’t be captured by objective science, using the example of “what it’s like to be a bat.”
- William Lycan: Philosopher who defended inner sense theory for many years, though he later moved away from it toward an attention-based account.
Things to Think About
- When you’re daydreaming while walking and “come to,” were you having conscious experiences during the daydream? What about the visual experiences that kept you from walking into a pole? If they weren’t conscious, what makes you think they were really experiences at all?
- If higher-order thought theory is right, then babies and animals who can’t think about their own mental states might not be conscious. Does this seem right to you? If not, what alternative account could explain why they are conscious?
- Could there be a creature that has all the right higher-order thoughts—it thinks “I am seeing red,” “I am feeling pain”—but those thoughts and perceptions just don’t feel like anything? If you can imagine this, does it show that higher-order theories miss something essential?
- The rock objection says: thinking about a rock doesn’t make it conscious, so why should thinking about a mental state make it conscious? What do you think the best reply is? Does it fully answer the challenge?
Where This Shows Up
- Artificial intelligence: If consciousness requires higher-order awareness, an AI that can monitor and report on its own processing might be closer to consciousness than one that just processes data. This matters for debates about AI rights and safety (a toy sketch of that difference follows this list).
- Animal welfare laws: Whether animals are conscious is central to how we treat them. If higher-order theories imply that many animals lack consciousness, this could challenge the basis for animal protection laws—or force us to find other grounds for protecting them.
- Neurology and psychology: Blindsight and other conditions (like “neglect,” where patients ignore half of their visual world) are studied not just for medical reasons but because they reveal how consciousness can come apart from perception. Understanding higher-order theories helps make sense of these conditions.
- Everyday life: The idea that you can perceive without being aware of perceiving isn’t just philosophical—it’s part of your daily experience. That moment of “coming to” while driving is a real puzzle about what consciousness is, and you’ve experienced it yourself.
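On that AI point, here is a minimal, purely illustrative sketch. All of the names are invented for the example, and whether this kind of self-monitoring brings a system any closer to consciousness is exactly what the theories above dispute; the code settles nothing. It only shows the structural difference between a system that merely processes its input and one that also keeps, and can report on, records of its own processing.

```python
# Invented names, purely illustrative. One system only answers questions about
# the world (its input); the other also answers questions about its own states.

class PlainClassifier:
    def classify(self, pixel_intensity: float) -> str:
        # First-order processing: a claim about the stimulus.
        return "bright" if pixel_intensity > 0.5 else "dark"

class SelfMonitoringClassifier(PlainClassifier):
    def __init__(self):
        self.internal_log = []  # records *about* its own processing, not about the world

    def classify(self, pixel_intensity: float) -> str:
        verdict = super().classify(pixel_intensity)
        self.internal_log.append({
            "input": pixel_intensity,
            "verdict": verdict,
            "confidence": abs(pixel_intensity - 0.5) * 2,
        })
        return verdict

    def report(self) -> str:
        """Answer a question about its own most recent state."""
        last = self.internal_log[-1]
        return (f"I just classified an input as '{last['verdict']}' "
                f"with confidence {last['confidence']:.2f}.")

monitor = SelfMonitoringClassifier()
monitor.classify(0.9)
print(monitor.report())  # I just classified an input as 'bright' with confidence 0.80.
```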