Philosophy for Kids

What Makes You Conscious? What Your Brain Is Doing When You're Aware

Here’s a strange fact: right now, you’re having an experience. You’re seeing these words, maybe hearing the hum of a fan or feeling the chair under you. There’s something it feels like to be you right now. And nobody—not the smartest neuroscientist in the world—can fully explain why that’s true.

Think about it. Your brain is a lump of tissue, about three pounds of cells and electricity. It processes information, sends signals, runs computations. But unlike a computer, it also experiences things. When a computer processes a photo of a sunset, there’s nothing it’s like to be that computer. When you see a sunset, there is something it’s like. You feel the warmth, the colors, maybe a sense of calm.

Why? How does a bunch of neurons firing turn into you having an experience?

This is the central puzzle of the neuroscience of consciousness, and it’s weird enough that most neuroscientists don’t try to solve the whole thing at once. Instead, they break it into smaller questions. Let’s look at what they’ve figured out—and what still baffles them.

The Two Questions

If you’re going to study consciousness scientifically, you need a way to ask questions that can actually be tested. Neuroscientists have settled on two:

Generic consciousness asks: What makes a mental state conscious rather than unconscious? When you’re dreaming, you’re conscious. When you’re under anesthesia, you’re not. What’s the difference in your brain?

Specific consciousness asks: Why does one conscious experience have the content it does? Why does seeing a red apple feel different from hearing a bell ring? What’s going on in your brain that makes those experiences different?

These two questions might sound simple, but they lead to very different experiments and theories.

The Hard Problem

Before we get into the science, we need to talk about something philosopher David Chalmers called the “hard problem” of consciousness.

The easy problems—and they’re not actually easy, just easier—are things like: How does the brain process visual information? How do we pay attention to one thing instead of another? How do we store and recall memories? These are hard scientific questions, but we know how to investigate them. We can measure brain activity, build models, run experiments.

The hard problem is different. It’s: Why does all this information-processing feel like anything at all? Why isn’t it just happening in the dark, like a computer? Why is there a you in there having the experience?

Some neuroscientists just set this problem aside and get on with their work. Patricia Churchland, a philosopher who works closely with neuroscience, says: “Learn the science, do the science, and see what happens.” Maybe the hard problem will dissolve once we understand more. Or maybe it won’t—and that’s part of why this debate is still alive.

How Do We Know Someone Is Conscious?

Before you can study consciousness, you need to track it. How do you know when someone is conscious of something?

The most obvious way is to ask them. This is called introspection—looking inward and reporting what you find. If I show you a picture and you say “I see a cat,” I have pretty good evidence that you’re consciously seeing a cat.

But introspection has problems. People aren’t always reliable. You might think you saw something clearly when you actually didn’t. Or you might be conservative in your reports—if you’re not sure you saw something, you’ll say you didn’t, even if you had a faint experience. Scientists are still arguing about how trustworthy introspection is.

Some researchers use metacognitive approaches instead. They ask people to report how confident they are in their judgments. “How sure are you that you saw that cat?” Confidence ratings can be measured more precisely and compared across different situations. But they still rely on the person’s own report.

Other researchers use no-report paradigms. These are clever experiments that track consciousness without asking the person to say anything. For example, in binocular rivalry experiments (where each eye is shown a different image and your conscious experience flips back and forth between the two), researchers can tell which image a person is experiencing by tracking their eye movements instead of asking them. This gets around some of the problems with introspection, but it has its own limitations.

There’s also the intentional action inference: if someone acts deliberately based on what they perceive, you can infer they’re conscious of it. If your friend catches a ball thrown at them, they probably saw it coming. But this inference isn’t always safe, as we’ll see.

The Big Theories

Several major theories try to answer the generic consciousness question. Each points to different brain features as the key to consciousness.

Global Neuronal Workspace Theory

Think of your brain as having many specialized systems: one for vision, one for hearing, one for memory, one for planning actions, and so on. Normally, each system does its own thing. But sometimes—when you’re consciously aware of something—information gets “broadcast” to multiple systems at once. It enters a global workspace.

This theory says: a mental state is conscious when and only when the information it carries enters this global workspace and becomes accessible to many brain systems at once. That’s what it means to be aware of something: your whole brain can use that information.

The workspace is built from long-range connections between different brain areas, especially in the front of the brain (prefrontal cortex) and the back (sensory areas). When information enters the workspace, you see a burst of widespread brain activity.

One problem: when people report being conscious of something, they also have to access that information to report it. So the brain activity we see might be about reporting, not about being conscious. The theory might be explaining access consciousness (information being available for reasoning and report) rather than phenomenal consciousness (what the experience actually feels like).

Recurrent Processing Theory

This theory says you don’t need widespread broadcasting to be conscious. Instead, consciousness happens when information bounces back and forth within sensory areas themselves.

Here’s how vision works: when light hits your eyes, signals travel forward through a hierarchy of visual areas—from V1 to V2 to higher areas. That’s called feedforward processing. But there are also connections going backward, from higher areas to lower ones. When signals start looping around in these circuits, you get recurrent processing.

According to this theory, recurrent processing in visual areas is enough for visual consciousness—even if the information never reaches the global workspace or the prefrontal cortex. This means you could be conscious of something without being able to report it or use it in reasoning. Some philosophers think this happens all the time: you see more than you can say.

Higher-Order Theory

This theory takes a different approach. It says: to be in a conscious state, you need to be aware of being in that state. That requires a higher-order mental state that represents the first-order state.

In plain language: if you’re seeing a red apple but have no awareness at all that you’re seeing it, then the seeing isn’t conscious. Conscious seeing requires that you also represent to yourself that you’re seeing.

This higher-order representation probably happens in the prefrontal cortex, the front part of your brain. Some versions of this theory say you can have a conscious experience even without any sensory activity in the back of your brain—as long as the prefrontal cortex is representing that experience. This would explain things like vivid dreams that seem real even though your senses are shut down.

Information Integration Theory

This is the most mathematically ambitious theory. It says consciousness is integrated information—information that’s both highly differentiated (lots of distinct states possible) and highly integrated (the parts can’t be separated without losing information).

The theory assigns a number, Φ (phi), to any system. Any system with a Φ above zero is conscious to some degree, and the higher the Φ, the more conscious it is. This theory predicts that some systems we don’t think of as conscious might be (like certain simple circuits with the right connections), and some systems we think might be conscious might not be (like the cerebellum, which has many neurons but a very modular, unintegrated structure).

This theory has led to some strange predictions. A two-dimensional grid of inactive logic gates—doing nothing at all—might have a high Φ and therefore be conscious of nothing in particular. Most neuroscientists find this implausible.

Front or Back?

One way to group these theories is by where they point in the brain.

The posterior hot zone (back of the brain, especially visual areas) is where recurrent processing theory says consciousness happens. Some information integration theorists also emphasize this region.

The prefrontal cortex (front of the brain) is where higher-order theory and global workspace theory put a lot of emphasis.

Experiments that test these theories often try to see whether you can have consciousness without prefrontal activity, or prefrontal activity without consciousness. The results are still being debated.

Unconscious Vision: The Strange Cases

Some of the most interesting evidence comes from people with brain damage who can see without being conscious of seeing.

Visual Agnosia and Patient DF

Patient DF suffered brain damage from carbon monoxide poisoning. She lost the ability to recognize objects visually. Show her a pencil and she can’t tell you what it is. She can’t describe its shape or orientation.

But here’s the strange part: if you put a letter-slot in front of her and ask her to post a card through it, she can do it perfectly, even though she can’t tell you whether the slot is horizontal or vertical. Her brain processes visual information about shape and orientation—just not consciously. The information goes to her action systems but not to her awareness.

This suggests there are at least two visual streams in the brain: a ventral stream (for recognizing what things are, linked to conscious awareness) and a dorsal stream (for guiding actions, which can operate unconsciously). Patient DF’s ventral stream was damaged, but her dorsal stream was intact.

Blindsight

Even more striking is blindsight. Patients with damage to their primary visual cortex (V1) report being blind in part of their visual field. They say they can’t see anything there.

But if you force them to guess about what’s in that blind area—is there a moving dot? which direction is it going?—they guess correctly far more often than chance. Some can even navigate around obstacles they claim not to see.

Are these patients really unconscious of what they’re “seeing”? Some researchers argue that they actually have degraded, faint conscious experiences but are too conservative to report them. When you give them more response options (not just “see” or “don’t see” but degrees of clarity), they sometimes report having faint experiences. The debate continues.

Making People See Things That Aren’t There

To really test what makes conscious content specific, neuroscientists have tried to directly manipulate brain activity and change what people experience.

In one famous experiment, researchers stimulated neurons in area MT (a motion-processing area) while monkeys performed a motion-detection task. The monkeys’ behavior shifted as if they had seen more motion in the direction those neurons preferred. The stimulation seemed to change what the monkeys consciously perceived.

Even more dramatic: researchers stimulated the somatosensory cortex (which processes touch) of monkeys who had learned to compare the frequencies of two vibrations. When they used stimulation instead of an actual vibration, the monkeys performed just as well. It was as if the monkeys were having a tactile hallucination—experiencing something that wasn’t there.

These experiments show that you can, in principle, create conscious experiences by directly activating the right neurons. But we still don’t know exactly why that works. What’s the secret ingredient that turns neural activity into experience?

What’s Still Unknown

Here’s the honest truth: nobody knows the answer. There are competing theories, each with some evidence and some problems. The hard problem remains hard.

What we do know is that consciousness is tied to specific brain processes—recurrent processing, higher-order representation, information integration, or some combination. We know that damage to certain areas can eliminate consciousness of specific things. We know that stimulating certain areas can create experiences.

But the big question—why all this neural activity feels like something—still haunts the field. Maybe future discoveries will dissolve the mystery. Maybe we’ll realize we were asking the wrong question. Or maybe consciousness is one of those puzzles that never fully yields.

For now, the neuroscience of consciousness is a field where the most exciting discoveries are probably still ahead of us. And that’s part of what makes it fascinating.


Key Terms

  • Generic consciousness – Labels the question: what makes a mental state conscious rather than unconscious?
  • Specific consciousness – Labels the question: what makes a conscious experience have the content it does (seeing red vs. hearing a bell)?
  • Hard problem – The puzzle of why physical brain activity produces subjective experience at all.
  • Introspection – The method of looking inward and reporting what you’re conscious of.
  • Global neuronal workspace – A theory that consciousness is information being broadcast throughout the brain.
  • Recurrent processing – A theory that consciousness happens when signals loop back within sensory areas.
  • Higher-order theory – A theory that consciousness requires being aware of being in a mental state.
  • Information integration (Φ) – A theory that consciousness is the amount of integrated information a system holds.
  • Ventral stream – The “what” pathway for vision, linked to conscious recognition.
  • Dorsal stream – The “how” pathway for vision, guiding actions; it can operate unconsciously.
  • Binocular rivalry – An experimental setup where different images shown to each eye cause alternating conscious experiences.

Key People

  • David Chalmers – Philosopher who formulated the “hard problem” of consciousness and argued that standard scientific explanations might never fully explain subjective experience.
  • Stanislas Dehaene – Neuroscientist who developed the Global Neuronal Workspace theory, arguing that consciousness is information broadcast across the brain.
  • Victor Lamme – Neuroscientist who championed Recurrent Processing Theory, arguing that feedback loops in sensory areas are sufficient for consciousness.
  • David Milner and Melvyn Goodale – Researchers who developed the two-visual-streams theory (ventral for perception, dorsal for action) based on studies of patient DF.
  • Patricia Churchland – Philosopher who argues we should stop worrying about the hard problem and just do the science.

Things to Think About

  1. If you could perfectly copy the neural activity of someone having an experience (say, in a computer simulation), would that simulation also be conscious? Why or why not?

  2. Patient DF can navigate around obstacles she doesn’t consciously see. Is she really unconscious of them, or might she have some faint awareness she can’t articulate? How would you design an experiment to find out?

  3. If the higher-order theory is right, then animals that can’t think about their own mental states might not be conscious at all. Does that seem right, or does it conflict with how you think about your pet’s inner life?

  4. The global workspace theory says you’re only conscious of what’s “broadcast” across your brain. But you’ve probably had the experience of suddenly realizing something you were aware of a moment ago but not paying attention to. Was that earlier awareness conscious or not?

Where This Shows Up

  • Artificial intelligence debates: If we build an AI that processes information like a brain, will it be conscious? The theories above give different answers.
  • Medicine: Determining whether patients in vegetative states are conscious affects decisions about life support and treatment. The intentional action inference has been used to argue that some “unresponsive” patients are actually aware.
  • Animal rights: Different theories of consciousness lead to different judgments about which animals are conscious and deserve moral consideration.
  • Virtual reality and simulation: If you could directly stimulate someone’s brain to create experiences, as in the tactile vibration experiments, would that be as real as ordinary experience?