Philosophy for Kids

How Do We Know What Animals Are Thinking?

You’re watching a dog at the park. She stops, tilts her head, and stares at a bush. A squirrel darts out. The dog chases it. Later, she digs up a bone she buried months ago. You’d probably say she remembered where she put it, and that she wanted to catch that squirrel.

But is that really what’s going on? Could she just be following a simple rule—“if something small moves fast, chase it”—without any thoughts or feelings at all? And how would we ever tell the difference?

This is the puzzle at the heart of a field called comparative cognition. Scientists who study animal minds want to know what other creatures are actually thinking and feeling, not just what they’re doing. But there’s a problem: we can’t ask them. So researchers have to design experiments that reveal what’s happening inside, and decide what counts as good evidence. That turns out to be surprisingly difficult—and surprisingly philosophical.


A Strange Discovery: The Scrub Jay Who Remembers

Imagine you’re a bird called a western scrub jay. You live in an environment where food comes and goes—sometimes there’s plenty, sometimes there’s nothing. So you hide food in hundreds of different spots to eat later. This is called “scatter-hoarding.”

Here’s what’s weird. If you hide a worm (which rots quickly) and come back to look for it after a few days, you’ll skip that spot and search for something else instead. But if you hide a nut (which lasts for months), you’ll go right back to where you buried it, even weeks later.

To a human watching, this looks like the bird is remembering what it hid, where it hid it, and when it hid it. In humans, we call this “episodic memory”—the ability to mentally travel back in time and relive a specific event. For years, scientists thought only humans could do this.

But here’s the twist: cuttlefish (which are more closely related to snails than to birds) do something similar. These animals split off from the bird lineage over 550 million years ago. Yet they also seem to remember “what, where, and when” about the food they eat. This is called convergent evolution—two distantly related species independently evolving the same solution to the same problem, like wings in birds and insects.

So either cuttlefish and scrub jays both evolved this remarkable memory separately, or something simpler is going on. Maybe they’re just following learned rules, like “don’t dig here if the food you hid here is probably rotten.” No memory of the past required.

How do you decide which explanation is right?
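One way to feel the difficulty is to write both explanations down as tiny programs. The sketch below is purely illustrative (the function names, the "shelf life" numbers, and the memory format are all invented, not taken from the actual research), but it shows the core problem: both stories predict exactly the same digging behavior.

```python
# A toy sketch of the two explanations. All names and numbers invented.

SHELF_LIFE_DAYS = {"worm": 2, "nut": 60}  # made-up shelf lives

# Explanation 1: a simple learned rule, no memory of any particular event.
def rule_following_bird_digs(food_type, days_ago):
    # "Don't dig if what you hid there is probably rotten by now."
    return days_ago <= SHELF_LIFE_DAYS[food_type]

# Explanation 2: episodic memory. The bird recalls what, where, and when,
# and decides based on that remembered event.
def remembering_bird_digs(memory, today):
    what, where, when = memory  # a remembered caching event
    return today - when <= SHELF_LIFE_DAYS[what]

print(rule_following_bird_digs("worm", 5))                  # False: skip it
print(remembering_bird_digs(("worm", "bush", 0), today=5))  # False: skip it
print(rule_following_bird_digs("nut", 5))                   # True: dig it up
```

Run every cache through both "birds" and they agree on every single case, which is exactly why watching the behavior alone can't settle which explanation is right.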


The Simple Explanation vs. The Complex One

This brings us to the central fight in comparative cognition. On one side: associative learning. This is the idea that animals (and humans) learn by forming connections between events. If a bell rings every time food arrives, you learn to associate the bell with food. If pushing a lever gives you a treat, you learn to push the lever. No thoughts, no memories, no minds—just patterns.
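The "connections between events" idea can even be written as a one-line learning rule. This is a simplified version of a standard textbook model of conditioning (the Rescorla–Wagner update); the learning rate and trial counts here are arbitrary, chosen only for illustration.

```python
# Minimal sketch of associative learning: the strength of the
# bell -> food connection grows a little every time they occur together,
# in proportion to how "surprised" the learner is.

def update(strength, food_present, rate=0.3):
    target = 1.0 if food_present else 0.0
    return strength + rate * (target - strength)

strength = 0.0
for trial in range(10):      # bell + food, ten times in a row
    strength = update(strength, food_present=True)
print(round(strength, 2))    # the association is now strong

for trial in range(10):      # bell alone, ten times: extinction
    strength = update(strength, food_present=False)
print(round(strength, 2))    # the association has faded again
```

Notice that nothing in this loop "thinks" anything: a few lines of arithmetic are enough to produce learning, forgetting, and anticipation, which is precisely why the associative story is so hard to rule out.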

On the other side: complex cognition. This includes things like causal reasoning (“I made this happen”), theory of mind (“she sees what I see”), episodic memory (“I remember hiding that worm”), and future planning (“I’ll need food later”).

Here’s the problem. For almost any clever animal behavior you can observe, you can invent an associative learning story that explains it. The associative story is usually simpler—it doesn’t require the animal to have thoughts, memories, or plans. And many scientists think simpler explanations should be preferred.

This idea has a famous name: Morgan’s Canon, named after the psychologist Conwy Lloyd Morgan. He said that when you have two explanations for an animal’s behavior, you should pick the one that uses “lower” mental faculties—the simpler one. For a long time, this was treated as a golden rule.

But philosophers have poked holes in this. First, what counts as “simpler” isn’t obvious. Is an associative learning rule really simpler than a rule like “remember where the food is”? Sometimes yes, sometimes no—it depends on your background assumptions. Second, Morgan’s Canon was supposed to be a general rule, but simplicity might only matter in specific cases. Third, if we always favor simple explanations, we might miss real complexity. We might end up making animals look less intelligent than they actually are, just because we never let ourselves see evidence of their intelligence.

This last point is important. Some philosophers argue that worrying too much about false positives—attributing thoughts to animals that don’t have them—has led us to ignore false negatives: failing to see real intelligence when it’s there. They call this anthropodenial: refusing to acknowledge human-like qualities in animals, not because the evidence is bad, but because of a bias that humans are special.


Anthropomorphism vs. Anthropodenial

Let’s get more concrete. A friend comes home and tells you, “My dog looked guilty when I came in. He knew he chewed the shoe.” Is that wrong? Could dogs actually feel guilt?

An anthropomorphic explanation (attributing human qualities to animals) might say: yes, the dog has a sense of right and wrong and feels bad when he breaks the rules. A skeptic might say: the dog has learned that when you come home and the shoe is destroyed, you get angry. The dog has learned to act submissively when you’re angry, because that reduces the chance of punishment. No guilt required.

Which is right? We don’t know for sure. But here’s where it gets tricky. Humans have a strong tendency to see minds everywhere. In a famous experiment from the 1940s, researchers showed people a short animated film of three simple shapes—a triangle, a circle, and a square—moving around on a screen. Almost everyone described it in human terms: “The triangle was chasing the circle, and the square was trying to get away.” We can’t help it. So if we’re predisposed to see minds even in triangles, we might be especially prone to seeing them in animals.

But here’s the flip side. We also have a strong tendency to think humans are special and unique. This anthropocentrism (putting humans at the center) makes us resist seeing animal intelligence. As one researcher put it, “Cries of anthropomorphism are heard particularly when a ray of light hits species other than our own.” In other words, we’re quick to accuse someone of being too generous to animals, but slow to accuse them of being too stingy.

So we have two biases pulling in opposite directions. One makes us see minds too easily. The other makes us refuse to see them. Neither is obviously the right default.


When Science Gets Political: Why This Matters

This isn’t just an abstract debate. The results of animal cognition research are used in real court cases.

In 2013, a group of lawyers filed a petition for a writ of habeas corpus on behalf of a chimpanzee named Tommy, who was living alone in a dark shed. A writ of habeas corpus is a legal order that says “produce the body”—it’s used to challenge whether someone is being held in prison unlawfully. If Tommy had won, it would have been the first time in US history that a nonhuman animal was recognized as having the right not to be treated as property.

The case depended in part on evidence about chimpanzee cognition. The lawyers argued that chimpanzees have enough mental complexity that solitary confinement causes them real harm. Scientists testified about what chimpanzees can and cannot think and feel.

Should the standards of evidence be different when the outcome affects whether a living being gets legal protection? Many philosophers say yes. If we’re going to use animal cognition research to make decisions about welfare and rights, we might need to set the bar differently than we would for a pure scientific question. If the cost of being wrong is high (for example, we fail to protect animals that can actually suffer), maybe we should be more willing to accept evidence of complex cognition, even if it’s not perfect.

But others worry that lowering the bar introduces bias in the other direction—we might end up attributing thoughts and feelings to animals that don’t have them, which could also lead to bad decisions.


The Underdetermination Problem

Here’s a deeper philosophical issue. Philosophers call it underdetermination: the evidence often doesn’t tell us enough to decide between competing explanations.

Suppose a crow solves a puzzle by using a tool to reach food. An associative learning explanation might say: the crow learned by trial and error that poking a stick into a tube sometimes produces food. A complex cognition explanation might say: the crow understood that the stick could act as an extension of its body to retrieve something out of reach.

Both explanations fit the data. The crow got the food. The question is how. And more data doesn’t always help, because clever scientists can often invent new associative stories to fit almost any result.

One recent proposal for dealing with this is called signature testing. Instead of just asking “does the animal succeed at the task?”, researchers should look for specific patterns of errors, biases, and limits. If an animal has a real understanding of cause and effect, it should make certain kinds of mistakes but not others. Its behavior should have a “signature” that’s hard to fake with simple learning rules.
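A miniature version of this idea can be sketched in code. The "crows" and their trial data below are entirely invented; the point is only to show that two learners can have the same success rate while leaving very different signatures in their errors.

```python
# Signature testing, in miniature: don't just count successes;
# look at WHERE the failures fall. All data here are invented.

# Trials: (task_variant, succeeded?) for two hypothetical crows.
crow_a = [("stick_works", True)] * 8 + [("stick_too_short", False)] * 2
crow_b = [("stick_works", True)] * 6 + [("stick_too_short", True)] * 2 \
       + [("stick_works", False)] * 2

def success_rate(trials):
    return sum(ok for _, ok in trials) / len(trials)

def error_profile(trials):
    # Which task variants produce the failures?
    return sorted({variant for variant, ok in trials if not ok})

print(success_rate(crow_a), success_rate(crow_b))  # identical: 0.8 and 0.8
print(error_profile(crow_a))  # fails exactly when the tool can't work
print(error_profile(crow_b))  # fails even on variants where the tool works
```

Crow A's errors cluster exactly where the causal structure breaks down, which is the kind of pattern you'd expect from genuine causal understanding; Crow B's errors are scattered, which looks more like an imperfectly learned habit. A plain success count would never tell them apart.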

Another approach is to think in dimensions rather than categories. Instead of asking “does this animal have causal reasoning?” (yes/no), we can ask more specific questions: what kind of causal information can it pick up? How flexibly can it use that information? Can it combine different types of information? This makes the picture messier but more honest.


What About AI?

Recently, a new twist has emerged. Researchers have started comparing animal cognition to artificial intelligence. The idea is that AI systems face similar challenges to animals: they need to navigate environments, solve problems, and predict outcomes. And AI systems are transparent in a way that animals aren’t—you can examine their code and see exactly what they’re doing.

Some researchers have built testbeds where both animals and AIs are given the same tasks—like finding food behind obstacles, or learning which objects are safe to touch. So far, children aged 6–10 outperform AIs on complex tasks like object permanence (knowing that something still exists even when you can’t see it). But AIs can do some simple tasks as well as animals.

This raises a strange question: if an AI system solves a problem the same way an animal does, does that mean the animal is “just running a program”? Or does it mean the AI is more like a mind than we thought? Philosophers are still arguing about this.


What We Still Don’t Know

You might be hoping for an answer by now: “So do animals actually think or not?” The honest answer is: nobody really knows, and philosophers still disagree about how to find out.

We know that animals do remarkable things—remembering food locations, using tools, cooperating, deceiving each other. But we don’t know for sure what it’s like to be them, or whether their inner lives are anything like ours. The methods for studying animal minds are clever, but they’re also limited. And the biases we bring to the problem—both for and against seeing minds—make it even harder.

What’s clear is that the question is worth asking. Because how we answer it shapes how we treat the other creatures we share the planet with.


Appendices

Key Terms

  • Associative learning — The idea that animals learn by forming connections between events (like bell + food = salivation) without any thoughts or understanding
  • Complex cognition — Fancier mental abilities like reasoning, planning, memory, and understanding others’ minds
  • Convergent evolution — When two distantly related species evolve the same solution to a similar problem (like wings in birds and insects)
  • Morgan’s Canon — The rule of thumb that scientists should prefer simpler explanations for animal behavior over more complex ones
  • Anthropomorphism — Attributing human-like qualities (thoughts, feelings) to animals, possibly without good evidence
  • Anthropodenial — Refusing to see human-like qualities in animals, possibly because of a bias that humans are special
  • Underdetermination — When the available evidence doesn’t tell you which of two competing explanations is right
  • Signature testing — Looking for specific patterns of errors and limits in animal behavior, rather than just seeing if animals succeed at a task

Key People

  • Conwy Lloyd Morgan — A British psychologist who argued that scientists should prefer simpler explanations for animal behavior (his “canon” is still debated today)
  • Fritz Heider and Marianne Simmel — Psychologists who showed that people will describe moving shapes in human terms, revealing our tendency to see minds everywhere
  • Kristin Andrews — A philosopher who argues that we should take animal minds seriously and that folk psychology (our everyday understanding of minds) is a useful starting point for research

Things to Think About

  1. If you had a pet and it acted guilty, how would you decide whether it actually felt guilt or was just acting to avoid punishment? What kind of experiment could distinguish these?

  2. Morgan’s Canon says to prefer simpler explanations. But what counts as “simpler”? Is a rule like “if you see a human angry, act submissive” simpler than “feel guilty about doing wrong”? In what sense?

  3. Suppose we discover that an AI system can perfectly simulate guilt—it acts just like a guilty dog when it “misbehaves.” Would that mean the AI actually feels guilt? If not, why would it be different for the dog?

  4. If animal cognition research is used in court cases about animal rights, should scientists set a lower bar for accepting claims about animal minds? Or should they keep the same standards as in pure research? What are the risks either way?

Where This Shows Up

  • Your own pets. The next time your cat or dog does something that “seems” smart or emotional, ask yourself: what alternative explanations exist? How would you test them?
  • Animal welfare debates. Laws about how we treat farm animals, zoo animals, and research animals often depend on what scientists think those animals can feel and understand.
  • AI and robot ethics. As we build smarter machines, the same questions about mind and consciousness come up: how do we tell if an artificial system is really “thinking” or just following rules?
  • Criminal justice. Humans also act guilty when they’re innocent sometimes. The same problem of distinguishing real guilt from learned fear shows up in courtrooms—we’re just better at asking humans directly.