What Do We Both Know? The Puzzle of Common Ground
Here’s a strange thing about talking to another person. When you say “She did it,” the person you’re talking to usually knows who you mean. When you say “Put it over there,” they know what you’re talking about and where you mean. This happens hundreds of times a day, and it almost never goes wrong.
But how?
Think about it for a second. When you speak, you’re relying on a huge pile of assumptions about what the other person already knows, believes, and accepts. You assume they know who “she” is. You assume they know what “it” refers to. You assume they understand the words you’re using. You assume they’re paying attention and not deliberately misunderstanding you. All of this — the whole invisible pile of shared information that makes communication possible — is what philosophers and linguists call common ground.
The basic idea isn’t hard to grasp. When two people talk, they need a shared foundation to build on. But the deeper you dig, the stranger it gets. What exactly is common ground? How do we know what’s in it? How does it change when someone says something? And here’s the really brain-bending question: Can common ground ever be infinite?
What’s in the Container?
One way to think about common ground is as a kind of container — a big glass box filled with all the information that everyone in the conversation agrees on. If you and I both believe it’s raining outside, that fact goes in the container. If we haven’t decided whether it’s warm enough for swimming, that question stays out.
This is called the “container view,” and it was developed by a philosopher named Robert Stalnaker in the 1970s. On this view, when someone makes an assertion — says something like “The cat is on the mat” — they’re trying to add information to the container. If the conversation goes well, everyone accepts the new information, and the container gets updated. The set of possible worlds everyone considers “live” gets smaller.
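The update step Stalnaker describes can be sketched in a few lines of code. Here the "container" is a set of possible worlds, and each world is modeled as a set of the facts that hold in it — the world representation and the names below are my own illustrative choices, not part of the formal theory:

```python
# A toy model of Stalnaker's "container" (the context set).
# Each possible world is a frozenset of the facts that hold in it.
worlds = {
    frozenset({"raining", "cat_on_mat"}),
    frozenset({"raining"}),
    frozenset({"cat_on_mat"}),
    frozenset(),
}

def assert_proposition(context_set, fact):
    """Asserting a fact removes every world where it is false."""
    return {w for w in context_set if fact in w}

# Before the assertion, all four worlds are "live".
context = worlds
# Someone asserts "The cat is on the mat"; the container shrinks.
context = assert_proposition(context, "cat_on_mat")
print(len(context))  # 2 worlds remain: exactly those with the cat on the mat
```

Notice that the sketch makes the article's complaint vivid: the only thing an utterance can do here is shrink the set, so a promise and a prediction would produce the exact same update.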
This is neat and simple. But it has a serious problem.
Think about what happens when someone makes a promise. Suppose Barney tells Betty, “I’ll mow the lawn.” On the container view, you’d describe this by saying that all worlds where Barney doesn’t mow the lawn get removed from the container. But that’s exactly what you’d say if Barney had asserted that he’ll mow the lawn — like he was reporting a fact about the future rather than making a promise. The container view can’t tell the difference between an assertion and a promise. It treats them the same way.
Worse, the container view can’t even distinguish between common ground and information that people just happen to share by coincidence. If you and I both believe it’s raining, and we’ve never talked about it, we each have the same information. But that’s not the same thing as it being common ground between us. The container view has no way to mark the difference. The set of worlds is the same either way.
So while the container view is useful as a first pass, it’s too simple to do the real work philosophers need it to do.
The Mentalist View: Beliefs Stacking Up
A different approach starts with what’s inside people’s heads. On the mentalist view, common ground is made of mental states — beliefs, knowledge, or something like that. When it’s common ground that the cat is on the mat, that means not only that both people believe it, but that both people believe that both believe it, and so on.
This gets complicated fast, so let’s go slowly.
Suppose you and I are talking. We both believe that it’s Tuesday. That’s level 1 — we each have the belief. But for this to be common ground, each of us also needs to believe that the other believes it. That’s level 2. And each of us needs to believe that the other believes that we believe it. That’s level 3. And so on.
On some versions of this view, common ground goes on forever. If it’s common ground between us that it’s Tuesday, then we both believe it’s Tuesday, we both believe we both believe it’s Tuesday, we both believe we both believe we both believe it’s Tuesday… infinitely. This is called unbounded common ground.
That sounds crazy. Do we really have infinite beliefs about each other’s beliefs? Some philosophers say yes, and they argue that it’s a necessary part of how communication works. But there’s a problem: to have unbounded common ground, you’d need to have infinitely many distinct beliefs. And that seems impossible for a creature with a finite brain.
Other philosophers say that common ground only goes a few levels deep — maybe three or four. After that, nobody’s really keeping track. On this bounded view, common ground is something like: we both believe it’s Tuesday, we both believe we both believe it’s Tuesday, and we both believe we both believe we both believe it’s Tuesday. Beyond that, the stack gets wobbly.
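The difference between the bounded and unbounded views can be made concrete with a toy model. Below, each agent's beliefs are just a flat set of sentences, and "you believe that I believe p" is the sentence `Bel(me, p)` appearing in your set. This flat encoding, and the two agents who track exactly three levels, are my own illustrative simplifications, not anyone's official formalism:

```python
# A toy picture of "levels" of mutual belief between two agents.
def bel(agent, p):
    """The sentence 'agent believes p'."""
    return f"Bel({agent}, {p})"

def other(agent):
    return "me" if agent == "you" else "you"

def mutual_depth(beliefs, p, max_depth=10):
    """How many levels deep does mutual belief in p go? (Capped, since
    a finite program can't check an unbounded hierarchy directly.)"""
    depth = 0
    targets = {"you": p, "me": p}  # level 1: each agent must believe p itself
    while depth < max_depth:
        if not all(targets[a] in beliefs[a] for a in ("you", "me")):
            return depth
        depth += 1
        # Next level: each agent must believe the OTHER believes their target.
        targets = {a: bel(other(a), targets[other(a)]) for a in ("you", "me")}
    return depth

# Two agents who keep track of exactly three levels, then stop — the
# "wobbly stack" of the bounded view.
p = "tuesday"
you = {p, bel("me", p), bel("me", bel("you", p))}
me = {p, bel("you", p), bel("you", bel("me", p))}
print(mutual_depth({"you": you, "me": me}, p))  # prints 3
```

On the unbounded view, genuine common ground would require this function never to stop short of the cap — which is exactly why critics doubt a finite mind can store all the beliefs involved.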
Which view is right? There’s a real debate here, and it hasn’t been settled. But the debate gets even more interesting when you realize there’s another whole way to think about common ground — not as something inside people’s heads at all.
The Normative View: What We Owe Each Other
Here’s a different angle. Instead of thinking about what people believe, think about what they’re committed to.
This might sound like a small shift, but it changes everything. Commitments aren’t beliefs. You can be committed to something you don’t believe (for example, a lawyer defending a client they think is guilty). And you can believe something you’re not committed to (for example, you might believe your friend is lying without ever saying so). Commitments are social relations between people — they’re about what you owe each other, what you’re responsible for, what you’ve agreed to.
When Fred tells Wilma “I saw a squirrel on the roof,” something happens between them that’s different from just exchanging information. Fred becomes committed to the truth of what he said. If Wilma accepts what he said — and she usually will, by nodding or saying “okay” — she shares that commitment. Now they’re jointly committed to the claim that Fred saw a squirrel on the roof.
This is where it gets interesting. Once you think of common ground as shared commitments, you get a different kind of infinity than the one that troubled the mentalist view. Because commitments have a logical structure: if Fred is committed to Wilma about the squirrel, then Wilma is committed to accepting that commitment. And Fred is committed to her acceptance. And she’s committed to his commitment to her acceptance. And so on.
The surprising result is that, on this view, you can’t have a single commitment between two people without having infinite commitments. Every commitment implies an infinite stack of further commitments about that commitment. But this infinity is much less troubling than the mentalist one, because commitments aren’t mental states. You don’t have to “think” about all those higher-order commitments. They’re just there, built into the structure of the social relationship.
The philosopher who first developed this kind of approach was David Lewis of Princeton, who got interested in common ground while studying how human conventions work. Lewis argued that what matters isn’t what people actually believe, but what they have reason to believe. And reasons to believe can stack infinitely without any psychological strain.
But Wait — What About Real Life?
You might be wondering: does any of this actually matter? Do people really need all this complicated machinery to talk to each other?
Here’s a way to test it. Think about situations where common ground is hard to establish.
In face-to-face conversation, common ground is relatively easy. You can see the other person, hear their responses, nod and adjust. If you’re not sure whether something is common ground, you can ask. But what about a TV news broadcast? The anchor is speaking to millions of people she’s never met. What’s common ground between her and her audience? It’s hard to say.
What about a message in a bottle? Someone writes a note, seals it, throws it in the ocean. Years later, on a different continent, someone finds it and reads it. Is there common ground between the writer and the reader? The writer didn’t know the reader existed. The reader doesn’t know who the writer was. They can’t possibly have the kind of mutual awareness that common ground seems to require.
Or consider a book by Charles Darwin. You’re reading it now, 160 years after it was written. Darwin is dead. You can’t interact with him. He can’t adjust what he’s saying based on your reactions. Is there common ground between you? Some philosophers think the answer is no — that in cases like this, communication happens without common ground. That would mean common ground isn’t necessary for communication after all.
This is a genuinely open question. Nobody has a fully worked-out answer yet.
So What Is Common Ground, Really?
After all this, you might be hoping for a final answer. But there isn’t one.
Different philosophers and linguists have different views, and the debate is still active. The container view is too simple. The mentalist view runs into problems about infinity and psychology. The normative view handles infinity better but raises new questions about what commitments really are. And it’s not even clear whether common ground is necessary for all forms of communication, or only for face-to-face conversation.
What almost everyone agrees on is this: common ground involves some kind of mutuality. It’s not enough that two people happen to share the same information. There has to be a recursive structure — I know that you know that I know that you know… However many levels deep that goes, and whatever it’s made of (beliefs, commitments, reasons), that’s the heart of common ground.
And here’s one last twist. Even if we figure out what common ground is made of, there’s still the question of whether mutuality is enough. Two people can be mutually happy about something — both happy, both aware the other is happy, both aware the other is aware, and so on. But mutual happiness isn’t common ground. So mutuality can’t be the whole story.
The philosopher who started all this, Paul Grice, gave the famous lectures that launched modern pragmatics back in 1967. More than fifty years later, we’re still working out what he started. If you find this frustrating, good — that means you’re paying attention. The puzzle of common ground isn’t solved yet, and it might be one of those puzzles that gets more interesting the longer it stays open.
Key Terms
| Term | What it does in the debate |
|---|---|
| Common ground | The shared information, beliefs, or commitments that speakers rely on to communicate |
| Container view | A simple model where common ground is like a box of agreed-upon information |
| Mentalist view | The idea that common ground is made of mental states like beliefs |
| Normative view | The idea that common ground is made of social commitments between people |
| Mutuality | The recursive structure where everyone knows that everyone knows that everyone knows… |
| Bounded common ground | The view that common ground only goes a few levels deep |
| Unbounded common ground | The view that common ground goes on forever |
Key People
- Paul Grice – A British philosopher who introduced the term “common ground” into pragmatics in the 1960s and basically started the whole conversation
- Robert Stalnaker – An American philosopher who developed the container view of common ground using possible worlds
- David Lewis – A Princeton philosopher who argued that common ground is about reasons to believe, not actual beliefs, and connected it to the study of human conventions
- Herbert Clark – A psychologist who studied how people actually establish and maintain common ground in real conversations
Things to Think About
- If common ground goes on forever — if every commitment implies infinite further commitments — does that mean we’re all committed to infinitely many things we’ve never thought about? Is that a problem?
- When you read a book by someone who died long ago, do you have common ground with the author? If not, how does communication work in that case? If yes, what kind of common ground is it?
- Can there be common ground between people who don’t like each other? Between enemies? What would that look like?
- If common ground requires mutuality — I know you know I know — then how does a baby learn language? The baby doesn’t have that kind of recursive awareness, but communication still happens.
Where This Shows Up
- Artificial intelligence: When building chatbots or virtual assistants, programmers have to decide how much common ground the AI should assume with its human user
- Law: Courts have to decide what counts as “common knowledge” when determining whether a jury can consider certain facts without proof
- Social media: When you post something online, who is your audience? What common ground do you share with them? This affects everything from jokes to political arguments
- Everyday misunderstandings: Most real communication failures happen because someone assumed too much common ground — or too little