Philosophy for Kids

How Words Build Meaning: The Puzzle of Compositionality

Imagine you’ve never heard this sentence before:

“The platypus that lives behind the school library has been stealing everyone’s lunch money.”

You’ve never seen those words in exactly that order. And yet, you understood it immediately. How?

This is a genuinely strange fact about language. You can take a finite set of words you know, combine them in new ways you’ve never encountered, and understand the result without any extra instruction. You do this thousands of times a day. When you read a text from a friend, hear a teacher explain something new, or see a sign you’ve never seen before, you’re pulling off this trick constantly.

Philosophers and linguists have a name for what makes this possible. They call it compositionality.


What It Means for Something to Be Compositional

Here’s the basic idea in plain form:

The meaning of a complex expression is determined by its structure and the meanings of its parts.

Think of it like this. If you know what “dog” means, and you know what “bark” means, and you know what it means to put those two words together into a sentence, then you can figure out what “The dog barks” means. You don’t need to learn “The dog barks” as a separate fact, the way you had to learn what “dog” means. The sentence’s meaning comes from its parts and how they’re arranged.

This might sound obvious. But here’s why it’s interesting: it’s not obvious that language has to work this way. It could have been that every sentence was basically a separate chunk you had to memorize, like idioms. “Kick the bucket” doesn’t mean what its words suggest—you just have to learn it as a unit. If all of language were like that, you’d never understand a sentence you hadn’t heard before. You’d be stuck with only the sentences you’d memorized.

But that’s not how language works. You can generate and understand an infinite number of sentences. Compositionality is the explanation philosophers usually give for this ability.
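The mechanism can be sketched as a tiny program (a toy model, not a real semantic theory; the lexicon and the combination rule here are invented for illustration). Word meanings are stored once, and sentence meanings are computed from them, never memorized:

```python
# A toy compositional semantics: word meanings live in a lexicon,
# and sentence meanings are built from parts by a combination rule.

# Lexicon: each word's "standing meaning", as a simple symbol.
nouns = {"dog": "DOG", "cat": "CAT"}
verbs = {"barks": "BARK", "runs": "RUN"}

def sentence_meaning(noun, verb):
    """Combination rule: subject + verb means
    'the thing the noun picks out does the verb's action'."""
    return f"{verbs[verb]}({nouns[noun]})"

# We never listed "The cat barks" anywhere, yet its meaning
# falls out of the parts and the rule:
print(sentence_meaning("dog", "barks"))  # BARK(DOG)
print(sentence_meaning("cat", "barks"))  # BARK(CAT)
```

Notice that adding one new noun to the lexicon instantly yields two new understandable sentences; the number of sentences grows much faster than the number of stored facts, which is the whole point of the productivity argument below.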


Different Kinds of Meaning

Here’s where things get a little tricky. When we say “the meaning” of a word or sentence, we might mean different things.

Consider the word “I.” When I say it, “I” refers to me. When you say it, “I” refers to you. The word’s standing meaning—what you learn when you learn English—is roughly “the person speaking.” But its occasion meaning—what it actually picks out on a particular use—changes depending on context.

This matters for compositionality. If I say “I am hungry,” the meaning of the whole sentence depends partly on who’s speaking. But that’s fine: the occasion meaning of “I” depends on context, and the sentence’s meaning depends on that occasion meaning. Compositionality doesn’t say context can’t matter. It just says that once you fix the meanings of the parts (including whatever context-sensitivity they have), the meaning of the whole is determined by how they’re put together.
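One way to picture the distinction is as a tiny sketch in code (a toy model; the names and contexts are invented for illustration): the standing meaning of "I" is a rule from contexts to referents, and the occasion meaning is whatever that rule returns on a particular use.

```python
# Toy model: the standing meaning of "I" is a rule, a function
# from a context of use to a referent. The occasion meaning is
# the value that rule returns in one particular context.

def standing_meaning_of_I(context):
    """The rule you learn when you learn English: 'I' picks out the speaker."""
    return context["speaker"]

# Two occasions of use, two different occasion meanings, one rule:
print(standing_meaning_of_I({"speaker": "Ana"}))  # Ana
print(standing_meaning_of_I({"speaker": "Ben"}))  # Ben
```

The rule itself never changes, which is why compositionality survives: fix the context, and the parts' contributions are fixed too.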

A more controversial question is whether compositionality is about meaning or about reference—what words actually point to in the world. “The morning star” and “the evening star” both refer to the planet Venus. But do they mean the same thing? Many philosophers think not. If compositionality were just about reference, then swapping “morning star” for “evening star” in any sentence should preserve truth. But consider:

“The ancient Greeks believed the morning star was a god.”

“The ancient Greeks believed the evening star was a god.”

The first might be true and the second false, even though both refer to Venus. So compositionality of reference clearly fails. Most philosophers think compositionality is about something richer than reference—about meaning in a fuller sense.


Three Arguments That Language Is Compositional

The Argument from Novel Sentences

This is the one we started with. You understand sentences you’ve never heard. The best explanation is that you know the meanings of the words and the rules for combining them. You don’t have to learn each sentence separately. This is called the argument from productivity.

There’s a wrinkle, though. The existence of idioms shows that not every expression works compositionally. “Spill the beans” doesn’t mean what its parts suggest. Some estimates say English has around 25,000 idioms. So maybe compositionality is just a rough tendency, not a strict rule. But defenders argue that idioms are exceptions that prove the rule—we notice them because they break the pattern. Most of language still works compositionally.

The Argument from Patterns

Here’s another observation. If you understand “The dog chased the cat,” you almost certainly also understand “The cat chased the dog.” This seems trivial—but it’s actually quite revealing. It shows that your ability isn’t tied to specific sentences. You can recombine the same pieces in different arrangements and still understand them.

This is called systematicity. If language weren’t compositional, there’d be no reason to expect this. You might understand one sentence about dogs chasing cats but not the reverse, the way you might understand one idiom but not another related one. But that’s not how language works. Understanding one pattern gives you the other for free.

The Argument from Learning

Children learn language from hearing sentences, not isolated words. If sentences didn’t have compositional structure, it’s hard to see how kids could figure out what individual words mean. They’d have to memorize whole sentences and somehow extract the parts. But if sentences are compositional, then hearing “The dog runs” and “The cat runs” and “The dog sleeps” lets you figure out what “dog,” “cat,” “runs,” and “sleeps” mean by comparing them.

This is still an active area of research, but the basic idea is plausible: compositionality makes language learnable in a way it wouldn’t otherwise be.


Could Language Fail to Be Compositional?

The arguments above are strong, but not airtight. Here are some places where philosophers have worried compositionality might break down.

Context-Sensitive Words

Consider the word “cut.” Cutting grass is different from cutting a cake, which is different from cutting a deal. Does “cut” have one meaning that shifts depending on context, or does it have multiple meanings? Radical contextualists think that every word shifts its meaning depending on context. If that’s true, then what a sentence means depends on more than just the words and their arrangement—it depends on the whole situation.

Defenders of compositionality respond that this just means word meanings are sensitive to context. That’s fine. Compositionality doesn’t say word meanings can’t change with context—it says that given the context and the word meanings, the sentence meaning follows. You still don’t need to learn each new sentence as a separate fact.

Quantifiers and Conditionals That Don’t Compose

Think about these two sentences:

“Everyone will succeed if they work hard.”

“No one will succeed if they goof off.”

They look like they have the same structure: a quantifier, then a conditional. But if you translate the first into logic, you get something like “For every person, if they work hard, they will succeed.” The natural translation of the second would be “There is no person such that if they goof off, they will succeed.” But that’s not what the sentence means! What it actually means is “For every person, if they goof off, they will not succeed.”
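Written out in predicate logic (with $W$, $G$, and $S$ as shorthand predicates for working hard, goofing off, and succeeding), the three readings are:

```latex
% "Everyone will succeed if they work hard":
\forall x\,\bigl(W(x) \rightarrow S(x)\bigr)

% The mechanical parallel translation of
% "No one will succeed if they goof off" (the wrong reading):
\neg \exists x\,\bigl(G(x) \rightarrow S(x)\bigr)

% What the sentence actually means:
\forall x\,\bigl(G(x) \rightarrow \neg S(x)\bigr)
```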

Somehow, the phrase “no one” changes how the “if” works. This is a puzzle for compositionality: same apparent structure, same parts, but the contribution of “if” seems to depend on which quantifier it’s paired with.

Philosophers have proposed various solutions, but nobody has a fully agreed-upon answer. It’s an open debate.

Belief Sentences

Here’s a puzzle close to the morning star case from earlier:

“Carla believes that eye doctors are rich.”

“Carla believes that ophthalmologists are rich.”

If “eye doctor” and “ophthalmologist” mean the same thing, then compositionality says these sentences should mean the same thing. But Carla might believe one and not the other if she doesn’t know they’re the same profession.

This is one of the oldest and hardest problems in philosophy of language. Responses range from denying that the words are really synonyms to building hidden complexity into the grammar of belief sentences. None of the solutions is universally accepted.


What’s at Stake

If language is compositional, that tells us something deep about how our minds work. It means we have a finite mental vocabulary and a finite set of rules that can generate infinite meanings. This is what makes human language different from animal communication systems, which typically have a fixed set of signals.

If language is not fully compositional—if meanings leak around the edges in ways that depend on extra-linguistic knowledge, beliefs about the world, or subtle features of context—then our understanding of how language works is more complicated. We’d need to explain how we manage to understand novel sentences even though compositionality isn’t the whole story.

Most working linguists and philosophers assume compositionality as a working hypothesis. It’s proved enormously useful in building theories that make accurate predictions. But whether it’s actually true—a deep fact about how language is structured—is still a live question.

Nobody really knows, once and for all. And that’s part of what makes it interesting.


Appendix: Key Terms

Each term is glossed by the role it plays in this debate:

  • Compositionality: the idea that a complex expression’s meaning comes from its structure and the meanings of its parts
  • Productivity: our ability to understand new sentences we’ve never heard before
  • Systematicity: the fact that understanding one sentence pattern lets you understand related ones
  • Standing meaning: the conventional meaning of a word, independent of any particular use
  • Occasion meaning: what a word or sentence means on a specific occasion of use, including context
  • Idiom: an expression whose meaning isn’t predictable from its parts (like “kick the bucket”)
  • Reference: what a word or phrase points to in the world (like the actual planet for “Venus”)

Appendix: Key People

  • Gottlob Frege (1848–1925): A German philosopher and mathematician who first clearly stated that the meaning of a sentence is built from the meanings of its parts. He’s considered the founder of modern logic and philosophy of language.
  • Jerry Fodor (1935–2017): An American philosopher who argued strongly that language and thought are compositional, and used this to argue against certain theories of how the mind works (like connectionism).
  • Barbara Partee: A linguist who produced influential examples (like her well-known “marbles” sentences) that seem to challenge compositionality, helping to sharpen the debate.
  • Charles Travis (1943–): A philosopher who argued that words are much more context-sensitive than most people think, using examples like a painted green leaf to make his point.

Appendix: Things to Think About

  1. If compositionality is true, does that mean every word has a single, definite meaning? Or could words have fuzzy, shifting meanings as long as they shift in predictable ways?

  2. Think of some idioms you know. Can you tell a clear story about what makes them different from normal expressions? Is there a sharp line, or does it blur?

  3. Suppose we discovered a language where every sentence had to be learned separately—no compositionality at all. What would that language be like? Could you ever learn it? How would it be different from human language?

  4. Some notation systems, like chess notation, let you understand each piece of a message without fully telling you what’s going on; you also need the rules of the game. Can you think of other systems like that, where understanding the parts isn’t enough and you need extra knowledge?

Appendix: Where This Shows Up

  • Computer programming: Programming languages are designed to be compositional. When you write len("hello") + len("world"), you can figure out what the whole expression computes from its parts: each len gives a number, and + adds them. This is one reason programming works.
  • Learning a new language: If you’re learning Spanish, compositionality is what lets you form new sentences instead of just repeating memorized phrases. Teachers call this “generative” ability.
  • Artificial intelligence: Large language models (like the one that helped write this) have to learn compositionality from data, and whether they truly do it or just fake it is a hot debate in AI research.
  • Mathematics: Mathematical notation is compositional. When you see (2 + 3) × 4, you know how to compute it because you know the parts and the rules for combining them. This is no accident—mathematics was designed this way.