What Makes a Connective?
You know how in math class, you learn that “and” means both things have to be true, and “or” means at least one of them is true? That seems simple enough. But philosophers and logicians have noticed something strange: when you try to write down rules that capture exactly what “and” and “or” do, you run into puzzles. The rules don’t always behave the way you’d expect.
Here’s a weird fact to start with. Suppose you have a language with only one connective — let’s call it #. You want # to behave like “or” (disjunction). You write down rules that seem to capture it. But it turns out that there are valuations — ways of assigning truth and falsity to sentences — that are consistent with all your rules but where # doesn’t behave like “or” at all. On some of these valuations, # acts like “and” sometimes, or like something else entirely. Somehow the rules don’t pin down what you thought they did.
That’s the puzzle this article is about: what does it take for a set of rules to actually fix what a logical connective means?
What Does a Connective Even Do?
Before we can talk about what connectives are, we need a way to talk about how they behave. Think of a connective as a machine that takes sentences in and spits new sentences out. “And” takes two sentences — say “It’s raining” and “It’s cold” — and produces “It’s raining and it’s cold.” The job of a connective is to build complex sentences from simpler ones.
But here’s the thing: connectives aren’t just about making sentences. They’re about what follows from what. The real question about a connective isn’t just “what sentences can I form with it?” but “what inferences does it license?”
When logicians study this, they usually focus on something called a consequence relation. That’s just a fancy name for the idea that some sentences follow from others. “It’s raining and it’s cold” follows from “It’s raining” and “It’s cold” taken together. That’s what “and” does. The consequence relation captures all the patterns of what follows from what.
Truth-Functions: The Simple Story
The simplest way to think about connectives is through truth-functions. A truth-function is a rule that tells you, given the truth-values of your input sentences, what truth-value your output sentence gets. For “and,” the rule is: a conjunction is true only when both parts are true. For “or” (the inclusive kind), the rule is: a disjunction is true when at least one part is true.
If you think about it, these rules completely determine how the connectives work — or so you’d think. If you know whether “It’s raining” is true and whether “It’s cold” is true, you know whether “It’s raining and it’s cold” is true. That seems watertight.
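These truth-functions are simple enough to write down directly. Here is a minimal sketch in Python (the function names are ours, purely for illustration), with a loop that prints the familiar truth tables:

```python
# Truth-functions as plain Python functions: each maps the
# truth-values of the parts to the truth-value of the whole.

def AND(a: bool, b: bool) -> bool:
    # A conjunction is true only when both parts are true.
    return a and b

def OR(a: bool, b: bool) -> bool:
    # An inclusive disjunction is true when at least one part is true.
    return a or b

# The truth table is just the function evaluated on every input pair.
for a in (True, False):
    for b in (True, False):
        print(f"{a!s:5} {b!s:5}  and: {AND(a, b)!s:5}  or: {OR(a, b)}")
```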
But here’s where it gets interesting, and slightly unsettling.
The Carnap Problem
A philosopher named Rudolf Carnap noticed something in the 1940s that still bothers logicians today. Suppose you have a language with only “and” in it. You write down the obvious rules: from A and B you can infer A∧B; from A∧B you can infer A; from A∧B you can infer B. These are what logicians call introduction and elimination rules — they tell you how to get into and out of sentences with “and.”
Now, suppose you collect all the valuations that are consistent with these rules — meaning, valuations where whenever the rules say something follows, it really does (truth is preserved from assumptions to conclusion). What do these valuations look like?
For “and,” they turn out to be exactly the valuations where “and” behaves truth-functionally. All of them. The rules for “and” are strong enough that any valuation consistent with them has “and” working the way you expect.
But now try the same thing with “or.” Write down the obvious rules: from A you can infer A∨B; from B you can infer A∨B; and a rule that lets you reason by cases — if A leads to C and B leads to C, then A∨B leads to C. These rules seem just as natural.
Collect all the valuations consistent with these rules. And here’s the shock: not all of them have “or” behaving truth-functionally. Some valuations will have A∨B true even when both A and B are false. That shouldn’t happen if “or” means what we think it means.
This is the Carnap Problem: the usual rules for “or” don’t force the connective to mean what we intend. There are “rogue” valuations that satisfy the rules but don’t respect the truth-functional behavior of disjunction.
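The rogue valuations are easy to exhibit with a little brute force. The sketch below (our own illustration, not any standard library) treats a valuation over the three sentences A, B, and A∨B as a triple of truth-values and keeps just those consistent with the single-conclusion rules. Within this tiny fragment, the proof-by-cases rule adds no further constraints, so consistency reduces to the two introduction rules.

```python
from itertools import product

# A valuation over the fragment {A, B, A∨B} is a triple of booleans.
# Consistency with the single-conclusion introduction rules means:
# whenever a rule's premise is true, its conclusion is true.

def consistent(vA, vB, vAorB):
    return ((not vA or vAorB)       # from A, infer A∨B
            and (not vB or vAorB))  # from B, infer A∨B

survivors = [v for v in product([True, False], repeat=3) if consistent(*v)]

for vA, vB, vAorB in survivors:
    tf = vAorB == (vA or vB)
    print(f"A={vA!s:5} B={vB!s:5} A∨B={vAorB!s:5} truth-functional: {tf}")

# The rogue valuation survives: A and B false, yet A∨B true.
assert (False, False, True) in survivors
```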
Why Does This Happen? Conjunctive Combinations of Valuations
The reason has to do with something mathematicians call conjunctive combinations. Take two valuations that are both perfectly fine — both respect the rules for “or” and behave truth-functionally. Now construct a new valuation that says a sentence is true exactly when both of the original valuations said it was true. This new valuation is called the conjunctive combination of the two.
Here’s the catch: if you take two valuations that both have “or” behaving truth-functionally, their conjunctive combination might not. Suppose one valuation makes A true and B false, while the other makes A false and B true. Both make A∨B true, so the combination makes A∨B true too. But the combination makes A and B both false, since neither is true on both originals, and a true disjunction with two false disjuncts is not truth-functional behavior.
This matters because the set of valuations consistent with any consequence relation is always closed under conjunctive combinations. If two valuations are consistent, their combination is too. So if your rules for “or” let in even a single pair of valuations whose combination misbehaves, you’re stuck with valuations that don’t respect the truth-function.
For “and,” conjunctive combinations preserve the truth-function. For “or,” they don’t. That’s the difference.
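Here is that difference in miniature. The sketch below (our own illustration) builds two well-behaved classical valuations, forms their conjunctive combination pointwise, and checks each connective: conjunction still matches its truth-function, disjunction no longer does.

```python
# The conjunctive combination: a sentence is true on the combination
# exactly when it is true on both original valuations.
def combine(v1, v2):
    return {s: v1[s] and v2[s] for s in v1}

# Two valuations that both respect the truth-functions.
v1 = {"A": True,  "B": False}
v2 = {"A": False, "B": True}
for v in (v1, v2):
    v["A∧B"] = v["A"] and v["B"]
    v["A∨B"] = v["A"] or v["B"]  # true on both: each has one true disjunct

v = combine(v1, v2)

# Conjunction survives the combination...
assert v["A∧B"] == (v["A"] and v["B"])
# ...but disjunction does not: A∨B is true while A and B are both false.
assert v["A∨B"] and not v["A"] and not v["B"]
print(v)
```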
What About Other Connectives?
This phenomenon isn’t limited to “or.” In fact, logician Dov Gabbay figured out exactly which truth-functions are immune to this problem. He called them projection-conjunction truth-functions. These are the truth-functions where the output is true exactly when some specified set of the inputs are all true.
For a one-place connective, the projection-conjunction truth-functions are: the identity (output equals input) and the constant-true function (output always true, regardless of input). For a two-place connective, they are exactly four: the first projection (output equals first input), the second projection, constant true, and conjunction.
Notice what’s missing from this list: disjunction, implication, negation, and most other interesting connectives. They all fail the test. Their rules don’t force them to be truth-functional on all consistent valuations.
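For the two-place case, Gabbay’s result can be checked by brute force. On the article’s picture, a truth-function f survives exactly when it commutes with conjunctive combination, that is, f(a∧c, b∧d) = f(a,b) ∧ f(c,d) for all inputs; we also require f(true, true) = true, since the valuation making every sentence true is consistent with any consequence relation (without this extra condition, the constant-false function would sneak in). The encoding below is our own sketch, not Gabbay’s:

```python
from itertools import product

inputs = list(product([True, False], repeat=2))

def commutes_with_combination(f):
    # f(a∧c, b∧d) must equal f(a,b) ∧ f(c,d) for all input pairs.
    return all(f[(a and c, b and d)] == (f[(a, b)] and f[(c, d)])
               for (a, b), (c, d) in product(inputs, repeat=2))

# Enumerate all 16 binary truth-functions as input -> output tables.
survivors = []
for outputs in product([True, False], repeat=4):
    f = dict(zip(inputs, outputs))
    if f[(True, True)] and commutes_with_combination(f):
        survivors.append(f)

print(len(survivors))  # 4

# They are: constant true, first projection, second projection, conjunction.
assert {pair: True for pair in inputs} in survivors
assert {pair: pair[0] for pair in inputs} in survivors
assert {pair: pair[1] for pair in inputs} in survivors
assert {pair: pair[0] and pair[1] for pair in inputs} in survivors
```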
What “And” Has That “Or” Doesn’t
This asymmetry between “and” and “or” is philosophically interesting because it tells us something about what the rules for a connective can and can’t do. The introduction and elimination rules for “and” are perfectly matched — they pin down the connective completely. The rules for “or” aren’t.
This gets at a deeper question: what constitutes the meaning of a connective? Some philosophers think the meaning is given entirely by the rules you use to reason with it. If that’s right, then “and” and “or” have meanings that are different in kind — “and” is fully determined by its rules, while “or” leaves room for interpretation.
Other philosophers think this just shows we need better rules, or a different framework for understanding them. If you switch from talking about single-conclusion consequence relations (where you say “these sentences together imply this one sentence”) to multiple-conclusion consequence relations (where you say “these sentences together imply that at least one of these other sentences must be true”), the problem changes. In the multiple-conclusion setting, you can write rules for “or” that pin it down — for example, the rule that A∨B implies A,B (meaning: if you have A∨B, then at least one of A or B must be true).
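That multiple-conclusion rule really does close the loophole, and the check is small enough to run. The sketch below (our own illustration) enumerates valuations over the three-sentence fragment A, B, A∨B, where consistency with a multiple-conclusion rule means: whenever all the premises are true, at least one conclusion is true.

```python
from itertools import product

# A valuation over {A, B, A∨B} is a triple of booleans. With the
# multiple-conclusion rule "from A∨B, infer A, B", consistency
# demands that a true A∨B make at least one of A, B true.

def consistent(vA, vB, vAorB):
    return ((not vA or vAorB)             # from A, infer A∨B
            and (not vB or vAorB)         # from B, infer A∨B
            and (not vAorB or vA or vB))  # from A∨B, infer A, B

survivors = [v for v in product([True, False], repeat=3) if consistent(*v)]

# No rogue valuations remain: ∨ is truth-functional on every survivor.
assert all(vAorB == (vA or vB) for vA, vB, vAorB in survivors)
assert (False, False, True) not in survivors
print(survivors)
```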
Tonk: When Rules Go Bad
There’s a famous example that shows why this matters. In 1960, philosopher Arthur Prior invented a connective called tonk. Tonk had two rules: an introduction rule that let you infer A tonk B from A (like the introduction rule for “or”), and an elimination rule that let you infer B from A tonk B (like the elimination rule for “and”).
This combination is disastrous. Using tonk, you can prove any sentence from any other. From “It’s raining,” you infer “It’s raining tonk the moon is made of cheese.” Then from that, you infer “The moon is made of cheese.” Tonk lets you prove anything.
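Prior’s two steps are mechanical enough to script. Below is a toy sketch (our own string encoding, nothing standard) of the two tonk rules and the rain-to-cheese derivation:

```python
# Tonk's introduction rule: from A, infer "A tonk B", for any B at all.
def tonk_intro(a: str, b: str) -> str:
    return f"({a} tonk {b})"

# Tonk's elimination rule: from "A tonk B", infer B.
# (Naive parsing: fine for this flat example, not for nested tonks.)
def tonk_elim(s: str) -> str:
    return s[s.index(" tonk ") + len(" tonk "):-1]

premise = "it's raining"
step1 = tonk_intro(premise, "the moon is made of cheese")  # introduction
step2 = tonk_elim(step1)                                   # elimination
print(step2)  # the moon is made of cheese
```

Any premise yields any conclusion in exactly two steps, which is why the rules cannot be defining a coherent connective.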
Prior’s point was that you can’t just make up any rules and call it a connective. Some sets of rules don’t genuinely define anything coherent — they just break the system.
Philosopher Nuel Belnap responded that the problem with tonk isn’t that the rules are somehow wrong in isolation, but that adding tonk to a system makes it non-conservative — it lets you prove new things in the old vocabulary that you couldn’t prove before. You could prove “The moon is made of cheese” using only rules about rain and tonk, but you shouldn’t be able to prove anything about the moon from facts about rain.
This idea of conservative extension — that adding a new connective shouldn’t let you prove new things about the old connectives — became a standard test for whether a connective is legitimate. If adding your connective creates new inferences in the old vocabulary, something has gone wrong.
The Two Faces of Determination
All this leads to a distinction between two ways a connective can be “determined” by a logic.
A connective is fully determined when the logic forces it to behave truth-functionally. Every valuation consistent with the logic has the connective associated with a specific truth-function. “And” is fully determined in classical logic. “Or” is not — there are consistent valuations where it doesn’t behave truth-functionally.
But a connective can be weakly determined — or what some logicians call pseudo-truth-functional — when every valuation consistent with the logic has some truth-function associated with the connective, but different valuations might associate different truth-functions. The connective is truth-functional on each valuation, but which truth-function it is varies from valuation to valuation.
These distinctions matter because they help us understand what different logical frameworks can and can’t do. Single-conclusion consequence relations can’t fully determine “or.” Multiple-conclusion consequence relations can. The choice of framework affects what’s expressible.
Why This Still Matters
This might seem like technical nitpicking, but it connects to real issues. When people argue about what a word like “or” means in ordinary language, they’re sometimes arguing about exactly the kind of thing logicians study here. Does “or” have a single, truth-functional meaning? Or does its meaning shift depending on context?
More broadly, the question of what makes a connective meaningful is part of a larger debate about what makes any piece of language meaningful. If logical rules can fail to pin down meaning, maybe other kinds of rules can too. And if some connectives are more “stable” than others under different logical frameworks, maybe meaning isn’t as simple as we’d like to think.
The puzzle Carnap noticed — that rules can underdetermine meaning — hasn’t gone away. It’s just become more interesting.
Appendices
Key Terms
| Term | What it does in this debate |
|---|---|
| Connective | A device that builds complex sentences from simpler ones, like “and,” “or,” or “if-then” |
| Consequence relation | A pattern of what follows from what — the core of how a logic works |
| Truth-function | A rule that determines the truth-value of a compound sentence from the truth-values of its parts |
| Valuation | An assignment of “true” or “false” to every sentence in a language |
| Consistent with | A valuation is consistent with a consequence relation when it never makes the assumptions true and the conclusion false |
| Conjunctive combination | A new valuation formed by taking the “both true” of two valuations — a sentence is true on the combination just when it was true on both originals |
| Projection-conjunction truth-function | A truth-function that outputs true exactly when some specified set of inputs are all true; the only truth-functions forced by their rules in single-conclusion logic |
| Conservative extension | Adding a new connective doesn’t let you prove new things about the old vocabulary — a test for whether a connective is legitimate |
| Fully determined | A connective whose rules force it to behave as a specific truth-function on all consistent valuations |
| Pseudo-truth-functional | A connective that has some truth-function on each valuation, but might have different ones on different valuations |
Key People
- Rudolf Carnap: A 20th-century philosopher and logician who noticed that standard logical rules don’t force the intended truth-functional interpretation of connectives like “or” and “if-then.”
- Dov Gabbay: A logician who proved exactly which truth-functions are “strongly classical” — forced by their rules in single-conclusion logic.
- Arthur Prior: A philosopher who invented the connective “tonk” to show that not just any set of rules defines a meaningful connective.
- Nuel Belnap: A philosopher who responded to Prior by arguing that the real test for a connective is whether adding it conservatively extends the logic.
Things to Think About
- The rules for “and” force it to be truth-functional, but the rules for “or” don’t. Does this mean “and” has a simpler or more definite meaning than “or”? Or does it just mean we need different rules to capture “or”?
- Tonk shows that some rules are too strong — they let you prove anything. Could there be rules that are too weak — so weak that almost any connective could satisfy them? What would that look like?
- Suppose you’re designing a language for talking to aliens. You want to teach them what “and” and “or” mean using only rules. How would you do it? Would you need different strategies for the two connectives?
- The Carnap problem shows that single-conclusion logic can’t fully pin down “or,” but multiple-conclusion logic can. Does this mean multiple-conclusion logic is “better”? Or does it just capture a different aspect of meaning?
Where This Shows Up
- Debates about logic and translation: When philosophers argue about whether people who use different logics really mean different things by “or” and “if-then,” they’re discussing versions of this problem.
- Computer science and AI: Designing logical systems for artificial intelligence requires thinking about what rules actually pin down meaning — otherwise the AI might interpret logical vocabulary in unexpected ways.
- Linguistics: Debates about whether words like “or” have a single core meaning or shift depending on context connect directly to questions about what logical rules do and don’t determine.
- Philosophy of language: The broader question of whether rules can fix meaning — for logical vocabulary and beyond — touches on how language works in general.