Philosophy for Kids

Chance and Randomness: Not the Same Thing

Here’s a strange thing: we use the words “chance” and “random” almost as if they mean the same thing. “It happened by chance” and “it was random” feel like the same idea. But when philosophers and mathematicians look closely, they find that these two concepts come apart in surprising ways. Sometimes things happen by chance that aren’t random. And sometimes things are random without happening by chance.

This might sound like splitting hairs. But it matters—for science, for gambling, for how we understand the world. Let’s see why.

What Do We Mean By “Chance”?

Imagine a coin sitting on your palm, about to be flipped. We say it has a 50% chance of landing heads. What does that mean?

Philosophers have argued about this for a long time, but they mostly agree on some basic ideas about what chance must be like:

First, chances are objective. They’re not just what you happen to believe. If a coin is fair, it has a 50% chance of heads whether you know that or not, whether you believe in probability or not.

Second, chances guide what it’s rational to believe. If you know a coin has a 50% chance of heads, you should be 50% confident it will land heads. (Unless you have some special extra information—like that someone glued it down.)

Third, if something has a chance of happening, it must be possible for it to happen. If there’s a chance you’ll roll a six, then rolling a six must be something that could happen.

Fourth, chances connect to what actually happens, but not too tightly. If you flip a fair coin 100 times, you’d expect roughly 50 heads. But you wouldn’t be shocked at 47 or 53. You would be shocked at 100 heads—even though that’s technically possible. So chances aren’t the same as actual frequencies, but they’re related.
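You can see this loose connection in a quick simulation. The sketch below (the function name and seed values are just illustrative choices, not anything standard) flips a simulated fair coin 100 times and counts heads: the count typically lands near 50 but rarely exactly on it.

```python
import random

def count_heads(n_flips, seed=None):
    """Flip a simulated fair coin n_flips times and count the heads."""
    rng = random.Random(seed)
    return sum(rng.random() < 0.5 for _ in range(n_flips))

# Each run gives a count near 50, but the exact number varies:
# that's the loose link between chances and actual frequencies.
print(count_heads(100, seed=1))
```

Run it with different seeds and you'll see 47, 53, 44, and so on, clustering around 50 without pinning to it.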

The best examples of real chances come from physics. Radioactive atoms decay with specific probabilities. Quantum mechanics—our most fundamental theory of the microscopic world—is built around probabilities that seem to be basic features of reality, not just our ignorance. A uranium atom doesn’t have a hidden clock ticking down; it genuinely has a chance of decaying at any moment.

What Do We Mean By “Random”?

“Random” is trickier. People use it in different ways. Some philosophers use it to mean “produced by chance.” But that would make our question trivial—of course chance and randomness would be the same thing if that’s what “random” means.

Instead, mathematicians have developed a different idea: product randomness. This is about the outcome itself, not how it was produced. A sequence of coin flips like HTHHTTH is random. A sequence like HHHHHHH is not. But why? What makes one sequence random and another not?

Intuitively, random sequences are disorderly and patternless. They have no short description that captures them. Think about it: you could describe the all-heads sequence by saying “a hundred heads in a row.” That’s a very short description. But to describe a genuinely random sequence, you basically have to list every single outcome. There’s no shortcut.

This leads to a precise mathematical definition called Kolmogorov randomness: a sequence is random if the shortest computer program that could produce it is about as long as the sequence itself. If you can describe it more compactly—if there’s a pattern you can exploit—it’s not truly random.
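True Kolmogorov complexity can't be computed exactly, but an ordinary file compressor gives a rough, hands-on proxy for the same idea: patterned sequences shrink a lot, patternless ones don't. This sketch (the sequence lengths and seed are arbitrary choices) compares a thousand heads against a random-looking thousand-flip sequence.

```python
import random
import zlib

def compressed_size(s: str) -> int:
    """Size in bytes of the zlib-compressed string: a rough
    stand-in for 'length of the shortest description'."""
    return len(zlib.compress(s.encode()))

orderly = "H" * 1000  # "a thousand heads in a row": a very short description
rng = random.Random(0)
disorderly = "".join(rng.choice("HT") for _ in range(1000))

# The patterned sequence compresses to a handful of bytes;
# the disorderly one stays comparatively large.
print(compressed_size(orderly), compressed_size(disorderly))
```

This is only a proxy (zlib also exploits the two-letter alphabet, for instance), but the gap between the two sizes makes the compressibility idea concrete.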

There are other mathematical approaches to randomness that converge on roughly the same idea. Random sequences satisfy all the statistical properties you’d expect: digits appear with roughly equal frequency, there’s no way to predict the next digit from the past ones, and so on.

One striking result: almost all possible infinite sequences are random. The orderly ones are the rare exceptions. This matches our intuition—there are vastly more ways to be disorderly than orderly.

The Commonplace Thesis (and Why It’s Tempting)

Here’s the appealing idea: things happen by chance exactly when the sequence of outcomes is random. Call this the Commonplace Thesis (CT).

It feels right. If you flip a fair coin many times, you expect a random-looking sequence. And if you see a random-looking sequence, you suspect it was produced by chance. This is how we think about random sampling, about scientific experiments, about games of chance.

But it turns out the relationship isn’t that simple. There are cases where chance and randomness come apart—in both directions.

Chance Without Randomness

A fair coin could land heads 100 times in a row. The chance of that happening is tiny—it’s (1/2)^100, almost zero. But it’s not zero. If it did happen, those outcomes would have happened by chance (the coin is still fair). Yet the sequence would not be random. It’s highly compressible: “100 heads” is a short description.
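Just how tiny is "tiny"? A short exact calculation shows the number, and shows that it is strictly greater than zero:

```python
from fractions import Fraction

# Exact chance that a fair coin lands heads 100 times in a row.
p_all_heads = Fraction(1, 2) ** 100

print(float(p_all_heads))  # about 7.9e-31: minuscule, but not zero
```

Using exact fractions rather than floats makes the key point unmissable: the probability is a definite positive number, however small.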

This is an extreme case, but the same point holds for less extreme sequences. A fair coin tossed 1000 times could land heads more than 700 times. The chance of that is astronomically small, but it isn't zero. Such a sequence would be compressible (long runs of heads) and therefore not random. Yet it would have happened by chance.

Biased processes give us another case. A coin that’s 70% weighted toward heads produces outcomes by chance—each flip genuinely has a 70% chance. But the resulting sequence won’t be random. It will be compressible because of the bias, and it will violate the statistical properties we expect of random sequences.

There are also processes where chance is present but the outcomes depend on past history. Suppose you draw colored balls from an urn without replacing them. Each draw has a chance, but the chances keep changing. The resulting sequence won’t be random because the first half carries information about the second half.
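Here is a small sketch of that urn (the urn sizes and seed are arbitrary choices for illustration). Before each draw it records the current chance of drawing red; watch that chance shift as balls leave the urn.

```python
import random

def urn_draws(reds=5, blues=5, seed=0):
    """Draw every ball from an urn without replacement, recording
    the chance of red just before each draw."""
    rng = random.Random(seed)
    urn = ["R"] * reds + ["B"] * blues
    rng.shuffle(urn)
    history = []
    while urn:
        p_red = urn.count("R") / len(urn)  # the chance changes every draw
        ball = urn.pop()
        history.append((round(p_red, 2), ball))
    return history

for p_red, ball in urn_draws():
    print(p_red, ball)
```

Each draw is chancy, but the chances depend on what came before, so early draws carry information about later ones and the full sequence can't be random.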

So chance doesn’t guarantee randomness.

Randomness Without Chance

Now the other direction. Many systems produce random-looking outcomes without any chance being involved.

The most famous example is chaotic dynamics. The “baker’s transformation” is a simple mathematical system that’s completely deterministic—given the starting state, every future state is fixed. Yet it produces sequences that are just as random as a fair coin. If you look at certain properties of the system over time, you get a Bernoulli process: independent, identically distributed outcomes. But nothing chancy is happening. The system is following strict rules.
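The stretching half of the baker's transformation is the "doubling map": take a point x in [0, 1), double it, and keep only the fractional part. Recording which half of the interval the point sits in at each step yields a coin-flip-like sequence from a completely rule-bound process. The sketch below uses exact integer arithmetic (the starting fraction and modulus are arbitrary illustrative choices; floating point would quickly lose the digits that drive the dynamics).

```python
# Doubling map x -> 2x (mod 1), with x tracked exactly as a fraction p/q.
q = 999_999_937  # a large odd denominator (illustrative choice)
p = 123_456_789  # starting point p/q (illustrative choice)

bits = []
for _ in range(60):
    bits.append("H" if 2 * p < q else "T")  # which half of [0, 1) is x in?
    p = (2 * p) % q                         # strict deterministic update

# A coin-flip-looking sequence, with no chance involved anywhere.
print("".join(bits))
```

Run it twice from the same starting point and you get the identical sequence: the disorder is in the product, not the process.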

Real physical systems can do this too. The weather is a classic example of chaotic dynamics. It’s deterministic (in principle), yet we can’t predict it more than a few days out. The sequence of rainy and sunny days might look random—might be random in the mathematical sense—without any chance being involved. This is “deterministic randomness.”

Another case: short sequences. If you flip a coin only once, the one-flip sequence “H” counts as random, since a sequence that short has no pattern a description could compress away. But did it happen by chance? Well, maybe. But consider an unrepeatable event like the Big Bang. That event belongs to a necessarily short sequence (it happened exactly once), which qualifies as random. Yet it’s strange to say it happened “by chance” in the same sense as a coin flip.

So randomness doesn’t guarantee chance either.

What About Determinism?

Some philosophers think the connection runs through determinism. The argument goes: chancy processes are indeterministic (they could have gone differently), and indeterministic processes produce random sequences. Therefore chance and randomness are linked.

This argument has problems on both ends.

First, there can be indeterminism without chance. The famous “Norton’s dome” is a system in Newtonian physics where a ball at the top of a dome can spontaneously start moving, for no reason at all. This is genuinely indeterministic—the past doesn’t fix the future. But there’s no probability distribution over the outcomes. No chance. Yet if you ran this system many times, you could get any sequence of outcomes, including random ones.

Second, there can be determinism with randomness. As we saw with chaotic systems, deterministic processes can produce random sequences. And some philosophers argue that deterministic “chance” is possible—that a deterministic world could still have objective probabilities. This is controversial, but it shows the connection isn’t straightforward.

The Bottom Line

Chance and randomness are distinct concepts. Chance is about the process that produces an outcome—whether it involves genuine probability, whether it could have gone differently in a way that’s governed by objective chances. Randomness is about the product—the outcome sequence itself—whether it’s patternless and incompressible.

They often go together. Fair coins produce random-looking sequences. Random-looking sequences usually come from chancy processes. But the link is only evidential, not conceptual. You can have one without the other.

This matters because we often use randomness as evidence for chance. In science, if we see a random-looking pattern, we infer that some chance process produced it. Usually that’s right. But it’s not guaranteed. And conversely, just because something happened by chance doesn’t mean the outcomes will look random.

The two ideas are close cousins, but they’re not twins.


Key Terms

What each term does in this debate:

  • Chance — An objective, mind-independent probability that guides what we should believe and tells us what’s possible
  • Product randomness — A property of an outcome sequence: it’s patternless, incompressible, and passes statistical tests
  • Process randomness — Sometimes used to mean “produced by chance,” but that makes the question trivial
  • Kolmogorov randomness — A precise definition: a sequence is random if the shortest program that outputs it is about as long as the sequence itself
  • Determinism — The idea that the past plus the laws of nature fixes exactly one possible future
  • Indeterminism — The idea that the same past and laws can lead to different futures
  • Commonplace Thesis (CT) — The tempting but wrong idea that chance and randomness are the same thing
  • Chaotic dynamics — Deterministic systems that produce seemingly random behavior because of extreme sensitivity to initial conditions

Key People

  • David Lewis — A philosopher who argued for a specific connection between chance and rational belief (the Principal Principle), and who worked hard to square objective chance with a picture of the world as nothing over and above the pattern of actual events
  • Per Martin-Löf — A mathematician who gave one of the first rigorous definitions of randomness for sequences, using the idea of statistical tests
  • Andrey Kolmogorov — A mathematician who defined randomness in terms of the length of the shortest computer program needed to produce a sequence
  • Richard von Mises — An early thinker who tried to define randomness in terms of gambling systems and limit frequencies
  • Karl Popper — A philosopher who argued that chance is a real feature of the world, not just a reflection of our ignorance

Things to Think About

  1. If you had a computer that could generate truly random numbers, and a friend who could generate pseudorandom ones (like the ones computers usually produce), how would you tell the difference? Could you ever be certain which was which?

  2. The article says almost all infinite sequences are random. But your own experience is mostly with short sequences. How does this change how you think about randomness? Are short sequences ever really random?

  3. Suppose someone claimed that everything is determined—that the entire future is fixed by the past and the laws of physics. Would that mean nothing happens “by chance”? Or could there still be a useful notion of chance in a deterministic world? What would it mean?

  4. Think about a time you described something as “random.” Did you mean it happened by chance? That it was surprising? That it had no pattern? Are these the same thing, or are you mixing up different ideas?

Where This Shows Up

  • Cryptography depends on having genuinely random numbers—if your “random” numbers are actually predictable, your security can be broken. This is why cryptographers care deeply about the difference between chance and pseudorandomness.
  • Scientific experiments use random assignment to determine who gets a treatment and who gets a placebo. The randomness ensures that the groups are balanced in the long run. But scientists don’t need fundamental indeterminism—they just need a procedure that produces a random-looking sequence.
  • Machine learning uses randomness to train models, but it also tries to detect patterns in data. Distinguishing real patterns from random noise is a core problem.
  • The stock market is often described as “random,” but economists argue about whether price movements are truly random or just unpredictable because of our limited knowledge. This matters for whether you can beat the market.
  • Your own intuitions about what’s random affect how you see the world. People tend to see patterns where none exist (like seeing faces in clouds) and miss patterns that are there (like not noticing that a friend’s behavior follows a predictable cycle). Understanding randomness helps you avoid both mistakes.