Can Machines Think?
Imagine you’re texting with a friend — except you’re not sure it’s your friend. The replies come quickly, they make sense, they’re funny in the same way your friend is funny. Later you find out you were actually texting with a computer program. Would that change anything? Should it?
This isn’t just a party trick. It’s a test.
In 1950, the mathematician Alan Turing proposed that instead of asking “Can machines think?” (a question he thought was too vague to answer), we should ask: Can a machine carry on a conversation so naturally that a human judge can’t tell it apart from another human? If it can, Turing said, then for all practical purposes, we should say it’s thinking. This test — now called the Turing Test — has haunted artificial intelligence (AI) ever since. Nearly 75 years later, no machine has genuinely passed it. (A few have been claimed to, but the claims relied on trickery, or fooled nobody who was paying attention.)
So here’s the core puzzle: What would it take to build a machine that really thinks? And would we even know if we’d done it?
What Exactly Are We Trying to Build?
If you ask five people what “artificial intelligence” means, you’ll get six answers. Some say AI is about building machines that act like humans — so a human can’t tell the difference. Others say it’s about building machines that think like humans — that process information the same way our brains do. Still others say the goal is to build machines that think rationally (making perfect decisions based on logic) or act rationally (doing whatever gets the best results).
These four possibilities actually define most of the arguments in AI. The most popular textbook in the field, by Stuart Russell and Peter Norvig, argues that AI should aim for the last one: building “intelligent agents” that act rationally. An intelligent agent is just a fancy name for something that takes in information from its environment (through sensors) and produces actions (through motors or screens or speakers) aimed at achieving some goal.
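To make the sense-and-act loop concrete, here is a minimal sketch in Python. The thermostat scenario and every name in it are invented for illustration; a real agent would have far richer sensors, actions, and goals.

```python
# A minimal sketch of the agent idea: sense, decide, act.
# The thermostat scenario and all names here are invented for
# illustration, not taken from any real library.

class ThermostatAgent:
    """A tiny agent: perceives a temperature, acts to reach a goal."""

    def __init__(self, target: float):
        self.target = target  # the goal the agent acts to achieve

    def act(self, sensed_temp: float) -> str:
        # Percept in, action out, chosen to move toward the goal.
        if sensed_temp < self.target - 0.5:
            return "heat on"
        if sensed_temp > self.target + 0.5:
            return "heat off"
        return "do nothing"

agent = ThermostatAgent(target=20.0)
print(agent.act(17.0))  # heat on
print(agent.act(22.0))  # heat off
```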
You’ve already met these agents. Every time Netflix suggests a movie, or Google Maps finds you a route, or your spam filter catches a phishing email — that’s an intelligent agent at work. But these agents are incredibly narrow. They’re brilliant at one thing and useless at everything else. The spam filter can’t play chess. The chess computer can’t recommend a movie. And none of them can do what a 12-year-old does effortlessly: learn something new by reading about it.
How AI Actually Works: Three Approaches
AI researchers have three main toolboxes, and they often argue about which one is best.
The Logic Approach
One group of researchers thinks intelligence is basically reasoning — taking what you know and using logic to figure out what follows. They build systems that store information as formal statements (like “All birds have feathers” and “Tweety is a bird”) and then use rules of logic to draw conclusions (“Tweety has feathers”).
This approach works great for some problems. It’s how your calculator knows that 7 × 8 = 56. It’s how theorem-proving software can sometimes find proofs that human mathematicians missed. But there’s a problem: real-world reasoning isn’t always logical in the neat, mathematical sense. If I tell you “Tweety is a bird,” you’ll assume Tweety can fly. But if I then add “Tweety is a penguin,” you’ll change your mind. This kind of “defeasible” reasoning (where new information can override old conclusions) is surprisingly hard to capture with pure logic.
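Here is a toy version of that kind of reasoning in Python. The Tweety facts come from the example above; the tiny forward-chaining loop and all the names are an illustrative sketch, not a real theorem prover.

```python
# A toy forward-chaining reasoner in the spirit of the logic approach.
# The Tweety facts come from the example above; the rest is an
# illustrative sketch, not a real theorem prover.

facts = {"bird(tweety)"}
rules = [
    # (premise, conclusion): if the premise is known, add the conclusion.
    ("bird(tweety)", "has_feathers(tweety)"),
    ("bird(tweety)", "can_fly(tweety)"),  # a default that penguins defeat
]

# Keep applying rules until nothing new follows.
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))  # can_fly(tweety) is now a "conclusion"

# Now learn that Tweety is a penguin. Pure logic only ever adds
# conclusions; retracting can_fly(tweety) needs extra machinery,
# which is exactly the defeasible-reasoning problem.
facts.add("penguin(tweety)")
```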
The Brain-Like Approach
Another group looks at actual brains and tries to copy them — not in detail, but in broad strokes. They build artificial neural networks: webs of simple computing units (called “nodes” or “neurons”) connected by links that have different strengths. Each node takes input from its neighbors, does a simple calculation, and passes the result on. Learning happens by adjusting the connection strengths.
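A single node is simple enough to sketch in a few lines of Python. The weights and inputs below are made-up numbers; the point is just the shape of the calculation.

```python
import math

# One artificial "neuron": weighted inputs, a simple calculation,
# one output. The weights and inputs are made-up numbers.

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, squashed to a value between 0 and 1.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

output = neuron(inputs=[0.5, 0.9], weights=[0.8, -0.4], bias=0.1)
print(output)  # about 0.53; learning means nudging the weights
```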
For decades, these networks were wimpy. But in the 2000s, two things changed. Computers got fast enough to run really big networks, and researchers figured out better ways to train them. The result was deep learning: neural networks with many layers that can learn to recognize patterns from raw data without a human telling them what features to look for.
This is what lets your phone recognize your face, what makes self-driving cars possible, and what powers voice assistants like Siri and Alexa. It’s also what Google’s DeepMind used to build AlphaGo, which in 2016 beat Lee Sedol — one of the world’s best Go players — in a match that experts had expected to be decades away.
But here’s the strange thing: these systems don’t “understand” anything in the way we do. A neural network that recognizes cats in photos has no concept of “cat-ness.” It’s just learned statistical patterns. Show it a picture of a cat that looks slightly different from anything in its training data, and it might confidently declare it’s a waffle iron. This isn’t a bug that can be fixed with more data — it’s a fundamental difference between pattern-matching and understanding.
The Probability Approach
A third approach acknowledges that the world is uncertain. Instead of trying to know things for sure, these systems work with degrees of belief — probabilities. A detective trying to solve a murder doesn’t know for certain who did it, but can assign probabilities based on evidence. Bayesian networks (named after the mathematician Thomas Bayes) let AI systems do this kind of reasoning: updating beliefs as new evidence comes in, and making decisions that maximize expected success.
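The arithmetic behind that belief-updating is Bayes’ rule, which a few lines of Python can show. The detective scenario and all the probabilities below are invented for illustration.

```python
# Bayes' rule with invented numbers: a detective's belief that a
# suspect is guilty, updated after a fingerprint match.

prior = 0.10                   # P(guilty) before the evidence
p_match_if_guilty = 0.90       # P(match | guilty)
p_match_if_innocent = 0.05     # P(match | innocent)

# Total probability of the evidence, then Bayes' rule.
p_match = p_match_if_guilty * prior + p_match_if_innocent * (1 - prior)
posterior = p_match_if_guilty * prior / p_match

print(round(posterior, 2))  # 0.67: belief rises, but no certainty
```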
IBM’s Watson, which famously crushed human champions on the quiz show Jeopardy!, used a mix of these approaches — especially probabilistic ones — to answer questions across an enormous range of topics. But Watson couldn’t hold a conversation. Ask it something that required on-the-spot reasoning about a novel situation, and it would fall flat. As one critic put it: Watson is great at Jeopardy! and terrible at everything else.
The Hardest Problems AI Hasn’t Solved
Despite all the progress, AI still can’t do some things that human children do without thinking. Three in particular stand out.
Reading. Step back and think about what you’re doing right now. You’re reading sentences and learning things — things that weren’t in your head before. This seems obvious, but it’s incredibly hard for machines. The formal models of learning that AI uses all assume learning means discovering a hidden function from examples. (If I show you that 1 maps to 1, 2 to 4, 3 to 9, you might “learn” the squaring function.) But reading doesn’t work that way. When you read about AI, you’re not just memorizing input-output pairs. You’re building a mental model of a whole domain, one that lets you answer new questions, make connections, and explain things to someone else. No machine comes close to this.
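That formal picture of learning is easy to sketch. The toy Python below recovers a hidden function from the three examples in the text; the candidate list is invented for illustration, and the contrast with reading is the whole point.

```python
# The textbook picture of learning: recover a hidden function from
# input-output examples. The candidate functions are an invented toy.

examples = [(1, 1), (2, 4), (3, 9)]

candidates = {
    "identity": lambda x: x,
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
}

for name, f in candidates.items():
    if all(f(x) == y for x, y in examples):
        print("learned:", name)  # learned: square
```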
Creativity. Look through the index of any AI textbook. You probably won’t find “creativity.” This is remarkable, because creativity is arguably the human ability we value most. There have been attempts to build creative machines — programs that compose music, write stories, or produce paintings — but these systems don’t decide to be creative. They follow rules or statistical patterns that their human programmers set up. The creativity is really in the designer, not the machine.
Subjective consciousness. This is the biggest one. You know what it’s like to be you. You have experiences — the color blue, the feeling of cold, the taste of chocolate. Philosophers call this “phenomenal consciousness,” and it’s the most important thing in your life. (If you stopped having experiences, you wouldn’t care about anything.) And AI has nothing to say about it. The textbooks don’t even try. This isn’t because researchers are lazy — it’s because nobody has any idea how to build conscious experience into a machine, or even how to tell if a machine has it.
Some philosophers, like John Searle, argue that machines can never have real understanding or consciousness. His famous “Chinese Room” thought experiment goes like this: Imagine Searle himself sitting in a room with a big rulebook. People outside slide notes written in Chinese under the door. Searle doesn’t know Chinese, but he uses the rulebook to look up what Chinese characters to write back. To the people outside, it looks like the room understands Chinese. But Searle, inside, is just manipulating symbols by following rules. He doesn’t understand a word. According to Searle, that’s all computers do — manipulate symbols by following rules — so they can never really understand anything.
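The whole thought experiment fits in a few lines of code, which is part of its force. In the Python sketch below, the rulebook entries are placeholders chosen for illustration; the point is that the function maps symbols to symbols without anything that understands them.

```python
# The Chinese Room as code: symbols in, rulebook lookup, symbols out.
# The entries are placeholders chosen for illustration.

rulebook = {
    "你好吗？": "我很好，谢谢。",  # a canned reply the rules produce
}

def chinese_room(note: str) -> str:
    # The room follows its rules perfectly without understanding any of it.
    return rulebook.get(note, "请再说一遍。")  # default: "say that again"

print(chinese_room("你好吗？"))
```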
Not everyone buys this argument. But the fact that it’s still debated, decades after it was first proposed, tells you something about how far AI is from genuine understanding.
The Future: Should We Be Excited or Terrified?
Some people think AI will keep getting better until it surpasses human intelligence — a moment called “the Singularity.” After that, superintelligent machines would design even smarter machines, and human civilization as we know it would be over. Some, like philosopher Nick Bostrom, worry these machines might have goals that don’t include keeping humans around. (After all, we’re made of useful atoms they could repurpose.)
Others think this is science fiction. John Searle points out that unless machines are conscious, they can’t really want anything — including wanting to destroy us. But this might be cold comfort: a non-conscious machine designed to win a war might still kill everyone, even if it doesn’t “want” to in the human sense.
The safest prediction is probably the dullest one: AI will keep improving at narrow tasks. It will drive cars, translate languages (badly but usefully), recommend products, and do certain kinds of medical diagnosis. It will take over many jobs, as machines always have. But general intelligence — a machine that can learn to do anything a human can do, that can read a book and actually understand it, that can be creative and conscious — remains a distant dream.
Turing predicted that by the year 2000, a machine would be able to fool the average questioner about 30 percent of the time after five minutes of conversation. It didn’t happen. Descartes, writing in the 1600s, argued that machines could never match human flexibility of thought. So far, he’s been right.
The question — Can machines think? — is still open. And it may stay open for a long time.
Key Terms
| Term | What it means |
|---|---|
| Turing Test | A way of defining machine intelligence: if a machine can fool a human into thinking it’s human in conversation, it counts as thinking |
| Intelligent agent | Any system that takes in information from its environment and acts to achieve goals |
| Neural network | A computing system loosely inspired by the brain, made of simple units connected by adjustable links |
| Deep learning | Training neural networks with many layers to automatically find patterns in raw data |
| Chinese Room | A thought experiment arguing that symbol-manipulating computers can never genuinely understand anything |
| General intelligence | The ability to handle many different kinds of problems — the kind of intelligence humans have |
Key People
- Alan Turing (1912–1954): British mathematician who broke Nazi codes in WWII, invented the concept of the programmable computer, and proposed the Turing Test for machine intelligence.
- John Searle (born 1932): American philosopher who invented the Chinese Room argument to try to prove that computers can’t really think.
- Stuart Russell and Peter Norvig: The authors of the most influential AI textbook; they argue AI should aim to build “intelligent agents” that act rationally.
- Hubert Dreyfus (1929–2017): Philosopher who argued that AI based on symbolic logic would fail because real human expertise isn’t based on following rules.
Things to Think About
- You’re playing an online game against someone. You’re pretty sure it’s a computer. Does it matter? Would it change anything if you found out it was a human? What’s the difference?
- The Chinese Room argument says a computer manipulating symbols can never understand anything. But many people think humans are also just biological computers. If that’s true, doesn’t the argument prove too much?
- A self-driving car has to choose between hitting a group of people or swerving and killing its passenger. Is this a moral decision? If the car isn’t conscious, can it be making a real moral choice?
- Suppose we build a machine that can do everything a human can do — write novels, fall in love, argue about philosophy — but we know it’s just following a program. Would you call it conscious? What evidence would change your mind?
Where This Shows Up
- Everyday technology: Voice assistants, spam filters, recommendation systems, face recognition — all are narrow AI systems using the techniques described above.
- Self-driving cars: One of the highest-stakes applications, combining neural networks (for seeing the road) with probabilistic reasoning (for handling uncertainty) and logical planning.
- Video games: Enemy characters in games like StarCraft or Civilization use AI to make decisions in real time. Game AI is a whole research field.
- Social media algorithms: The systems that decide what you see on TikTok or YouTube are AI agents optimizing for engagement — and they’re powerful enough to shape what millions of people believe.