The Burden of Proof Has Shifted

Philosopher Markus Gabriel dismantled his own argument against machine intelligence. The logic is uncomfortably clean.

Can machines think? Philosophy was pretty clear on this for decades: No. Thinking requires a biological body, consciousness, evolution. AI merely simulates. Markus Gabriel held this position for years — and I found it convincing.

Until he dismantled his own argument on the ZEIT podcast Alles gesagt.

The Crack in the Wall

The most famous case against machine intelligence is Searle's Chinese Room (1980): a person in a room follows rules for manipulating Chinese symbols and produces fluent answers without understanding a word of Chinese. The room appears to understand, but nobody inside does. Conclusion: computers merely manipulate symbols. No understanding, no thought.

Gabriel argued along similar lines for years. In Der Sinn des Denkens (2018), he treated thinking as a sense: bound to the human body, a product of evolution. Intelligence is a natural phenomenon. Full stop.

Then he found a hidden assumption in his own reasoning: the whole argument presupposes that a model of intelligence cannot itself be intelligent. Every biological-substrate argument depends on this premise. And it's unproven.

Searle's Rainstorm, Reversed

The strongest version of the counter-argument comes from Searle himself, and neuroscientist Anil Seth uses it today: a simulation of the digestive system doesn't digest anything. A simulation of a rainstorm doesn't make anything wet. So why should a simulation of a brain produce thought?

Sounds watertight at first. But Gabriel flips it: a weather model predicts rain accurately without it raining inside the computer. The model doesn't need to reproduce the phenomenon to deliver something real. It captures the structure of weather well enough for useful, reliable work — without being weather.

Now, to be fair: this doesn't prove that machines think. A weather model produces predictions, not rain. By the same logic, AI produces useful outputs, not thought. The analogy cuts both ways.

But what it does show: the categorical exclusion doesn't hold. "It doesn't rain inside the computer, therefore the model has nothing to do with weather" — nobody says that. The model captures something real about the structure of the phenomenon. Whether that something amounts to intelligence depends on a question nobody has answered: is intelligence the biological process — or the functional pattern of reasoning, problem-solving, and language? If it's the pattern and AI reproduces the pattern, then AI is intelligent. If it's the process, AI is a useful model — but not the thing itself.

What Fascinates Me About This

Gabriel's contribution isn't an answer. He reverses the question. If we can no longer rule out machine intelligence on principle, it's no longer on us to prove that AI thinks. It has to be shown that it doesn't.

And historically, that's never quite worked out. AI was always "the thing that can't do X yet." Chess, Go, poetry, programming — all turned out to be possible after all.

I'm not a philosopher. I don't know what the right answer is. But I find Gabriel's logic hard to argue with.

None of this settles whether machines are conscious. Seth makes a careful case that consciousness may require biological substrates, and he's probably right to keep that question open. But consciousness and intelligence are not the same thing. A blanket denial of machine intelligence? That's a problem.


Sources:

  1. Markus Gabriel's evolved position is discussed in the ZEIT podcast "Alles gesagt" (March 2026).
  2. His earlier position appears in "Der Sinn des Denkens" (Ullstein, 2018). His latest book on the topic is "Ethische Intelligenz" (Ullstein, 2026).
  3. The Chinese Room argument is from John Searle, "Minds, Brains, and Programs" (Behavioral and Brain Sciences, 1980) — including the original rainstorm analogy.
  4. Anil Seth develops his biological naturalism in "Conscious Artificial Intelligence and Biological Naturalism" (Behavioral and Brain Sciences, 2025) and "The Mythology of Conscious AI" (Noema, 2026).