The Ceiling

A Stanford paper just described, in precise mathematical terms, what AI cannot do. The industry has been quiet about it.

A few weeks ago, I wrote about the people narrating the AI moment and why their financial interests made that narration worth questioning. Now a paper out of Stanford asks a different question. Two credible researchers examined what large language models can actually compute: not what they say, not what the demos show, but what the architecture permits.

There is a ceiling. Every model processes a task in a fixed number of steps determined by the length of the input. If a task genuinely requires more steps than that, the model cannot complete it correctly. It will produce something. That something will not be right. This isn’t a training problem or a data problem. It is a structural one.
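To make the shape of that argument concrete, here is a toy sketch. The numbers, function names, and framing are mine, not the paper’s construction: the idea is just that a model with a fixed number of layers performs a bounded number of parallel “rounds” per forward pass, while some tasks are inherently sequential and need more rounds than that.

```python
# Toy illustration (my sketch, not the paper's construction): a model
# with a fixed number of layers does a bounded number of parallel
# "rounds" per forward pass, however long the input is.

def sequential_task(x0: int, k: int) -> int:
    """A task with k genuinely sequential steps: each step depends on
    the previous one, so no amount of parallelism collapses it below
    k rounds."""
    x = x0
    for _ in range(k):
        x = (x * 6364136223846793005 + 1442695040888963407) % 2**64
    return x

def rounds_per_pass(n_layers: int) -> int:
    """Each layer is one parallel round, so a single forward pass
    yields at most n_layers sequential rounds, regardless of how
    long the input is."""
    return n_layers

k_needed = 10_000                            # steps the task actually requires
available = rounds_per_pass(n_layers=96)     # illustrative layer count
print(hex(sequential_task(1, k_needed)))     # the task itself is easy to run directly
print(k_needed <= available)                 # False: the task sits above the ceiling
```

The point of the sketch: the budget on the left is fixed by the architecture, and no training run or dataset changes it.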

The paper also points out that asking one AI to verify another AI’s work doesn’t fix this: the verifier operates under the same ceiling. And reasoning models, the industry’s current answer to everything, don’t escape it either. Spending more tokens in the “think” step is not the same as having more computational ability.
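A back-of-envelope comparison shows why the token budget doesn’t dissolve the ceiling. This is my framing, which the paper may formalize differently: each generated “think” token triggers at most one more forward pass, so the compute budget grows only linearly in tokens, while a hard task’s inherent step count can grow far faster than any feasible token budget.

```python
# Back-of-envelope (my framing, not the paper's): each generated
# "think" token adds roughly one more forward pass, so T extra
# tokens buy about n_layers * T extra sequential rounds. Linear.

def rounds_with_thinking(n_layers: int, think_tokens: int) -> int:
    # one pass for the prompt, plus one pass per generated token
    return n_layers * (1 + think_tokens)

def rounds_required(n: int) -> int:
    # a task whose inherent step count grows exponentially in input size
    return 2 ** n

budget = rounds_with_thinking(n_layers=96, think_tokens=1_000_000)
print(budget >= rounds_required(200))  # False: no feasible token budget closes the gap
```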

The tasks sitting above this ceiling are not obscure edge cases. Complex legal reasoning, medical diagnosis, autonomous code deployment. The things being promised on every earnings call. Last month I was asking who benefits from the alarm. Now smart people are helping figure out what the technology can actually do.

Two different questions. Same answer.


P.S. The full argument, with the researchers’ credentials and the paper’s actual mechanics, is coming to Zero Parsec soon. I’ll link it here when it’s up.