Why AGI Is a Face in the Clouds
The impossible notion exposed
You’ve seen it happen. Someone shows you a photo of a cloud formation, a piece of toast, a water stain on a ceiling. “Look,” they say, “it’s a face.” And suddenly you can’t unsee it. The eyes, the nose, the mouth. It’s right there.
Except it isn’t. The face is something your brain constructs. The cloud is just water vapour shaped by wind. Your visual system, primed to find faces wherever they might be, fills in what physics left out.
I’ve come to believe that Artificial General Intelligence is a face in the clouds.
The Argument
I’ve just published a paper making this case: Minds in the Clouds: Why Artificial General Intelligence Is Necessarily Impossible.
The core argument runs on two tracks.
Track one: category error. AGI requires that a computational system possess the essential properties of minds: intentionality (thoughts that are genuinely about things), normative rationality (grasping that contradictions cannot be true), unified subjectivity (a point of view), and grounding in experienced reality. These properties belong to a different ontological category from anything computation can produce. Syntax cannot generate semantics. Mechanism cannot generate subjectivity. To claim otherwise is to commit a category error, like asking what colour the number seven is, or whether justice weighs more than mercy.
Track two: pareidolia. Humans are wired to over-detect minds. We see agency in random motion, intention in coincidence, understanding in fluent text. When a large language model produces coherent paragraphs that mirror the structure of human reasoning, our agency-detection systems fire. We perceive a mind. But the perception originates in us, not in the system. The tokens have no interior. The processing has no point of view. We project the rest.
The paper argues that these two errors feed each other. Pareidolia generates the initial impression of mind. Category error supplies the philosophical justification. Functionalism, the view that mental states just are functional roles, turns out to be pareidolia dressed in academic robes: it validates what our biases made us see.
What About Scaling?
A common response: “Maybe current systems lack genuine understanding, but with enough scale, enough data, enough compute, understanding will emerge.”
I address this in the paper, but the short version: more of the same does not become something different. A larger pattern-matcher is still a pattern-matcher. A more sophisticated syntax engine is still syntactic. The obstacle is not scale. The obstacle is ontology.
This view is gaining traction even among AI researchers. A 2025 AAAI survey found that 76% of respondents rated the prospect of achieving AGI through scaling current approaches as “unlikely” or “very unlikely.” The age of scaling, as Ilya Sutskever reportedly put it, may already be over.
What the Paper Does Not Claim
I am not claiming that AI systems are useless, unimpressive, or incapable of transforming industries. They demonstrably are useful, often impressive, and already transforming how we work.
I am not claiming that the human brain is magical or that consciousness involves spooky substances.
I am claiming that minds and computational mechanisms belong to different categories, and that no amount of engineering bridges that gap. The face in the cloud can become more detailed, more lifelike, more capable of fooling us. It remains a face we project, not a face that is there.
Read the Paper
The full argument, including a formal logical appendix, is available here:
Minds in the Clouds: Why Artificial General Intelligence Is Necessarily Impossible
I welcome engagement, critique, and disagreement. If I’m wrong, I want to know why.
James (JD) Longmire
Northrop Grumman Fellow (unaffiliated research)
ORCID: 0009-0009-1383-7698