News·September 30, 2025

What AI Can't Do: The Unbreachable Limits of AI

As large language models (LLMs) evolve from fascinating curiosities into influential tools, it’s understandable to feel swept up in the excitement. We often fixate on today’s quirks—hallucinations and context windows—and treat them as waypoints on a road toward AGI, as if each will vanish with enough scale.

But some constraints aren’t bugs to be patched; they’re rooted in the architecture and the interface between people and machines. Here are four hard-coded limits that will shape LLMs, no matter how powerful they become.

1) The Purpose Void: All Gas, No Driver

An LLM won’t wake up with a midlife crisis. It won’t develop a passion for Renaissance poetry, a petty grudge against a rival AI, or a noble ambition to cure disease. Drives—grand or trivial—emerge from biology: the messy imperative to survive and matter.

An AI has no body to protect, no ego to bruise, no mortality to contemplate. It’s an engine of execution waiting for a destination set by a mind that wants to go somewhere. At its best, it’s the ultimate navigator. But it will never ask: Are we there yet?

2) The Language Barrier: Trapped in the World of Words

LLMs are masters of language—and also its prisoners. Their reality is largely built from text, which means they can only simulate what can be described. That sets two boundaries.

The natural limit: countless human experiences—the intuitive trust in a friend, the muscle memory of riding a bike, the taste of a fresh strawberry—sit beyond words. A model can write a poem about them, but it can never know them.

Our own limit: humans struggle to articulate what we truly want. Instructions are fuzzy, lossy translations of intent. The model is left trying to build a perfect castle from a blurry blueprint.

3) The Paradox of Choice: A Universe of Plausible Answers

As models scale, they don’t just produce one answer—they open a universe of plausible ones. Ask for a slogan, and you might get a hundred—witty, profound, or absurd. None are logically “wrong.”

Abundance becomes its own problem. The challenge shifts from extracting a correct answer to selecting the one that matches your unspoken taste and goal. The more paths the model can take, the harder it becomes to steer it toward the one in your head. It’s like directing an improv actor who can perform a million perfect scenes—except for the exact one you imagined.

4) The Communication Bottleneck: No Shared Reality or Lived Point of View

A shared model of the physical world is only part of what humans have in common. We also carry private, shifting layers that resist serialization: preferences, values, tastes, memories, taboos, humor, moods, relationship context, status games. We don’t just know facts and norms; we inhabit perspectives and desires that define what “good” means for us.

An LLM doesn’t. Every interaction forces us to compress those tacit layers through the narrow bottleneck of a prompt. Even if the facts are perfectly stated, the subtleties—tone, vibe, timing, trade-offs, exceptions—rarely fit. They aren’t easily “shipped” as data; they’re lived, iterated, and constantly renegotiated.

As tasks grow more complex, the unstated rules multiply faster than we can specify them. The true barrier isn’t merely the model’s understanding of physics or society; it’s language’s limited bandwidth for conveying a whole person—perspective, desire, and nuance included.

The Punchline

No matter how capable they appear, LLMs remain constrained by the weakest link in the chain: our ability to translate rich, internal worlds into the cold, explicit logic of a prompt.

They are powerful engines—forever working through an imperfect translator: us.

