AI & Technology · 8 min read

What AI Can't Do

Realistic expectations in an age of hype

Every week brings new AI announcements. AGI is coming. Jobs will disappear. AI will solve climate change and cure cancer.

Some of this will happen. Much of it won't. The difference matters enormously for businesses making practical decisions about AI investment.

The Three Categories of AI Claims

Drawing on research from "AI Snake Oil" by Arvind Narayanan and Sayash Kapoor:

AI that works: Image recognition, speech-to-text, language translation, recommendation systems. Mature and reliable.

AI that's improving: Large language models, code generation, creative assistance. Impressive with real limitations.

AI snake oil: Predictive systems claiming to forecast inherently unpredictable outcomes—crime, job performance, market movements.

What LLMs Actually Do

What they do:

- Predict likely next tokens based on training data (see the toy sketch after these lists)
- Recognise patterns in language and structure
- Generate fluent, coherent text
- Follow instructions and adapt to context

What they don't do:

- Reason in the way humans reason
- Understand truth or verify facts
- Have goals, intentions, or beliefs
- Learn from individual conversations
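To make the first point concrete, here is a minimal, purely illustrative sketch of next-token prediction: a toy bigram model that picks whichever continuation it has seen most often after the current word. Real LLMs learn these statistics with neural networks over vast corpora, but the underlying task of predicting a likely next token is the same. The tiny corpus and function names below are illustrative assumptions, not anyone's actual implementation.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; real models are trained on vastly larger data.
corpus = "the cat sat on the mat . the cat chased the mouse .".split()

# Count how often each token follows each one-token context (a bigram model).
next_counts = defaultdict(Counter)
for context, nxt in zip(corpus, corpus[1:]):
    next_counts[context][nxt] += 1

def predict_next(context: str) -> str:
    """Return the token most often seen after `context` in the corpus."""
    candidates = next_counts[context]
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # 'cat' - the most frequent continuation
print(predict_next("sat"))  # 'on'
```

The point of the toy is the framing: the model outputs whatever continuation its statistics favour, which is exactly why fluent text does not imply understanding.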

Recent Apple research found that LLM accuracy on maths problems drops when the problems are superficially reworded, suggesting sophisticated pattern matching rather than genuine reasoning.

The Hallucination Problem

LLMs generate plausible-sounding text without reference to truth. This isn't a bug to be fixed—it's fundamental to how they work.

No amount of prompt engineering eliminates hallucination. It can be reduced and mitigated—but not eliminated.
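One common mitigation pattern is grounding: check a draft answer against trusted reference text and refuse when no support is found. The sketch below is illustrative only; the crude word-overlap check, the TRUSTED_SOURCES list, and the generate_answer parameter are placeholder assumptions standing in for whatever retrieval and model calls you actually use.

```python
TRUSTED_SOURCES = [
    "The refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm.",
]

def is_supported(claim: str, sources: list[str]) -> bool:
    """Crude support check: does most of the claim's wording appear in one source?"""
    claim_words = set(claim.lower().split())
    for source in sources:
        overlap = claim_words & set(source.lower().split())
        if len(overlap) >= 0.6 * len(claim_words):
            return True
    return False

def answer_with_guardrail(question: str, generate_answer) -> str:
    """Ask the model, but fall back to a safe response when the draft isn't grounded."""
    draft = generate_answer(question)  # hypothetical LLM call supplied by the caller
    if is_supported(draft, TRUSTED_SOURCES):
        return draft
    return "I can't verify that - please check with a human."
```

Even this kind of guardrail only reduces the problem: an answer can overlap nicely with a source and still be wrong, which is why human oversight stays in the picture.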

The Responsible Approach

Using AI responsibly means matching it to appropriate tasks, keeping humans in the loop, being transparent about where it is used, evaluating outputs continuously, and setting realistic expectations.
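As one example of what continuous evaluation can look like in practice, here is a minimal sketch: a fixed set of test prompts rerun against the system after every change, with the pass rate tracked over time. EVAL_CASES and ask_model are illustrative placeholders, not a recommended framework.

```python
# A small fixed eval set; in practice this grows from real failures and edge cases.
EVAL_CASES = [
    {"prompt": "What is our returns window?", "must_contain": "30 days"},
    {"prompt": "When is support available?", "must_contain": "Monday to Friday"},
]

def run_evals(ask_model) -> float:
    """Rerun every case through the model and report the fraction that pass."""
    passed = 0
    for case in EVAL_CASES:
        answer = ask_model(case["prompt"])
        if case["must_contain"].lower() in answer.lower():
            passed += 1
    pass_rate = passed / len(EVAL_CASES)
    print(f"pass rate: {pass_rate:.0%}")
    return pass_rate
```

Even a handful of cases like this catches regressions that ad-hoc spot checks miss.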

The hype cycle eventually settles into productive use. The businesses that thrive are those that skip the hype and go straight to practical value.

Ready to implement AI in your business?