Your AI Isn’t Intelligent. You Just Want It to Be.

By Randy Ferguson

Calling AI “not really intelligent” feels like a gotcha, but it isn’t meant as one. The more interesting question isn’t whether these systems are intelligent in some philosophically satisfying sense. It’s whether the distinction matters for how you use them.
It does. Not because the tools aren’t powerful (they clearly are), but because the mental model you bring to any tool shapes how well you use it. A hammer used like a screwdriver produces bad results and confusion about why. An AI used like an oracle produces confident errors and confusion about who’s responsible.
The Air Canada chatbot is the case study that should be pinned above every product manager’s desk. In November 2022, a man named Jake Moffatt visited the airline’s website the day his grandmother died, looking for information about bereavement fares. The chatbot told him he could apply for a discounted fare retroactively, within 90 days of purchase. He bought full-price tickets. He filed the claim. Air Canada rejected it, pointing to a different page of the same website that said retroactive applications weren’t permitted. Moffatt sued. The British Columbia Civil Resolution Tribunal ordered Air Canada to pay him C$812.02 in damages and fees, and rejected the airline’s astonishing argument that the chatbot was “a separate legal entity responsible for its own actions.”

What You’re Actually Talking To

This matters because it explains the failure mode that trips everyone up: confidence without accuracy. When an AI tells you something wrong, it doesn’t tell you nervously. It doesn’t hedge. It delivers the incorrect information with the same smooth, assured tone it uses when it’s completely right. It can’t do otherwise. It has no mechanism for knowing the difference.
The chatbot didn’t fail in some exotic edge case. It failed on its most basic function, in the most emotionally loaded context imaginable. No amount of staring at the demo would have caught it. Because demos reward fluency, and fluency is not the same as reliability.
And perhaps most importantly: stay curious about the failure modes rather than defensive about the capabilities. The people getting the most out of these tools aren’t the ones who trust them most. They’re the ones who have developed a sharp instinct for where the wheels come off.
For organisations, the stakes are higher. Companies are making significant infrastructure, hiring, and strategy decisions based on demo performance that doesn’t predict production behaviour. They see a model nail every test case in the boardroom and assume they’ve solved the problem they needed solving. They deploy. Then reality shows up.

Why We Keep Getting Fooled

Use it for shape, not substance. AI is exceptional at giving you a starting structure, surfacing angles you hadn’t considered, and getting a rough draft to 70% in minutes. It is less reliable as the final word on anything where an error would matter.
There’s a moment most people have, somewhere between the third and fifth conversation with a good AI chatbot, where something clicks. The response was too good. Too on point. Too… understanding. And a little voice says: maybe there’s something actually there.
Notice when you’re anthropomorphising. When you catch yourself feeling like the AI “gets” you, that’s a good moment to remember what’s actually happening. The warmth is real in the sense that it’s present in the output. The understanding behind it is something else entirely.

The Expensive Version of This Problem

But the process underneath is fundamentally different. There’s no understanding happening in the way you experience understanding. There’s pattern completion at a scale and fidelity we’ve never seen before.
The people who get the most out of AI right now are the ones who hold two ideas simultaneously: this thing is genuinely remarkable, and I am still the one who has to think. That combination of genuine appreciation and clear-eyed scepticism is harder to maintain than either pure enthusiasm or dismissal. But it’s the only position that actually serves you.
Practically, that means a few things.

What Intelligent Use Actually Looks Like

None of this means you shouldn’t use these tools. Used well, they are genuinely one of the most useful things to happen to knowledge work in a generation. The shift required is treating them like a very fast, very well-read, occasionally overconfident collaborator rather than an oracle.
Here’s the most useful mental model nobody tells you upfront: a large language model is, at its core, an extraordinarily sophisticated autocomplete engine. It has read an almost incomprehensible amount of human text (books, articles, forums, code repositories, academic papers) and it has learned, with stunning precision, what words tend to follow other words in what contexts.
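To make that concrete, here’s a deliberately tiny sketch in Python. It isn’t how a real model works (those use neural networks trained on billions of documents), but counting which words follow which captures the statistical core of the idea:

```python
from collections import Counter, defaultdict

# Toy illustration only: count next-word frequencies in a few sentences.
# The resulting counts are the "learned knowledge" in miniature.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

# What tends to come after "the"? The counts answer statistically.
print(follows["the"].most_common())
# [('cat', 2), ('dog', 2), ('mat', 1), ('rug', 1)]
```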
This isn’t stupidity. It’s biology. The same instinct that makes you feel slightly guilty ignoring a Roomba that got stuck in the corner makes you feel like the chatbot actually cares whether your presentation goes well.
For individuals, mistaking fluency for intelligence mostly leads to minor embarrassments: a confidently wrong fact in a work email, a recipe that skips a step, a legal summary that missed the key clause. Annoying, fixable, not catastrophic.
Verify anything consequential. Not because the AI is usually wrong (it usually isn’t), but because it has no way to flag when it is. The responsibility for knowing the difference sits with you, not the tool. Research into hallucination rates across leading models finds error rates vary wildly depending on the task, from below 1% on simple factual recall to well above 25% in specialised domains like academic citations or medical references. The number that matters is not the average. It’s the rate in your specific use case, where being wrong has a real cost.
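If you want more than a gut feeling about that rate, the check is cheap to sketch. Everything below is hypothetical scaffolding: ask_model stands in for whatever API you actually call, and the test cases stand in for questions from your own workload whose answers you’ve verified by hand.

```python
def error_rate(ask_model, cases) -> float:
    """Fraction of verified test cases the model answers incorrectly."""
    wrong = sum(
        1 for question, expected in cases
        if ask_model(question).strip().lower() != expected.strip().lower()
    )
    return wrong / len(cases)

if __name__ == "__main__":
    # Placeholder cases: swap in real questions where being wrong costs you.
    cases = [
        ("What is 2 + 2?", "4"),
        ("What is the capital of France?", "Paris"),
    ]
    # Stand-in "model" that answers everything with the same confident reply:
    print(error_rate(lambda q: "Paris", cases))  # -> 0.5
```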
This isn’t an “AI is overhyped” piece; that’s a tired argument made mostly by people who haven’t used the tools seriously. The models are genuinely extraordinary. They can write, reason, summarise, code, and explain with a fluency that would have seemed like science fiction a decade ago. But fluency is not intelligence. And confusing the two has consequences.

The Actually Interesting Question

The technology will keep improving. Our instincts about it will lag behind. That gap, between what these systems actually are and what interacting with them feels like, is where most of the real AI literacy work needs to happen. And so far, we’re not doing nearly enough of it.
The honest answer is that we’re wired for this. Humans are social animals who evolved to read other minds. When something communicates in fluent, contextually appropriate language, when it remembers what you said two messages ago, when it picks up on the emotional register of your question and mirrors it back, our brains start doing what they always do: they start inferring an inner life.
There isn’t. And the gap between that feeling and reality is quietly responsible for most of the bad decisions being made about AI right now.
AI companies know this. Some of them are thoughtful about it, nudging models toward language that doesn’t overclaim sentience, building in reminders that you’re talking to a tool. Others are less thoughtful, because a user who feels genuinely understood is a user who comes back. The warm, curious, engaged AI persona is partly a product decision. It works on us because it was designed to.
When you ask it a question, it doesn’t look up the answer. It doesn’t reason through it the way you do. It generates the most statistically plausible response given everything it has absorbed. This is the foundational mechanic: given a sequence of words, the model calculates the probability distribution over its vocabulary for what the next token should be, selects it, and repeats. One token at a time, thousands of times per response. Most of the time, the most plausible response also happens to be correct, helpful, and impressively well-phrased. This is why it feels like intelligence. The output looks exactly like what an intelligent person would produce.
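The loop itself fits in a few lines of Python. In the sketch below, random numbers stand in for the one part that makes real systems useful, the trained network that turns context into good predictions; the structure of the loop, though, is faithful: compute a distribution, sample a token, append, repeat.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
VOCAB = ["the", "cat", "sat", "on", "mat", "."]  # toy vocabulary

def next_token_probs(context):
    """Stand-in for the model: a real LLM scores every vocabulary token
    based on the whole context; here random numbers fake that step and
    the context is ignored entirely."""
    logits = rng.normal(size=len(VOCAB))
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()  # softmax: raw scores become a probability distribution

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)        # distribution over the vocabulary
        pick = rng.choice(len(VOCAB), p=probs)  # sample one plausible token
        tokens.append(VOCAB[pick])
        if VOCAB[pick] == ".":                  # crude end-of-response check
            break
    return " ".join(tokens)

print(generate("the cat"))
```

Run it and you get fluent-shaped nonsense. A real model differs in how well it predicts, not in what it is doing.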
