The Age Of The Confident Idiot


A while ago I received a report that was clearly written by ChatGPT. Not because of the tone, which was generic enough to have come from anywhere, but because it contained three factual errors that anyone with even a passing knowledge of the subject would have caught. When I pointed them out, the response was a shrug. “Oh, I didn’t really check it. The AI did it.”

That small exchange makes me shudder. Not the laziness of it, which is frustrating enough, but the confidence. There was no embarrassment. No recognition that sending out incorrect work with your name on it should bother you. Just a breezy acceptance that the machine did the thinking so they didn’t have to, and that this was somehow fine.

We’ve entered a period where looking competent has never been easier and being competent has never mattered less. AI tools hand everyone the vocabulary, the structure, the appearance of expertise. You can produce a strategy document, a research summary, or a detailed email on any subject in seconds. It reads well. It sounds authoritative. It might even be mostly right. The person who produced it might not understand a single word of what they’ve sent, and increasingly, nobody checks.

I wrote years ago about what I called the Morris Point, that specific moment on the Dunning-Kruger curve where you learn just enough to be dangerously overconfident before reality humbles you. That dip used to be unavoidable. You’d start something new, feel brilliant about it for a few weeks, then hit the wall where you realised how little you actually knew. The discomfort of that moment is where real learning begins. It’s the point where most people quit, and the ones who push through come out the other side with genuine understanding.

AI has effectively paved over that dip. You can now skip the uncomfortable part entirely. Why struggle through the learning curve when you can produce work that looks like it came from the other side of it? Why sit with the discomfort of not knowing something when a chatbot will give you a confident answer in three seconds? The gap between knowing nothing and appearing to know everything has never been narrower, and I think that’s a genuine problem.

The issue isn’t that people are using tools to help them work. I use AI tools regularly. I’ve been open about using them as a thinking partner for my writing, to pressure-test arguments and catch blind spots. The difference is that I’m still doing the thinking. I read every output, I question the answers, I bring my own knowledge to the table and use the tool to sharpen it rather than replace it. That’s augmenting. What I keep seeing around me is something else entirely: people treating AI as a substitute for understanding, and then walking around with the confidence of someone who actually did the work.

I’ve seen it in meetings where someone quotes AI-generated data with the authority of a person who ran the analysis themselves. I’ve seen it in emails where the sender clearly has no idea what they’ve written, just that it sounds professional. I’ve had AI slop making my working life harder for months now. Healthcare letters, supplier communications, press releases, all clearly generated, all unchecked, all sent with a straight face. The confidence is the part that gets me. These aren’t people who know they’re winging it. They genuinely believe the output is good enough, that the act of prompting a machine is the same as the act of knowing.

There used to be a social cost to being confidently wrong. If you made claims you couldn’t back up, someone would eventually call you on it. Your ignorance would become apparent through follow-up questions, through practical application, through the basic process of testing ideas against reality. Now you can generate a response that deflects those questions, that sounds like it came from someone who’s thought deeply about the subject, and most people won’t dig further. The surface has become the substance.

I have a science degree that I’ve done nothing with professionally, and the most useful thing it taught me was how to spot when someone doesn’t know what they’re talking about. Not through some special ability, just through the habit of asking “how do you know that?” and watching the answer fall apart. That question has become more important than ever, and fewer people are asking it. We’ve collectively decided that speed and volume matter more than accuracy and depth.

The thing about real knowledge versus performed knowledge is that the gap only shows up under pressure. When everything is going smoothly, the person who used AI to write the report and the person who actually understands the subject look identical. It’s when something goes wrong, when the plan needs adapting, when a client asks a question that wasn’t in the prompt, that the difference becomes obvious. Performed knowledge crumbles the moment you step outside the script.

I keep coming back to something I wrote about AI and the shortcuts people take with it: just do the work. The entire point of learning isn’t the end product. It’s the process. The struggling, the failing, the sitting with something difficult until it clicks. When you hand that process to a machine, you don’t save time. You skip the part that makes you competent. You end up with the output and none of the understanding that should have come with it.

The worry is we’re building a workforce of people who can produce the appearance of expertise on demand and collapse the moment they’re asked to demonstrate it. That might sound dramatic, but spend a week paying attention to the AI-generated communication flowing through your inbox and tell me I’m wrong. The confident idiot isn’t new. Every generation has had people who talked a bigger game than they could play. What’s new is that the tools have caught up with the ambition, and the gap between appearance and reality has become almost invisible.

The scary part isn’t the people who know they’re faking it. It’s the ones who don’t.