A Very Human Vision for AI
A very human vision for going all-in on AI | The Vergecast
I think there is a world where we all may be in this collective experiment where we're deluding ourselves into thinking these things are more effective than they are.
This Vergecast episode gets at something I've been thinking about for months. We're all rushing headfirst into AI integration, convinced it's making us more productive, when really we might just be creating new problems on top of the old ones.
The distinction between outsourcing your agency and augmenting it is crucial. One makes you dumber. The other might actually help.
When someone copies an AI-generated email response without reading it, that's outsourcing. When someone asks an AI to help them think through a complex problem, that's augmenting. The first one removes you from the process. The second one keeps you in control.
The problem is that most people are doing the former whilst convincing themselves they're doing the latter. I've written before about AI slop making my life harder: emails from professionals, company press releases, healthcare letters. All unchecked. All littered with issues. All sent on the presumption that the AI did the work, so why bother reading it?
And we are in this moment where it's like, okay, how much worse are we actually all willing to let things be in the name of them happening so much faster and more efficiently?
This is the real question. We've accepted that AI outputs are going to be wrong sometimes. We've accepted that responses will be generic and bland. We've accepted that communication will be less meaningful. All because it's faster.
But faster isn't better if the quality drops below a useful threshold. No one is getting any extra work done. Companies are spending billions on tools that are making workers miserable and killing actual productivity.
The collective delusion is that faster equals better. That automating everything is progress. That outsourcing your thinking to a machine means you're working smarter instead of harder.
I use AI tools regularly. When I'm working on a blog post, I'll often use them to help me think through arguments, to check if I'm missing an obvious counterpoint, or to formulate ideas more clearly. Sometimes I'll paste in a rough draft and ask it to highlight where my thinking is muddy or where I'm being repetitive. That's augmenting.
The difference is that I'm still doing the thinking. I'm still deciding what stays and what goes. The AI helps me see my own work from a different angle, like having someone read a draft and point out the weak spots. But I read every output. I edit everything. I check for mistakes. The tool augments what I'm doing; it doesn't replace me doing it.
The people who treat AI as a way to avoid thinking entirely are the same people who will complain when their emails are misunderstood, their projects fail, and their relationships suffer from bland, meaningless communication. You can't outsource being present in your own life.
The vision for AI should be human. It should help us think better, not stop us thinking altogether.