AI Is The New Middleman
Apple’s notification summaries told BBC readers that Luigi Mangione had shot himself. They told others that Rafael Nadal, a married Spanish man, had come out as gay. They told darts fans that Luke Littler had won a championship whose final hadn’t even been played yet. None of these things were true. Apple’s AI had taken real notifications from a real news organisation and rewritten them into something false, then presented them as if the BBC had published them. The BBC spent weeks trying to get Apple to fix it. Apple eventually disabled notification summaries for news and entertainment apps entirely, italicised the remaining summaries, and labelled the feature as beta. It took sustained public pressure from a national broadcaster to get that far.
Notifications already told you what you needed to know. A headline arrived, you read it or you didn’t, and the words were the ones the journalist had written. Apple looked at that perfectly functional system and decided it needed an AI layer on top, not because users were struggling with notifications, but because Apple Intelligence needed a visible use case to justify its existence. The feature shipped, it broke almost immediately, and rather than removing it, Apple treated the whole thing as a design problem to iterate on. The assumption was never questioned: that inserting AI between you and your information is inherently a good idea that just needs refinement.
This is the pattern everywhere now.
Search
Google puts an AI Overview between you and the article you searched for. A paragraph of generated text appears at the top of the results, assembled from content scraped from websites that the same search engine is simultaneously burying in its rankings. Pew Research found that users who see an AI summary click on actual search results at roughly half the rate of those who don’t: a traditional result link in 8% of visits with a summary, against 15% without. More recent data from Ahrefs puts it at 58% fewer clicks to the top-ranking page when an AI Overview is present.
Think about what that means in practice. Someone writes an article. It’s good. It ranks well. Google then uses that article to generate a summary that sits above it in the search results and prevents most people from ever clicking through to read the original. The writer gets the ranking but not the traffic. Google gets to keep the user on its page for longer, which means more opportunity to show ads. The AI Overview isn’t connecting you with information. It’s intercepting it.
Google’s VP of Search, Liz Reid, claimed in August 2025 that organic click volume to websites has been “relatively stable.” Publishers immediately challenged this. Press Gazette tracked the top 50 US news websites throughout 2025 and found that by July, only 6 of 50 showed year-on-year traffic growth. CNN’s traffic dropped 38% year-on-year in July alone. The gap between what Google says is happening and what publishers are experiencing is vast, and it’s getting wider.
Social
Meta is filling your feed with AI-generated content “imagined for you” and planning to deploy AI profiles that behave like real accounts. Zuckerberg calls this the third era of social media. The first was friends, the second was creators, and the third is apparently content made by nobody for nobody, served by a recommendation engine that thinks it knows what you want better than you do.
Users have created over 20 billion AI images across Meta’s AI products, with the company’s Vibes app pushing that number higher since its September launch. That number is staggering if you sit with it for a moment. Twenty billion images that weren’t taken by anyone, don’t depict anything real, and exist to be scrolled past in a feed alongside photos from your actual friends and family. Meta’s VP of product for generative AI, Connor Hayes, told the Financial Times the company expects AI characters to “exist on our platforms, kind of in the same way that accounts do.” They’ll have bios, profile pictures, and the ability to generate and share content. The plan is to make them indistinguishable from real users, which raises the obvious question of what a social network is when a significant portion of the people on it aren’t people.
The company already uses your AI chat interactions to target ads. If you ask Meta AI about family holidays, you’ll start seeing hotel ads in your Reels. They announced this in October 2025 and framed it as personalisation. Meta’s privacy and data-policy manager, Christy Harris, said users “already thought we were doing this,” which is a remarkable admission that the expectation of surveillance has become so normalised that the company can just confirm it and move on.
The middleman problem
I wrote about platform middlemen a while back: how platforms position themselves between two sides of a transaction and take a cut. AI has accelerated that to an absurd degree. The middleman is no longer just taking a cut, it’s rewriting the thing being exchanged. When Google summarises an article, it’s not connecting you with the writer’s words. It’s giving you its own version and hoping you don’t notice the difference. When Apple rewrites a notification, it’s deciding what mattered in that message before you get to read it. When Meta generates content for your feed, it’s replacing human creativity with a statistical prediction of what will keep you scrolling.
The thing these companies have in common is that none of them asked permission. Google didn’t ask publishers if they’d like their content summarised into a paragraph that prevents anyone from visiting their site. Apple didn’t ask the BBC if it was alright to rewrite their headlines. Meta didn’t ask users if they wanted AI-generated images mixed into their feeds alongside photos from actual humans. These are decisions made by companies that spent billions on AI infrastructure and now need to show a return on that investment, regardless of whether the product is useful or wanted.
Perplexity has already started showing ads inside AI-generated answers. Google will follow. The trajectory is obvious because it’s the same trajectory every platform follows: launch something useful, gain trust, insert advertising. The difference is that AI summaries compress this cycle. The useful stage barely exists before the ads arrive, because the AI layer was never really about helping you find information. It was about keeping you on the platform long enough to see something that makes money.
The trust cost
Every one of these AI layers degrades trust in a way that’s hard to reverse. When Apple’s notification summary gets it wrong, the BBC’s credibility takes the hit, not Apple’s. When Google’s AI Overview contains an error, you blame the source it cites, not Google. When Meta fills your feed with AI-generated images alongside real photos, the line between authentic and synthetic dissolves, and you stop trusting any of it. The AI middleman is uniquely insulated from the consequences of its own mistakes because it always points at someone else’s content as the source.
This is the part that gets to me. I use AI tools and I’m not pretending I don’t, but I choose to open them, I go to them with a specific task, and I understand what I’m getting. That’s a different thing entirely from having AI inserted into services that already worked. My notifications were fine before Apple decided they needed rewriting. Google search results, for all their problems, at least pointed me at websites made by people. My social media feed, terrible as it was, contained content made by humans. All of those things now have an AI layer sitting on top of them, uninvited, rewriting and summarising and generating on my behalf, and not one of them is better for it.
Every time one of these layers appears, something gets lost in translation. The original words, the original intent, the original person who made the thing. Replaced by a confident, fluent, frequently wrong machine that exists not to connect me with the source but, in many cases, to remove me from it and keep me engaged.