Zach Seward being clear that AI is not like you and me:
Aristotle, who had a few things to say about human nature, once declared, “The greatest thing by far is to have a command of metaphor,” but academics studying the personification of tech have long observed that metaphor can just as easily command us. Metaphors shape how we think about a new technology, how we feel about it, what we expect of it, and ultimately, how we use it.
I highlighted a lot of this article to save for later musing, but it got me thinking about things immediately. I’d recommend reading the entire post if you are even remotely interested in AI; it’s eye-opening, well written, and diligently researched.
The decisions made by the creators of technology, and particularly of AI, dictate a lot of what we think about it. What’s more, most people won’t even be aware of the effects of portraying a product as if it were a person. The fact is, we give AI much more slack than we would other things because it is portrayed with a friendly, eager-to-help tone, and that’s by design.
LLMs don’t just spit back walls of text; they present their answers in a conversational style, leading to increased levels of trust. After all, you can’t be mad at something that apologises so profusely for being wrong. It spurs in us a forgiving nature, as if they were our friends. Artificial intelligence doesn’t get things spectacularly wrong; it simply “hallucinates”.
As Zach puts it brilliantly, “AI isn’t doing shit. It is not thinking, let alone plotting. It has no aspirations. It isn’t even an it so much as a wide-ranging set of methods for pattern recognition”. Imagine if you looked up a topic in an encyclopaedia, only for it to be entirely wrong and to reference things that don’t exist: you wouldn’t tolerate it. Yet SearchGPT is already getting things wrong, and that’s OK because it is portrayed as being just like us. Well, it’s not.