The Problem Isn’t the Tool: AI, Desire, and Human Accountability
I came across an article on The Conversation titled ‘We Need to Stop Pretending AI is Intelligent – Here’s How,’ written by Guillaume Thierry. After a spring term spent teaching across various modules in media and music—where the topic of artificial intelligence continually surfaced in student discussion—I have found myself increasingly attentive to articles and conversations that grapple with its implications. This particular piece caught my attention because it addresses a concern that many of my students have voiced: the discomfort provoked by AI’s performed mimicry of human expression and emotion.

While I found Thierry’s argument compelling, it is one that I have encountered in various forms: in academic podcasts, recent scholarship, and student discourse. His argument follows a well-rehearsed line of thought, one that frames AI’s humanlike tendencies as deceptive and dangerous. He calls for a de-anthropomorphisation of AI, advocating that it should no longer be allowed to “pretend” it has feelings, thoughts, or ambitions. For Thierry, the increasing tendency to humanise AI constitutes a kind of “digital cross-dressing”—a provocative phrase, and one that my students themselves have echoed in slightly different language. Many of them express anxiety around the presence of AI bots on social media platforms, or in intimate digital interactions, where the line between tool and companion begins to blur. They worry about being emotionally manipulated, about false affect, and about the inability to discern the machine from the human. They see this as a kind of deception, and in some ways they are right.

However, I cannot help but feel that arguments like Thierry’s risk sidestepping a more difficult question. There is a tendency to shift blame onto the machine, to describe it as “dangerous,” as though it has emerged of its own volition rather than being a product of human design. The article briefly references “companies” as agents who could reconfigure AI to be less anthropomorphic, but the critique falls short because it does not name or engage with the people behind the programming. To treat AI as something independent of the cultural and emotional logics that produced it is to indulge in a kind of ethical fantasy—one that absolves humans of responsibility and falsely casts AI as an autonomous actor rather than a socially embedded tool. We must ask what it is about human desire, vulnerability, or affective need that leads us to want AI to perform humanness in the first place.

This is, I think, where the conversation becomes most interesting. Why do we want our machines to sound like us? What is it about our loneliness, our capitalistic affective obsessions, or our longing for intimacy without consequence that makes humanlike AI so appealing? The performance of emotion by AI is not simply a technical issue. It is a mirror, reflecting something back to us about the kinds of connections we yearn for, the kinds of relationships we value, and the kinds of substitutions we are willing to accept.

Thierry concludes that AI is a tool—nothing more, nothing less. Like all tools, he suggests, it can be wielded as a weapon. While I do not disagree, I think it is worth interrogating the logic that places ethical weight solely on the tool. Cars are dangerous; guns are dangerous; yet we understand them within broader frameworks of human responsibility and accountability. The same should be said of AI. Its humanlike qualities are not inherently threatening; rather, they raise critical questions about how we as humans use, abuse, or are seduced by our own creations. The question is not simply what AI does, but why we have made it so and what that says about us.