Imagine sitting at your computer, working on a project, and turning to your virtual assistant for help. You ask it to draft an email or generate ideas for a presentation. It responds as if it almost understands your needs. It seems to “think,” “understand,” and maybe even “care” about what you’re asking. The temptation to treat AI as a human-like partner is real, and that is where the risk of anthropomorphism comes in. But here’s the truth: no matter how friendly or “intuitive” it seems, AI doesn’t actually think or feel the way we do.
Let’s explore why this matters and how we can avoid falling into the trap of over-humanizing AI.
Understanding AI Anthropomorphism
Anthropomorphism is not new: it is a psychological phenomenon in which humans assign human-like qualities to non-human entities. From naming our cars to imagining that our pets have human emotions, it is a cognitive shortcut for making sense of the world. As early as 1966, when ELIZA, one of the first chatbots, was released, its creator, Joseph Weizenbaum, was struck by how readily users formed an emotional connection with the program, his own secretary among them. In 2017, when the AI program AlphaZero beat Stockfish, then the strongest chess engine in the world, its playing style was described in human terms like “intuitive” and “romantic,” implying that the system had intentions and feelings similar to those of human players.
Generative AI triggers and intensifies this tendency because systems like ChatGPT or voice assistants like Siri produce responses that are deceptively good at mimicking human language and tone. Part of the problem is that this human-like veneer exaggerates AI’s capacities: it makes us overestimate the system’s intelligence and understanding, which opens the door to misplaced trust.
The Dangers Of Anthropomorphizing AI
False Expectations: Anthropomorphizing AI can lead people to assume that AI systems possess qualities they do not have, such as empathy, moral judgment, or creativity. In some studies, people have rated responses from ChatGPT as more empathetic than those from humans, for example in AI-powered therapy. It is helpful to remember that empathy requires emotional and cognitive understanding, something that a set of algorithms, no matter how sophisticated, cannot achieve.
Emotional Dependency: When we anthropomorphize AI, we walk the slippery slope of emotional attachment to the systems we interact with. This can create a sense of reliance and gradually lead us to replace more challenging human interactions altogether. In extreme cases, users have engaged in deeply emotional conversations with chatbots, with tragic outcomes: one widely reported case involved a man who relied on a chatbot for emotional advice and later took his own life, having accepted the AI’s suggestions without question.
Distorted Understanding of AI: Anthropomorphizing AI blurs the line between what AI actually does (follow coded algorithms) and what it is perceived to do (think, feel, understand). It is a short step from using AI to relying on it. The better our AI assistants perform and the busier we are, the higher the risk of falling prey to cognitive inertia. Awareness of that trap is the first step towards protecting ourselves from crossing the threshold of blind trust, where we stop critically evaluating the limitations of our artificial assistants.
Language matters. Although we may intellectually know that an AI-powered tool is an artificial entity, the more we refer to the object as a subject, to “it” as “he” or “she,” the more that perception takes root in our subconscious mind. By describing AI as “thinking” or “understanding,” we obscure its lack of real consciousness and the risks associated with deploying it in sensitive areas like education, healthcare, or law.
Retaining Agency In An AI-Infused World
To navigate the growing presence of AI in our lives without falling into the trap of anthropomorphism, here are four practical takeaways, the A-Frame, for reframing your interactions with AI:
- Awareness of AI’s Capabilities and Limitations: Understand that AI systems, no matter how advanced, operate on predefined algorithms and learned statistical patterns and lack true human emotions, empathy, and moral judgment. Distinguishing between what AI appears to do and what it actually does keeps us vigilant about its limitations and lays the groundwork for informed decisions.
- Appreciation of Human Attachments: While AI systems may offer human-like interactions, they do not replace human relationships. Treat AI as a tool, not a companion, and keep appreciating the friends, family, and coworkers who bring magic and kindness to your life.
- Acceptance of Limitations: Before relying on AI for important decisions, whether professional or private, evaluate its accuracy. We are cautious in our interactions with people; we should be at least as critical of whatever output an AI-powered tool delivers. In fact, double caution is warranted: although we may assume that AI understands the nuances of human life and relationships, it simply processes data.
- Accountability for the Outcomes: Whatever comes from our interaction with AI is ultimately our responsibility. It is therefore important to cultivate double literacy: hybrid knowledge that combines an understanding of AI’s technical workings and societal impact with a solid grasp of our own natural intelligence, brain and body included. This holistic understanding is the most secure way to remain empowered in an AI-saturated environment, where it is deceptively easy to delegate ever more to our always-available, always-patient, and seemingly ever-so-friendly and understanding AI companions.
AI is useful. That’s it, and we should leave it at that without seeking to frame it as strategic or smart, kind or wise.