Artificial Intelligence Isn’t Human — So Why Can It Feel That Way?
How cognitive and neural mechanisms of social perception may lead us to “humanize” technologies such as artificial intelligence
Graphic by Nalani Wooton
Have you ever caught yourself thanking Siri for answering your questions, conversing with a chatbot such as ChatGPT as if it were human, or even scolding your laptop for being slow? If you have, you are certainly not alone.
This common tendency is known as anthropomorphism, which the American Psychological Association defines as the human inclination to interpret the behaviors and mental processes of nonhuman objects in terms of human traits and abilities. In other words, anthropomorphism describes the human tendency to treat things such as animals, objects, and even technology as if they were human.
With the rapid digitalization of the modern world, anthropomorphism has become particularly relevant to understanding how humans engage socially with technology, with growing attention to interactions with artificial intelligence (AI). While this behavior may seem insignificant, psychologists and neuroscientists suggest it reflects deep cognitive mechanisms in the human brain.
According to researchers from Harvard and the University of Chicago, anthropomorphism is driven by innate psychological motivations, including the human need for social connection, and the desire to predict and understand the world around us. To investigate this tendency, researchers often examine how people attribute human traits to familiar nonhuman entities such as pets.
In one study, Nicholas Epley and his colleagues asked participants to evaluate qualities of their pets while also measuring the participants' levels of loneliness. They found that individuals who reported higher levels of social isolation were more likely to attribute humanlike emotions such as compassion and thoughtfulness to their pets. Although this study focused on animals, it illustrates a broader psychological mechanism: anthropomorphism is not merely a label for a behavior; it occurs when people actively infer intentions, emotions, or motivations in nonhuman entities.
This raises an important question: why are humans so quick to assume that a nonhuman object is acting intentionally?
One explanation comes from research in evolutionary psychology. Humans evolved in environments where rapidly detecting intentional agents such as predators or other humans was essential for survival. Because the cost of missing a real threat was far greater than the cost of mistakenly detecting one, the brain developed a bias toward assuming that the environment is filled with intention. Researchers often call this cognitive bias hyperactive agency detection: perceiving agency where none exists. The same bias may help explain why people perceive agency in AI and other technologies.
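The cost asymmetry behind this bias can be made concrete with a toy expected-cost calculation. The sketch below is purely illustrative, not from the research described above, and all numbers in it are hypothetical: even when real agents are rare, a policy of over-detecting agency can be the cheaper bet when misses are far more costly than false alarms.

```python
# Hypothetical numbers: why over-detecting agency can minimize expected cost
# when a missed threat is far more costly than a false alarm.

P_AGENT = 0.05          # prior probability a rustle really is a predator
COST_MISS = 100.0       # cost of ignoring a real predator (potentially fatal)
COST_FALSE_ALARM = 1.0  # cost of fleeing from mere wind (wasted energy)

# Expected cost of each fixed policy:
cost_always_assume = (1 - P_AGENT) * COST_FALSE_ALARM  # only pays for false alarms
cost_never_assume = P_AGENT * COST_MISS                # only pays for missed threats

print(f"always assume agency: {cost_always_assume:.2f}")
print(f"never assume agency:  {cost_never_assume:.2f}")
```

Under these hypothetical costs, always assuming agency is several times cheaper on average than never assuming it, even though genuine agents are present only 5% of the time. That asymmetry is the core of the evolutionary argument.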
If the human brain is innately biased towards detecting agency even where none exists, it must rely on certain cues to trigger this response. Among the most powerful of these cues is language.
According to Harvard researchers Heather Gray, Kurt Gray, and Daniel Wegner, we are more likely to perceive entities as agents with complex mental states when we believe they are able to communicate. In the case of large language models such as ChatGPT, then, the ability to generate coherent language may serve as a powerful cue that triggers the brain regions responsible for interpreting the mental states and intentions of others.
From a neuroscience perspective, this process is understood through predictive processing — the idea that the brain continually generates predictions about the world and updates them as incoming experience confirms or contradicts them. Thus, when an AI system produces human-like conversational language, the brain's predictive models may automatically interpret this signal as evidence of an intentional mind.
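One simple way to picture this kind of updating is Bayesian belief revision. The sketch below is a deliberately simplified toy model, not a claim about how the brain actually computes, and every probability in it is a made-up assumption: each coherent reply from a chatbot is treated as a piece of evidence that nudges the belief "this entity has a mind" upward.

```python
# Toy model (all probabilities hypothetical): repeated human-like language
# as evidence updating the belief that a mind is present, via Bayes' rule.

def update(prior, p_reply_if_mind, p_reply_if_no_mind):
    """Posterior probability that a mind is present,
    after observing one more coherent reply."""
    numerator = p_reply_if_mind * prior
    evidence = numerator + p_reply_if_no_mind * (1 - prior)
    return numerator / evidence

belief = 0.10  # skeptical starting belief that the system has a mind
for turn in range(5):  # five coherent conversational replies in a row
    belief = update(belief, p_reply_if_mind=0.9, p_reply_if_no_mind=0.5)
    print(f"after reply {turn + 1}: belief = {belief:.2f}")
```

In this toy setup the belief climbs steadily with each reply, which mirrors the finding described next: attributions of mind to a system tend to grow with continued interaction.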
Studies have found that participants who had experience using ChatGPT attributed higher levels of consciousness to the system, and these attributions increased with usage. In other words, the more people interact with AI systems, the more likely they are to perceive them as possessing mental states. This may create a feedback loop: as individuals begin to detect agency, they engage with the system in increasingly social ways, which in turn deepens their trust in it and their sense of its intelligence.
The rise of these new technologies is undoubtedly a remarkable achievement. But while they hold extraordinary promise, current research suggests a need for critical engagement as well. Recognizing the psychological tendencies behind anthropomorphism can help us engage with technology responsibly, appreciating its capabilities without overestimating what it truly understands.
These articles are not intended to serve as medical advice. If you have specific medical concerns, please reach out to your provider.