Can Chatbots Actually Understand Human Language? The Psychology Behind the Illusion of LLMs
Why Passing the Turing Test Isn’t Enough: A Comparison of Artificial vs. Human Intelligence
Written by: Selina Hui | Edited by: YJ Si | Image by: Miguel Á. Padriñán on Pexels
When you ask ChatGPT to debug code or draft an email, the results can feel nothing short of magical. The text flows so naturally that it seems there must be a mind behind the screen, some kind of “intelligence” that truly “gets” us. But is there?
According to the American Psychological Association, “understanding” is the process of gaining insight into the significance of a word, concept, argument, or event. It requires semantics (knowing what things mean), not just syntax (knowing where words go). John Searle’s 1980 “Chinese Room” thought experiment suggested that a machine could manipulate syntax convincingly enough to pass for semantics. He imagined a man who speaks no Chinese sitting in a room, using a rulebook to respond to Chinese characters. To outsiders, he seems fluent. Inside, he manipulates symbols by their shape without knowing what they mean. Searle concluded that a computer can simulate understanding without truly possessing it.
Large Language Models (LLMs) are Searle’s “Chinese Room” scaled up. They use statistical prediction to pass intelligence benchmarks, yet lack the cognitive foundations of human understanding: social intent, emotional grounding, and lived experience.
LLMs do not “know” facts; they calculate probabilities, guessing the most statistically likely next word. Given the word “ice,” the model predicts that “cream” is a highly likely continuation. Computational linguist Emily M. Bender calls such a system a “stochastic parrot.” Just as a parrot repeats “I love you” to get a reward without understanding love, an LLM mimics patterns learned from the internet to satisfy a prompt, blind to the words’ actual meaning.
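To make the “next most likely word” idea concrete, here is a minimal sketch of frequency-based next-word prediction. It is illustrative only: the toy corpus and the bigram counting are stand-ins, since real LLMs are neural networks trained on vast amounts of text, not raw word counts. The core point survives the simplification: the prediction comes from statistics, not from knowing what ice or cream are.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text an LLM is trained on.
corpus = (
    "i love ice cream . "
    "the ice cream melted . "
    "she slipped on the ice rink . "
    "ice cream is my favorite dessert ."
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def next_word_distribution(word):
    """Estimate P(next word | current word) from the toy corpus."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("ice"))
# e.g. {'cream': 0.75, 'rink': 0.25} -- "cream" wins on frequency alone,
# with no notion of what ice or cream actually are.
```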
This is one of many fundamental limitations that LLMs face. Because AI models lack physical senses and have never tasted a lemon or felt heartbreak, they “hallucinate.” This isn’t lying; LLMs are simply doing what they were trained to do: constructing a sentence that looks statistically plausible. They value probability over truth, producing confident errors, like citing court cases that never happened, because they have no objective “reality” to check against.
This is why Yann LeCun, Meta’s Chief AI Scientist, calls current LLMs a “dead end” on the road to true intelligence. His reasoning is grounded in the idea of a “world model.” A child learns gravity by dropping a spoon, building an internal model of cause and effect. An LLM reads millions of descriptions of gravity but remains “disembodied”: it knows the syntax of physics but not the reality of falling objects.
By contrast, humans use language to navigate a shared reality. When a friend texts, “I’m fine,” an LLM sees the word “fine” and classifies it as positive sentiment. A human instead uses Theory of Mind, the ability to attribute mental states to others, to infer context. By reading tone and facial expressions, we realize they might mean “I am devastated.”
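The gap is easy to see in a toy example. The sketch below is a hypothetical, deliberately literal-minded keyword classifier, far cruder than an actual LLM, but it shows the failure mode described above: a word-level reading registers “fine” as positive, while the mental-state inference a friend makes automatically lives entirely outside the text.

```python
# Tiny hand-picked word lists (illustrative assumptions, not a real lexicon).
POSITIVE_WORDS = {"fine", "great", "good", "happy"}
NEGATIVE_WORDS = {"awful", "sad", "terrible", "devastated"}

def literal_sentiment(message: str) -> str:
    """Score a message purely by which words appear in it."""
    words = {w.strip(".,!?'").lower() for w in message.split()}
    score = len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(literal_sentiment("I'm fine."))  # -> "positive"
# A friend reading the same text draws on shared history, tone, and timing --
# context that no word list, and no pattern matcher, can supply on its own.
```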
Furthermore, human understanding is embodied: our language is tied to our bodies. Studies show that when people read words like “kick,” the motor regions of the brain associated with moving their legs light up. We also process emotional weight, as our language centers are wired to the amygdala, the brain’s emotional center. Research shows that reading emotional adjectives triggers specific activity in this region that neutral words do not. To a chatbot, “love” is a sequence of numbers; to a human, it triggers a cascade of chemical reactions in the brain.
AI is a statistical miracle, mapping the patterns of human speech with uncanny fidelity. But language is more than patterns; it is also the transfer of experience and feeling. Until AI has a body with which to feel the world and a social drive to connect with others, it will remain a translator proficient at syntax (structure) but blind to semantics (meaning).
These articles are not intended to serve as medical advice. If you have specific medical concerns, please reach out to your provider.