Are We Just Language Models? What Split-Brain Experiments Reveal About LLMs
Evolutionary psychologist Robert Kurzban has revived a classic question in cognitive science: "Is the human mind a unified entity, or just a confederation of disconnected modules pretending to be one?" This idea, originally shaped by Michael Gazzaniga’s split-brain experiments and Robert Trivers’ theory of self-deception, challenges our deepest intuitions about consciousness, introspection, and rationality.
But here’s a twist: the way humans construct self-narratives may not be that different from how LLMs generate language.
Split-Brain Patients and the Illusion of a Unified Self
In the classic split-brain studies (Sperry and Gazzaniga, 1960s–70s), patients whose corpus callosum, the structure connecting the brain’s two hemispheres, had been surgically severed were found to behave as if they had two minds. The right hemisphere could act or make a choice, while the left hemisphere, the one that speaks, would "confabulate" an explanation for it despite having no access to the actual motive.
As Kurzban writes in The Hidden Agenda of the Mind (2023, Italian edition), asking what a “patient” thinks is meaningless:
“There is no single ‘patient’ — the brain is composed of separate, disconnected modules. Asking what ‘it’ thinks is like asking a committee for a single answer when half its members are silenced.”
The speaking module (usually the left hemisphere) does what language does best: construct a plausible story. Truth is optional.
Large Language Models: The Same Trick at Scale?
LLMs like GPT-4 or Claude 3 don’t have beliefs, goals, or a self. But they do something eerily similar: given a prompt, they produce a fluent, confident continuation one token at a time, with no access to the truth behind their words, only to statistical patterns in their training data.
In essence: LLMs are masters of post-hoc rationalization, just like the human brain’s left hemisphere in split-brain cases.
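To make "generated forward to predict the next token" concrete, here is a deliberately minimal sketch: a toy bigram model in plain Python. The corpus, the generate function, and every count below are invented for illustration; real models like GPT-4 or Claude 3 are transformers trained on vastly more data, but the mechanism being illustrated is the same: the generator only follows statistical patterns, with no channel to truth.

```python
# A toy bigram "language model" (illustrative sketch only, nothing like a
# real transformer): it generates text purely by following which word tended
# to follow which in its tiny training corpus. No notion of truth anywhere.
import random
from collections import defaultdict

corpus = ("the left hemisphere constructs a plausible story . "
          "the model constructs a plausible continuation . "
          "the story sounds confident .").split()

# Record the "statistical patterns": which tokens follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    """Generate forward, one token at a time, by sampling observed successors."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # dead end: this token was never followed by anything
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))  # fluent-sounding, confident, and grounded in nothing
```

Swap the toy counts for a trillion-parameter transformer and the loop is conceptually the same: each step picks a plausible next token, not a true one.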
Similarities: Language as a Narrator, Not a Truth Engine
| Concept | Human Mind | LLMs |
| --- | --- | --- |
| Narration | Generated post hoc to rationalize actions | Generated forward to predict the next token |
| Access to truth | Partial, filtered, biased | None, only statistical patterns |
| Modularity | Biological and evolved (Trivers, Kurzban) | Prompt-induced, functional |
| Hallucination / deception | Evolutionary advantage (Trivers) | Emergent byproduct of training data |
| Introspective ability | Illusory (Nisbett & Wilson, 1977) | Impossible (black-box architecture) |
Key Difference: Evolution vs. Prediction
The biggest difference is intentionality: on Trivers’ account, human self-deception evolved because believing our own story makes us more convincing to others, whereas an LLM’s confabulations serve no purpose at all; they are a side effect of a training objective that only rewards predicting the next token.
In other words: we lie to impress others. LLMs lie because they were trained to autocomplete.
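To see what "trained to autocomplete" means mechanically, here is a hedged sketch of the standard training signal for autoregressive language models: cross-entropy between the model’s predicted next-token distribution and the token that actually appears in the training text. The prompt and probabilities below are made up for illustration; the point is that nothing in this loss asks whether a continuation is true, only whether it matches the corpus.

```python
# Cross-entropy on a single (made-up) next-token prediction.
# Prompt: "The capital of France is". The model assigns probabilities
# to candidate next tokens; training only rewards matching the corpus.
import math

predicted = {"paris": 0.6, "rome": 0.3, "london": 0.1}  # hypothetical model output
observed_next_token = "paris"                           # whatever the training text said

loss = -math.log(predicted[observed_next_token])
print(f"cross-entropy loss: {loss:.3f}")  # ~0.511

# If the training text had (wrongly) said "rome", the same machinery would
# push the model toward "rome": the objective is autocomplete, not truth.
```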
So… Are We Just LLMs with a Body?
Not quite. But the analogy is powerful, and disturbing. Both systems construct fluent, confident narratives out of fragmented signals, without privileged access to the truth behind them.
This raises a provocative question for anyone in AI, behavioral science, or leadership:
If both minds and models construct fictions to make sense of fragmented signals — how much of our reasoning is just well-dressed noise?
Further Reading / References:
- Gazzaniga, M. S. (1998). "The Split Brain Revisited." Scientific American, 279(1).
- Nisbett, R. E., & Wilson, T. D. (1977). "Telling More Than We Can Know: Verbal Reports on Mental Processes." Psychological Review, 84(3), 231–259.
- Trivers, R. (2011). The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life. Basic Books.
- Kurzban, R. (2010). Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind. Princeton University Press.
#AI #CognitiveScience #LLMs #SelfDeception #SplitBrain #NarrativePsychology