Are We Just Language Models? What Split-Brain Experiments Reveal About LLMs

In Why Everyone (Else) Is a Hypocrite, psychologist Robert Kurzban revived a classic question in cognitive science: "Is the human mind a unified entity, or just a confederation of disconnected modules pretending to be one?" This idea, originally shaped by Michael Gazzaniga’s split-brain experiments and Robert Trivers’ theory of self-deception, challenges our deepest intuitions about consciousness, introspection, and rationality.

But here’s a twist: the way humans construct self-narratives may not be that different from how LLMs generate language.

Split-Brain Patients and the Illusion of a Unified Self

In classic split-brain studies (Sperry & Gazzaniga, 1960s–70s), patients whose corpus callosum — the structure connecting the brain’s hemispheres — had been severed were found to behave as if they had two minds. One hemisphere (typically the right) could act or make a decision, while the other (typically the speaking left hemisphere) would "confabulate" an explanation, despite having no access to the actual motive.

As Kurzban writes (The Hidden Agenda of the Mind, Italian edition, 2023), asking what a “patient” thinks is meaningless:

“There is no single ‘patient’ — the brain is composed of separate, disconnected modules. Asking what ‘it’ thinks is like asking a committee for a single answer when half its members are silenced.”

The speaking module (usually the left hemisphere) does what language does best: construct a plausible story. Truth is optional.


Large Language Models: The Same Trick at Scale?

LLMs like GPT-4 or Claude 3 don’t have beliefs, goals, or a self. But they do something eerily similar:

  • They generate coherent narratives based solely on prior tokens.
  • They often hallucinate facts, but in a linguistically convincing way.
  • They cannot introspect or “know” why they said something — because there is no “they.”

In essence: LLMs are masters of post-hoc rationalization, just like the human brain’s left hemisphere in split-brain cases.
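To make the autoregressive point above concrete, here is a minimal sketch in Python — a hypothetical toy bigram model over a made-up corpus, not how GPT-4 or Claude actually work internally. It shows the core loop: each step conditions only on the tokens produced so far and samples whatever is statistically plausible, with no check against ground truth and no record of "why".

```python
# Toy autoregressive generator (illustrative sketch only).
# Assumptions: a tiny invented corpus and a bigram model; real LLMs use
# transformers over long contexts, but the sampling loop has the same shape.
import random
from collections import Counter, defaultdict

corpus = (
    "the patient reached for the key because the key opened the door "
    "the patient reached for the shovel because the shovel cleared the snow"
).split()

# The model's only "knowledge": how often each token follows another.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt: str, length: int = 10, seed: int = 0) -> str:
    """Continue the prompt one token at a time, conditioned only on the last token."""
    random.seed(seed)
    tokens = prompt.split()
    for _ in range(length):
        options = follows.get(tokens[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        # Sample in proportion to frequency: plausibility, not truth.
        tokens.append(random.choices(words, weights=counts, k=1)[0])
    return " ".join(tokens)

# Ask it "why" and it simply keeps completing — fluent, confident, unmoored.
print(generate("why did the patient reach for the"))
```

The output is a fluent continuation of the prompt; whether it matches any real motive is irrelevant to the sampling rule — which is exactly the confabulation pattern described above.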


Similarities: Language as a Narrator, Not a Truth Engine

Concept | Human Mind | LLMs
Narration | Generated post hoc to rationalize actions | Generated forward to predict the next token
Access to truth | Partial, filtered, biased | None, only statistical patterns
Modularity | Biological and evolved (Trivers, Kurzban) | Prompt-induced, functional
Hallucination / Deception | Evolutionary advantage (Trivers) | Emergent byproduct of training data
Introspective ability | Illusory (Nisbett & Wilson, 1977) | Impossible (black-box architecture)


Key Difference: Evolution vs. Prediction

The biggest difference is intentionality:

  • Humans evolved self-deception as a strategy (Trivers, 2000) — to better deceive others, we must first deceive ourselves.
  • LLMs hallucinate without purpose. There is no “strategic advantage” to their confabulations — just a side effect of predicting the next word.

In other words: we deceive ourselves so we can deceive others more convincingly. LLMs "lie" because they were trained to autocomplete.


So… Are We Just LLMs with a Body?

Not quite. But the analogy is powerful — and disturbing. Both systems:

  • Use language not to reveal the truth, but to construct plausible narratives.
  • Can’t access the “real” source of decisions or knowledge.
  • Are modular, distributed, and inherently disconnected from any ground truth.

This raises a provocative question for anyone in AI, behavioral science, or leadership:

If both minds and models construct fictions to make sense of fragmented signals — how much of our reasoning is just well-dressed noise?


Further Reading / References:

  • R. Kurzban (2010). Why Everyone (Else) Is a Hypocrite.
  • R. Trivers (2000). The Elements of a Scientific Theory of Self-Deception.
  • M. Gazzaniga (1985). The Social Brain.
  • Nisbett & Wilson (1977). Telling More Than We Can Know: Verbal Reports on Mental Processes.
  • OpenAI and Anthropic (2023–24). Studies on LLM hallucination, confabulation, and alignment faking.


#AI #CognitiveScience #LLMs #SelfDeception #SplitBrain #NarrativePsychology

