Human Intelligence in the Age of AI
As artificial intelligence continues its rapid integration into education and society, a critical question emerges: how can AI serve humanity without eroding the very qualities that make us distinct? The Human Intelligence in the Age of AI conference provided a platform for leading thinkers to explore this question, dissecting AI’s role in education, its impact on cognition, and the ways it can either empower or diminish human intelligence.
AI as a Means, Not an End
Sir Anthony Seldon framed AI as a tool, one that should amplify human intelligence rather than replace it. The central challenge, Seldon argued, is ensuring that AI does not strip education of its most human elements. Curiosity, creativity, consciousness, awe, love, and belonging—these are the dimensions of intelligence that cannot be programmed into an algorithm. AI must be embedded in ways that strengthen these qualities rather than undermine them.
Rethinking Intelligence in the Age of AI
Professor Rose Luckin expanded on the evolving nature of intelligence, warning against the over-reliance on AI-generated knowledge. She challenged the tendency to equate intelligence with information processing, arguing instead for an understanding of intelligence that values cognitive effort, epistemic awareness, and metacognitive reflection.
Her assertion that "we treasure what we measure" raised concerns about education's increasing prioritisation of quantifiable outcomes over deep learning. AI, with its ability to generate instant responses, risks diluting intellectual struggle, the very process that forges critical thinkers. She argued that struggle is not a barrier to intelligence but a crucial component of its development, as effortful learning builds deeper understanding and resilience. The challenge, therefore, is not simply to integrate AI but to ensure it reinforces the cognitive processes that enable genuine intelligence to flourish.
Luckin positioned AI as a double-edged sword: it offers humanity the possibility of superintelligence, an augmentation of cognitive capacity that has never before been possible. However, this requires humans to actively engage with AI in a way that enhances their thinking rather than outsourcing it. We must get smarter, she argued, or risk becoming intellectually dependent on systems that are incapable of true comprehension. Without a strong foundation in epistemic cognition, the ability to assess knowledge sources and their validity, students may become passive recipients of AI-generated content rather than active participants in knowledge construction.
Luckin argued for an educational approach that fosters meta-intelligence—the ability to think critically about one’s own learning process. AI should not replace human cognition but should act as a scaffold, enabling students to build their own superintelligence rather than passively consuming machine-generated insights.
The Battle for Attention in an AI-Driven World
Professor Sylvie Delacroix addressed one of AI’s most insidious effects: its disruption of human attention. In a world optimised for efficiency, AI-driven education platforms often eliminate uncertainty, a key driver of deep cognitive engagement. By automating processes and providing instant solutions, AI risks eroding our capacity for sustained focus and intellectual curiosity.
Delacroix warned against cognitive atrophy, drawing on the metaphor of WALL-E, the film in which human capabilities deteriorate in the absence of effortful engagement. She called for a reintegration of tinkerability into education: the freedom to experiment, modify, and play with knowledge. Rather than passive AI-driven learning, students must be encouraged to struggle with uncertainty, challenge their own assumptions, and engage in collective meaning-making.
Her proposed framework for AI in education rested on three guiding principles.
The Atomic Human: Protecting What Cannot Be Digitised
Professor Neil Lawrence explored the nature of intelligence itself. Intelligence, he argued, is deeply contextual, shaped by experience, values, and social interactions.
He cautioned against AI's hidden costs, particularly its erosion of tacit knowledge, the kind of understanding that cannot be easily quantified or transferred to a dataset. As AI systems replace human expertise across domains, they risk diminishing our ability to preserve and transmit this knowledge. As he put it, "the fundamental thing to our intelligence is the narrowness in which we can share."
Lawrence’s concept of the attention captive cycle highlighted how AI-driven dopamine loops, such as algorithmic social media feeds, erode sustained focus and deep thought. He contrasted this with an attention reinvestment cycle, where deliberate, conscious engagement fosters intellectual growth.
He also introduced the atomic human, the essential, indivisible core of human intelligence that cannot be replicated by AI. This includes critical thinking, metacognitive awareness, authentic human connection, and vulnerability. Without deliberate cultivation of these skills and traits, humanity risks intellectual flattening, a world where AI-generated answers replace nuanced human inquiry.
A Collective Vision of Education
Andy Wolfe presented a critique of the education system, arguing that its prevailing narratives are increasingly out of step with the realities of AI-driven learning. He identified four key assumptions that shape current policy.
He urged a reimagining of education’s purpose, moving beyond test preparation to a more expansive vision of learning. Education, he argued, should not be about producing students who can recall facts under pressure, but about cultivating individuals capable of building cathedrals—constructing knowledge, thinking deeply, and engaging meaningfully with the world.
Creativity and AI: The Art of Possibility
Baroness Beeban Kidron challenged the binary thinking that often places technology and creativity at odds. Creativity, she asserted, is not something that needs to be “taught” but something that needs to be unblocked. AI should not be seen as a constraint but as a tool that, when wielded with intention, can expand creative possibilities.
Sarah Ellis echoed this sentiment, exploring the intersection of AI and artistic expression. She argued that, in the hands of artists, AI becomes a medium for new kinds of storytelling and innovation. However, she warned against an uncritical embrace of AI, urging creators to remain aware of the implications of data ownership, authorship, and technological bias.
Ellis concluded: "That chair can be whatever you want it to be." AI, like art, is defined by human intention. It is a tool, not an autonomous force.
Conclusion: Reclaiming the Human in an AI World
The conference underscored a central truth: AI is not inherently good or bad; it is a tool that must be wielded with care. If we allow AI to dictate the terms of learning, we risk eroding the very qualities that make us human. The task ahead is not simply to adapt to AI but to shape it in ways that reinforce what is most valuable about human intelligence. Education must remain, at its core, a deeply human endeavour, one that prioritises struggle over convenience, connection over isolation, and meaning over mere efficiency.