Thinking, Fast and Slow—But Is It Time to Rethink How We Think?

The Joke That Stuck With Me

In 2012, I attended a book signing with Daniel Kahneman and David Baddiel. As I approached the table, Kahneman looked up with a wry smile and said:

“I’ve never met anyone who has read my book—only people who are reading it.”

The room laughed, but there was an undeniable truth in his words. Thinking, Fast and Slow isn’t just a book you read; it’s a book you wrestle with. Every page forces you to stop, reflect, and question your own decisions.

Kahneman, who won the Nobel Prize in Economic Sciences, fundamentally reshaped how we understand human decision-making. His research, much of it developed alongside Amos Tversky, exposed the hidden biases and mental shortcuts that shape our judgments—often without us realising it. When he passed away last year, the world lost one of the greatest minds in behavioural science.

But his ideas live on. And in today’s AI-driven world, they feel more relevant than ever.

Confession: I Never Finished His Book

I have to be truthful: I am one of those people who never read Thinking, Fast and Slow from cover to cover. It’s a mighty tome, and like many, I started with good intentions before realising that every new concept demanded deep reflection, time to think and process.

This all said, there are two concepts from his work that I will never forget:

1. System 1 – Fast, intuitive thinking

Our gut reactions, emotions, and snap judgments: powerful, but often flawed.

2. System 2 – Slow, analytical thinking

Deliberate, logical, and strategic, but effortful and prone to exhaustion.

These two modes of thinking shape everything, from the way we make business decisions to how we perceive risks.

What’s more, Kahneman revealed that our brains are wired to favour System 1’s fast, instinctive thinking, even when System 2 should take over. And that’s where the trouble begins.


The Problem CEOs Face Today

As CEOs, we like to believe we’re operating in System 2—rational, strategic, and in control. But the truth? System 1 often dominates, and we don’t even realise it.

• We trust our instincts in hiring decisions, only to later realise we overlooked key data.

• We rely on past experiences to predict the future, even when the landscape has fundamentally changed.

• We assume we understand our customers, but our biases cloud our view of what they really want.

Kahneman’s research on loss aversion is particularly relevant to business leaders today. He found that people fear losses roughly twice as much as they value equivalent gains, meaning we often resist change even when the logic suggests we should embrace it.
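Kahneman and Tversky’s prospect theory gives this asymmetry a concrete shape. Here is a minimal sketch in Python, using the commonly cited parameters from their 1992 paper (α ≈ 0.88, loss-aversion coefficient λ ≈ 2.25) — the exact values vary across studies, so treat the numbers as illustrative:

```python
# Prospect-theory value function (Tversky & Kahneman, 1992 form).
# Losses are scaled by a loss-aversion coefficient lam: with lam ≈ 2,
# a loss "hurts" roughly twice as much as an equal gain "pleases".

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a monetary gain or loss x."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A fair 50/50 coin flip for +/-100 has zero expected monetary value,
# but its expected *subjective* value is negative -- so we decline it.
expected = 0.5 * value(100) + 0.5 * value(-100)
print(expected < 0)  # prints True
```

This is why a leader offered an even-odds bet between a gain and an equal loss will usually walk away: the downside simply looms larger than the upside.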

And now, we are leading organisations where AI is making decisions alongside us.

Which raises an important question: is it time to update Kahneman’s model for the AI era?


Beyond Fast and Slow: Two More Systems for an AI Era

Kahneman’s framework revolutionised our understanding of human decision-making. But in an AI-driven world, are two systems enough?

I believe that, for CEOs navigating AI-driven organisations, we now need four systems of thinking:

  • System 1 – Fast, intuitive human thinking

Our gut reactions, based on experience and emotion. Sometimes right, often wrong.

  • System 2 – Slow, analytical human thinking

Deliberate, logical decision-making—but limited by bias and cognitive load.

  • System 3 – Augmented Human-Machine Teaming

AI-assisted decision-making, where humans and AI work together to enhance judgment. Think AI-powered risk assessments, predictive analytics, or AI copilots in strategic planning.

  • System 4 – Fully Automated AI Decision-Making

Decisions made entirely by AI—autonomous trading systems, fraud detection, self-optimising logistics, where speed and complexity exceed human capabilities.

If CEOs only think in terms of System 1 and System 2, they risk mismanaging AI’s role in their organisations, either over-relying on intuition or hesitating when automation could drive better results. But if we acknowledge System 3 and System 4, we can take a more strategic approach: ensuring AI serves us, rather than replacing human judgment altogether.


The Future: Human-Centric AI Decision-Making

Kahneman’s research reminds us that our biases are invisible to us. We don’t realise when we’re overconfident, when we’re simplifying a complex reality, or when we’re being influenced by irrelevant information.

And this is where AI presents both an opportunity and a risk.

AI doesn’t suffer from cognitive biases in the same way humans do. It doesn’t fear losses, favour recent experiences, or make emotional decisions. In some cases, it can correct for human errors. But AI is only as good as the data and algorithms behind it, and these, in turn, are built by humans. If we’re not careful, we risk embedding biases into AI systems and amplifying them at scale.

So in a world where AI is shaping industries, leadership isn’t just about making the right decisions—it’s about knowing who (or what) should make them.

• When should we trust our instincts? (System 1)

• When should we pause and analyse? (System 2)

• When should we leverage AI to assist our judgment? (System 3)

• And when should we let AI take the lead entirely? (System 4)

Kahneman’s work was about making us aware of our biases. Now, the challenge for CEOs is ensuring that AI doesn’t just inherit those biases but actively helps us overcome them. That requires a balance—using AI where it enhances human decision-making but keeping a human touch where it matters most.


Final Thought: What Would Kahneman Say?

If Kahneman were here today, would he challenge us to expand our thinking?

Would he argue that in an AI-driven world, it’s no longer just about thinking fast or thinking slow—but about knowing when to think, who should think, and how to ensure AI thinks with us, not for us?

As leaders, we need to ask ourselves: are Systems 3 and 4 the missing piece in how we lead in an AI age?

Join the Conversation: What Does It Mean to Be Human in a World of AI?

These questions go beyond business strategy—they touch on what it means to be human in an age of automation.

Because as AI increasingly shapes our world, it matters that we get this right.

If these ideas resonate with you, follow me for more thought-provoking discussions on how we balance AI with a human touch, how we rethink decision-making, and how we ensure technology serves us—not the other way around.

What do you think? How should leaders adapt their thinking in an AI-driven world? Let’s start a conversation.

James A Lang