Who Is Responsible When AI Makes the Wrong Decision?


I was in London last week, meeting top executives in the AI space—leaders from enterprises, startups, and regulatory bodies shaping the future of artificial intelligence. The discussions were wide-ranging, touching on everything from generative AI’s potential to the evolving landscape of AI infrastructure. But one topic kept surfacing in every conversation: AI ethics and accountability. According to an IBM Institute for Business Value (IBV) study, 78% of global executives believe that robust AI ethics and accountability measures are critical to long-term business success, and 74% agree that transparency and explainability in AI decision-making are essential to building trust with customers and regulators. With 67% of businesses reporting that insufficient oversight in AI deployment has led to unintended errors, the risks associated with AI-driven failures were top of mind for everyone. Yet, when things go wrong, who takes responsibility? From financial services to healthcare and autonomous systems, the question of accountability remains unresolved, and the stakes couldn’t be higher.


The AI Accountability Puzzle: Where Does Responsibility Lie?

AI systems are designed to make predictions, automate tasks, and optimize outcomes. But what happens when those predictions go wrong? AI models, especially deep learning systems, function as black boxes—meaning even their creators may not fully understand how they reach decisions. This opacity makes assigning responsibility a challenge.

Let’s break down the key players in the AI ecosystem and their roles in accountability:

1. AI Developers and Engineers

  • AI models are created by teams of data scientists, engineers, and researchers.
  • While they build the algorithms, they don’t always control how those algorithms are deployed or used.

2. Companies Deploying AI Systems

  • Businesses that use AI for hiring, lending, or fraud detection often rely on pre-built models or third-party services.
  • If an AI system discriminates against job applicants or falsely denies a loan, is the company or the AI vendor at fault?

3. AI System Operators

  • AI often requires human oversight, such as doctors using AI to assist in medical diagnoses or traders leveraging AI for financial analysis.
  • But if a doctor follows a faulty AI recommendation, who is accountable—the physician or the AI provider?

4. Regulators and Policymakers

  • Governments are increasingly stepping in to regulate AI.
  • The EU AI Act, for instance, categorizes AI systems based on risk levels, ensuring high-risk AI applications (e.g., healthcare, finance, legal decisions) have strict accountability frameworks.

5. The End-Users

  • Many AI-powered applications involve direct interaction with users (e.g., chatbots, AI financial advisors).
  • If a consumer follows bad advice from an AI-driven financial app and loses money, should they bear the risk alone?


Real-World AI Failures and Their Accountability Challenges

1. AI in Hiring: The Bias Problem

In 2018, Amazon scrapped an AI-powered hiring tool after discovering it discriminated against female applicants. The system had learned from historical hiring data, which was biased toward male candidates.

  • Who was responsible? Amazon’s engineering team? The HR department that used the model? The AI itself?

Outcome: The tool was shut down, but the case highlighted the risk of unchecked AI bias.
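
A lightweight statistical audit can surface this kind of skew before a model ships. The sketch below is a hypothetical, minimal example: the data and the 0.8 threshold (borrowed from the US EEOC "four-fifths" heuristic) are illustrative assumptions, not Amazon's actual pipeline. It simply compares selection rates across groups and flags potential disparate impact.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Fraction of positive outcomes per group.

    `decisions` is a list of (group, selected) pairs, where
    `selected` is True if the model recommended the candidate.
    """
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of lowest to highest group selection rate.

    Under the EEOC "four-fifths rule" heuristic, a ratio below
    0.8 is a common red flag for adverse impact (a signal to
    investigate, not a legal verdict).
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (group, did the model recommend an interview?)
decisions = [("men", True)] * 60 + [("men", False)] * 40 \
          + [("women", True)] * 30 + [("women", False)] * 70

rates = selection_rates(decisions)
print(rates)  # {'men': 0.6, 'women': 0.3}
print(f"disparate impact ratio: {disparate_impact_ratio(rates):.2f}")  # 0.50 -> flag
```

An audit like this does not fix biased training data, but it turns "the model seems biased" into a measurable, reviewable artifact that an accountability process can act on.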

2. AI in Healthcare: A Fatal Diagnosis

A patient in the UK was misdiagnosed by an AI-powered radiology system, leading to delayed treatment. The hospital relied on the AI’s assessment without further human verification.

  • Who was responsible? The AI company that developed the diagnostic tool? The hospital that integrated the system into patient care? The doctors who followed the AI’s recommendation?

Outcome: Legal battles ensued, and regulatory bodies began investigating AI’s role in medical decision-making.

3. AI in Autonomous Driving

Tesla’s Autopilot feature has been involved in several fatal crashes, with investigations showing that drivers over-relied on the system, assuming it was more autonomous than it actually was.

  • Who was responsible? Tesla, for marketing Autopilot as an "assisted driving" system that consumers may have misunderstood? The drivers, for failing to intervene in time? Regulators, for not setting clearer standards on semi-autonomous vehicles?

Outcome: Some lawsuits held Tesla partly responsible, but liability remains a gray area in autonomous vehicle regulations.


Legal and Ethical Implications: The Lack of AI-Specific Laws

Unlike human errors, AI failures don’t fit neatly into existing legal frameworks. Most laws are built around human decision-makers, making it difficult to assign direct liability to an AI system.

Current Legal Approaches to AI Accountability

🔹 Product Liability Laws – If an AI system is treated as a "product," liability can fall on the manufacturer, much as automakers are liable for defective vehicles. However, AI models evolve dynamically after deployment, unlike static products.

🔹 Negligence Laws – Companies using AI could be held negligent if they fail to properly train or monitor their AI systems. But proving negligence in AI-driven decisions is challenging, because the standard of care for building and overseeing AI systems is not yet well established.

🔹 AI-Specific Regulations (EU AI Act, U.S. AI Executive Order) – New regulations aim to create clearer accountability structures, mandating transparency, explainability, and human oversight in AI decisions.


Building an AI Accountability Framework: What Needs to Change?

1️⃣ Regulatory Clarity and AI Risk Classification

  • Governments must establish clear guidelines for AI accountability based on risk levels.
  • High-impact AI systems (e.g., in healthcare, finance, criminal justice) should require human oversight.
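
To make those two points concrete, here is a minimal sketch of risk-tiered gating, loosely inspired by the EU AI Act's risk categories. The tier names, the use-case mapping, and the confidence threshold are illustrative assumptions, not any regulator's actual scheme; the point is simply that high-risk decisions get routed to a human instead of being auto-applied.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1       # e.g., spam filtering
    LIMITED = 2       # e.g., chatbots (transparency duties)
    HIGH = 3          # e.g., hiring, credit, medical triage
    UNACCEPTABLE = 4  # e.g., social scoring (prohibited)

# Illustrative mapping of use cases to tiers (not the EU AI Act's
# legal text; a real system would encode the regulation itself).
USE_CASE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "support_chatbot": RiskTier.LIMITED,
    "loan_approval": RiskTier.HIGH,
    "radiology_triage": RiskTier.HIGH,
}

def decide(use_case: str, model_decision: str, confidence: float) -> str:
    """Gate a model's output according to its risk tier."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)  # default to cautious
    if tier is RiskTier.UNACCEPTABLE:
        raise PermissionError(f"{use_case}: deployment not permitted")
    if tier is RiskTier.HIGH and confidence < 0.99:
        # High-risk calls go to a human reviewer; many frameworks
        # would require human review regardless of model confidence.
        return f"ESCALATE to human reviewer (model suggests: {model_decision})"
    return f"AUTO-APPLY: {model_decision}"

print(decide("support_chatbot", "answer FAQ", 0.72))  # auto-applied
print(decide("loan_approval", "deny", 0.91))          # escalated
```

The design choice worth noting: the escalation path, not the model, is what creates an accountable human decision point, and the log of who reviewed what becomes the liability record.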

2️⃣ Explainability and AI Transparency

  • Companies deploying AI must document and explain how their AI models make decisions.
  • Black-box models should be restricted in high-risk applications.
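
"Document and explain" can start with something as simple as model-agnostic feature attribution. The sketch below uses scikit-learn's permutation importance on synthetic data; it is one common technique among many (SHAP, LIME, counterfactuals), not a complete explainability program, and the data and model here are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular decision problem (e.g., lending).
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle each feature and measure the score drop.
# A large drop means the model leans heavily on that feature, which is
# exactly the kind of fact a deployment dossier should record.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature_{i}: {mean:.3f} +/- {std:.3f}")
```

If a feature that should be irrelevant (say, a proxy for gender or postcode) shows high importance, that is a documented, reviewable signal long before a regulator or a plaintiff asks for one.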

3️⃣ AI Ethics Boards in Enterprises

  • Businesses should create AI ethics committees to oversee AI deployment, ensuring accountability before harm occurs.

4️⃣ Insurance and Liability Policies for AI

  • The insurance industry must adapt to AI risks, creating AI liability insurance models for businesses deploying AI.

5️⃣ Shared Responsibility Models

  • Governments, businesses, and technology developers must share responsibility in AI deployment.
  • Similar to cybersecurity shared responsibility models, AI accountability should be a multi-stakeholder effort.


Final Thoughts: The Need for Proactive Accountability

As AI continues to reshape industries, the question of responsibility in AI failures remains one of the most pressing ethical and legal challenges of our time. Instead of waiting for AI-driven disasters to dictate accountability, businesses, regulators, and society must act proactively to establish governance frameworks, ethical guidelines, and liability structures.

AI is not just about technological innovation—it’s about trust. Ensuring clear accountability is the first step toward building AI systems that are both powerful and responsible.


What’s Next?

📌 Should AI systems have legal personhood, similar to corporations?
📌 How can AI companies and regulators collaborate to prevent high-risk AI failures?

Let’s continue the conversation. What are your thoughts on AI accountability? Drop a comment below! 🚀


