How Can We Help Address Hidden Bias in Medical AI: Let's Discuss MINIMAR and PROBAST-AI

As someone who’s spent years in healthcare, I’ve seen firsthand how hard it is to make the right call when time, resources, and information are limited. That’s why the promise of artificial intelligence (AI) in medicine is so compelling—it offers speed, support, and (potentially) better decisions. But let’s be honest: if we don’t ask the right questions about how these tools are built and evaluated, we risk doing more harm than good.

That’s where MINIMAR and PROBAST-AI come in. Think of them as BS detectors for medical AI—tools that help us spot hype, bias, and gaps before an algorithm gets near a patient.

So, what are they?

MINIMAR (Minimum Information for Medical AI Reporting) is like a checklist for transparency. It makes sure AI researchers tell us what we need to know:

  • Who was in the dataset (and who wasn’t),
  • What kind of data the model used,
  • How it was tested, and
  • Whether it can actually work in different settings and patient populations.

It’s about moving past glossy performance stats and asking, “Can this model help my patient? In my hospital? With our constraints?”
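
To make that less abstract, here’s a minimal sketch of what a MINIMAR-style transparency check could look like in code. The field names are my own illustrative shorthand for the kinds of items MINIMAR asks authors to disclose, not the official checklist wording.

```python
from dataclasses import dataclass, fields
from typing import Optional

# Illustrative MINIMAR-style report card. The fields are my own shorthand
# for the kinds of items MINIMAR asks for, not the official checklist.
@dataclass
class ModelReport:
    data_source: Optional[str] = None        # e.g., "single-center EHR, 2015-2020"
    population_described: bool = False       # who was (and wasn't) in the dataset
    input_data_types: Optional[str] = None   # e.g., "labs, vitals, clinical notes"
    evaluation_method: Optional[str] = None  # e.g., "temporal holdout, external site"
    external_validation: bool = False        # tested outside the training setting?

def missing_items(report: ModelReport) -> list:
    """Return the transparency items a reader still can't answer."""
    gaps = []
    for f in fields(report):
        if getattr(report, f.name) in (None, False):
            gaps.append(f.name)
    return gaps

# A paper that only reports headline accuracy would fail most items:
vendor_claim = ModelReport(input_data_types="chest X-rays")
print(missing_items(vendor_claim))
# ['data_source', 'population_described', 'evaluation_method', 'external_validation']
```

The point of structuring it this way is that a blank field is visible: if a vendor or a paper can’t fill in a line, that gap itself is the finding.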

PROBAST-AI, on the other hand, is more like a quality-control instrument: an AI-specific extension of PROBAST (the Prediction model Risk Of Bias ASsessment Tool). It’s designed to uncover hidden risks in how AI prediction models are built and validated. It examines four domains:

  1. Who the participants were,
  2. What predictors (input variables) were used,
  3. How outcomes were defined, and
  4. How the model was analyzed.

It’s a structured way of asking: Was this model built responsibly? Could it fail in ways we don’t yet see?
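
To show how that kind of structured judgment rolls up, here’s a small sketch in the same spirit. The any-high-domain-means-high-overall rule mirrors how PROBAST-style assessments are typically summarized; the ratings themselves are invented for illustration, and this is not the official instrument.

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"
    HIGH = "high"
    UNCLEAR = "unclear"

# The four domains from the list above.
DOMAINS = ("participants", "predictors", "outcome", "analysis")

def overall_risk(ratings: dict) -> Risk:
    """Roll up per-domain judgments: any high-risk domain makes the whole
    model high risk; any unclear domain keeps the overall judgment unclear."""
    values = [ratings[d] for d in DOMAINS]
    if Risk.HIGH in values:
        return Risk.HIGH
    if Risk.UNCLEAR in values:
        return Risk.UNCLEAR
    return Risk.LOW

# Example: a model built on a convenience sample but otherwise sound.
ratings = {
    "participants": Risk.HIGH,   # non-representative cohort
    "predictors": Risk.LOW,
    "outcome": Risk.LOW,
    "analysis": Risk.UNCLEAR,    # e.g., no calibration reported
}
print(overall_risk(ratings))  # Risk.HIGH
```

Notice that one weak domain is enough: a model can have flawless analysis and still be high risk overall because of who it was built on.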

Why is this so important?

AI bias isn’t always loud or obvious. Sometimes, it’s subtle—like underperforming in older patients or recommending unnecessary imaging in low-income settings because it was trained on data from somewhere else. When we don’t examine where the data came from or how the model was built, we risk embedding those biases into clinical care.
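
One practical way to catch that kind of quiet failure is to audit performance by subgroup instead of trusting a single headline metric. Below is a minimal sketch on simulated data; the ages, labels, and scores are all made up purely to show the pattern.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Simulated data: model scores, true labels, and patient ages.
rng = np.random.default_rng(0)
n = 2000
age = rng.integers(20, 95, size=n)
y_true = rng.integers(0, 2, size=n)
# Simulate a model that is noisier (weaker) for patients over 75.
noise = np.where(age > 75, 0.8, 0.3)
y_score = y_true + rng.normal(0, noise, size=n)

# The headline number hides the gap...
print(f"overall AUROC: {roc_auc_score(y_true, y_score):.2f}")

# ...a subgroup audit exposes it.
for name, mask in [("<= 75", age <= 75), ("> 75", age > 75)]:
    print(f"age {name}: AUROC {roc_auc_score(y_true[mask], y_score[mask]):.2f}")
```

The same pattern works for any subgroup you care about: site, sex, insurance status, language. If a model’s report never shows these breakdowns, that’s exactly the omission MINIMAR and PROBAST-AI are built to flag.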

MINIMAR and PROBAST-AI help us stop that from happening.

Take-home message

We don’t need to be data scientists to ask the right questions. As clinicians, administrators, and researchers, we need frameworks like MINIMAR and PROBAST-AI to guide us.

Let’s stay mindful of how we shape AI so that it becomes not just smarter but fairer and truly serves everyone.



#HealthcareAI #PatientSafety #BiasInAI #ClinicalDecisionSupport #AIinMedicine #ResponsibleAI #HealthEquity #AnesthesiaAI #MedicalInformatics
