New Standards in AI: Eliminating Bias for Fairer Medical Device Outcomes

In the field of artificial intelligence (AI), ensuring the integrity and fairness of machine learning models is paramount. ISO/IEC TS 12791:2024, a new technical specification, provides comprehensive guidelines to mitigate unwanted bias in AI systems, particularly those used for classification and regression tasks. This standard is crucial for industries where precision and fairness are non-negotiable, such as medical device software.

Understanding Unwanted Bias

Unwanted bias in AI can lead to skewed results, impacting decision-making processes and potentially causing harm, especially in critical sectors like healthcare. Bias can stem from various sources, including data collection, model training, and deployment. For example, a biased algorithm in a medical device could misdiagnose patients from certain demographic groups, leading to unequal treatment and outcomes. ISO/IEC TS 12791:2024 addresses these issues by offering a structured approach to identify, measure, and mitigate bias throughout the AI system lifecycle.

Key Components of the Standard

  1. Bias Identification and Measurement: The standard outlines methods to detect and quantify bias in datasets and models. This includes statistical techniques and fairness metrics that help in understanding the extent and impact of bias (a minimal metric sketch follows this list).
  2. Mitigation Techniques: ISO/IEC TS 12791:2024 provides a range of strategies to reduce bias. These include data preprocessing methods, algorithmic adjustments, and post-processing techniques to ensure that the AI system’s outputs are fair and unbiased.
  3. Lifecycle Integration: The standard emphasizes the importance of integrating bias mitigation practices at every stage of the AI system lifecycle—from data collection and model development to deployment and monitoring. This holistic approach ensures continuous vigilance against bias.
  4. Transparency and Accountability: To foster trust, the standard advocates for transparency in AI system design and decision-making processes. It encourages documentation and reporting of bias mitigation efforts, making it easier for stakeholders to understand and evaluate the fairness of AI systems.
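
To make the measurement component concrete, below is a minimal Python sketch of two commonly used fairness metrics, demographic parity difference and equal opportunity difference. The function names and toy data are illustrative assumptions; the standard does not prescribe specific metrics or acceptance thresholds.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in positive-prediction rates across groups.

    A value near 0 suggests similar selection rates; what gap is
    acceptable is a policy decision, not fixed by the standard.
    """
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_difference(y_true, y_pred, group):
    """Largest gap in true-positive rates (sensitivity) across groups."""
    tprs = []
    for g in np.unique(group):
        positives = (group == g) & (y_true == 1)
        tprs.append(y_pred[positives].mean())  # assumes each group has positives
    return max(tprs) - min(tprs)

# Toy data: binary predictions for two demographic groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(demographic_parity_difference(y_pred, group))        # 0.25
print(equal_opportunity_difference(y_true, y_pred, group))  # ~0.33
```

Which metric is appropriate depends on the clinical context; a screening device and a triage tool may warrant different fairness criteria.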


Compliance Checklist for ISO/IEC TS 12791:2024

To ensure adherence to ISO/IEC TS 12791:2024, the following compliance checklist can be used by organizations, particularly those involved in medical device software development:

1. Bias Identification and Measurement

  • Data Review: Conduct a thorough review of datasets to identify potential sources of bias.
  • Statistical Analysis: Apply statistical techniques to measure bias in data and model outputs (see the sketch after this list).
  • Fairness Metrics: Implement fairness metrics to quantify bias and its impact on model performance.
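
As one illustration of the statistical-analysis step, the sketch below applies a chi-square test of independence to check whether model outcomes are associated with group membership. The contingency counts are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Contingency table of group membership vs. model outcome
# (rows: groups A and B; columns: negative, positive). Counts invented.
table = np.array([
    [180, 120],  # group A
    [220,  80],  # group B
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value indicates outcome rates differ across groups more
# than chance would explain; it flags an association, not its cause.
```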

2. Bias Mitigation Techniques

  • Data Preprocessing: Use techniques such as re-sampling, re-weighting, or data augmentation to address bias in training data (a re-weighting sketch follows this list).
  • Algorithmic Adjustments: Modify algorithms to reduce bias, such as using fairness-aware machine learning models.
  • Post-Processing: Apply post-processing methods to adjust model outputs and ensure fairness.
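
The sketch below illustrates one common preprocessing approach, re-weighting, in the spirit of Kamiran and Calders' reweighing method. It is a simplified sketch, not a procedure prescribed by the standard.

```python
import numpy as np

def reweighing_weights(group, y):
    """Per-sample weights that equalize the joint distribution of
    group membership and label: weight(g, c) = P(g) * P(c) / P(g, c),
    so over-represented (group, label) combinations are down-weighted.
    """
    weights = np.ones(len(y), dtype=float)
    for g in np.unique(group):
        for c in np.unique(y):
            cell = (group == g) & (y == c)
            p_joint = cell.mean()
            if p_joint > 0:
                weights[cell] = (group == g).mean() * (y == c).mean() / p_joint
    return weights

# The weights can be passed to most scikit-learn estimators, e.g.:
#   model.fit(X, y, sample_weight=reweighing_weights(group, y))
```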

3. Lifecycle Integration

  • Data Collection: Ensure diverse and representative data collection practices.
  • Model Development: Integrate bias mitigation strategies during model development and training.
  • Deployment: Monitor AI systems for bias during deployment and make necessary adjustments.
  • Continuous Monitoring: Establish ongoing monitoring processes to detect and address bias throughout the AI system lifecycle (see the sketch after this list).
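
As an illustration of deployment-time monitoring, the sketch below flags production batches whose per-group accuracy gap exceeds a tolerance. The 0.05 tolerance is a hypothetical policy value, not one specified by the standard.

```python
import numpy as np

# Hypothetical policy value: the largest tolerated per-group accuracy
# gap before a deployed model is flagged for review. The standard does
# not fix this number; it comes from the organization's risk analysis.
MAX_ACCURACY_GAP = 0.05

def monitor_batch(y_true, y_pred, group):
    """Check one production batch for per-group performance gaps."""
    accuracies = {
        g: float((y_pred[group == g] == y_true[group == g]).mean())
        for g in np.unique(group)
    }
    gap = max(accuracies.values()) - min(accuracies.values())
    if gap > MAX_ACCURACY_GAP:
        # In practice this would raise an alert into the quality system.
        print(f"ALERT: accuracy gap {gap:.3f} exceeds {MAX_ACCURACY_GAP}")
    return accuracies, gap
```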

4. Transparency and Accountability

  • Documentation: Maintain detailed documentation of bias mitigation efforts, including data sources, preprocessing steps, and model adjustments (a structured-record sketch follows this list).
  • Reporting: Regularly report on bias mitigation activities and outcomes to stakeholders.
  • Stakeholder Engagement: Engage with stakeholders to discuss bias mitigation strategies and gather feedback.
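
A minimal sketch of structured documentation follows. The field names and the dataset name are hypothetical; an organization's quality system would define the actual record format.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class BiasMitigationRecord:
    """Minimal structured record of one bias mitigation activity."""
    dataset: str
    data_review_findings: str
    preprocessing_steps: list = field(default_factory=list)
    fairness_metrics: dict = field(default_factory=dict)
    residual_risk_note: str = ""

record = BiasMitigationRecord(
    dataset="dermatology-images-v3",  # hypothetical dataset name
    data_review_findings="Darker skin tones under-represented in training data.",
    preprocessing_steps=["reweighing", "targeted augmentation"],
    fairness_metrics={"demographic_parity_difference": 0.03},
    residual_risk_note="Gap persists for rare lesion types; kept under monitoring.",
)

# Serialize for the technical documentation / audit trail.
print(json.dumps(asdict(record), indent=2))
```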

5. Governance and Oversight

  • Bias Mitigation Policies: Develop and implement organizational policies for bias mitigation in AI systems.
  • Training and Awareness: Provide training for team members on bias identification and mitigation techniques.
  • Ethical Review: Conduct ethical reviews of AI systems to ensure compliance with bias mitigation standards.

6. Evaluation and Improvement

  • Regular Audits: Perform regular audits of AI systems to assess compliance with ISO/IEC TS 12791:2024 (an audit-check sketch follows this list).
  • Feedback Loop: Establish a feedback loop to continuously improve bias mitigation practices based on audit findings and stakeholder input.
  • Benchmarking: Compare AI system performance against industry benchmarks to ensure best practices in bias mitigation.
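
The sketch below shows how an audit step might compare measured fairness metrics against documented acceptance thresholds. Both the metric names and the threshold values are illustrative assumptions.

```python
def audit_fairness(metrics, thresholds):
    """Compare measured fairness metrics against documented acceptance
    thresholds and return pass/fail findings per metric."""
    findings = {}
    for name, limit in thresholds.items():
        value = metrics.get(name)
        findings[name] = {
            "value": value,
            "threshold": limit,
            "pass": value is not None and value <= limit,
        }
    return findings

findings = audit_fairness(
    metrics={"demographic_parity_difference": 0.03,
             "equal_opportunity_difference": 0.07},
    thresholds={"demographic_parity_difference": 0.05,
                "equal_opportunity_difference": 0.05},
)
for name, result in findings.items():
    print(name, "PASS" if result["pass"] else "FAIL")
```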

Technical Depth

For those seeking a deeper understanding, the standard delves into specific algorithms and statistical techniques for bias mitigation. Techniques such as re-sampling, re-weighting, and data augmentation are used to address bias in training data. Fairness-aware machine learning models and post-processing methods are employed to adjust model outputs and ensure fairness.
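
To illustrate the post-processing category, the sketch below selects a decision threshold per group so that all groups reach roughly the same true-positive rate. This is a simplified sketch in the spirit of equalized-odds post-processing, not the exact procedure from the literature or from the standard.

```python
import numpy as np

def group_thresholds(scores, y_true, group, target_tpr=0.90):
    """Choose a decision threshold per group so each group reaches
    roughly the same true-positive rate. Assumes every group has
    labeled positive cases in the calibration data.
    """
    thresholds = {}
    for g in np.unique(group):
        positives = np.sort(scores[(group == g) & (y_true == 1)])
        # Index below which (1 - target_tpr) of positive scores fall.
        idx = int((1 - target_tpr) * len(positives))
        thresholds[g] = positives[min(idx, len(positives) - 1)]
    return thresholds

def predict_with_thresholds(scores, group, thresholds):
    """Apply each sample's group-specific threshold."""
    return np.array(
        [int(scores[i] >= thresholds[g]) for i, g in enumerate(group)]
    )
```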

Regulatory Landscape

ISO/IEC TS 12791:2024 aligns with other relevant regulations and standards, such as those from the FDA and other regulatory bodies. For instance, the FDA’s guidelines on AI and machine learning in medical devices emphasize the importance of transparency, accountability, and bias mitigation, which are core principles of ISO/IEC TS 12791:2024.

Case Studies

Real-world examples highlight the critical importance of addressing bias in AI systems. For instance, a study found that an AI system used for diagnosing skin cancer was less accurate for patients with darker skin tones due to biased training data. In another case, an AI tool for predicting patient outcomes in emergency rooms showed bias against certain demographic groups, leading to disparities in care. These examples underscore the necessity of rigorous bias mitigation practices as outlined in ISO/IEC TS 12791:2024.

Ethical Considerations

The ethical implications of AI bias are profound. Biased AI systems can perpetuate discrimination and inequality, leading to significant harm, especially in healthcare. Ensuring fairness and equity in AI systems is not just a technical challenge but a moral imperative. ISO/IEC TS 12791:2024 provides a framework to address these ethical concerns, promoting the development of AI systems that are both effective and just.

Global Perspective

Given the global nature of AI and medical device regulation, ISO/IEC TS 12791:2024 has the potential to influence international standards and regulations. By setting a high bar for bias mitigation, this standard can drive global harmonization of AI practices, ensuring that AI systems worldwide adhere to principles of fairness and equity. This alignment can facilitate international collaboration and trust in AI technologies.

Future Directions

Looking ahead, future trends in AI bias mitigation include the development of explainable AI and adversarial machine learning. Explainable AI aims to make AI systems more transparent and understandable, while adversarial machine learning focuses on improving the robustness of AI systems against biased inputs and attacks.

Implications for Medical Device Software

For the medical device industry, where software plays a critical role in diagnostics and treatment, adhering to ISO/IEC TS 12791:2024 is essential. Unbiased AI systems can enhance the accuracy and reliability of medical devices, leading to better patient outcomes. By implementing the guidelines of this standard, medical device manufacturers can ensure their AI-driven solutions are both effective and equitable.


Conclusion

ISO/IEC TS 12791:2024 represents a significant step forward in the quest for fair and unbiased AI systems. For professionals in the medical device software industry, this standard can provide a robust framework to address bias, ensuring that AI technologies contribute positively to healthcare advancements. Embracing these guidelines will not only enhance the quality of AI systems but also build trust and confidence among users and stakeholders.


