Addressing Bias in Healthcare AI: A Guide for Developers and Clinicians

by Pravin Tiwari, Executive Vice President and Business Unit Head, FPT Software | 12/06/2024


The promise of AI is greater efficiency in solving complex problems. But with that promise comes the responsibility to understand and apply principles of governance, ethics, and reliability. Developing and deploying artificial intelligence (AI) in a safe, trustworthy, and ethical fashion is therefore essential. With 94% of IT leaders saying more attention should be paid to responsible AI development, the healthcare industry must devise strategies to address its current AI challenges.

Training Data and Algorithmic Bias

Because data sensitivity (the risk associated with exposure, unauthorized access, or misuse of specific data) creates barriers to compiling the data sets required for machine learning (ML) training, training data frequently carries bias. When this occurs, the patient cohort does not represent the wider population. Similarly, if the training data lacks diversity, the resulting algorithms may fail certain demographic groups. These and other factors breed mistrust among health professionals and patients, and the issue is exacerbated in the US, where the lack of standardized data formats across EHR systems further slows data access. Studies have shown that integrating data from various sources often requires extensive data cleaning and normalization, which delays research timelines.

Further, many institutions use different health information systems, complicating sharing and aggregating data for research purposes. This fragmentation creates technical barriers to timely data access. These realities often create AI bias.
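One concrete safeguard against this kind of cohort skew is to compare the demographic mix of a training set against reference population shares before training begins. The sketch below is illustrative only; the group labels, shares, and tolerance threshold are made up for the example:

```python
from collections import Counter

def representation_gaps(cohort_groups, population_shares, tolerance=0.05):
    """Compare the demographic mix of a training cohort against
    reference population shares and flag under-represented groups.

    cohort_groups: list of group labels, one per training record.
    population_shares: dict mapping group label -> expected share (0..1).
    tolerance: maximum allowed shortfall before a group is flagged.
    """
    counts = Counter(cohort_groups)
    total = len(cohort_groups)
    flagged = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if expected - observed > tolerance:
            flagged[group] = round(expected - observed, 3)
    return flagged

# Example: a cohort that skews heavily toward one group.
cohort = ["A"] * 90 + ["B"] * 10
print(representation_gaps(cohort, {"A": 0.6, "B": 0.4}))  # {'B': 0.3}
```

A check like this will not remove bias on its own, but it surfaces the shortfall early, while the data set can still be rebalanced.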

For example, suppose an ML system is trained to recognize melanoma primarily from images of patients with light skin. The model may misinterpret images from patients with darker skin tones and fail to diagnose melanoma due to sampling bias (Adamson & Smith, 2018). Despite representing only 1% of skin cancers, melanoma is responsible for over 80% of skin cancer deaths. ML developers should therefore disclose the details of the training data, including patient demographics and baseline characteristics such as age, race, ethnicity, and gender.
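That kind of disclosure can be generated automatically as a demographic summary shipped alongside the trained model. A minimal sketch, assuming each training example is described by a dict with illustrative field names:

```python
from collections import Counter

def demographic_summary(records, fields=("age_band", "race", "gender")):
    """Tabulate the distribution of each demographic field in a training
    set, producing a disclosure to publish alongside the trained model.
    records: list of dicts describing training examples."""
    summary = {}
    for field in fields:
        counts = Counter(r.get(field, "unknown") for r in records)
        total = sum(counts.values())
        summary[field] = {k: round(v / total, 3) for k, v in counts.items()}
    return summary

training_set = [
    {"age_band": "40-60", "race": "white", "gender": "F"},
    {"age_band": "40-60", "race": "white", "gender": "M"},
    {"age_band": "60+",   "race": "black", "gender": "F"},
]
print(demographic_summary(training_set))
```

Publishing such a table makes sampling gaps, like the skin-tone imbalance above, visible to clinicians before they rely on the model.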

Beyond bias in training data, other issues must be addressed. AI "hallucinations" are incorrect or misleading outputs caused by insufficient training data, faulty assumptions, or biases in the training data. One study of ChatGPT found inaccurate, even dangerous, responses.

For example, AI is often used to predict sepsis or heart failure by running extensive patient data through deep neural networks. Such models can challenge clinicians who want to act on AI predictions but struggle to understand the rationale behind them.

The Cure for AI Maladies

Gartner predicts that 50% of governments globally will enforce responsible AI policies by 2026. The U.S. healthcare system already employs several practices and guidelines for AI's fair, safe, and ethical use, protecting patient well-being while allowing for innovation.

Regulatory Guidance

The U.S. Food and Drug Administration (FDA) regulates AI-based medical devices under its "software as a medical device" (SaMD) category, ensuring that AI systems used in diagnostics, treatment planning, and other clinical settings meet safety and efficacy standards. The FDA also enforces post-market surveillance to monitor AI performance after deployment. The Health Insurance Portability and Accountability Act (HIPAA) mandates strict data privacy and security measures for AI systems handling protected health information (PHI); to comply, AI developers must de-identify personal data used to train models.
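In practice, de-identification means stripping direct identifiers from records before they reach a training pipeline. The sketch below is illustrative only: HIPAA's Safe Harbor method requires removing 18 categories of identifiers, and this toy function handles just a few, with made-up field names:

```python
# Illustrative only: HIPAA Safe Harbor requires removing 18 categories
# of identifiers; this sketch covers just a handful of common fields.
PHI_FIELDS = {"name", "ssn", "address", "phone", "email", "mrn"}

def deidentify(record):
    """Drop direct identifiers and coarsen date-of-birth to a year,
    producing a record safer to use for model training."""
    clean = {k: v for k, v in record.items() if k not in PHI_FIELDS}
    if "dob" in clean:                      # keep only the birth year
        clean["birth_year"] = clean.pop("dob")[:4]
    return clean

patient = {"name": "Jane Doe", "ssn": "123-45-6789",
           "dob": "1975-03-14", "diagnosis": "melanoma"}
print(deidentify(patient))  # {'diagnosis': 'melanoma', 'birth_year': '1975'}
```

Real pipelines also have to handle identifiers buried in free-text notes, which is considerably harder than dropping structured fields.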

Bias & Transparency

In the US, there is a strong focus on minimizing algorithmic bias to avoid exacerbating healthcare disparities. Agencies such as the National Institute of Standards and Technology (NIST) are developing frameworks to identify and reduce AI bias, and AI tools are scrutinized for fairness to ensure they do not disproportionately impact specific patient populations based on race, gender, socioeconomic status, or other factors.

Healthcare providers and AI developers are also encouraged to implement interpretable models, allowing clinicians and patients to understand how AI systems arrive at particular decisions. This transparency is crucial for building trust in AI and making its use in clinical environments more ethical.

Best Practices in AI Use

Clinical validation and human-in-the-loop practices are essential to ensuring the best outcomes. Before AI tools are deployed in healthcare settings, they must undergo extensive clinical validation: testing the AI on diverse datasets and patient populations to ensure its predictions are accurate and reliable in real-world scenarios.
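A key part of validating on diverse populations is reporting accuracy per subgroup rather than only in aggregate, since a strong overall number can hide a badly underserved group. A minimal sketch, with illustrative group labels:

```python
def stratified_accuracy(examples):
    """Evaluate a model per demographic subgroup rather than only in
    aggregate, so a gap hidden by the overall number becomes visible.
    examples: list of (group, prediction, label) tuples."""
    totals, correct = {}, {}
    for group, pred, label in examples:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == label)
    return {g: round(correct[g] / totals[g], 3) for g in totals}

# Aggregate accuracy is 70%, but one subgroup lags far behind.
results = [("light", 1, 1)] * 9 + [("light", 0, 1)] \
        + [("dark", 1, 1)] * 5 + [("dark", 0, 1)] * 5
print(stratified_accuracy(results))  # {'light': 0.9, 'dark': 0.5}
```

This is exactly the pattern in the melanoma example above: the aggregate figure looks acceptable while one skin-tone group is failed half the time.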

Continuous Learning and Monitoring

Keeping a watchful eye on AI healthcare systems post-deployment is essential to detect issues like model drift, where the AI's accuracy degrades over time as real-world data shifts away from what the model was trained on. Monitoring ensures that AI remains relevant and effective in clinical environments.
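One simple form of such monitoring is to track rolling accuracy in production and raise a flag when it falls a set margin below the validation baseline. The window size, baseline, and margin below are illustrative, not recommended values:

```python
from collections import deque

class DriftMonitor:
    """Track a model's rolling accuracy in production and flag drift
    when it falls a set margin below the validation baseline."""
    def __init__(self, baseline, window=100, margin=0.05):
        self.baseline, self.margin = baseline, margin
        self.recent = deque(maxlen=window)  # sliding window of outcomes

    def record(self, prediction, outcome):
        self.recent.append(prediction == outcome)

    def drifted(self):
        if len(self.recent) < self.recent.maxlen:
            return False                    # not enough evidence yet
        rolling = sum(self.recent) / len(self.recent)
        return rolling < self.baseline - self.margin

monitor = DriftMonitor(baseline=0.90, window=50, margin=0.05)
for i in range(50):                         # only 80% correct lately
    monitor.record(1, 1 if i % 5 else 0)
print(monitor.drifted())  # True
```

A flag like this does not diagnose the cause; it simply tells the team it is time to re-validate, and possibly retrain, the model.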

Meanwhile, some AI models employ continuous-learning practices, incorporating new patient data to improve performance and catch problems before they arise. However, these adaptive AI systems must remain within regulatory bounds to ensure ongoing safety.

A user-centric, collaborative approach is critical to mitigating the risks of AI misdiagnosis and erroneous decisions. AI ethics experts and diverse stakeholders must be involved early in AI development, and the right partners can help healthcare providers meet these requirements and safeguard patient trust. Innovation and ethical integrity can both thrive when physicians and other healthcare professionals review AI-generated insights and retain final decision-making authority.


About Pravin Tiwari 

Pravin Tiwari is the Executive Vice President and Business Head at FPT Software Americas, a subsidiary of FPT Corporation, spearheading global initiatives that deliver sustainable, long-term value for customers and partners. With over two decades of senior management experience at the House of Tatas and FPT Software, Pravin has consistently driven innovation, operational excellence, and technology transformation across industries such as healthcare, media, and manufacturing.

