We Recall Drugs with Adverse Effects, So Why Not Recall AI Algorithms with Racial Bias?

by Ed Ikeguchi, MD, Chief Executive Officer, AiCure 01/07/2022

Over the last decade, AI has transformed from a futuristic, hype-driven tool into a daily part of our lives. We use AI for everything from unlocking our smartphones to evaluating our credit scores to helping clinicians make patient care decisions. In particular, AI’s potential to transform drug development and patient care is unparalleled – its ability to gather objective insights can help elevate a clinical trial’s data and bring potentially life-saving drugs to people in need faster. But the same data mining capabilities that make AI so exciting are also the ones that worry industry leaders. To ensure AI doesn’t unintentionally perpetuate human biases and put minority populations at a disadvantage, the industry must work together to help AI reach its greatest potential. AI is only as strong as the data it’s fed, so ensuring the quality of the data we start with must be at the core of everything – the credibility of AI’s data backbone must be failsafe.

When this technology is governed sufficiently, it holds significant potential for automating processes across industries and taking innovation to new heights. COVID-19 brought longstanding healthcare disparities to light, and now more than ever, the life science industry is challenged to re-evaluate the foundation of the AI that our drug development and patient care decisions increasingly rely on. There is now a responsibility – both ethically and for the sake of “good science” – to test algorithms thoroughly. Companies are responsible for ensuring that their algorithms will perform as expected outside of a controlled research environment. They can do this by first establishing processes that determine whether data sets are representative of the broader population, and second, by normalizing a return to the drawing board when algorithms don’t work as planned, rebuilding them from the ground up.
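To make the first step concrete, a representativeness check can be as simple as comparing a data set’s demographic mix against a population benchmark before any model is trained on it. Below is a minimal sketch in Python; the benchmark figures, field names, and tolerance are illustrative assumptions, not a standard.

```python
from collections import Counter

# Hypothetical population benchmark (e.g., drawn from census data).
# A real check would use the demographics of the target patient population.
POPULATION_BENCHMARK = {
    "white": 0.60, "black": 0.13, "hispanic": 0.19, "asian": 0.06, "other": 0.02,
}

def representativeness_report(records, attribute="race", tolerance=0.05):
    """Compare observed group proportions in `records` (a list of dicts)
    against the benchmark and flag under-represented groups."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in POPULATION_BENCHMARK.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "under_represented": observed < expected - tolerance,
        }
    return report

# A skewed, single-origin sample immediately surfaces its gaps.
sample = [{"race": "white"}] * 90 + [{"race": "black"}] * 5 + [{"race": "asian"}] * 5
for group, stats in representativeness_report(sample).items():
    print(group, stats)
```

A check like this is deliberately crude – it only catches gross imbalance – but making it a gating step forces the question of who is missing from the data before an algorithm ships.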

Applying “checks and balances” to spot bias

Often in today’s environment, once an AI solution receives a relatively arbitrary stamp of approval, there are limited protocols in place to assess how it performs in the real world. We need to be wary of this, as today’s AI developers still consistently lack access to large, diverse data sets and often train algorithms on small, single-origin samples with limited diversity. Frequently this is because many of the open-source data sets developers rely on were assembled from data contributed by computer programmer volunteers, a predominantly white population. When these algorithms are applied in real-world scenarios to a broader population of different races, genders, ages and more, tech that appears highly accurate in research falls short of delivering on its promise and can lead to faulty conclusions about a person’s health.

Similar to how new drugs go through years of clinical trial testing with thousands of patients to identify adverse events, a vetting process for AI can help companies understand whether their tech will fall short in real-world scenarios. Moving from a controlled research environment to real-world populations usually produces unforeseen results. For example, even after a new drug is approved, once it is given to hundreds of patients outside of a clinical trial, new side effects or discoveries that never arose during the trial are often uncovered. Just as there is a process to reassess that drug, there should be a similar checks-and-balances protocol for AI – one that detects inaccuracies in real-world scenarios and reveals when an algorithm fails for certain skin colors or exhibits other biases. An element of governance and peer review should be mandated for all algorithms, as even the most solid and well-tested algorithm is bound to produce unexpected results. An algorithm is never done learning – it must be continually developed and fed more data over time to improve.
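In practice, such a surveillance protocol can start with something very simple: stratify the algorithm’s real-world accuracy by subgroup and flag any group that falls below an acceptable floor. Here is a minimal sketch, assuming labeled outcome data with a `skin_tone` field; the field names and the 90% threshold are hypothetical choices, not a regulatory standard.

```python
def subgroup_accuracy_audit(results, group_key="skin_tone", min_accuracy=0.90):
    """Stratify prediction accuracy by subgroup and flag any group whose
    real-world performance falls below the floor -- the AI analogue of
    post-market drug surveillance. `results` is a list of dicts with
    `prediction`, `actual`, and a subgroup field."""
    by_group = {}
    for r in results:
        by_group.setdefault(r[group_key], []).append(r["prediction"] == r["actual"])
    flagged = {}
    for group, outcomes in by_group.items():
        accuracy = sum(outcomes) / len(outcomes)
        if accuracy < min_accuracy:
            flagged[group] = round(accuracy, 3)
    return flagged  # non-empty => candidate for "recall" and retraining

# Performance that looked uniform in the lab can diverge in the field.
field_data = (
    [{"skin_tone": "light", "prediction": 1, "actual": 1}] * 95
    + [{"skin_tone": "light", "prediction": 1, "actual": 0}] * 5
    + [{"skin_tone": "dark", "prediction": 1, "actual": 1}] * 70
    + [{"skin_tone": "dark", "prediction": 1, "actual": 0}] * 30
)
print(subgroup_accuracy_audit(field_data))  # {'dark': 0.7}
```

Run continuously against live data, a non-empty result is the trigger the drug-recall analogy calls for: pull the algorithm back, diagnose the gap, and retrain.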

Identify & refine

When companies notice that algorithms aren’t working properly across the entire population, they should be incentivized to rebuild those algorithms and incorporate more diverse patients into their testing. Whether that means including patients with different skin tones or people wearing hats, sunglasses, or patterned clothing, training the AI to recognize the individual no matter their appearance, dress or environment will produce stronger algorithms and, therefore, improved patient outcomes.

As an industry, we need to become more skeptical of AI’s conclusions and encourage transparency. Companies should be able to readily answer basic questions such as: How was the algorithm trained? On what basis did it draw this conclusion? Only once we interrogate and continually evaluate an algorithm under both common and rare scenarios, with varied populations, will it be ready for introduction into real-world situations.
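One lightweight way to make those answers routine is to ship every algorithm with a structured provenance record – a “model card” – so the basic questions can be answered without reverse-engineering the model. A minimal sketch follows; the fields and example values are illustrative assumptions, not an established schema or any specific company’s practice.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A provenance record shipped alongside an algorithm so the basic
    questions -- how was it trained, on whom, with what caveats --
    have ready answers."""
    name: str
    training_data_sources: list        # where the training data came from
    demographic_coverage: dict         # observed proportions per group
    evaluation_populations: list       # cohorts it was actually tested on
    known_limitations: list = field(default_factory=list)

# Hypothetical example for an imaging-based algorithm.
card = ModelCard(
    name="example-vision-model-v2",
    training_data_sources=["site A trial video", "site B trial video"],
    demographic_coverage={"light skin": 0.82, "dark skin": 0.18},
    evaluation_populations=["US trial participants, ages 18-65"],
    known_limitations=["untested with head coverings", "indoor lighting only"],
)
print(card.demographic_coverage)
```

A reviewer who sees 82/18 coverage in the card knows, before deployment, exactly which populations the real-world audit described above needs to watch.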

Recognizing there is work to be done

The first step towards fixing the problem is recognizing there is one. Many still haven’t grasped the notion that different complexions and appearances need to be factored into algorithms in order for the tech to work effectively. As the AI industry continues to grow and these tools increasingly become a pivotal part of how we research drugs and deliver new treatments, the future of the healthcare industry and patient care holds great promise. We must prioritize equality in the technology our patients and pharmaceutical companies use to help it reach its potential and make healthcare a more inclusive industry.


About Ed Ikeguchi, M.D., CEO, AiCure

Edward F. Ikeguchi, M.D. is the Chief Executive Officer of AiCure. Prior to joining AiCure, he was a co-founder and Chief Medical Officer at Medidata for nearly a decade, where he also served on the board of directors. He previously served as assistant professor of clinical urology at Columbia University, where he gained experience with healthcare technology solutions as a clinical investigator in numerous trials sponsored by both commercial industry and the National Institutes of Health.

