AI Will Continue to Improve Healthcare, But Only If We Can Trust It

by Manoj Saxena | Executive Chairman | CognitiveScale and Chairman AI Global 01/22/2020


While the public perceives artificial intelligence as futuristic, AI is already intertwined with healthcare today. As a PwC report observed, AI is transforming numerous aspects of healthcare, from medical training and research to wellness and treatment. But while AI is already supercharging our capabilities (it’s 30 times faster and 99 percent accurate when reviewing mammograms, reducing the need for unnecessary biopsies, for example), AI is also supercharging the disparities that are baked into our healthcare system.

The issue, however, isn’t a question of technology; it’s a matter of transparency and trust. Trust is the foundation of digital systems. Without trust, AI cannot deliver on its potential value. To trust an AI system, we must have confidence in its decisions. Reliability, fairness, interpretability, robustness, and safety must be the underpinnings of health AI.

One area that provides a useful case study for the importance of trusted AI systems is patient intake. Hospitals nationwide have already deployed AI to triage patients in order to make better use of medical resources and ensure appropriate care is delivered in a timely manner. But the inputs AI requires don’t come from a vacuum, and in some cases our existing biases are built into the AI’s decisioning.

Consider a recent report in The Wall Street Journal about a hospital intake algorithm that exhibited racial bias. The algorithm gave healthier white patients the same ranking as black patients who had much worse chronic illnesses, as well as poorer laboratory results and vital signs. As one of the researchers who discovered the problem explained, “What the algorithm is doing is letting healthier white patients cut in line ahead of sicker black patients.” 

How could this happen? As it turns out, the algorithm used cost to rank patients for intake. Because spending on black patients was lower than on white patients with similar medical conditions, the AI inadvertently gave preference to white patients over black patients. Put another way, the AI exacerbated racial disparities that are already present in our healthcare system.

But bad outcomes aren’t inevitable, even though the AI’s textbook is an imperfect world. As another researcher who helped discover the intake problem pointed out, an alternative algorithm could actually decrease racial disparities. In fact, the researchers created an alternate algorithm that raised the share of black patients among those identified for extra help from 18 percent to about 47 percent. Nevertheless, given the enormous power of AI, it’s important to ask how we can build better systems from the start.
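The mechanism behind this kind of failure is easy to demonstrate. The following is a minimal sketch with purely synthetic data, not the actual study algorithm: it assumes one group’s historical spending runs systematically lower at equal illness, then compares who gets selected when patients are ranked by the spending proxy versus by illness burden directly.

```python
# Hypothetical illustration of proxy bias. All data is synthetic and the
# 0.6 spending gap is an assumption for demonstration, not a measured value.
import random

random.seed(0)

def make_patient(group):
    illness = random.uniform(0, 10)              # true chronic-illness burden
    spend_factor = 1.0 if group == "A" else 0.6  # assumed spending gap for group B
    return {"group": group,
            "illness": illness,
            "spending": illness * spend_factor * 1000}

patients = ([make_patient("A") for _ in range(500)]
            + [make_patient("B") for _ in range(500)])

def share_of_b(selected):
    """Fraction of the selected patients who belong to group B."""
    return sum(p["group"] == "B" for p in selected) / len(selected)

k = 100  # slots in the extra-care program

# Proxy ranking: highest historical spending first.
by_cost = sorted(patients, key=lambda p: p["spending"], reverse=True)[:k]

# Direct ranking: highest illness burden first.
by_need = sorted(patients, key=lambda p: p["illness"], reverse=True)[:k]

print(f"Share of group B selected by cost: {share_of_b(by_cost):.0%}")
print(f"Share of group B selected by need: {share_of_b(by_need):.0%}")
```

Both rankings look neutral on their face; only the choice of target variable differs, yet the cost proxy all but excludes the lower-spending group while ranking on need selects both groups roughly equally.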

First, we must understand that AI isn’t like other tools we’ve developed throughout human history. The technology is a thinking partner, one we need to understand, and ultimately trust. That process isn’t automatic, nor is it inherently transparent. Understanding and trusting an AI is akin to understanding and trusting complex human institutions. As with those institutions, we can build in principles that ensure trust. A trusted AI must adhere to five principles:

1. Data rights: Do you have the rights to the data and is the data reliable?

2. Explainability: Is your AI transparent?

3. Fairness: Is your AI unbiased and fair?

4. Robustness: Is your AI robust and secure?

5. Compliance: Is your AI appropriately governed?

These five principles underscore the need for a more human-centric approach to integrating AI with healthcare. In layman’s terms, building healthcare AIs with these five principles in mind gives doctors and patients the ability to “look under the hood.”

While we’d never call racial preferences fair, we might not intuitively see that considering a factor as seemingly benign as cost could lead to an unfair outcome. That means fairness can’t live in a silo if you want to build a trusted AI; instead, each principle works in conjunction with the others.

So, for example, fairness can only be properly understood if we first meet the threshold question of data access and then address explainability so that all stakeholders can comprehend the AI’s decisioning. Robustness and compliance must also come into play: stakeholders need to trust that the process hasn’t been tampered with, and there must be a mechanism for human review so that we don’t unwittingly cede control of our lives to machines we don’t fully understand.
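One way the fairness principle can be made concrete is a routine disparity audit over the AI’s decisions. The sketch below is purely illustrative: the data, function names, and the four-fifths threshold are assumptions for demonstration, not a standard healthcare-AI API.

```python
# Hypothetical fairness-audit sketch: flags a selection-rate disparity
# between groups using the "four-fifths" rule of thumb. Illustrative only.

def selection_rate(decisions, group):
    """Fraction of patients in `group` flagged for extra care."""
    members = [d for d in decisions if d["group"] == group]
    return sum(d["selected"] for d in members) / len(members)

def disparity_ratio(decisions, group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    ra = selection_rate(decisions, group_a)
    rb = selection_rate(decisions, group_b)
    return min(ra, rb) / max(ra, rb)

def audit(decisions, groups, threshold=0.8):
    """True only if every pairwise disparity ratio clears the threshold."""
    return all(
        disparity_ratio(decisions, a, b) >= threshold
        for i, a in enumerate(groups)
        for b in groups[i + 1:]
    )

# Toy decision log: group B is selected far less often than group A.
decisions = (
    [{"group": "A", "selected": i < 30} for i in range(100)]
    + [{"group": "B", "selected": i < 10} for i in range(100)]
)
print(audit(decisions, ["A", "B"]))  # ratio 0.10/0.30 ≈ 0.33, below 0.8 → False
```

A check like this only works when the other principles are already satisfied: the auditor needs access to the decision log (data rights) and a way to interpret why the rates diverge (explainability) before a failed audit can trigger human review.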

In the case of the problematic intake AI, doctors didn’t question the AI’s findings, though to be fair, they may not have been given the tools to do so. Still, it’s an important lesson: AI must serve humanity, not the other way around. When doctors are empowered to challenge AI, they maximize its benefits.

Which is to say, the doctor still knows best because it takes humans to advocate for human-centric AI systems. So, as we deploy AIs throughout healthcare, we must educate doctors and patients about how they work, in order to avoid undesirable outcomes and build the trust needed to reap the long-term benefits of this new technology.

About Manoj Saxena | Executive Chairman | CognitiveScale and Chairman AI Global 
 
Manoj Saxena is the Executive Chairman of CognitiveScale and a founding managing director of The Entrepreneurs’ Fund IV, a seed fund focused on the B2B AI market with nine active investments. Previously, he served as the first General Manager of IBM Watson, where his team built the first cognitive systems. Prior to IBM, Saxena founded and sold two venture-backed software companies within a five-year span. He currently serves on the boards of AI Global, a non-profit dedicated to promoting practical and responsible applications of AI, and the Saxena Family Foundation. Recently, Saxena retired after serving six years as Chairman of the US Federal Reserve Bank of Dallas, San Antonio.

