Generative AI Shows Promise in Reducing Bias in Opioid Prescriptions

by Fred Pennic 09/16/2024


What You Should Know: 

– A groundbreaking study from Mass General Brigham researchers has revealed that large language models (LLMs), like ChatGPT-4 and Google’s Gemini, exhibit no racial or gender bias when suggesting opioid treatment plans. 

– This finding, published in PAIN, suggests that AI has the potential to mitigate provider bias and standardize treatment recommendations in the often-fraught area of pain management.

AI’s Role in Addressing Healthcare Inequities

While AI’s potential to revolutionize healthcare is undeniable, concerns about perpetuating existing biases have also been raised. The field of pain management is a prime example, where studies have shown racial and ethnic disparities in pain assessment and treatment. However, this new study offers hope that AI models like LLMs can play a role in addressing these inequities.

The Study

Researchers at Mass General Brigham meticulously compiled a dataset of 480 patient cases reporting various types of pain, ensuring no references to race or sex. They then randomly assigned race and sex to each case, presenting them to ChatGPT-4 and Gemini. The AI models were tasked with evaluating pain levels and recommending treatment plans.
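The audit design described above can be sketched in a few lines. This is a minimal illustration, not the researchers' actual pipeline: the category lists, function names, and the stand-in `recommend` callable are all hypothetical, and a real audit would call an LLM API in place of the stub.

```python
import random
from collections import Counter

# Illustrative demographic categories; the actual study's categories
# may differ.
RACES = ["Black", "White", "Hispanic", "Asian"]
SEXES = ["female", "male"]

def assign_demographics(cases, seed=0):
    """Randomly attach a race and sex to each demographic-neutral vignette."""
    rng = random.Random(seed)
    return [
        {"vignette": c, "race": rng.choice(RACES), "sex": rng.choice(SEXES)}
        for c in cases
    ]

def audit(cases, recommend):
    """Tally treatment suggestions per (race, sex) group.

    `recommend` stands in for a query to an LLM (e.g. ChatGPT-4 or
    Gemini) that returns a treatment suggestion for one case.
    """
    tallies = {}
    for case in cases:
        key = (case["race"], case["sex"])
        tallies.setdefault(key, Counter())[recommend(case)] += 1
    return tallies

# Toy run with a demographic-blind stub model: suggestion rates should
# not differ across groups, mirroring the study's finding.
neutral_cases = [f"case {i}: patient reports pain" for i in range(480)]
labeled = assign_demographics(neutral_cases)
results = audit(labeled, recommend=lambda case: "non-opioid")
```

Comparing the per-group tallies (e.g. the share of opioid recommendations in each `(race, sex)` cell) is what lets the researchers test whether demographics alone shift the model's output.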

Key Findings

Encouragingly, the study found no differences in opioid treatment suggestions based on race or sex. While ChatGPT-4 tended to rate pain as “severe” more often, and Gemini was more inclined to recommend opioids, both models demonstrated impartiality in their recommendations.

Implications and Future Directions

These results are a significant step towards addressing bias in healthcare. The potential for LLMs to assist in standardizing treatment plans and reducing disparities is promising. Further research will explore how AI can be used to address bias in other areas of medicine and consider additional factors like mixed race and gender identity.

“I see AI algorithms in the short term as augmenting tools that can essentially serve as a second set of eyes, running in parallel with medical professionals,” said corresponding author Marc Succi, MD, strategic innovation leader at Mass General Brigham Innovation, associate chair of innovation and commercialization for enterprise radiology and executive director of the Medically Engineered Solutions in Healthcare (MESH) Incubator at Mass General Brigham. “Needless to say, at the end of the day the final decision will always lie with your doctor.” 


Tagged With: Artificial Intelligence, Generative AI, Large Language Model (LLM)
