The Future of AI in Healthcare: Best Practices for Responsible Innovation

by Dr. Heather Bassett, Chief Medical Officer at Xsolis 09/15/2025


Our patients can’t afford to wait on officials in Washington, DC, to offer guidance around responsible applications of AI in the healthcare industry. The healthcare community needs to stand up and continue to put guardrails in place so we can roll out AI responsibly in order to maximize its evolving potential. 

Responsible AI, for example, should include reducing bias in access to and authorization of care, protecting patient data, and making sure that outputs are continually monitored. 

With industry-specific rules increasingly needing to emerge from the bottom up rather than the top down, let's take a closer look at the AI best practices currently dominating conversations among key stakeholders in healthcare. 

Responsible AI without squashing innovation

How can healthcare institutions and their tech industry partners continue innovating for the benefit of patients? That must be the question guiding the innovators moving AI forward. On a basic level of security and legal compliance, it means companies developing AI technologies for payers and providers must be well versed in HIPAA requirements. De-identifying any data that can be linked back to patients is an essential component of any protocol that involves data-sharing.
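To make the de-identification step concrete, here is a minimal, illustrative sketch of a "safe harbor"-style scrub that drops direct identifiers from a record before it is shared. The field names are hypothetical; a real protocol must address all 18 HIPAA identifiers (including dates, geography, and free text) and be validated by a privacy expert.

```python
# Direct-identifier fields to strip before sharing. Hypothetical names;
# real HIPAA Safe Harbor de-identification covers 18 identifier classes.
DIRECT_IDENTIFIERS = {
    "name", "mrn", "ssn", "phone", "email", "address", "date_of_birth",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

visit = {
    "name": "Jane Doe",          # identifier: removed
    "mrn": "12345",              # identifier: removed
    "diagnosis": "E11.9",        # clinical field: retained
    "a1c": 7.8,                  # clinical field: retained
}
print(deidentify(visit))
```

Field-level stripping like this is only the starting point; free-text notes and quasi-identifiers (rare diagnoses, small geographies) require more sophisticated treatment.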

Beyond the many regulations that already apply to the healthcare industry, innovators must be sensitive to the consensus forming around the definition of “responsible AI use.” Too many rules around which technologies to pursue, and how, could potentially slow innovation. Too few rules can yield ethical nightmares. 

Stakeholders on both the tech industry and healthcare industry sides will offer different perspectives on how to balance risks and benefits. Each can contribute a valuable perspective on how to reduce bias within the populations they serve, being careful to listen to concerns from any populations not represented in high-level discussions.

The most pervasive pain point being targeted by AI innovators

Rampant clinician burnout has persisted as an issue within hospitals and health systems for years. In 2024, a national survey revealed the physician burnout rate dipped below 50 percent for the first time since the COVID-19 pandemic. The American Medical Association's "Joy in Medicine" program, now in its sixth year, is one of many efforts to combat the drivers of physician burnout — lack of work/life balance, the burden of bureaucratic tasks, etc. — by providing guidelines for health system leaders interested in implementing programs and policies that actively support well-being.

To that end, ambient-listening AI tools in the office are saving time by transforming conversations between provider and patient into clinical notes that can be added to electronic health records. Previously, notes had to be taken manually, either during the appointment, which reduced the quality of face-to-face time between provider and patient, or after appointments during a physician's "free time," when the information gleaned from the patient was no longer front of mind.

Other AI tools can help combat the second-order effects of burnout. Even when the information needed to recommend a diagnostic test is available in the patient's electronic health record (EHR), a doctor still might not think to order it. AI tools can scan an EHR — prior visit information, lab results — to analyze potentially large volumes of information and make recommendations based on the available data. In this way the AI reader acts like a second pair of eyes, interpreting a lab result, or a year's worth of lab results, for something the physician might have missed.
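The "second pair of eyes" idea can be sketched with a toy example. Real systems use statistical or machine-learned models over structured EHR data; the rule-based version below, with invented reference ranges, just shows the shape of the task: surface out-of-range results across a patient's history for clinician review.

```python
# Illustrative only: reference ranges are invented for the example, and a
# production system would use validated, lab-specific ranges and richer logic.
REFERENCE_RANGES = {"a1c": (4.0, 5.6), "creatinine": (0.6, 1.3)}

def flag_results(history):
    """history: list of (date, test_name, value) tuples from the EHR.

    Returns the results that fall outside their reference range,
    for a clinician to review — the AI does not decide anything itself.
    """
    flags = []
    for date, test, value in history:
        lo, hi = REFERENCE_RANGES.get(test, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            flags.append((date, test, value))
    return flags

year_of_labs = [
    ("2025-01-10", "a1c", 7.8),         # out of range: flagged
    ("2025-02-02", "creatinine", 1.0),  # in range: not flagged
]
print(flag_results(year_of_labs))
```

Note the design choice: the tool only surfaces candidates; the diagnostic decision stays with the physician, keeping a human in the loop.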

Automating administrative tasks outside the clinical setting can save burned-out healthcare workers (namely, revenue cycle managers) time and bandwidth as well.

Private-sector vs. public-sector transparency 

How can we trust whether an institution is disclosing how it uses AI when the federal government doesn't require it to? This is where organizations like CHAI (the Coalition for Health AI) come in. Its membership comprises a variety of healthcare industry stakeholders promoting transparency and open-source documentation of actual AI use cases in healthcare settings.

Healthcare is not the only industry facing the question of how to foster public trust in how it uses AI. In general, the key question is whether there’s a human in the loop when an AI-influenced process affects a human. It ought to be easy for consumers to interrogate that to their own satisfaction. For its part, CHAI has developed an “applied model card” — like a fact sheet that acts as a nutrition label for an AI model. Making these facts more readily available can only further the goal of fostering both clinician and patient trust.
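To illustrate the "nutrition label" idea, here is a hypothetical sketch of the kinds of facts such a card might surface. The field names and values below are invented for illustration and are not CHAI's actual Applied Model Card schema.

```python
# Hypothetical example of a model-card-style fact sheet for a health AI tool.
# Every field name and value here is illustrative, not an official schema.
model_card = {
    "model_name": "readmission-risk-v2",  # invented model name
    "intended_use": "Flag patients at elevated 30-day readmission "
                    "risk for care-team review",
    "out_of_scope": ["Coverage or denial decisions", "Pediatric patients"],
    "human_in_the_loop": True,  # a clinician reviews every flag
    "training_data": "De-identified inpatient encounters, 2019-2023",
    "known_limitations": ["Performance not validated across all sites"],
}

# A reader — clinician or patient — can check the facts that matter to them:
print(model_card["intended_use"])
print(model_card["human_in_the_loop"])
```

The value of the format is less in any one field than in making the same questions — intended use, scope, oversight, training data — answerable for every model a health system deploys.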

Individual states have their own AI regulations. Most exist to curb profiling: the use of the technology to sort people into categories, whether to make it easier to sell them products or services or to make hiring, insurance coverage, and other business decisions about them. In December, California passed a law that prohibits insurance companies from using AI to deny healthcare coverage. It effectively requires a human in the loop ("a licensed physician or qualified health care provider with expertise in the specific clinical issues at hand") whenever denial decisions are made. 

By making their AI use transparent — following evolving recommendations on how we define and communicate transparency, and showing end users and patients alike how data is protected — vendors, hospitals, and health systems have nothing to lose and plenty to gain.


About Dr. Heather Bassett 

Dr. Heather Bassett is the Chief Medical Officer with Xsolis, the AI-driven health technology company with a human-centered approach. With more than 20 years’ experience in healthcare, Dr. Bassett provides oversight of Xsolis’ data science team, denials management team and its physician advisor program. She is board-certified in internal medicine.
