How Thoughtful AI Implementation Can Rebuild Trust in Healthcare

By Stephanie Rosner, Scientific Program Manager, Artificial Intelligence for DIA, and Maria Vassileva, PhD, Chief Science and Regulatory Officer for DIA | 01/24/2025


Imagine a physician and a patient sitting quietly together in an examination room. The physician’s eyes are focused on a computer screen as she speaks in brief sentences about elevated A1C levels and the challenges of managing blood sugar through lifestyle changes and medication. The patient nods along silently and anxiously while holding an incomprehensible sheet of lab results and struggling to process her Type 2 diabetes diagnosis.

The entire interaction lasts five minutes. The physician, already 17 minutes behind schedule, moves to her next patient frustrated that she couldn’t explain the diagnosis more clearly. The patient leaves with a new prescription and instructions for monitoring her blood sugar levels, then spends the afternoon trying to understand the implications of her condition and the details of the treatment plan.

Encounters like these form the foundation of a deepening crisis in American healthcare. One study found that trust in physicians and hospitals has plummeted from 71.5% in April 2020 to 40.1% in January 2024 — an erosion partly tied to the COVID-19 pandemic. Meanwhile, 60% of Americans grade the healthcare system C or worse, and 70% express a desire for stronger relationships with their healthcare providers (HCPs). 

This erosion of trust occurs as advancements in artificial intelligence (AI) are changing how we view healthcare and look for information about our conditions and treatment options. Despite concerns surrounding data biases and potential errors, generative AI tools can help rebuild trust in medical establishments and strengthen the patient-provider relationship — if providers are committed to using these tools ethically and responsibly.

Building better clinical relationships

Clinicians are in a tough situation: stretched increasingly thin, they find it harder to maintain a high quality of care, both logistically and psychologically.

Many are turning to AI to help. A recent survey found that 76% of physicians have begun incorporating large language models (LLMs) into their clinical decision-making.

There are many benefits to using AI in clinical settings. AI tools can handle documentation and treatment planning so clinicians can focus on patient care. AI-powered ambient clinical intelligence can also transcribe patient encounters in real time, freeing physicians who use these services to have more meaningful conversations with their patients.


Increasing the patient’s understanding

The moments after a medical appointment often bring more questions than answers. Patients struggle to recall their physician’s explanation, understand their diagnosis, or make sense of their treatment instructions.

Clear communication is vital to strengthening the patient-provider relationship. AI can convert medical terminology from an eleventh-grade reading level to a sixth-grade reading level (the accepted standard for health literacy), giving patients a clearer understanding of their diagnosis and treatment.
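
For teams exploring this kind of plain-language support, the sketch below shows one way an LLM could be asked to restate a clinical note at a sixth-grade reading level. It is a minimal illustration, assuming the OpenAI Python client; the model name, prompt wording, and sample note are hypothetical, and any real output would still need clinician review before it reaches a patient.

# Minimal sketch: restating clinical language at a sixth-grade reading level.
# Assumes the OpenAI Python client; model choice, prompt wording, and the
# sample note are illustrative assumptions, not a clinical recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

clinical_note = (
    "A1C of 8.2% indicates suboptimal glycemic control; initiate metformin "
    "500 mg twice daily and reinforce lifestyle modification."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the clinical text at a sixth-grade reading level. "
                "Do not add, remove, or change any medical facts."
            ),
        },
        {"role": "user", "content": clinical_note},
    ],
)

# A clinician reviews the simplified text before sharing it with the patient.
print(response.choices[0].message.content)

In practice, readability could also be checked automatically (for example, with a Flesch-Kincaid grade score) before the simplified text is handed back to the clinician.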

One emergency room doctor tried unsuccessfully to explain to an elderly patient’s children why the treatments they were suggesting would worsen their mother’s condition, so he turned to ChatGPT. “As I recited the AI’s words, their agitated expressions immediately melted into calm agreeability,” he wrote.

Confusion and frustration are magnified when physicians and patients don’t speak the same language. Language barriers have been shown to result in more frequent adverse events, reduced access to health information, and diminished care satisfaction. Beyond basic translation, AI-powered services can be trained to understand cultural nuances and medical terminology across different dialects — and they’re only getting stronger.

AI can also help overcome fundamental access restrictions. Specialized medical chatbots, including one for cancer patients, may offer on-demand, cost-effective preliminary diagnostic guidance and health information to patients who lack immediate access to care. They can also alert patients when their condition requires in-person medical attention.

AI can therefore put knowledge directly in patients’ hands. It can deliver customized content about conditions, treatments, and preventive care. Patients can arrive at appointments with a greater understanding of their illnesses, and physicians can verify diagnoses or find common ground with their patients.

Detailed treatment explanations enable more informed healthcare decisions and reinforce patients’ sense that their doctor is there for them.

Ensuring safety and privacy is crucial

Make no mistake: AI needs considerable human oversight and rigorous safeguards to be effective in healthcare settings. If clinicians wish to use AI to rebuild and maintain patient trust, they must address privacy concerns and ensure the quality of both AI outputs and the data sources behind them.

AI implementation must be systematic and thoughtful. More than 200 guidelines exist globally to direct appropriate AI use in healthcare settings, including some laid out by the U.S. Food and Drug Administration (FDA). Providers recognize that AI, and LLMs in particular, still requires human oversight: 97% of them report consistently vetting LLM outputs before clinical application.

Any clinical AI tool must comply with the most stringent patient data privacy and security requirements, including HIPAA. Clinicians may also wish to obtain patient consent before using AI in order to maintain transparency. Deloitte found that 80% of patients want to know how their providers use AI in delivering care.

Once a physician begins using AI, its outputs must be reviewed continually to verify their accuracy. Errors must be tracked to improve the models. All staff members on a clinical team must undergo training to understand AI’s capabilities and limitations. 
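
As one illustration of what continual review and error tracking could look like, the sketch below records each AI-generated draft alongside the reviewing clinician's verdict so error patterns can be summarized over time. The field names and error categories are hypothetical assumptions for illustration, not an established standard.

# Minimal sketch of an AI-output review log: each AI draft is recorded with the
# reviewing clinician's verdict so error rates can be tracked over time.
# Field names and error categories are hypothetical, not a published standard.
from collections import Counter
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ReviewRecord:
    encounter_id: str
    ai_output: str
    reviewer: str
    approved: bool
    error_type: str | None = None  # e.g., "dosage", "omission", "hallucination"
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class ReviewLog:
    def __init__(self) -> None:
        self.records: list[ReviewRecord] = []

    def add(self, record: ReviewRecord) -> None:
        self.records.append(record)

    def error_summary(self) -> Counter:
        # Count rejected outputs by error type to see where the model falls short.
        return Counter(r.error_type for r in self.records if not r.approved)


# Usage: log two vetted drafts, then summarize errors for a quality review.
log = ReviewLog()
log.add(ReviewRecord("enc-001", "Patient education draft...", "Dr. Lee", approved=True))
log.add(ReviewRecord("enc-002", "Discharge summary draft...", "Dr. Lee",
                     approved=False, error_type="omission"))
print(log.error_summary())  # Counter({'omission': 1})

A simple summary like this gives a clinical team concrete numbers to bring to staff training sessions and to share with the model's vendor.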

Most importantly, the focus must remain on augmenting, rather than replacing, human medical expertise. Like any other tool, AI is a resource that should help HCPs be more efficient, leaving them more time for meaningful and empathetic patient interactions. Providers must maintain the essential human elements of medical care to give patients what they need and want and to preserve the heart of the patient-provider relationship.

Embracing a future with AI

Consider again the physician and diabetic patient in that examination room. AI now offers tools to transcribe their conversation, explain complex lab results in clear terms, and provide the patient with understandable information about diabetes management. The physician spends less time documenting and more time answering questions. The patient leaves with confidence in her treatment plan and renewed assurance in the provider’s care.

As healthcare systems implement AI tools thoughtfully and securely, they create opportunities for stronger connections between clinicians and patients, leading to restored trust in medical care and improved health outcomes. Building on models with trustworthy, diverse data sets and committing to constant validation and improvement will be critical to achieving the best AI outcomes.


About Maria Vassileva, PhD

Maria Vassileva is the Chief Science and Regulatory Officer for DIA. Dr. Vassileva has decades of experience managing complex, multi-stakeholder biomedical research programs. She spent most of her career in the nonprofit sector, leading the Science Team at the Arthritis Foundation and working at the Foundation for NIH and the American Association for the Advancement of Science. She was also on the leadership teams of two health research organizations, serving as project director on multiple government contracts. Her areas of expertise include musculoskeletal, metabolic, and immunity and inflammation disorders, as well as patient engagement. She received her PhD in Biochemistry and Cell Biology from Johns Hopkins.


About Stephanie Rosner

Stephanie Rosner is the Scientific Program Manager of Artificial Intelligence for DIA, where she is dedicated to fostering ethical AI design and advancing technology with a human-centric approach. Rosner has held project management and business development roles at Mathematica Policy Research and Optum, working with stakeholders to ensure ethical and equitable outcomes and policies related to advancing health projects.


