Risk Mitigation: Strengthening Trust in AI for Healthcare

by Joachim Roski, Principal at Booz Allen Hamilton, and Kathleen Featheringham, Principal at Booz Allen Hamilton | 08/03/2021

Better clinical decision support, population health interventions, patient self-care, and research – these are just a few of the promising use cases of artificial intelligence (AI).

Alongside these benefits, however, AI can introduce risks that undermine trust in AI solutions if left unaddressed. These risks include, among others, propagating bias inherent in the source data; insufficient transparency in the computational algorithms; AI performance in a “lab setting” that does not carry over to real-world use; loss of prediction accuracy over time (model drift) when model parameters are poorly understood or calibrated; and cybersecurity vulnerabilities.

How, then, can the healthcare industry sustain AI’s momentum and prevent the next AI winter (a flattening of the adoption curve) from taking hold? First, several organizational prerequisites should be in place before substantial investments in AI are made. They include a clear vision of the problems AI will help solve; in-house talent with both technical AI expertise and health domain understanding; and a review process to assess the potential risks and ethical implications of each AI solution. Once those prerequisites are met, additional measures can be taken to ensure the long-term success of, and return on investment in, your AI project.

In this article, we will describe how to mitigate three groups of prominent risks.

Risk 1: Improper data and algorithm management

Biased outputs can result from training an AI model on data that doesn’t accurately represent the population the solution is designed to support. For example, if an AI solution predicts health outcomes for a general population but the data used to train the algorithm is limited to senior citizens, there is a significant risk that the model’s predictions for other age groups will not be valid.

Similarly, selecting an inappropriate target variable for the prediction can introduce bias. For example, researchers discovered that a prediction algorithm widely used by health insurers to identify individuals likely to need health interventions exhibited significant bias. In that case, previous healthcare expenditures were used as a proxy for “health status” and to predict future needs. However, previous resource use is not an accurate predictor of healthcare needs for some population segments: minority populations receive fewer health services and accrue lower healthcare costs than other segments for a number of reasons (e.g., lack of insurance coverage), so the algorithm did not accurately represent their healthcare needs. Previous healthcare use was simply not a valid proxy for the health status of those population segments. To minimize bias in the source data, developers must sample large data sets and confirm that their training data accurately represents the population for which predictions are sought.
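
As a concrete illustration of this kind of representativeness check, the sketch below compares the demographic mix of a training set against the population the model is meant to serve. It assumes a pandas DataFrame of training records, a hypothetical `age_group` column, and made-up reference proportions; it is not drawn from any specific production system.

```python
# Minimal sketch: compare the demographic mix of a training set against the
# population the model is meant to serve. The column name ("age_group") and
# the reference proportions are hypothetical placeholders.
import pandas as pd

def representation_gap(train_df: pd.DataFrame,
                       column: str,
                       population_share: dict) -> pd.DataFrame:
    """Return each subgroup's share in the training data next to its share
    in the target population, plus the gap between them."""
    train_share = train_df[column].value_counts(normalize=True)
    report = pd.DataFrame({
        "train_share": train_share,
        "population_share": pd.Series(population_share),
    }).fillna(0.0)
    report["gap"] = report["train_share"] - report["population_share"]
    return report.sort_values("gap")

# Example usage with made-up numbers (training data skewed toward seniors):
train = pd.DataFrame({"age_group": ["65+"] * 800 + ["18-40"] * 120 + ["41-64"] * 80})
target = {"18-40": 0.35, "41-64": 0.40, "65+": 0.25}
print(representation_gap(train, "age_group", target))
```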

In addition, it’s critical to maintain careful provenance records for AI algorithms. These records detail the components, inputs, systems, and processes that affect the collected data. With this information, AI developers, implementers, and users can clearly understand where relevant data comes from and how it was collected.
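
The sketch below shows one way such a provenance record might be captured in code. The fields (source system, collection method, transformation list, content hash) follow the description above, but the schema and names are illustrative assumptions rather than a standard.

```python
# Minimal sketch of a data/model provenance record. The fields are illustrative
# assumptions, not a standard schema; real deployments would align them with
# the organization's data governance requirements.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class ProvenanceRecord:
    dataset_name: str
    source_system: str       # e.g., EHR extract or claims feed (hypothetical)
    collection_method: str   # how the data was gathered
    transformations: list    # preprocessing steps applied, in order
    content_hash: str        # fingerprint of the exact data version used
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def fingerprint(raw_bytes: bytes) -> str:
    """Hash the raw data so any later change to the file is detectable."""
    return hashlib.sha256(raw_bytes).hexdigest()

record = ProvenanceRecord(
    dataset_name="claims_2020_training_extract",   # hypothetical dataset
    source_system="claims-warehouse",              # hypothetical source name
    collection_method="monthly batch export",
    transformations=["de-identification", "ICD-10 code normalization"],
    content_hash=fingerprint(b"...raw dataset bytes..."),
)
print(json.dumps(asdict(record), indent=2))
```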

Finally, you’ll need a plan to tie these elements together into a single operating model, supported by compliance and monitoring protocols that include documenting model predictions, incoming data, security requirements, issues, and bugs.
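
One simple building block of such a monitoring protocol is logging every prediction alongside its inputs and model version, so that issues and bugs can be traced later. The sketch below assumes a hypothetical scikit-learn-style model object returning a numeric risk score; the field names and logging setup are illustrative.

```python
# Minimal sketch of a prediction-logging wrapper, so every model output is
# recorded with its inputs and model version for later audit. The model object
# and field names are hypothetical, not from any specific system.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_audit")

def predict_with_audit(model, features: dict, model_version: str) -> float:
    """Run a prediction and write an audit entry for compliance review."""
    prediction = model.predict([list(features.values())])[0]
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": features,
        "prediction": float(prediction),
    }))
    return float(prediction)
```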

One way to stand up and automate a repeatable, scalable, integrated AI system across your organization is through “AIOps.” Like DevOps for software development, AIOps is a set of processes, strategies, and frameworks for operationalizing AI in response to real-world challenges. AIOps combines responsible AI development, data, algorithms, and teams into an integrated, automated, and documented modular solution for AI development, sustainment, and high-impact, enduring outcomes.

Risk 2: Lack of Cybersecurity

Malicious actors are regularly and aggressively targeting health systems. In May and June 2021 alone, we’ve seen four major ransomware-related outages in the healthcare sector. In one incident, the threat actors behind the attack stole a trove of data for more than 150,000 patients.

AI systems offer many advantages, but they are also susceptible to cyberattacks and must be hardened. All components of the technical delivery stack, associated data sets, and enabling infrastructure can be targeted by adversarial attacks. Improper access credentials for development and production environments can create additional unintended vulnerabilities. AI carries an inherent risk of sensitive data spillage due to data aggregation and widespread use across an organization.

Trust is critical to AI adoption, and nothing puts trust at risk like the prospect of security attacks, data spillage, or data breaches. Because the healthcare community at large continues to be a top target for cyberattacks, it is critical to have foundational controls and measures in place to mitigate risk: a properly trained workforce, coupled with governance and data management processes that allow secure data access and an understanding of how and where data will be used.
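
As a small illustration of the governance side, the sketch below shows a role-based check before an action on training data or models is allowed. The roles and permissions are made up for illustration; a real deployment would integrate with the organization’s identity provider and log every access decision.

```python
# Minimal sketch of a role-based access check for AI development assets.
# Roles and permissions here are illustrative assumptions only.
ROLE_PERMISSIONS = {
    "data_scientist": {"read_deidentified_training_data"},
    "ml_engineer": {"read_deidentified_training_data", "deploy_model"},
    "auditor": {"read_access_logs"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True only if the role explicitly grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Example usage: engineers may deploy models; data scientists may not.
assert is_allowed("ml_engineer", "deploy_model")
assert not is_allowed("data_scientist", "deploy_model")
```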

Additionally, AI offers potential benefits for cybersecurity itself: it can be part of the cybersecurity solution, analyzing and detecting potential cyber risks and thereby further protecting AI’s use in healthcare.

Risk 3: Lack of ongoing monitoring and maintenance

AI tools may “shift” or “drift” over time as parameters and data change. Models operating in the field are subject to far more variables than in a “lab” setting and must be updated to keep pace with a changing environment if they are to remain accurate and reliable. Large, unanticipated changes in healthcare utilization, like those caused by the COVID-19 pandemic, are a good example of a global disruption that can affect a model’s outputs if not accounted for. That disruption to usual care patterns (e.g., suspension of elective surgeries, patients opting not to schedule appointments for fear of infection) would not have been predicted by an AI model trained on pre-pandemic care patterns. The example illustrates the importance of transparency in model parameters, so that corrections can be made when major or unanticipated changes occur.

Organizations can use systematic model monitoring to mitigate risks. Regular checks—supplemented by feedback from users who understand how the models work—can detect significant pattern changes, feedback loops, or anything else beyond established parameters for accuracy. You can then retrain your models and algorithms over time as conditions change. It’s essential to communicate regularly with the appropriate stakeholders about where data is coming from, how the data is being populated, and how that relates to decisions made from the outputs of the model.
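
One common, lightweight way to operationalize such a check (not prescribed here as the only approach) is to compare the distribution of a model input in recent data against a baseline window, for example with the population stability index (PSI). In the sketch below, the utilization data and the 0.2 alert threshold are illustrative assumptions.

```python
# Minimal sketch of a drift check: compare the distribution of one model input
# in recent data against a baseline window using the population stability
# index (PSI). The 0.2 alert threshold is a common rule of thumb, and the
# data below is made up for illustration.
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray,
                               bins: int = 10) -> float:
    """Higher PSI means the current distribution has drifted further from baseline."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    # Clip current values into the baseline range so every value is counted.
    current = np.clip(current, edges[0], edges[-1])
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Avoid division by zero / log of zero in sparse bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

baseline_visits = np.random.normal(10, 2, 5000)  # pre-pandemic utilization (made up)
current_visits = np.random.normal(6, 3, 5000)    # disrupted utilization (made up)
psi = population_stability_index(baseline_visits, current_visits)
if psi > 0.2:
    print(f"PSI={psi:.2f}: significant shift detected; consider retraining.")
```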

Shaping the future of AI

Now is the time for healthcare organizations to proactively shape the future of AI. One way is by purposefully addressing AI risks by creating well-calibrated organizational and project controls throughout the AI development and implementation life cycle. By doing so, the healthcare industry can maintain users’ trust in AI and realize its transformative potential.
