Guardrails for AI in Medicare Risk Adjustment: Navigating Innovation Without Losing Control

By Arun Hampapur, PhD, Co-Founder & CEO, Bloom Value; Fellow, IEEE | 10/20/2025


Implementing AI in Risk Adjustment for Managed Care is like adding rocket fuel to your engine—from accelerating chart reviews to identifying coding opportunities in near real-time, AI can dramatically improve efficiency, accuracy, and compliance. But without the right safeguards, the same tools can just as easily magnify errors, introduce bias, and create costly regulatory exposure. 

As Managed Care organizations navigate this rapidly evolving landscape, a key question looms: How do we ensure AI remains honest, useful, and defensible?

The answer: implement the right guardrails. We don’t have to start from scratch; industries with zero margin for error, such as aviation, have spent decades perfecting systems to manage complex, high-risk operations. Applied thoughtfully to Medicare Risk Adjustment, these guardrails allow healthcare organizations to mitigate risk while unlocking AI’s full potential.

The Two Pillars of AI Guardrails for Risk Adjustment

The goal of AI guardrails in Medicare risk adjustment is twofold:

  1. Ensuring Accuracy and Correctness
  2. Ensuring Traceability and Accountability

Pillar 1: Ensuring Accuracy and Correctness

In Risk Adjustment, accuracy is non-negotiable. One incorrect HCC code can ripple through reimbursement, compliance, and patient records, creating operational and legal exposure. The principle is simple: eliminate preventable errors before they cause harm. 

Key guardrails include:

  1. Ensuring Human Oversight Through Expert Validation 

AI-assisted tools can significantly cut coding time (a 2025 randomized crossover trial found that coders using AI tools completed complex clinical notes 46% faster), but they lack the nuanced clinical understanding experienced professionals bring. Every AI-suggested code should be reviewed by a clinical coding expert before submission. Embedding the validation interface directly into the coding platform streamlines the process and avoids workflow disruption.

  2. Grounding AI Suggestions in Clinical Documentation  

To ensure defensibility, every flag must be tied to explicit, timestamped records – no unsupported codes. AI should automatically confirm supporting documentation (e.g., ICD-10 descriptors or diagnostic values) before sending a suggestion for review. A coding compliance lead or CDI specialist should own this guardrail, protecting against compliance risks and fostering provider trust. 
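The grounding rule above can be enforced mechanically before a suggestion ever reaches a reviewer: reject anything that lacks an evidence trail pointing at a specific, timestamped document. The sketch below assumes a simple dictionary shape for suggestions; the field names are hypothetical.

```python
from datetime import datetime

def grounded(suggestion: dict) -> bool:
    """A suggestion is defensible only if every evidence item names a source
    document and carries a timestamp; ungrounded suggestions are filtered out
    before human review rather than after submission."""
    evidence = suggestion.get("evidence", [])
    if not evidence:
        return False  # no supporting documentation at all
    return all(
        item.get("document_id") and isinstance(item.get("recorded_at"), datetime)
        for item in evidence
    )
```

Running this check upstream of the coder's queue means reviewers only ever see suggestions that are already traceable to the chart.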

  3. Clinician Feedback as a Learning Engine 

Establish mechanisms for providers to share structured feedback (ratings, comments, etc.) on each AI suggestion, with this input feeding directly into model retraining. Regular oversight by a clinical informatics lead or physician advisor, who can translate provider input into retraining data, ensures AI evolves with coding standards and real-world practices. 

  4. Preventing Overcoding, Fraud, and Abuse 

Without controls, AI can inadvertently drive upcoding. Recent Department of Justice investigations revealed that unsupported diagnoses inflated risk scores and led to millions in Medicare Advantage overpayments. Compliance safeguards should flag high-risk diagnoses, require second-level reviews, and align with CMS program integrity rules, monitored by a coding integrity officer or a liaison from the Special Investigations Unit (SIU).
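One common shape for such a safeguard is a routing rule: diagnoses on a high-risk watchlist that lack prior-year support go to second-level review instead of the standard queue. The watchlist below is entirely illustrative; in practice it would be maintained by the SIU or compliance team from CMS and OIG guidance.

```python
# Hypothetical watchlist of HCC categories frequently cited in overpayment
# actions; a real list would come from compliance and be updated regularly.
HIGH_RISK_CODES = {"HCC 18", "HCC 85", "HCC 108"}

def route_for_review(suggested_codes, prior_year_codes):
    """Split suggestions into a second-level review queue and the standard
    queue. New high-risk codes with no prior-year support get extra scrutiny."""
    second_level, standard = [], []
    for code in suggested_codes:
        if code in HIGH_RISK_CODES and code not in prior_year_codes:
            second_level.append(code)
        else:
            standard.append(code)
    return second_level, standard
```

The point of the rule is asymmetry: a high-risk code that also appeared last year is unremarkable, while the same code appearing for the first time is exactly the pattern overpayment investigations flag.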

Pillar 2: Ensuring Traceability and Accountability

When something goes wrong in aviation, investigators can reconstruct events through black box recorders, maintenance logs, and communication transcripts. This transparency builds trust and enables continuous improvement.

In Medicare risk adjustment, the methods must likewise be explainable, reviewable, and defensible. Key guardrails include: 

1. Creating Traceable Decisions with Transparent Logic 

Auditors need the “why” behind each submitted code; opacity is a liability. A 2025 study found clinicians trust AI more when it explains its recommendations clearly and ties them to specific clinical data. Explainable AI techniques, such as highlighting relevant data points or showing confidence scores, help reviewers trace decisions and build confidence.
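In practice, "showing the why" often means shipping each suggestion to the reviewer as a bundle: the code, a confidence score, and the exact chart excerpts the model relied on. The sketch below is a minimal illustration; the field names and the 0.70 triage threshold are assumptions, not a published standard.

```python
LOW_CONFIDENCE = 0.70  # assumed triage cutoff; a real value would be tuned

def explain(code, evidence_spans, confidence):
    """Bundle the 'what' with the 'why': the reviewer sees the chart text the
    model relied on and how confident it was, never a bare code."""
    return {
        "code": code,
        "confidence": round(confidence, 2),
        "evidence": [
            {"note_id": note_id, "excerpt": text}
            for note_id, text in evidence_spans
        ],
        # Low-confidence suggestions are routed to manual triage by default.
        "needs_triage": confidence < LOW_CONFIDENCE,
    }
```

Because the evidence travels with the suggestion, an auditor reviewing a submitted code months later sees the same justification the coder saw at decision time.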

2. Maintaining Fairness Through Ethics and Bias Monitoring 

AI can perpetuate inequities. A 2023 systematic review found six common bias types in EHR-trained AI models. Structured fairness audits should monitor disparities across race, gender, age, and geography, with adjustments made as needed. Oversight on bias reviews and policy updates should rest with an AI ethics lead or cross-functional governance committee. 
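A structured fairness audit of this kind can start from something very simple: compare the model's suggestion-acceptance rate across demographic groups and track the largest gap over time. The sketch below assumes audit records arrive as (group, accepted) pairs; real audits would also control for case mix.

```python
from collections import defaultdict

def acceptance_rates_by_group(records):
    """records: iterable of (group_label, accepted) pairs.
    Returns the per-group acceptance rate so an audit can spot disparities."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, was_accepted in records:
        totals[group] += 1
        accepted[group] += int(was_accepted)
    return {g: accepted[g] / totals[g] for g in totals}

def max_disparity(rates):
    # A single scalar the governance committee can trend and set alerts on.
    return max(rates.values()) - min(rates.values())
```

A widening disparity between groups does not prove bias by itself, but it is exactly the kind of signal that should trigger the deeper review the governance committee owns.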

3. Version Control and Comprehensive Documentation for Full Traceability

Treat AI models like enterprise software: rigorously version-controlled, timestamped, and fully documented. Maintain a centralized knowledge base capturing model configuration, training data snapshots, validation protocols, and rationale for changes. Ownership should rest with a designated compliance or governance lead, such as a platform architect or AI lifecycle manager, who is accountable for documentation fidelity, audit readiness, and change control across all deployed models.

4. Ongoing Audit Readiness 

Make audit readiness an always-on process, not a quarterly scramble. A compliance or internal audit lead should monitor real-time audit logs, ensure every code suggestion and validation step is recorded, and use dashboard-driven alerts to surface anomalies.
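The "black box recorder" analogy from aviation translates naturally into an append-only audit log in which each entry is timestamped and chained to the previous entry's hash, so gaps or after-the-fact edits are detectable. This is a hedged sketch of the idea, not a description of any particular audit system; event types and field names are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_event(log, event_type, payload, actor):
    """Append-only audit trail: each entry records who did what and when, and
    includes the previous entry's hash so tampering breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "type": event_type,      # e.g. "suggestion", "validation", "submission"
        "actor": actor,          # model version or human reviewer id
        "payload": payload,
        "prev_hash": prev_hash,
    }
    # Hash the entry contents (before the hash field exists) for the chain.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry
```

With a log like this, "reconstructing the flight" for an auditor is a linear walk down the chain: every suggestion, every validation, every submission, in order and attributable.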

Conclusion 

AI offers enormous promise for Medicare Risk Adjustment—speeding suspect identification, surfacing hidden opportunities, and driving revenue optimization. But without the right guardrails, it can quickly become a liability: generating unsupported codes, triggering audits, and alienating providers. 

By anchoring your AI strategy in these guardrails, you create a system that is not only faster and smarter but also defensible by design.


About Arun Hampapur, PhD

Arun Hampapur, PhD, is the Co-Founder and CEO of Bloom Value, a company leveraging AI/ML, big data, and automation to improve the financial and operational performance of healthcare providers. A former AI/ML leader at IBM Research, he holds 150+ US patents and is an IEEE Fellow.
