Healthcare AI Governance: Implementing NIST Trustworthy AI and OWASP Security Guardrails

by Our Thought Leaders 04/30/2026
Building an 'AI-Ready' Healthcare Enterprise Using NIST and ISO Frameworks
Marty Barrack, Chief Legal and Compliance Officer, XiFin, Inc.

In Part 1 of this series, we covered the foundation of an AI-ready healthcare organization: an organization-appropriate governance framework, regulatory awareness, and an AI inventory.

Now, let’s discuss the other essential capabilities for AI readiness: risk management and guardrails.

  1. Apply Enterprise Risk Management (ERM) to AI

AI risk must be managed like other enterprise risks:

  • Establish a risk framework 
  • Define your organization’s risk appetite and tolerance
  • Identify available mitigation and risk transfer mechanisms (e.g., vendor contracts, insurance, indemnification)
  • Consider impacts on other workflows, such as the software development lifecycle (SDLC), human resources (HR), privacy, cybersecurity, compliance, and clinical oversight 

This risk lens must be applied when the AI system is first considered, and then reapplied whenever changes to the system are proposed.
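The risk-appetite comparison described above can be sketched as a simple risk-register entry. This is an illustrative model only: the likelihood-times-impact scoring, field names, and thresholds are assumptions, not part of any cited framework.

```python
from dataclasses import dataclass, field

# Hypothetical AI risk-register entry using a simple likelihood x impact
# score; names and thresholds are illustrative, not prescriptive.
@dataclass
class AIRiskEntry:
    system: str
    likelihood: int          # 1 (rare) .. 5 (almost certain)
    impact: int              # 1 (negligible) .. 5 (severe)
    mitigations: list = field(default_factory=list)  # e.g., vendor contracts

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    def within_tolerance(self, risk_appetite: int) -> bool:
        # Compare the residual score against the organization's stated appetite.
        return self.score <= risk_appetite

entry = AIRiskEntry("claims-coding-assistant", likelihood=3, impact=4,
                    mitigations=["vendor indemnification", "cyber insurance"])
print(entry.score, entry.within_tolerance(risk_appetite=10))  # 12 False
```

In practice the register would also capture impacts on adjacent workflows (SDLC, HR, privacy, cybersecurity, compliance, clinical oversight) and be revisited whenever the system changes.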

  2. Build an AI Governance Program with Named Accountability

An AI governance program should:

  • Be based on compliance and contract requirements
  • Use identified frameworks and guidance
  • Create a policy for internal and third-party AI use
  • Designate an executive accountable for AI governance and define authority
  • Establish an executive committee with cross-functional oversight
  • Incorporate AI development into a secure SDLC, providing security and privacy by design
  • Commit to ongoing updates as regulation and technology evolve

  3. Write a Policy That Encourages AI Use and Forces the Right Reviews

A comprehensive AI policy should:

  • Recognize limited resources and facilitate financial modeling and proper prioritization 
  • Encourage AI use where appropriate
  • Encourage innovative thinking
  • Require review of AI usage, scoped by the consequentiality of decision-making by the AI system
  • Define disclosure expectations
  • Drive compliance with organizational policies
  • Be updated for changes in regulation, contracts, industry trends, and technology
  • Document, audit, and refresh on a regular cadence

A policy that is too restrictive drives shadow AI. A policy that is too permissive drives unmanaged risk. The goal is controlled enablement. 
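The requirement to scope reviews by consequentiality can be made concrete by routing each proposed use through tiered sign-offs. A minimal sketch, assuming hypothetical tier names and review bodies:

```python
# Illustrative consequence-scoped review routing; the tier names and
# required sign-offs are assumptions for demonstration, not a standard.
REVIEW_TIERS = {
    "low": ["manager"],                                     # e.g., internal drafting aid
    "medium": ["privacy", "security"],                      # touches regulated data
    "high": ["privacy", "security", "clinical", "legal"],   # affects care or payment
}

def required_reviews(consequentiality: str) -> list[str]:
    """Return the sign-offs required before an AI use case goes live."""
    if consequentiality not in REVIEW_TIERS:
        raise ValueError(f"unknown tier: {consequentiality}")
    return REVIEW_TIERS[consequentiality]

print(required_reviews("high"))  # ['privacy', 'security', 'clinical', 'legal']
```

Routing low-consequence uses through a lightweight path is what keeps the policy enabling rather than restrictive.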

  4. Use “Trustworthy AI” as Your Guardrail Checklist

NIST provides a concrete set of trustworthiness characteristics that translate into operational controls.

Valid and Reliable: Define validation standards for outputs (e.g., accuracy, hallucination risk, and drift) and the revalidation cadence.

Safe: Identify where AI output could contribute to patient harm or inappropriate clinical or financial decisions and require human oversight where necessary.

Secure and Resilient: Integrate AI systems into cybersecurity tools, monitoring, incident response, and secure development requirements; plan for adversarial use and disruption. Address unique security concerns for AI systems. 

Explainable and Interpretable: Require clarity on what the system does and why it outputs what it outputs, especially when used in high-impact workflows.

Privacy-Enhanced: Implement data minimization, masking, retention/deletion controls; restrict training use and vendor reuse of sensitive data.

Fair with Harmful Bias Managed: Define bias assessment expectations and monitor outcomes across groups and contexts.

Accountable and Transparent: Document roles and responsibilities across the lifecycle; define transparency expectations for users, stakeholders, and regulators.
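For the "Valid and Reliable" characteristic, the revalidation cadence can be backed by a simple drift check that flags a model for human review when accuracy falls below its validated baseline. The metric and threshold below are illustrative assumptions:

```python
# Minimal sketch of an output-revalidation trigger: flag the model when
# observed accuracy drifts more than `max_drop` below the validated
# baseline. The 5-point threshold is an illustrative assumption.
def needs_revalidation(baseline_accuracy: float,
                       current_accuracy: float,
                       max_drop: float = 0.05) -> bool:
    """True when accuracy drift exceeds the allowed drop, triggering review."""
    return (baseline_accuracy - current_accuracy) > max_drop

print(needs_revalidation(0.92, 0.84))  # True: drifted ~0.08, beyond 0.05
```

The same pattern extends to other validation standards named above, such as hallucination rates, by swapping in the relevant metric.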

  5. Make Security AI-Aware

Beyond the cybersecurity concerns typical of any computer system, AI introduces its own set of security concerns that must be addressed. For example, OWASP’s 2025 Top 10 for Large Language Model (LLM) and Generative AI Applications highlights these concerns:

  1. Prompt injection
  2. Sensitive information disclosure
  3. Supply chain vulnerabilities
  4. Data and model poisoning
  5. Improper output handling
  6. Excessive agency
  7. System prompt leakage
  8. Vector and embedding weaknesses
  9. Misinformation
  10. Unbounded consumption
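Two of these risks, prompt injection and improper output handling, can be illustrated with naive screening functions. The patterns here are simplistic placeholders, not production defenses, which require layered controls:

```python
import re

# Toy guardrails for two OWASP LLM risks listed above: prompt injection
# and improper output handling. The patterns are illustrative only.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Screen user input for crude injection phrasing before it reaches the model."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def sanitize_output(model_output: str) -> str:
    """Treat model output as untrusted: strip markup before downstream rendering."""
    return re.sub(r"<[^>]+>", "", model_output)

print(flag_prompt_injection("Please ignore all previous instructions"))  # True
print(sanitize_output("Hello <script>alert(1)</script>"))
```

Treating both model inputs and model outputs as untrusted data is the common thread across most of the OWASP list.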

  6. Certain Risks to Consider
  • Lawfulness of training the AI system: Confirm sufficient legal rights to the materials used to train the AI system.
  • Lawful use of the AI system: Confirm sufficient license rights to the AI tool being used.
  • Inaccurate results:
  1. What risks arise if the results are inaccurate, including false positives (i.e., overinclusive) and false negatives (i.e., underinclusive)?
  2. What level of authority, if any, will the AI system have to make decisions?
  3. To what extent will personnel or customers rely on the AI system to make decisions, and what will those decisions entail?
  • Security: Maintain confidentiality, integrity, and availability through protections that prevent unauthorized access and use, paying particular attention to the unique security issues AI systems raise, and put appropriate security tools in place.
  • Privacy: Map data flows, identify any processing of protected information, and consider the privacy issues the AI system raises.
  • Safety: Will the AI system present material risks to life, health, property, the environment, employment, education, healthcare, health insurance, or financial services?
  • Supply chain vulnerabilities: What risks are associated with the AI system’s supply chain?
  • Model theft: What risks arise if the AI system is stolen or copied, in whole or in part?
  • Ethics: Could the AI system reasonably be used for unethical purposes?
  • Bias: Harmful bias can enter an AI system from many sources, including incomplete data sets and data sets with embedded biases. AI system decisions must be free of harmful bias: they must fairly and fully consider the impact on individuals and account for variances the system fails to identify or identifies incorrectly.
  • Social equity: Could the AI system reasonably be used to perpetuate or exacerbate existing social inequalities?
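A screening gate over risks like these can be sketched as an intake checklist that surfaces unresolved areas before a use case is approved. The question set and pass rule below are assumptions for illustration only:

```python
# Hypothetical intake screen covering a subset of the risk areas above;
# the question wording and pass rule are illustrative assumptions.
SCREENING_QUESTIONS = {
    "training_lawfulness": "Do we have sufficient rights to the training materials?",
    "license": "Do we hold adequate license rights to the AI tool?",
    "privacy": "Are data flows mapped and protected information identified?",
    "safety": "Have material risks to life, health, or finances been assessed?",
    "supply_chain": "Have supply-chain risks been evaluated?",
}

def screen(answers: dict[str, bool]) -> list[str]:
    """Return the risk areas still unanswered or answered unfavorably."""
    return [area for area in SCREENING_QUESTIONS if not answers.get(area, False)]

open_items = screen({"training_lawfulness": True, "license": True,
                     "privacy": True, "safety": True})
print(open_items)  # ['supply_chain']
```

Any open item blocks approval until the responsible reviewer resolves it, which keeps the checklist enforceable rather than advisory.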

Key Takeaways

As healthcare organizations move from AI governance theory to real-world implementation, a consistent theme emerges: success depends on establishing practical, enforceable guardrails that enable innovation without compromising safety, compliance, or operational integrity. 

To scale AI responsibly across diagnostics, radiology, pharmacy, and revenue cycle operations, and to ensure adoption is rapid, reliable, secure, and sustainable:

  • Build an AI governance program tailored to your anticipated use of AI and regularly review your AI activities and your governance program.
  • Screen proposed AI uses and apply appropriate guardrails to address the compliance, ethical, security, privacy, and financial concerns raised by each use.
  • Refresh your review of AI systems as your systems and the AI governance environment change. 
Tagged With: Artificial Intelligence

Copyright © 2026. HIT Consultant Media. All Rights Reserved.