From Governance to Enablement: How Healthcare CIOs Can Stop Killing AI Innovation

By Tony Pastorino, Director of Healthcare Practice, Resultant | 04/22/2026


AI governance in healthcare has a branding problem. The word “governance” alone is enough to make half the room shut down. I’ve seen it happen. You put that word on a meeting invite, and suddenly everyone assumes this is the conversation where someone tells them what they can’t do. That framing kills innovation before it starts.

I spent nearly a decade leading data and analytics teams at the largest healthcare system in Indiana, and we eventually stopped calling it governance altogether. We started referring to it as data enablement, and that small shift in language changed the entire tone of how people engaged with the work. Where governance sounds like a wall, enablement sounds like a door. Healthcare CIOs who want their organizations to adopt AI need to think carefully about how they use oversight as an enabling mechanism.

The groundwork isn’t new, but the scope is

There’s a common narrative right now that AI came out of nowhere and organizations are scrambling to build frameworks from scratch. That’s only partially true. Healthcare has been grappling with AI and machine learning for longer than it usually gets credit for. Imaging and radiology have used it for years. The same goes for medical device vendors applying predictive modeling to device data. What’s changed is the surface area. Generative AI has expanded the challenge significantly, touching clinical workflows, administrative operations, and patient-facing tools in ways earlier models never did.

What health systems need to figure out now is whether the compliance frameworks they built for imaging and device data hold up against a much broader set of AI capabilities.

Start with an enablement team

If your organization doesn’t have a cross-functional team dedicated to responsible AI adoption, that’s Priority Number One. Full stop. This team should include privacy and security experts, patient ethics specialists, data scientists, and licensed healthcare providers. The technical people need to understand the guardrails they’re operating within, and the clinical and ethics representatives need to understand what the technology can actually do. That cross-pollination is where responsible innovation happens.

I’ve sat in rooms where every person at the table was a technologist; the conversation moved fast but missed critical blind spots around patient impact. I’ve also been in rooms where compliance dominated, and every idea died on the vine. The right team has both perspectives, and neither side gets veto power. They define what responsible looks like together, and that definition gets applied consistently as new capabilities emerge.

Don’t lead with compliance 

Here’s where I think a lot of organizations get this wrong. They build a governance framework and then tell people to run every idea through it before they even get to brainstorm. Governance should enter the conversation at step two or three, once you’ve identified a high-value idea and need to figure out how to implement it responsibly. Put simply, innovation should come first. Let your people generate ideas freely. If you tell people they can only innovate inside a predefined box, you’re going to get incremental thinking at best. Incremental thinking isn’t going to solve the operational challenges health systems are facing right now. The organizations that separate idea generation from compliance assessment are the ones that will actually move.

What about Protected Health Information?

The PHI question often feels more daunting than it needs to be. Under HIPAA’s Safe Harbor de-identification standard, there are 18 specific types of data fields that constitute protected health information: names, geographic data smaller than a state, dates tied to individuals, Social Security numbers, medical record numbers, and so on.

The practical approach is straightforward. Determine whether any of those 18 field types are involved. If they are, validate that your team or vendor is following established de-identification standards and that there’s no viable path back to re-identification. Ask the question directly: what data are you training on, and how is identifiable information being protected?
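That first check, determining whether any of the 18 identifier categories are present, can be automated as a screening step before data ever reaches a vendor. The sketch below is purely illustrative: the field names and the category mapping are assumptions for the example, covering only a handful of the 18 Safe Harbor categories, not an official or exhaustive implementation.

```python
# Illustrative pre-screening: flag populated fields that fall into HIPAA
# Safe Harbor identifier categories before a dataset is shared for AI training.
# Field names and this partial mapping are assumptions for the example.
SAFE_HARBOR_FIELDS = {
    "patient_name": "names",
    "zip_code": "geographic data smaller than a state",
    "date_of_birth": "dates tied to individuals",
    "ssn": "Social Security numbers",
    "mrn": "medical record numbers",
}

def phi_fields(record: dict) -> list[str]:
    """Return the Safe Harbor categories touched by populated fields."""
    return [
        category
        for field, category in SAFE_HARBOR_FIELDS.items()
        if record.get(field)  # populated -> needs de-identification review
    ]

record = {"patient_name": "Jane Doe", "zip_code": "46032", "dx_code": "E11.9"}
print(phi_fields(record))
# ['names', 'geographic data smaller than a state']
```

A screen like this doesn’t replace a formal de-identification review; it simply makes the “are any of the 18 involved?” question cheap to ask on every dataset, so the expert review effort goes where it’s actually needed.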

Beyond de-identification, CIOs should be asking whether the AI’s output is explainable. Physicians have to be able to articulate the reasoning behind clinical decisions to their patients. Black-box decision support that can’t be explained is a liability, both legally and in terms of patient trust.

I’ve always used a simple gut check when evaluating whether an AI initiative is on the right track from a compliance standpoint: ask yourself whether what you’re doing puts you at even a medium risk of a news article about patient data being misused or leaked. If you look at every initiative through that lens, it tends to self-correct without needing a 50-page compliance manual for every project.

No health system wants to be the one explaining a data breach on the evening news. That healthy fear, channeled productively, is actually one of the best compliance mechanisms available.

Moving fast while staying compliant 

For CIOs feeling pressured to accelerate AI adoption faster than their compliance framework allows, the advice is simple: don’t put the compliance burden on your idea generators. Let them keep generating. Take the high-value ideas and see how they line up with your existing framework. If the framework needs to flex, adjust it. But don’t slow down the people whose job it is to think creatively about how AI can improve care, reduce costs, and give providers more time to do what they got into medicine to do.

The health systems that will lead in this next era are those that have figured out how to move responsibly without treating every new idea as a threat.


About Tony Pastorino

Tony Pastorino is the commercial health strategy director at Resultant. He brings deep healthcare domain expertise to his role, having led various components of information services at the largest Indiana healthcare system for nearly ten years. Today, Tony leads delivery on healthcare client engagements and the advancement of Resultant’s commercial healthcare practice. He is passionate about serving health systems’ urgent need to move beyond basic trend analysis and extract value from data investments through predictive and prescriptive analytics.


Tagged With: Artificial Intelligence
