
AI is no longer a concept of the future; it has become a practical tool helping hospitals and health systems extend care, ease workforce pressures and improve patient engagement. From virtual nursing models that add clinical capacity to AI-driven automation in call centers and back-office functions, healthcare organizations are finding new ways to manage workloads, reduce costs and improve patient experiences.
Despite its promise, AI in healthcare is not risk-free, and the stakes are high. Without strong safeguards around data integrity, cybersecurity and regulatory compliance, the same tools that can strengthen patient trust and quality of care can just as easily undermine them. Like any clinical intervention, AI must be deployed with clear guidance and human oversight. The organizations finding the most success with AI aren't the ones adopting it fastest, but those adopting it responsibly, pairing innovation with rigorous security and accountability.
AI’s expanding role and growing risks
AI is reshaping nearly every corner of healthcare. Providers are using it to automate tedious tasks like documentation, and to assist with diagnosis and care recommendations. Payers, too, are applying it to streamline claims processing, identify fraud and manage appeals.
Research from the American Medical Association found that physician use of AI in practice nearly doubled in 2024, from 38% to 66% – an “unusually fast” rate of technology adoption for the industry. The same report found that physicians are becoming increasingly familiar with various AI use cases, such as triage support, clinical documentation, surgical simulations and predictive analytics for health risks and treatment outcomes.
While this growth demonstrates how AI can drive efficiency and improve healthcare, it also highlights a growing need for responsible oversight, especially when it comes to patients. Behind every AI-driven workflow or diagnostic recommendation lies a complex set of algorithms that interpret data and detect patterns to generate insights. These systems are becoming deeply embedded in clinical decision-making, and as their influence expands, understanding their potential for bias and inaccuracy is critical.
Algorithms may be objective by design, but they are only as good as the data they're trained on. Even systems that perform well in testing can create unintended consequences in the real world. For example, some large insurers have faced lawsuits in recent years over AI algorithms alleged to have wrongfully denied coverage for medical services. Bias in AI systems, along with hallucinations and other factors, can lead to these outcomes even when there is no intent or awareness of harm. As a result, there is an ongoing need for human oversight, ethical governance and transparent communication with patients about how AI informs their care.
Securing AI in an era of increasing vulnerability
Cybersecurity is another critical piece of this conversation, yet it is often overlooked. Every digital health innovation relies on sensitive patient data, and as AI adoption grows, so do the scale and sensitivity of that data.
The February 2024 ransomware attack on Change Healthcare, the largest medical data breach in U.S. history, made that reality painfully clear. Hackers used stolen credentials to access an account without multifactor authentication, crippling claims and care processes across the country and affecting an estimated 190 million people. The incident revealed a new truth for the AI age: patient safety now depends as much on cybersecurity as it does on clinical care.
As care delivery becomes more dependent on technology, organizations must harden their defenses against growing cyber threats. Building resilience starts with foundational security practices, including:
- Staff training: A well-informed workforce is the foundation of strong security. Regular training sessions and phishing simulations tailored to departmental needs help foster a culture of awareness, accountability and continuous improvement.
- Multi-factor authentication (MFA): MFA should be mandatory for all system access, providing a critical second line of defense if credentials are compromised.
- Vendor verification: Supply chain attacks remain a leading threat to healthcare cybersecurity. Continuous monitoring of third-party partners helps identify vulnerabilities before they spread.
- Incident response planning: Attacks aren’t a matter of if, but when. A tested, well-practiced response plan is critical to minimizing disruption and maintaining care continuity.
- AI-driven defense: AI isn't limited to clinical use. Healthcare organizations can also deploy AI-enabled security tools to automate threat detection, sift through alerts and streamline incident response; the brief sketch after this list illustrates the idea. Even resource-constrained IT teams can use these tools to strengthen their resilience against modern threats.
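To make that last point concrete, the short Python sketch below is a purely illustrative example, not a production tool or any specific vendor's product, of the core idea behind AI-assisted alert triage: an unsupervised anomaly-detection model learns what normal login activity looks like and surfaces the few events that deviate, so a small security team reviews a handful of anomalies rather than thousands of raw log lines. The features and numbers here are hypothetical.

```python
# Illustrative sketch of AI-assisted alert triage using unsupervised
# anomaly detection. All features and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Each row is one login event: [hour_of_day, failed_attempts, new_device]
normal = np.column_stack([
    rng.normal(13, 3, 500),      # logins cluster around business hours
    rng.poisson(0.2, 500),       # failed attempts are rare
    rng.binomial(1, 0.05, 500),  # new devices are uncommon
])
suspicious = np.array([
    [3, 8, 1],                   # 3 a.m. login, many failures, new device
    [2, 6, 1],
])
events = np.vstack([normal, suspicious])

# Fit the model and flag the ~1% most anomalous events for human review
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(events)  # -1 = anomalous, 1 = normal

flagged = events[labels == -1]
print(f"{len(flagged)} of {len(events)} login events flagged for review")
```

The point is not the particular model but the workflow: machine triage narrows the field, and human analysts, much like clinicians overseeing AI-assisted diagnoses, make the final call.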
These practices are crucial for the protection of both data and patients. Security, privacy and reliability are all prerequisites for safe and effective AI deployment.
The path forward: Secure innovation for better care
As AI becomes more deeply embedded in healthcare operations, organizations must integrate governance and cybersecurity at every level, from data acquisition and model training to deployment and monitoring. Aligning AI governance with cybersecurity programs ensures that innovation advances without compromising safety or trust.
When implemented responsibly, AI can deliver tangible value: faster workflows, more precise diagnostics and, most importantly, better patient outcomes. But the organizations that truly lead in this next era of healthcare will be those that treat security, transparency and oversight not as compliance checkboxes, but as the foundation of innovation itself.
The future of healthcare will be shaped by those who move fast and build safely. Healthcare leaders must ensure that AI serves its ultimate mission: delivering care that is safe, effective, and centered on the patient.
About Scott Lundstrom
Scott Lundstrom is the Senior Industry Strategist for Health and Life Sciences at OpenText, a company helping organizations securely manage and connect data across the enterprise, transforming data into trusted, AI-ready information. He is a long-time industry analyst, CIO and software developer supporting complex regulated businesses in healthcare, life sciences and consumer goods. At AMR, he contributed to the original SCOR model and helped launch the Top 25 Supply Chain program. He founded the health industry practice at IDC Research, led that group for 13 years, and has also held research leadership roles focused on AI, cloud, SaaS, enterprise applications and analytics.
