
Hospitals and healthcare systems are facing a growing cyber threat, one that’s being greatly accelerated by generative AI. While much of the public conversation around AI has focused on job displacement or deepfakes, AI’s role in cybercrime has expanded. Phishing attacks, in particular, have become more effective and easier to launch, posing serious risks to healthcare organizations, which already operate under intense operational pressure.
In the second half of 2024, phishing incidents surged by more than 700 percent – a spike that coincided with the mainstream adoption of generative AI tools. These tools are now being used to create convincing emails, fake login pages, and impersonation campaigns that target both patients and staff. And in healthcare, where digital literacy can vary widely and data is especially sensitive, the consequences can be severe, leading to data breaches, ransomware, and system outages.
The Rise of AI Phishing
Generative AI makes it extremely easy for nearly anyone to launch a phishing scam, removing many of the barriers to entry for cybercrime. Previously, nefarious actors needed a high level of technical expertise, but not anymore. Anyone who can use ChatGPT can now launch their own scam.
Of course, phishing isn’t new, but the emergence of generative AI has supercharged its capabilities. In the past, phishing attempts were often riddled with grammatical errors, formatting issues, or suspicious-looking links – clear giveaways that helped users spot and avoid them. But today’s AI-powered phishing scams are alarmingly convincing and sophisticated.
Generative AI tools can now craft emails that mimic internal communications, imitate the tone and formatting of official hospital correspondence, and even create highly realistic fake login pages in seconds. These AI-generated lures are designed to exploit trust and familiarity, making it far more likely that a user will click a link or enter sensitive credentials without hesitation.
For the healthcare sector, this is an especially serious risk. Hospitals and clinics serve a mix of internal users and external users – from employees logging into medical systems to patients and family members accessing portals. Many of these users may be unfamiliar with phishing tactics and could be more likely to trust realistic-looking login prompts or urgent alerts. The combination of accessible AI tools and a digitally inexperienced user base creates a perfect storm for credential theft.
Healthcare Is A Prime Target
The healthcare industry holds a distinct and precarious position in the cybersecurity landscape. Healthcare stores some of the most sensitive and valuable data available: patient records, medical histories, insurance information, and even genetic data. Unlike credit card numbers, which can be changed after a breach, a patient’s health information is permanent. That makes it highly attractive on the dark web, typically fetching considerably higher prices than financial data. The growing use of generative AI in phishing schemes only intensifies the risk.
But the value of healthcare data isn’t the only reason the sector is vulnerable. Many healthcare systems also face ongoing operational and staffing challenges that weaken their security posture. Outdated infrastructure, tight IT budgets, and high staff turnover make it harder to maintain up-to-date defenses. In fact, many hospitals are still in the process of migrating to the cloud or overhauling outdated identity and access management systems – changes that take time, money, and expertise to implement effectively.
Attackers know this, and they know that healthcare cannot afford downtime. The threat of care disruption makes hospitals more likely to pay ransoms or act quickly on false alerts. In recent ransomware incidents, hospitals have been forced to turn away patients, postpone surgeries, and scramble to restore essential digital services, all while dealing with regulatory fallout. As AI makes it easier to launch convincing phishing campaigns, those risks are only becoming more difficult to contain.
Always Identify Every User
To combat this rising tide of AI-enhanced phishing and social engineering, healthcare organizations need to rethink their defenses from the inside out. This begins with adopting an identity-first security model – an approach that shifts the focus from securing devices and networks to continuously verifying the people behind them.
At its core, identity-first security means that access to systems and data is governed primarily by who the user is, not just where they are or what device they’re using. It’s a shift from the traditional perimeter-based security model, which assumes that anything inside the network is safe, to one in which every access request is verified based on the user’s identity and behavior, regardless of location.
Strong authentication is the backbone of this strategy. Phishing-resistant authentication can thwart these attacks, neutralizing the impact of a clicked phishing link. Passkeys, for instance, are a phishing-resistant authenticator widely used across industries: each credential is uniquely tied to a specific app or website using a public-private key pair, so it cannot be tricked into authenticating to a lookalike site. If a healthcare organization cannot yet implement phishing-resistant authentication, multi-factor authentication (MFA) is the next line of defense and should be a baseline requirement for accessing systems. Even if a phishing attempt captures a username and password, MFA adds additional layers of verification – such as biometrics or time-sensitive codes – to block unauthorized access.
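The origin-binding behind passkeys can be sketched in a few lines. Real passkeys (WebAuthn/FIDO2) use asymmetric key pairs and a browser-enforced origin check; this simplified stand-in uses an HMAC secret per origin to stay dependency-free, and the domain names are hypothetical. The point it illustrates is that a credential registered for one site simply cannot answer a challenge from a lookalike phishing domain.

```python
import hmac
import hashlib
import secrets

# Simplified illustration of why passkeys resist phishing: each credential
# is bound to the origin (domain) where it was registered. A lookalike
# phishing site has a different origin and can never obtain a valid response.
# (Real passkeys use public/private key pairs; HMAC stands in here so the
# example runs with only the standard library.)

class PasskeyAuthenticator:
    def __init__(self):
        self._keys = {}  # origin -> secret created at registration

    def register(self, origin: str) -> None:
        self._keys[origin] = secrets.token_bytes(32)

    def sign_challenge(self, origin: str, challenge: bytes) -> bytes:
        # The authenticator only answers for origins it registered with.
        if origin not in self._keys:
            raise ValueError(f"no credential registered for {origin}")
        return hmac.new(self._keys[origin], challenge, hashlib.sha256).digest()

authenticator = PasskeyAuthenticator()
authenticator.register("hospital-portal.example")

challenge = secrets.token_bytes(16)  # server-issued, single-use
response = authenticator.sign_challenge("hospital-portal.example", challenge)

# A phishing page at a lookalike domain gets nothing usable back:
try:
    authenticator.sign_challenge("hospita1-portal.example", challenge)
except ValueError:
    print("phishing origin rejected")
```

Contrast this with a password, which the user can type into any page that looks right: the phishing site receives a fully reusable credential.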
Equally important is adopting zero-trust principles. Zero trust models require continuous validation of user identity, behavior, and the security status of the device being used, such as whether it’s encrypted, up to date, and free of known risks. This means that access to patient records or medication systems is granted only when all risk indicators align, every time.
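A zero-trust access decision of the kind described above can be sketched as a simple policy check: identity, device posture, and context are all evaluated on every request, with no implicit trust granted for being "inside the network". The field names below are illustrative, not drawn from any specific product.

```python
from dataclasses import dataclass

# Hypothetical zero-trust policy sketch: access is granted only when all
# risk indicators align, on every request.

@dataclass
class AccessRequest:
    user_verified: bool      # strong (ideally phishing-resistant) auth passed
    device_encrypted: bool   # disk encryption enabled
    device_patched: bool     # OS and software up to date
    known_device: bool       # enrolled/managed device
    unusual_location: bool   # anomalous sign-in context

def grant_access(req: AccessRequest) -> bool:
    # Every indicator must pass; any single failure denies access.
    return all([
        req.user_verified,
        req.device_encrypted,
        req.device_patched,
        req.known_device,
        not req.unusual_location,
    ])

# A verified clinician on a healthy, managed device is allowed:
ok = grant_access(AccessRequest(True, True, True, True, False))
# The same user on an unpatched device is denied, regardless of network:
denied = grant_access(AccessRequest(True, True, False, True, False))
print(ok, denied)  # True False
```

In practice these signals come from identity providers and device-management tooling, and policies are usually more granular (per system, per sensitivity level), but the shape of the decision is the same.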
But technology alone isn’t enough. A truly effective identity-first security strategy also includes continuous user education. Phishing emails – especially those enhanced by generative AI – can fool even the most experienced professionals. Regular awareness campaigns and simulated phishing exercises can help staff develop a reflex for spotting fake emails, verifying URLs, and reporting suspicious activity quickly. Hospitals can also provide basic guidance to patients accessing portals and telehealth services, ensuring that all users are part of the defensive front line.
The Path Forward
AI-enhanced phishing isn’t some far-off risk. It’s already reshaping the threat landscape. And as generative AI continues to advance, these attacks will only grow more convincing and more frequent. For healthcare organizations, the time to modernize security strategies is now, with identity as the foundation.
The consequences go far beyond financial loss or reputational damage. In healthcare, a breach can disrupt care, erode patient trust, and put lives at risk. By adopting identity-first security, hospitals and clinics can strengthen their defenses where it matters most, protecting not just systems and data, but the people who rely on them.
About Zack Martin
Zack Martin, senior policy advisor at Venable LLP, is a trusted advisor for clients across the cybersecurity ecosystem. Zack brings experience in the digital identity, cybersecurity, healthcare information technology (IT), and payment markets to the Privacy Group. A Certified Information Systems Security Professional (CISSP), he has in-depth knowledge of identity and access management (IAM), authentication, biometrics, and public sector challenges with identity systems. Zack has written multiple white papers and articles on citizen identity, identity proofing, authentication, authorization, and self-sovereign identity. He has also presented at cybersecurity events, garnering support from and building relationships with government officials and industry executives alike.