In recent years, the healthcare industry has fully embraced the potential of artificial intelligence (AI) to transform healthcare practice and delivery. From the advent of big data to the arrival of electronic health records (EHRs), the industry is familiar with the hype that accompanies technology advancements. While these innovations have certainly been useful in many regards, healthcare remains deeply fragmented, inefficient, and prone to waste.
Many experts believe that AI will deliver on the long-promised transformation of healthcare by providing the connective tissue that ties our other technology advancements together. The industry will finally be able to merge data from multiple sources to provide valuable insights at the point of care. Workflow advances and time-saving automation will help decrease the administrative burden for both providers and payers, while new consumer uses for AI-enabled healthcare will more effectively engage patients in their own care.
In fact, researchers estimate that AI applications could potentially reduce annual U.S. healthcare costs by $150 billion as early as 2026, due in large part to a systemic shift from a reactive to a proactive approach to medicine.1
AI Within Case and Utilization Management
With the availability and abundance of data comes the opportunity to transform case and utilization management to be more efficient and proactive. However, the human brain can’t possibly process and synthesize all the data in a timely manner, leaving most data underleveraged. This is where AI comes into play.
Predictive AI, an increasingly popular form, identifies patterns and trends within data to provide clinical predictions. Robotic process automation (RPA), another form of AI, is widely used to replace time-intensive manual tasks, such as the completion of medical reviews. When a significant portion of their administrative burden is lifted, case managers are free to spend time with patients who require additional focus.
But when it comes to patient care, can we rely solely on an algorithm to know what’s really best for the patient? Can any form of AI be intelligent enough to potentially “understand” the nuances and complexities of patients and the healthcare system? And when it comes to defending the care that was delivered, can providers simply claim that they followed what the AI told them to do?
Technology as a Navigation Aid
Of course, AI does not—and was never intended to—replace evidence-based medicine and clinical judgment. In my work with InterQual®, I see AI as a tool that complements clinical evidence and expertise.
When people share their reservations about the place of AI in clinical practice, I like to use the analogy of navigation. Hundreds of years ago, we relied on the stars and a compass to navigate; fifty years ago, we used a giant Rand McNally road map. Now, we have multiple GPS applications that not only give us directions but also inform us about traffic jams and suggest alternate routes. However, at the end of the day, we are still in the driver's seat: we make the decisions. We can choose to take a slower route if we want to make a stop along the way.
Clearly, the existence of technology does not necessitate our utter dependence upon it. Innovations like machine learning, natural language processing, and predictive analytics are simply tools that enhance our ability to make better, more informed decisions. They give clinicians insight into the bigger picture so that they can make the best possible decision.
By the same token, the existence of AI does not relieve us from the responsibility of making informed decisions. Any program that relies exclusively on AI to drive patient decisions—by relying on an algorithm alone to automate patient classification—is neither transparent nor trustworthy.
Pairing AI With Clinical Expertise, Evidence, and Care Management
Evidence-based medicine is the foundation of patient care. All of the uses for AI, from automating medical necessity reviews to providing advanced clinical insights, rest upon or complement that foundation. For patient care decisions to be clinically defensible, AI must be used in conjunction with evidence-based criteria, and the results evaluated by trained professionals using expert clinical judgment.
In decision support systems, it is vital that AI, clinical expertise, and evidence-based criteria work together. For example, before the decision to admit is made, predictive AI can provide the probability that a patient belongs in an inpatient or observation bed, which helps case managers prioritize patients. Once a patient is admitted, AI provides key insights into the length of stay and discharge destination, helping case managers adjust care accordingly.
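To make the prioritization idea concrete, here is a minimal sketch of how a predicted inpatient probability could rank a case manager's worklist. The model, patient fields, and probabilities below are entirely illustrative assumptions, not drawn from any specific product or algorithm.

```python
# Hypothetical sketch: ranking a worklist by a model's predicted
# probability of inpatient-level care. The scoring function here is
# a toy stand-in for a trained predictive model.

def prioritize_worklist(patients, predict_inpatient_probability):
    """Sort patients so those most likely to need an inpatient bed
    are surfaced first for case manager review."""
    scored = [
        (patient, predict_inpatient_probability(patient))
        for patient in patients
    ]
    # Highest predicted probability first.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

def toy_model(patient):
    # Illustrative placeholder only; a real model would weigh many
    # clinical features, not a single demographic field.
    return 0.9 if patient["age"] > 75 else 0.4

worklist = [{"name": "A", "age": 80}, {"name": "B", "age": 50}]
ranked = prioritize_worklist(worklist, toy_model)
# Patient "A" (probability 0.9) is reviewed before patient "B" (0.4).
```

The point of the sketch is the workflow, not the model: the prediction reorders attention, while the admission decision itself still rests with the clinician and the evidence-based criteria.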
The most effective and efficient decision support systems also leverage robotic process automation to map structured and unstructured clinical data from the EHR to codified, evidence-based criteria, automatically populating a medical review. Automated reviews not only reduce the administrative burden; they are also transparent and trusted by payers, as they contain embedded clinical data for each criterion.
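The mapping step described above can be sketched as follows. The criteria names, finding fields, and record locations are hypothetical placeholders chosen for illustration; a real system would work against codified criteria sets and EHR extracts.

```python
# Hypothetical sketch: matching extracted EHR findings against
# codified criteria points to pre-populate a medical review, with
# the supporting clinical data embedded for each criterion.

from datetime import datetime

# Illustrative criteria set (placeholder names, not a real standard).
CRITERIA = ["supplemental_oxygen", "abnormal_vital_signs"]

def populate_review(ehr_findings):
    """For each criterion, record whether it is met and attach the
    supporting finding (value, timestamp, record location) so the
    review is transparent to payers."""
    review = []
    for criterion in CRITERIA:
        finding = ehr_findings.get(criterion)
        review.append({
            "criterion": criterion,
            "met": finding is not None,
            "evidence": finding,
        })
    return review

findings = {
    "supplemental_oxygen": {
        "value": "2 L/min nasal cannula",
        "timestamp": datetime(2024, 3, 1, 8, 30).isoformat(),
        "location": "Flowsheet: Respiratory",
    }
}
review = populate_review(findings)
# One criterion is met with its evidence attached; the other is unmet.
```

Embedding the source finding alongside each criterion is what makes the automated review auditable: a reviewer can trace every "met" determination back to a specific data point in the chart.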
In these instances, AI functions as a workflow efficiency tool, helping case managers to prioritize their workload, support appropriate care, and begin transition planning well in advance. However, even the best AI and RPA technologies cannot tell the case manager what to do to help the patient progress along the optimal path; they cannot provide care guidance when there are delays, complications, or additional episode complexity. That's where evidence-based guidance comes into play: during the stay.
State-of-the-art evidence-based guidance provides admission considerations that account for social determinants of health and includes the expected clinical progress for that condition, along with what to do if the patient isn't progressing. It identifies the key actions needed to facilitate transition to the next level of care. As delays in care can lead to serious financial implications for the hospital, evidence-based guidance is essential for helping case managers get care back on track.
While AI is useful for increasing workflow efficiency and providing predictive insights, case managers need relevant evidence to support and defend their decision-making process. Evidence-based guidance can advise case managers on how to best coordinate care so that discharge planning and follow-up care are arranged appropriately to prevent readmissions.
Making Clinically Defensible Decisions
Because patients are complex, case managers cannot depend on AI alone to progress care: its predictions are never completely clear-cut. And AI's inherent lack of transparency makes a care decision based on AI alone very difficult to defend.
Medical necessity is about documenting why a patient requires a certain approach to care so that this approach can be supported downstream when dealing with payment and audit processes. Once care is provided and the claim submitted for payment, payers look for the clinical evidence that supports the claim of medical necessity.
Providers that rely on AI alone might answer this request by sending payers a simple score based on an AI algorithm, along with bits and pieces of clinical data that are potentially (but not necessarily) relevant to the decision to admit. This approach is not helpful in appealing a denial or defending against a RAC audit, as there is no complete information about how the AI algorithm derived its score.
To ensure the medical review is defensible, providers must align patient-specific data with evidence-based criteria to support medical necessity. That requires giving payers full visibility into the process of clinical decision-making, including date and time stamps for findings plus record locations for each data point. Payers can then easily see the connection between the patient's clinical condition and the evidence-based criteria used to justify the care delivered.
Ultimately, the goal of responsible AI is not to supplant the clinician’s expertise or eliminate the need for clinical evidence. The goal is to empower clinicians by giving them as many relevant data points as possible to help them make better decisions—and to provide that information as quickly as possible, as decisions are being made.
AI-enabled clinical decision support must be built on a foundation of evidence-based medicine in order to truly be of use. When paired with evidence-based criteria and clinical judgment, AI can support proactive, efficient, defensible care management to help care teams reduce denials, deliver appropriate care, and ultimately improve the patient experience.
About Laura Coughlin
Laura Coughlin is vice president of Clinical Innovation and Development at Change Healthcare, leading clinical content strategy and product innovation across the InterQual suite. She is a registered nurse and has served in leadership roles at Healthsource, New England Deaconess, and Olsten Kimberly Quality Care.
1Adam Bohr and Kaveh Memarzadeh, "The Rise of Artificial Intelligence in Healthcare Applications," Artificial Intelligence in Healthcare, June 26, 2020. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7325854/