If you’ve read the healthcare IT trade press recently, you’ve no doubt seen stories about the clinical usability of EHRs and how it can be improved. Look at the issue closely and, ultimately, there are two approaches to “breaking the usability barrier” of these systems:
1. Provide EHR users with instant access to relevant clinical information for any condition a patient presents with. This information should be connected to clinically responsive workflows that mirror the way physicians and nurses think and enable them to get all their work done at the point of care: documentation, quality measures, condition-specific protocols, and diagnostic and E&M coding, all while managing clinical risk for value-based care.
2. Design a system that clinicians will largely avoid, even ignore, and then use analytics, artificial intelligence, and other technologies to handle all coding, risk management, compliance, and documentation “cleanup” after the encounter.
Considering the lack of clinical functionality in current EHRs, the level of excitement surrounding the technologies in approach number 2 is understandable. Just think about using ambient artificial intelligence (AI) to capture sound during an encounter, using speech recognition to turn it into text, processing the text with natural language processing (NLP) to convert it to data, then applying analytics to evaluate the encounter and populate the patient record––all so the provider can avoid using the unusable.
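To make the post-encounter approach concrete, here is a minimal sketch of that pipeline in Python. Every function is a hypothetical stand-in: a real system would call an ambient-capture service, a commercial speech-recognition engine, and a clinical NLP model rather than the toy logic shown here.

```python
# Hypothetical sketch of the "approach 2" pipeline: ambient audio -> text ->
# structured data -> chart. All function bodies are invented stand-ins.

import re

def transcribe(audio: bytes) -> str:
    """Stand-in for speech recognition: pretend the audio decodes to this text."""
    return "Patient reports chest pain and shortness of breath."

def extract_findings(text: str) -> list:
    """Stand-in for NLP: naive keyword matching against a tiny invented lexicon."""
    lexicon = ["chest pain", "shortness of breath", "fatigue"]
    return [term for term in lexicon if re.search(term, text, re.IGNORECASE)]

def populate_record(findings: list) -> dict:
    """Stand-in for the analytics step that writes structured data to the chart."""
    return {"findings": findings, "status": "pending clinician review"}

record = populate_record(extract_findings(transcribe(b"...")))
```

Note that even in this toy version, the record ends in a "pending clinician review" state: the cleanup happens after the encounter, not during it.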
Sounds idyllic, right?
But consider the hard truth: when we enable clinicians to largely ignore the EHR, what do we lose?
Speech recognition is approaching 98% accuracy in converting sound to text. NLP, while improving, still has error rates high enough to mandate manual review and correction before the output reaches acceptable accuracy. And that correction happens after the fact, which means that in addition to problems with data fidelity, there is a lag between when information is captured and when the clinician can act on it at the point of care.
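The fidelity problem compounds across stages. The 98% speech-recognition figure comes from the text above; the 90% NLP extraction accuracy below is a made-up number used purely to illustrate the arithmetic:

```python
# Why per-stage accuracies compound: a datum survives the pipeline only if
# both the speech-recognition stage AND the NLP stage get it right.
# The 90% NLP figure is a hypothetical assumption for illustration.

asr_accuracy = 0.98   # cited in the article
nlp_accuracy = 0.90   # invented for this example

end_to_end = asr_accuracy * nlp_accuracy
print(f"{end_to_end:.0%}")  # prints 88%, lower than either stage alone
```

Even generous per-stage numbers leave a meaningful fraction of the record needing human cleanup, which is exactly the review burden described above.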
This is not to say that voice control of systems is misguided. The technology has great promise for navigation, command, and control, especially where clinical data fidelity is not a concern. However, with the shift to value-based care and the focus on effective management and treatment of chronic conditions, it is crucial that hallmark indicators of disease status and progression are reliably and instantly available to providers at the point of care. This requires that the spoken word be converted into actionable, structured clinical data that can be diagnostically filtered for presentation to the user.
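"Diagnostically filtered" can be sketched as a simple set operation: given structured findings from an encounter, present only those that bear on the condition being worked up. The condition names, finding terms, and relevance map below are all invented for illustration; a real relevancy engine would use curated clinical knowledge, not a hard-coded dictionary.

```python
# Hypothetical sketch of diagnostic filtering of structured clinical data.
# RELEVANCE maps a condition to the findings that inform it (invented data).

RELEVANCE = {
    "CHF":  {"edema", "dyspnea", "weight gain"},
    "COPD": {"dyspnea", "chronic cough", "wheezing"},
}

def filter_for_condition(findings, condition):
    """Return only the structured findings relevant to the selected condition."""
    return set(findings) & RELEVANCE.get(condition, set())

encounter_findings = {"dyspnea", "weight gain", "ankle sprain"}
chf_view = filter_for_condition(encounter_findings, "CHF")
# chf_view -> {"dyspnea", "weight gain"}; the unrelated finding is hidden
```

The point of the sketch: filtering like this is only possible once the spoken word has become structured data, not free text.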
With this type of instantly available and reliable clinical data, users can act at the most appropriate time––the point of care––rather than waiting for post-encounter processing and cleanup of patient information.
There is promising technology coming later this year that will provide voice-to-data (not just voice-to-text) capabilities in real time, combining speech recognition, NLP, and clinically dynamic command and control with a clinical data relevancy engine. This combination will let users quickly navigate the EHR, see all relevant information for any problem, take timely and appropriate action, fulfill documentation, coding, and quality requirements, and capture clean, structured clinical data.
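The command-and-control half of that combination can be illustrated with a tiny dispatch table: a recognized utterance maps to an EHR navigation action. The command phrases and action strings here are hypothetical, not taken from any actual product.

```python
# Hypothetical sketch of voice command-and-control: map a recognized
# utterance to an EHR navigation action. All entries are invented.

def dispatch(utterance):
    """Return the navigation action for a recognized voice command."""
    commands = {
        "show labs": "navigate:results/labs",
        "show medications": "navigate:medications",
        "start note": "open:documentation",
    }
    return commands.get(utterance.lower().strip(), "unrecognized")

action = dispatch("Show labs")  # -> "navigate:results/labs"
```

Because navigation commands carry no clinical data, a misrecognition here costs only a retry, which is why the article calls command and control the lower-risk use of voice.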
All of this will happen at the point of care, controlled by voice and powered by a clinical data relevancy engine that shows users what they need, when they need it, with links to the workflows that complete the encounter (and its coding and quality requirements), all while the provider is still with the patient.
When this is a reality, clinicians will finally be able to transition from “EHR avoidance” to “EHR engagement.”
About David Lareau
David Lareau is Chief Executive Officer of Medicomp, where he is responsible for operations and product management, including customer relations and marketing. Prior to joining Medicomp, Lareau founded a company that installed management communication networks in large enterprises such as The World Bank, DuPont and Sinai Hospital in Baltimore. The Sinai Hospital project, one of the first PC-based LAN systems using email and groupware, was widely acknowledged as one of the largest and most successful implementations of this technology.