Patient matching is one of the major challenges facing the US healthcare system. It is becoming much more difficult as we enter an era where the amount and diversity of patient data are exploding with patient portals, patient engagement applications, telemedicine, and personal health records.
Simultaneously, patient matching is becoming ever more important for patient safety, as well as for lowering costs, enabling population health analytics, and exchanging patient information with other providers, payers, HIEs, and government agencies.
Yet we have relied on the same technology to help us in this effort since the 1990s: the master patient index (MPI), which is simply a large database of every patient at a healthcare organization and which links all of a patient’s records across that organization’s departments and IT systems.
But unfortunately, MPIs have not kept pace with changing healthcare needs. Simply put, their matching algorithms require accurate and consistent patient demographic data to match patient records. But patient demographic data is notoriously out of date, riddled with errors, and incomplete. In fact, more than 30 percent of the demographic data in any given MPI is typically incorrect in some way.
To compensate, organizations must invest an enormous amount of extra money into their MPIs to improve their matching and their data. Because of this, the sad fact is that conventional MPIs have a much higher total cost of ownership (TCO) than advertised.
These extra unadvertised costs can be broken down into three main categories.
Conventional MPIs require tuning, early and often
As mentioned, the demographic data in MPIs is often faulty, for multiple reasons: phone numbers, email addresses, and even names change over time; spelling and transcription errors are common and unavoidable; and some data may be missing or entered with default values (like a birthdate of 01/01/1900).
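To make this concrete, here is a minimal sketch of the kind of data-quality checks that such records fail. The field names, placeholder values, and rules are hypothetical, chosen to illustrate the problem rather than to reproduce any vendor's actual logic.

```python
from datetime import date

# Hypothetical placeholder values that often signal missing or defaulted data
PLACEHOLDER_DOBS = {date(1900, 1, 1), date(1901, 1, 1)}
PLACEHOLDER_SSNS = {"000-00-0000", "999-99-9999", "123-45-6789"}

def flag_demographic_issues(record: dict) -> list[str]:
    """Return data-quality flags for one patient record."""
    flags = []
    if record.get("dob") in PLACEHOLDER_DOBS:
        flags.append("placeholder date of birth")
    if record.get("ssn") in PLACEHOLDER_SSNS:
        flags.append("placeholder SSN")
    if not record.get("phone"):
        flags.append("missing phone number")
    if not record.get("email"):
        flags.append("missing email address")
    return flags

# A record entered with a default birthdate and no contact information
patient = {"name": "Jane Doe", "dob": date(1900, 1, 1), "phone": ""}
print(flag_demographic_issues(patient))
# ['placeholder date of birth', 'missing phone number', 'missing email address']
```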
To compensate for this bad data and to account for unique patient populations, conventional MPIs' algorithms must be finely tuned when they are implemented. This is often a lengthy process that can last for months and requires a lot of manual effort and complex data science.
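What does "tuning" actually mean in practice? Most conventional MPIs score candidate record pairs with a probabilistic (Fellegi-Sunter-style) model, and tuning means adjusting the per-field agreement weights and decision thresholds for a given patient population. The sketch below is illustrative only; the weights and thresholds are hypothetical, and they are precisely the values that must be re-derived as the data changes.

```python
# Illustrative probabilistic (Fellegi-Sunter-style) match scoring.
# The weights and thresholds below are hypothetical; these are exactly
# the values that implementation teams spend months tuning for each
# patient population and each new data source.

FIELD_WEIGHTS = {
    "last_name": 4.0,
    "first_name": 2.5,
    "dob": 5.0,
    "phone": 3.0,
}
AUTO_MATCH_THRESHOLD = 10.0  # at or above: link records automatically
REVIEW_THRESHOLD = 6.0       # in between: queue for a data steward

def match_score(a: dict, b: dict) -> float:
    """Sum agreement weights over fields that are present and equal."""
    return sum(
        weight
        for field, weight in FIELD_WEIGHTS.items()
        if a.get(field) and a.get(field) == b.get(field)
    )

def classify(a: dict, b: dict) -> str:
    score = match_score(a, b)
    if score >= AUTO_MATCH_THRESHOLD:
        return "match"
    if score >= REVIEW_THRESHOLD:
        return "manual review"
    return "non-match"

# A transposed birthdate drops this pair from auto-match into the review queue
a = {"last_name": "Smith", "first_name": "Jon", "dob": "1980-04-02", "phone": "555-0100"}
b = {"last_name": "Smith", "first_name": "Jon", "dob": "1980-02-04", "phone": "555-0100"}
print(classify(a, b))  # 'manual review' (score 9.5, below the auto-match threshold)
```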
Moreover, MPIs must be re-tuned with each new data source to adjust for new data discrepancies. Conventional MPI vendors do not advertise this cost and effort as part of the total cost of ownership, and for good reason: this fine-tuning can cost $150,000 to $200,000 every three years, and again with every new data source.
Conventional MPIs require a lot of data stewardship
When an MPI isn’t sure about a match, that potential match must be manually reviewed and resolved by data stewards or health information management (HIM) professionals. Hundreds of thousands, or even millions, of potential matches can accumulate in a backlog as data stewards try to keep up.
To take a real-world example, one HIE with over 3.5 million patient identities in its MPI determined that 187,000 potential matches needed to be manually reviewed and resolved. That backlog would have taken four data stewards over two years to clear.
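The arithmetic behind that estimate is straightforward, assuming roughly 250 working days per steward per year (an assumption of ours, not a figure from the HIE):

```python
# Back-of-the-envelope arithmetic for the HIE example above
backlog = 187_000   # potential matches awaiting manual review
stewards = 4
years = 2

per_steward_per_year = backlog / stewards / years   # 23,375 reviews
per_steward_per_day = per_steward_per_year / 250    # ~94 reviews per day
print(round(per_steward_per_year), round(per_steward_per_day))  # 23375 94
```

At roughly 94 reviews per steward per working day, a fully staffed team is doing nothing but clearing backlog, even before new potential matches arrive.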
Ultimately, data stewardship ends up being a huge unforeseen cost of conventional MPIs, as each data steward can cost $40,000 to $50,000 per year. And without investing in these data stewards, MPIs become riddled with duplicates, leading to incomplete health records at the point of care and lost revenue from denied claims.
Conventional MPIs require upgrades every few years
Like any on-premises enterprise software, conventional MPIs must be upgraded every few years. These upgrades are often time-consuming as well as expensive, costing between $100,000 and $300,000 every three to five years.
The bottom line is that conventional MPIs have a much higher total cost of ownership than advertised. While vendors often include license and implementation costs, maintenance fees, and hardware in their cost calculations, they conveniently leave out the items described above. Those items add up to a much greater TCO than expected.
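A simple roll-up shows how quickly the hidden items change the picture. The advertised line items below are placeholders (they vary widely by vendor); the hidden line items use the ranges cited in this article, assuming a five-year horizon, two data stewards, and one tuning and one upgrade cycle.

```python
# A rough five-year TCO roll-up as (low, high) ranges.
advertised = {
    "license + implementation": (250_000, 250_000),  # placeholder
    "maintenance, 5 years":     (100_000, 100_000),  # placeholder
}
hidden = {
    "algorithm tuning":              (150_000, 200_000),
    "data stewards, 2 FTEs x 5 yrs": (400_000, 500_000),  # $40k-50k per FTE/yr
    "upgrade cycle":                 (100_000, 300_000),
}

adv_low = sum(lo for lo, _ in advertised.values())
total_low = adv_low + sum(lo for lo, _ in hidden.values())
total_high = (sum(hi for _, hi in advertised.values())
              + sum(hi for _, hi in hidden.values()))
print(f"advertised: ${adv_low:,}; actual: ${total_low:,} to ${total_high:,}")
# advertised: $350,000; actual: $1,000,000 to $1,350,000
```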
The Solution: Next-generation SaaS MPIs
Conventional MPI technologies are not only expensive; they have reached their accuracy limits and have begun to fail under the increasing demands for accurate patient matching. But a new generation of Software-as-a-Service (SaaS) MPI technologies has arrived to decrease costs, increase ease of use, and greatly improve matching accuracy.
For example, one type of SaaS MPI called a “universal” MPI leverages a new patient matching architecture called Referential Matching that is so accurate it requires no tuning and can reduce data stewardship workloads by 50-75%. And because these universal MPIs are SaaS-based, organizations can simply “plug into” them to gain a world-class patient matching solution in weeks, with no maintenance or upgrades required.
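To see why referential matching can sidestep tuning, here is a conceptual sketch. Instead of comparing two records’ demographics directly, each record is resolved against a continuously updated reference database of identities and their known demographic variants; two records match if they resolve to the same reference identity. The data and function names below are hypothetical, illustrating the idea rather than any vendor’s implementation.

```python
# Conceptual sketch of referential matching: resolve each record against
# a reference database of identities (built from third-party sources),
# then match on the resolved reference ID rather than raw demographics.

REFERENCE_DB = {
    # reference_id -> known demographic variants for one person
    "ref-001": [{"name": "Jon Smith", "phone": "555-0100"},
                {"name": "Jonathan Smith", "phone": "555-0199"}],
}

def resolve_to_reference(record: dict) -> str | None:
    """Return the reference identity that best explains this record."""
    for ref_id, variants in REFERENCE_DB.items():
        if any(record.get("name") == v["name"] or
               record.get("phone") == v["phone"] for v in variants):
            return ref_id
    return None

def referential_match(rec_a: dict, rec_b: dict) -> bool:
    ref_a = resolve_to_reference(rec_a)
    ref_b = resolve_to_reference(rec_b)
    return ref_a is not None and ref_a == ref_b

# Two records with different demographics still match via the reference
old = {"name": "Jon Smith", "phone": "555-0100"}
new = {"name": "Jonathan Smith", "phone": "555-0199"}
print(referential_match(old, new))  # True
```

Because the reference database already contains the name, address, and phone-number changes a person accumulates over time, there are no population-specific weights to tune.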
Importantly, these SaaS MPIs are not simply cloud-hosted versions of conventional MPI technologies, but rather have been built to leverage the true power of the cloud by offering revolutionary features not available with conventional MPI technologies – even ones that are hosted in the cloud.
SaaS technologies such as these will help to solve patient matching challenges while also reducing the TCO of MPIs.