Even though many think of 2008 as the first time CMS began emphasizing quality over quantity via the Medicare Improvements for Patients and Providers Act (MIPPA), the truth is that we’ve been on a 30-plus-year journey to get to this point.
It really started when managed care became a hot trend, but eventually the friction between payers and providers grew too high, and patients were caught in the middle. Then came accountable care, which, while still important today, ultimately isn't creating change at the pace needed to make a considerable difference. So, here we are today with the rise of quality scoring and value-based contracting, which truly do hold tremendous opportunities to finally reduce costs and improve care quality.
However, in many ways, it's still the Wild West in value-based care, and it's imperative that adoption across the industry doesn't stall out like some of its predecessors. It's also imperative that we solve some of the same issues we've seen before, such as payer-provider friction and cost savings too small to matter.
So, what are some of the issues that still hold back the shift to value-based care?
First, the plainly obvious reality is that it still requires a highly manual process to execute, one that is completely incongruent with its complexity. Payers and providers exchange spreadsheets, PDF files, and reports generated from local databases filled with manually entered data, which creates a ton of room for errors.
And then there’s the fact that very little agreement exists across the industry on what should be included in the various measurements. There are hundreds of varying measurement models, certification requirements, scoring algorithms, and nonprofit organizations all vying to become the de facto standard for their own corner of the healthcare ecosystem.
And all of this makes it a daunting, and very costly, challenge for providers to keep up.
CMS continues to issue new incentive structures and performance measurements to hold providers accountable, and those then get translated and implemented in their own unique ways by payers in the commercial markets. So, if you're a mid-sized community hospital with a few different specialties, you might face an administrative reality of 43 different value-based contracting arrangements, each with its own set of important data and calculation methods, spread across eight to ten different payers, each requiring its own way to measure performance. In short, it isn't feasible.
At the end of the day, the very savings that value-based care was supposed to create end up gobbled up by armies of actuaries, lawyers, and administrators on both sides squabbling over the interpretation of data and computations. The lack of a common infrastructure is a killer to organizational bottom lines, which of course punishes those with fewer resources the most.
However, the headaches don't end there, especially with regard to disputed quality measurements, as the implications aren't limited to lower reimbursements. Disputed scores can also have tremendous marketing ramifications when various metrics are published publicly for consumers. Take the case of Rush University Medical Center in Chicago, which received lower Medicare star ratings than it deserved because of its own unique circumstances, namely that it often serves patients from lower socioeconomic backgrounds. While Rush ultimately took CMS to task on four specific counts, one of the biggest issues it called out was the failure to adjust for social determinants of health.
The Rush situation also underscores an important point about the rise of healthcare consumerism and why the core technology underpinning value-based contracting is an issue. Healthcare providers (hospitals, provider groups, labs, etc.) are all acknowledging that they need to ramp up consumer marketing as the competition for patients heats up.
However, like Rush, they all have circumstances that may affect performance measurement in some of the various quality-scoring frameworks. So, many will inherently look to both enhance marketing and adjust for their unique qualities by building their own quality-measurement algorithms, or incorporating existing ones, that they can then use to market to consumers or to their peers. However, without a core industry-wide system in place, this could be an extremely messy and confusing endeavor.
These are the three great ironies that exist within the current state of value-based care: it involves complex algorithms yet is still so manual; it is supposed to save money yet inherently creates costs; and it is meant to promote transparency about quality but often doesn't tell the whole story. All three boil down to technology.
We've seen this movie before, as it's a similar story to that of e-prescribing when I was leading a young Surescripts. There was great friction between pharmacies, PBMs, and providers (with patients in the middle again), much like today's friction between payers and providers over quality measurements and value-based care calculations. Rife with errors and inefficiencies, the process convinced almost everyone in healthcare that prescriptions should become electronic. However, those days likewise were the Wild West for e-prescribing, with no orderly way to connect payers, providers, and pharmacies.
Ultimately, it was left to the industry to figure itself out, and it still took many long years until the Obama Administration eventually created a committee to help establish industry standards. As for finally achieving full-scale adoption of value-based care, let's hope it doesn't take that long. But let's also hope we've learned the lessons of history.
Kevin Hutchinson has more than 30 years of experience in growing companies that deliver innovative services by using disruptive technology. Formerly the founding CEO and president of Surescripts, Kevin now serves as CEO of Apervita.