In 2022, the Centers for Medicare & Medicaid Services (CMS) established health equity as a pillar of its future work. Program integrity staff in every state Medicaid program, and federal program staff working on Medicare, must consider the roles of both program integrity and analytics when combating fraud, waste, and abuse (FWA) in the healthcare system.
CMS defines health equity as “the attainment of the highest level of health for all people, where everyone has a fair and just opportunity to attain their optimal health regardless of race, ethnicity, disability, sexual orientation, gender identity, socioeconomic status, geography, preferred language, or other factors that affect access to care and health outcomes.”
Experts in the field have grown increasingly concerned that implicit bias in FWA investigations could negatively impact specific populations of healthcare recipients and providers. However, designers of program integrity analytics must make use of a different kind of data bias in order to be effective. It is essential to distinguish between unintentional implicit bias introduced through attribute selection and intentional algorithmic bias introduced through sampling methodology.
Health outcomes and Medicaid program integrity
Health outcomes are a function of the quality of care. CMS has been focusing heavily on quality of care for the last decade. Healthcare providers willing to commit fraud often do so at the expense of quality care. Such a singular focus on financial gain indicates a view of patients as a financial transaction, where what can be billed (legally or not) takes precedence over the care delivered. Simply put, providers that commit fraud are willing to jeopardize patient care, contrary to the CMS goals for health equity. Therefore, program integrity staff in both Medicaid and Medicare have a responsibility to weed out these bad providers.
Program integrity is a balancing act. Too much effort in one direction can create provider abrasion (i.e., the perception that it is difficult to work with the healthcare program). Provider abrasion can exacerbate the access-to-care issues Medicaid already faces, leading to inadequate provider networks. Understandably, Medicaid and Medicare agencies attempt to reduce that abrasion. However, easing off enforcement does more than risk poorer health outcomes; it poses a genuine and tangible threat to patient safety. The goal is to combat FWA throughout the healthcare system successfully.
Health equity must consider patient safety in cases of FWA
The worst healthcare outcome a patient can face is death or direct harm. Patient safety can be affected in two ways: (1) receiving unnecessary treatments and (2) not receiving necessary treatments and services. Consider the fraud case of Scott Charmoli, DDS, for a grave example of unnecessary treatment. For years, Charmoli schemed to defraud dental insurance companies by billing for unnecessary crowns, even breaking patients’ teeth intentionally to justify the procedures.
Not receiving necessary services, which clearly qualifies as a driver of health inequity, can result in patient deaths, such as in the case of Mikayla Norman. Mikayla, a 14-year-old with cerebral palsy, weighed only 28 pounds at the time of her death. Providers billed for home and community-based services to care for her but did not provide adequate care. These tragedies are far too common when bad providers are allowed to continue practicing unchecked.
Medicaid agencies may be reluctant to fully employ their program integrity capabilities, fearing provider abrasion may create access to care concerns. This mindset is counterproductive and skirts Medicaid and Medicare’s responsibilities for health equity. This is especially true for patients at higher risk of harm and abuse, like those covered under federal Medicaid waivers designed to allow those patients to remain in the community. Generally, these are patients with a skilled nursing level of care, such as senior citizens, medically fragile patients like Mikayla Norman, and patients diagnosed as intellectually and developmentally disabled (IDD).
A failure to effectively pursue healthcare program integrity is an admission that these types of patients won’t receive the enhanced program oversight that health equity requires. Not aggressively pursuing FWA may be an invitation for providers that CMS already identifies as high risk for fraud (i.e., home health, transportation, etc.) to commit the types of fraud that put patients at risk of harm and abuse, as well as leading to poorer health outcomes.
Health equity and analytics: Why bias is not always a bad thing
Some states have expressed concern about how artificial intelligence (AI) and machine learning (ML) are applied to program integrity and FWA detection. Specifically, the concern is that providers serving protected classes will be disproportionately targeted. This perceived risk could discourage providers from treating those populations, reducing access to care.
This implicit bias can happen when algorithms unintentionally include attributes that correlate with a particular demographic. Consider an algorithm that detects cosmetic surgeries fraudulently billed under insurance as medically necessary. Without careful consideration, this could inadvertently target women undergoing reconstructive surgery after a cancer-based mastectomy. Ideally, AI and ML platforms have built-in tools to help detect and resolve analytics bias and model monitoring to ensure bias is not introduced over time.
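One simple check along these lines is to measure how strongly each model attribute correlates with a protected characteristic, flagging likely proxies for human review. The sketch below is purely illustrative — the feature names and the `flag_proxy_features` helper are invented for this example and do not come from any particular program integrity platform — and real bias-detection tooling uses more sophisticated measures than a single correlation threshold.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def flag_proxy_features(features, protected, threshold=0.6):
    """Return names of candidate model features whose correlation with a
    protected attribute exceeds the threshold -- these are potential
    proxies that warrant review before the model is trained."""
    return [name for name, values in features.items()
            if abs(pearson(values, protected)) > threshold]

# Toy data: "zip_cluster" tracks the protected attribute closely,
# so it is flagged; "claim_count" does not, so it passes.
protected = [1, 1, 1, 0, 0, 0]
features = {
    "zip_cluster": [0.9, 0.8, 1.0, 0.1, 0.2, 0.0],
    "claim_count": [3, 1, 2, 2, 1, 3],
}
print(flag_proxy_features(features, protected))  # ['zip_cluster']
```

Flagged attributes are not automatically removed — an analyst decides whether each one carries legitimate fraud signal or merely encodes demographics.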
Fraud analytics, by nature, seeks out rare occurrences; only a small percentage of providers commit fraud. Algorithms optimized for overall accuracy will naturally skew toward the more common behavior – the 99% or so of providers who don’t commit fraud. To account for this, data scientists use intentional bias selection techniques to emphasize the fraud examples while minimizing the influence of the bulk of the data. However, the analytics must focus on the behaviors of bad providers without regard to groupings such as age, race, or gender.
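Random oversampling of the minority class is one of the simplest forms of this intentional sampling bias. The sketch below is a minimal illustration, not any production method: the `is_fraud` label and `oversample_minority` function are assumed names, and practitioners typically reach for library implementations (or class weighting) rather than hand-rolled resampling.

```python
import random

def oversample_minority(records, label_key="is_fraud", seed=0):
    """Intentional sampling bias: duplicate minority-class records
    (here, the rare fraud examples) until both classes are the same
    size, so a model trained on the result cannot simply ignore fraud."""
    rng = random.Random(seed)
    positives = [r for r in records if r[label_key]]
    negatives = [r for r in records if not r[label_key]]
    minority, majority = ((positives, negatives)
                          if len(positives) < len(negatives)
                          else (negatives, positives))
    extra = [rng.choice(minority)
             for _ in range(len(majority) - len(minority))]
    return majority + minority + extra

# Toy data: 2 fraud records in 10 becomes 8 of 16 after balancing.
records = [{"is_fraud": True}] * 2 + [{"is_fraud": False}] * 8
balanced = oversample_minority(records)
print(sum(r["is_fraud"] for r in balanced), "of", len(balanced))
```

Note that the resampling key is the fraud label itself, never a demographic attribute — which is exactly the distinction between acceptable intentional bias and the implicit bias described above.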
Program integrity data analysts must be aware of historical biases that are potentially harmful. Where historical data on fraud convictions and fraud cases is used to train supervised models, there is a risk that these training data sets contain bias because of how humans previously, and unwittingly, determined which cases to investigate and prosecute. Care should be taken to fully examine and vet these data sets.
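One basic vetting step is to compare historical investigation or referral rates across groups before using those outcomes as training labels. The sketch below is hypothetical — the `referral_rates` helper, the `provider_type` grouping, and the `referred` field are all invented for illustration — and a large rate gap is only a prompt for further review, not proof of bias on its own.

```python
def referral_rates(cases, group_key, referred_key="referred"):
    """Rate of historical fraud referrals per group. Large disparities
    between groups may signal that past human case-selection decisions
    baked selection bias into the labels a model would learn from."""
    totals, referred = {}, {}
    for case in cases:
        group = case[group_key]
        totals[group] = totals.get(group, 0) + 1
        referred[group] = referred.get(group, 0) + case[referred_key]
    return {group: referred[group] / totals[group] for group in totals}

# Toy data: home health cases were referred twice as often as hospital
# cases, a disparity an analyst would want to investigate.
cases = [
    {"provider_type": "home_health", "referred": 1},
    {"provider_type": "home_health", "referred": 1},
    {"provider_type": "home_health", "referred": 0},
    {"provider_type": "hospital", "referred": 0},
    {"provider_type": "hospital", "referred": 0},
    {"provider_type": "hospital", "referred": 1},
]
print(referral_rates(cases, "provider_type"))
```

The same audit can be repeated over any grouping of interest (provider type, geography, patient population) before the labels are trusted for model training.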
About John Maynard
John Maynard is a fraud and risk solutions specialist for analytics provider SAS. An expert in fraud and risk, specializing in healthcare and government, John served in government for more than 25 years and has a broad background in federal, state, and local programs. A former auditor, John has experience with healthcare providers, banking, insurance, and financial services in the private sector.
About Tom Wriggins
With over 30 years of healthcare experience, Tom Wriggins brings practitioner-level expertise to his role as a Principal Industry Advisor with SAS. Tom combines extensive clinical experience with data and analytics knowledge to help government healthcare entities crack down on fraud and improper payments. He has led multidisciplinary teams that have delivered large and complex data solutions for government health agencies, as well as created fraud and abuse investigative training programs.