The outcome of the recent election caught many people, and many forecasters, by surprise. How could their predictions have missed the mark so badly? Granted, some people predicted the outcome more accurately, but many of those who used data models to analyze the likely outcome are now left scratching their heads and running postmortems to improve their methods.
In their book Superforecasting: The Art and Science of Prediction, authors Philip Tetlock and Dan Gardner describe a subset of people who, on average, are significantly more accurate in their ability to predict upcoming events. “What makes them so good is less what they are than what they do—the hard work of research, the careful thought and self-criticism, the gathering and synthesizing of other perspectives, the granular judgments and relentless updating.”
What does this mean for healthcare? I’m not talking about the impact of the new presidency on health policy and healthcare delivery (that’s another discussion) – I’m talking about whether predictive analytics is really all that accurate in the first place. Where does it fail?
The strengths and weaknesses of predictive analytics
Predictive Analytics in healthcare, a buzzword in the industry for a couple of decades, is the science of determining which populations are likely to become ill, what the health and cost implications of that are, and what might be done by way of pre-illness intervention to change the outcome. About 20% of the population consumes 80% of healthcare dollars. But those who are catastrophically ill this year are not necessarily the ones who will become catastrophically ill next year – the high-cost cohort, though a consistent finding year after year, is composed of different individuals each year. The goal of medical Predictive Analytics is to figure out who will likely drop into that high-cost bucket next year, and what can be done to reduce that risk.
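The “who lands in next year’s high-cost bucket” question is, at heart, a scoring problem. As an illustration only (the feature names, weights, and threshold below are invented for this sketch, not clinically validated coefficients), a risk-stratification pass might look like:

```python
import math

# Hypothetical per-member features with made-up weights; a real model
# would learn these from historical claims and clinical data.
WEIGHTS = {
    "chronic_conditions": 0.9,   # count of chronic diagnoses
    "prior_year_admits":  1.2,   # inpatient admissions last year
    "age_over_65":        0.6,   # 1 if over 65, else 0
}
BIAS = -4.0

def high_cost_risk(member: dict) -> float:
    """Logistic-model sketch: probability-like score that this member
    lands in next year's high-cost cohort."""
    z = BIAS + sum(WEIGHTS[k] * member.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def stratify(members: list, threshold: float = 0.5) -> list:
    """Flag members whose predicted risk crosses the intervention threshold."""
    return [m for m in members if high_cost_risk(m) >= threshold]

members = [
    {"id": "A", "chronic_conditions": 4, "prior_year_admits": 2, "age_over_65": 1},
    {"id": "B", "chronic_conditions": 0, "prior_year_admits": 0, "age_over_65": 0},
]
print([m["id"] for m in stratify(members)])  # only member A is flagged
```

The point of the sketch is the shape of the pipeline, not the numbers: score every member, then direct pre-illness interventions at those above the threshold.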
Much of risk stratification (the core of medical Predictive Analytics) is focused on populations. Taken in aggregate, people with certain health risk parameters form a population whose behavior is predictable, can be measured and studied, and is the basis of what we have now. But drilling that understanding down to an individual patient becomes much more uncertain. Consider a diabetic patient who is controlled with medications, is not on a statin, and has LDL cholesterol at target for a diabetic – should this patient be prescribed a statin anyway? Doctors will have differing opinions, will do different things, and will look to supporting data (which may be sparse at this more granular level) to justify their choices.
How can we get better at individualizing medical recommendations? How can we take the current state of Predictive Analytics, which concerns itself with population management, and move it forward to something more precise?
AI: the next step in prediction
This is where Artificial Intelligence (AI) in healthcare can be very powerful. AI is the intersection of Machine Learning (ML) – a set of self-teaching algorithms that can identify patterns in data without being pre-programmed on what to look for (and therefore without “pre-analysis bias”) – and the application of that ML to very large data sets. The shortcoming of medical AI so far is not so much a shortcoming in ML algorithms as the lack of very large, normalized data sets on which they can work. Medical (clinical) data is historically fragmented into institution-centered silos, and claims data is segmented into payer silos. Aggregating this data into huge data sets is the task at hand if AI is to become meaningful.
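To make the silo problem concrete, here is a minimal sketch of what “normalizing” means in practice. The record shapes and field names are invented for illustration; the idea is simply that an EHR export and a payer claims feed describe the same patient in incompatible formats until they are mapped onto one shared schema:

```python
# Two hypothetical silos: a clinical EHR export and a payer claims feed.
ehr_record    = {"mrn": "12345", "loinc": "2089-1", "value": 96, "units": "mg/dL"}
claims_record = {"member_id": "12345", "proc_code": "80061", "paid_amount": 42.17}

def normalize_ehr(rec: dict) -> dict:
    """Map an EHR lab result onto a shared patient-centric schema."""
    return {"patient_id": rec["mrn"], "kind": "lab",
            "code": rec["loinc"], "value": rec["value"]}

def normalize_claims(rec: dict) -> dict:
    """Map a claims line onto the same shared schema."""
    return {"patient_id": rec["member_id"], "kind": "claim",
            "code": rec["proc_code"], "value": rec["paid_amount"]}

# Aggregation: one longitudinal record per patient, regardless of source silo.
unified = [normalize_ehr(ehr_record), normalize_claims(claims_record)]
by_patient = {}
for row in unified:
    by_patient.setdefault(row["patient_id"], []).append(row)
print(len(by_patient["12345"]))  # both silo records now attach to one patient
```

Real-world normalization is far messier (patient matching, code-system translation, units reconciliation), but this is the step that has to happen before ML can see the whole patient.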
Built from this effort, our Medical Knowledge Graph (MKG) can be extraordinarily useful. The Flow Health Medical Knowledge Graph organizes AI-derived insights in a way that can be used on the fly by a variety of medical applications, such as Electronic Health Records, population management and reporting tools for value-based care, web tools, and patient-facing apps. For the patient described above, an individualized recommendation can be made for that specific person, taking into account all of the diagnoses, lab values, medications used and discontinued in the past, and genetic markers if known.
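Since the MKG itself is proprietary, the following is only a toy stand-in showing the shape of the idea: a graph of clinical concepts that an application can traverse at the point of care to surface a patient-specific suggestion. Every node, edge, and rule here is invented for illustration and is not clinical guidance:

```python
# Toy graph: keys are (concept, relationship) pairs, values are related concepts.
graph = {
    ("diabetes", "raises_risk_of"): ["cardiovascular_disease"],
    ("statin",   "reduces_risk_of"): ["cardiovascular_disease"],
}

# The diabetic patient from earlier: controlled on metformin, no statin.
patient = {
    "diagnoses":   ["diabetes"],
    "medications": ["metformin"],
}

def suggest(patient: dict) -> list:
    """Walk the graph: for each diagnosis, find the risks it raises, then
    find risk-reducing therapies the patient is not already on."""
    suggestions = []
    for dx in patient["diagnoses"]:
        for risk in graph.get((dx, "raises_risk_of"), []):
            for (node, rel), targets in graph.items():
                if rel == "reduces_risk_of" and risk in targets \
                        and node not in patient["medications"]:
                    suggestions.append(f"consider {node} (addresses {risk})")
    return suggestions

print(suggest(patient))  # surfaces the statin question for this patient
```

The value of the graph form is that the same traversal works whether it is invoked from an EHR, a population-management dashboard, or a patient-facing app.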
Does this technology, once it matures, make the doctor’s role obsolete? No. It makes the doctor’s role more precise, more accurate, more consistent. In clinical medicine, we use clinical judgement based on recognizing a pattern presenting in a given patient, and we try to match that against similar patterns from our learning and our experience. We then use that pattern-matching to make recommendations. In the case of AI and the MKG, the pattern can be described in more detail, and the comparison is done against the entire body of data available to the ML engine. It becomes a tool that can make clinical judgement much better informed.
Predictive analytics, and the AI tools now becoming available, predict the odds of success, or the odds of something occurring. They deal in probabilities. However, as noted by many forecasters, nothing is truly certain (until it happens). Failures of accurate prediction teach the learning engines. This is true in political outcome prediction, and it is true in medicine. Leaders, whether in government, in the military, in business, or in healthcare, need to be well-advised, but must make executive decisions.
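Dealing in probabilities means forecasts can be scored, and scoring is what lets the learning engines learn from their failures. The standard metric, used in the forecasting tournaments Tetlock describes, is the Brier score; a minimal sketch:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.
    0.0 is perfect; always guessing 50% earns 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident miss is punished far more heavily than a hedged one:
print(brier_score([0.9], [0]))  # 0.81, confidently wrong
print(brier_score([0.6], [0]))  # 0.36, wrong but hedged
```

This is the sense in which “failures of accurate prediction teach the learning engines”: the score penalizes overconfidence, so relentless updating against outcomes drives the probabilities toward honesty.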
In healthcare we call that making a surgical decision (there are no erasers on the ends of scalpels). Clinicians need to be decision-makers, informed by the best analytics available. In health IT, we need to build the best analytics engines we can, so as to inform medical decision-making in the best way that technology allows.