Sooner or later, most conversations about artificial intelligence (AI) turn to HAL.
An acronym for Heuristically programmed ALgorithmic computer, HAL played a prominent and disconcerting role in Stanley Kubrick’s mind-bending 1968 film 2001: A Space Odyssey. In the film, sentient computer HAL learns that the humans suspect it of being in error and will disconnect it should that error be confirmed. Of course, HAL is having none of that, and terror ensues.
So influential was Kubrick’s adaptation of an Arthur C. Clarke short story that HAL now shapes the way AI is popularly conceived.
So, given that it is 2019, a full 18 years past the marvelous technological era predicted by Kubrick’s title, we must be well beyond HAL. Right?
Not even close, as it turns out. Nothing like HAL exists in any industry. Sure, IBM’s Deep Blue defeated chess champion Garry Kasparov in 1997, and Watson emerged victorious on Jeopardy in 2011, but efforts to revolutionize oncology using Watson have not come to fruition. The self-aware supercomputer that talks to us like a brilliant sidekick is not on the horizon. (Sorry, Janet.)
And let’s say right now that, for the time being, that’s a good thing. Were Watson more promising, the health IT industry would pour money into making it a reality, and too many provider organizations would gobble it up.
Which is to say, throwing money at shiny baubles is not what healthcare should be doing, especially when technology has yet to transform the industry as imagined.
Honestly, we can scarcely help ourselves. Just look at EHRs if you need a case study. Originally imagined as the technological fix for most of what ails American healthcare, EHRs have since seen expectations settle into a more terrestrial realm. Sure, many aspects of EHRs are proving useful, but widespread adoption also clearly illustrates where reality and expectations diverge.
“We’ve done ourselves a disservice in propagating the hype around AI,” said Dr. Rasu Shrestha, CIO for the UPMC system.
Hindsight tells us that the hype-fueled initial prices of EHRs were perhaps unwarranted, and that technology costs should be commensurate with proven benefit. Exactly the same measuring stick should now be applied to AI.
“I think that all our patients should actually want A.I. technologies to be brought to bear on weaknesses in the health care system, but we need to do it in a non-Silicon Valley hype way,” said Isaac Kohane, a biomedical informatics researcher at Harvard Medical School.
With that said, it helps to actually define what AI means, how it can positively impact healthcare, where we currently stand, and what the future looks like.
As previously stated, AI in healthcare is not HAL, or Janet from The Good Place, or even Watson. Today, we commonly see AI in online chatbots and in facial recognition, but the approaches most common and useful in healthcare are machine learning and deep learning. In short, computers can rapidly and meaningfully analyze huge data sets for useful information. Human beings cannot.
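To make that a bit more concrete, here is a minimal sketch of the kind of machine learning at work, written against an imagined, de-identified tabular extract of patient records. The features, labels, and model choice are illustrative assumptions, not a description of any vendor’s or health system’s actual pipeline.

```python
# Minimal supervised-learning sketch on synthetic "EHR-like" tabular data.
# Everything here is illustrative; no clinical claims intended.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Pretend extract: 100,000 patients, four numeric features, one outcome label.
n = 100_000
X = np.column_stack([
    rng.normal(60, 15, n),    # age (years)
    rng.normal(125, 20, n),   # systolic blood pressure
    rng.normal(80, 12, n),    # heart rate
    rng.normal(1.0, 0.3, n),  # creatinine
])
# Synthetic outcome loosely tied to the features, so there is a pattern to find.
logits = 0.03 * (X[:, 0] - 60) + 0.02 * (X[:, 1] - 125) + 0.8 * (X[:, 3] - 1.0) - 2.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# The model churns through all 100,000 rows in seconds, the kind of speed and
# singular focus no human reviewer can match.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The point is not the particular algorithm; it is that the pattern-finding scales to data volumes no clinician could ever read.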
Because EHRs are in the business of collecting huge data sets, AI is a logical and useful addition to any organization’s healthcare IT platform. But it is logical and useful in a more mechanical, less nuanced way. Humans have the subtlety to do what AI cannot (yet), and AI has the speed and singular focus that humans lack.
As Jeremy Hsu writes in a Smithsonian article on AI, while AI’s strength is making “impressive predictions by discovering data patterns that people might miss … humans still must help make decisions that can have major health and financial consequences. Because A.I. systems lack the general intelligence of humans, they can make baffling predictions that could prove harmful if physicians and hospitals unquestioningly follow them.”
Those “impressive predictions” are now the focus of predictive modeling efforts at some of the larger health systems.
NYU Langone, for example, periodically rolls out predictive models for heart disease, sepsis, and other potential clinical scenarios. At UPMC, patients are discharged with a tablet used to record and transmit their vitals back to the hospital. In both instances, data culled from huge sets feeds models that tell clinicians when a patient might be in trouble.
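Under the hood, the workflow is fairly simple: a previously validated model scores each incoming set of vitals and flags patients whose predicted risk crosses a threshold so a clinician can take a look. The sketch below is a hypothetical stand-in, with made-up features, thresholds, and scoring logic, not a description of NYU Langone’s or UPMC’s actual models.

```python
# Illustrative risk-flagging loop: score transmitted vitals and surface
# high-risk patients for clinician review. The scoring rule below is a toy
# stand-in for a trained model's predicted probability.
from dataclasses import dataclass
from typing import List

@dataclass
class VitalsReading:
    patient_id: str
    heart_rate: float
    systolic_bp: float
    temp_c: float
    resp_rate: float

def risk_score(v: VitalsReading) -> float:
    """Hypothetical risk score in [0, 1]; a real system would call a trained model."""
    score = 0.0
    score += 0.25 if v.heart_rate > 100 else 0.0
    score += 0.25 if v.resp_rate > 22 else 0.0
    score += 0.25 if v.temp_c > 38.0 or v.temp_c < 36.0 else 0.0
    score += 0.25 if v.systolic_bp < 100 else 0.0
    return score

def flag_for_review(readings: List[VitalsReading], threshold: float = 0.5) -> List[str]:
    """Return patient IDs whose risk meets or exceeds the alert threshold."""
    return [v.patient_id for v in readings if risk_score(v) >= threshold]

# Example: two transmitted readings, one of which trips the alert.
incoming = [
    VitalsReading("pt-001", heart_rate=88, systolic_bp=120, temp_c=36.8, resp_rate=16),
    VitalsReading("pt-002", heart_rate=112, systolic_bp=94, temp_c=38.4, resp_rate=24),
]
print(flag_for_review(incoming))  # prints ['pt-002']
```

The value, such as it is, lies less in the math than in getting a timely nudge in front of the right clinician.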
The technology is especially promising in population health scenarios, which eventually brings us back around to the issue of cost. Specifically, while AI is proving useful to both NYU Langone and UPMC, it might benefit a greater number of people if applied to underprivileged areas and financially challenged healthcare organizations. In this sense, AI is just one more potential example of the healthcare divide between the haves and have-nots.
“A lot of the A.I. discussion has been about how to democratize health care, and I want to see that happening,” says Effy Vayena, a bioethicist at the Federal Institute of Technology in Switzerland. “If you just end up with a fancier service provision to those who could afford good health care anyway, I’m not sure if that’s the transformation we’re looking for.”
And that’s the danger of adding technology to healthcare. The government, after all, is not subsidizing the acquisition of AI the way it did EHRs, and technology accumulation drives overall costs up, even as it offers tantalizing but often modest improvements in care.
So, while there is more than sufficient reason to look at AI as a potential benefit to healthcare, there is no strong argument for paying a lot to acquire solutions. Indeed, while AI arguably benefits low-income and marginalized populations the most, it still doesn’t measure up to less expensive population health approaches like regular checkups and vaccinations, improved diet and exercise, and the strengthening of family and community bonds.
No, none of these solutions is as sexy and exciting as science fiction creations like HAL, but they also can’t lock you outside the ship in outer space.
In all seriousness, what has proven true for EHRs holds for AI: technology, especially as it applies to population health, is not a shiny bauble. Its value lies in how fully it serves the total patient population and in how closely the benefits to that population track the cost.
And if healthcare tries and fails to make that vision a reality, perhaps Watson can tell us what we did wrong. Or not.