Much has been discussed about artificial intelligence (AI) and the impressive proliferation of Generative AI (GenAI) over the last several months. It is staggering. As I think about the potential changes and impacts of leveraging AI and GenAI within a healthcare organization, I can’t help but think about the good, the bad, and the ugly.
Good: Ambient AI
There are many positive examples so far of AI improving healthcare, but one stands out for me: ambient AI technology. It combines AI, machine learning, and voice-recognition software to provide ambient listening and/or ambient intelligence capabilities. It’s called ambient because it works in the background of the care environment, creating intelligent systems that can perceive, interpret, and respond to the presence and activities of individuals in a healthcare setting. The technology can integrate sensors, IoT, and embedded devices to provide unobtrusive, context-aware support to patients and healthcare providers.
Early results indicate these capabilities can improve patient outcomes; help keep clinicians and patients safe by monitoring activities in the ICU or operating room; improve the patient-provider interaction; and reduce provider and clinician burnout risk by transcribing appointments and automatically following up on actions discussed, such as ordering medications and scheduling new appointments (once the provider approves the notes and actions). Ambient AI can also contribute to an increase in overall data quality.
Some of the early adopters testing these capabilities include organizations such as Stanford Health, Atrium Health, Duke Health, and University of Michigan Health-West. Most of the electronic health record (EHR) platforms offer varying levels of these capabilities for healthcare organizations (HCOs) to test.
If your organization isn’t exploring these capabilities, I would encourage you to identify the use cases you’d like to test, define the success criteria for each, plan to monitor throughout the process, and kick off a pilot.
The Bad and Ugly: Energy Consumption and Bias
For as much good as AI can bring to healthcare organizations, there’s also the bad. It will be, and should be, a long time before any provider or clinician accepts the output of an AI application at face value. Even with the significant advances AI has shown in areas such as medical imaging, it is important to recognize that while AI gives healthcare professionals powerful analytical capabilities, the provider must still maintain control over the diagnostic process.
As a healthcare IT professional, I see two areas of AI that don’t get much airtime but should: the power consumption of AI and the risk of harm to patients due to AI model bias. Each HCO must determine which will have the greater potential negative impact on its enterprise: the increasing energy consumption that AI requires, or the very real risk of harming and/or alienating patients because AI models carry bias and aren’t well trained in healthcare equity.
Let’s start with energy consumption. Many HCOs have set, or plan to set, long-term sustainability goals covering factors like improving energy efficiency in their facilities or reducing overall carbon emissions. At this time, and for the foreseeable future, leveraging AI will run counter to these sustainability goals. While the algorithms that drive AI reside in the cloud, the fuel that feeds them (water and energy) comes from a large and always-hungry infrastructure. This is the bad of AI. If your organization has a goal to become energy efficient and you are considering using AI or GenAI, you must weigh the benefits against the risks. For the foreseeable future, you can’t make progress on your energy initiative without having it negated by the AI work.
Bias Drives Inequality
Back in 2021, Kate Crawford1 wrote The Atlas of AI. Crawford is a research professor at the USC Annenberg School, a senior principal researcher at Microsoft Research, the inaugural chair of AI and Justice at the École Normale Supérieure, and a leading scholar of the social implications of AI. In the book, she writes, “Within years, large AI systems are likely to need as much energy as entire nations.” And while this is bad, it’s the other topic she covers that has experts more concerned: bias in the dataset and its impact on equity. This is the ugly side of AI.
Even with positive intent, data products can exhibit bias based on gender, race, ethnicity, nationality, payer mix, or other sensitive features. These biases can persist even if demographic information is excluded from the model due to various kinds of confounding. Data users and system validators must be trained to assertively interrogate their results and algorithms for bias. This is an even more important issue for healthcare as social determinants of health are factored into a patient’s dataset along with the rest of their medical record.
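A minimal, purely illustrative sketch of this kind of confounding (all data below is synthetic, and the “proxy” feature is a hypothetical stand-in for something like ZIP code or payer mix): a model that never sees the sensitive attribute can still produce sharply divergent predictions across groups, because a correlated proxy leaks group membership.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical synthetic cohort. 'sensitive' is a demographic attribute
# that is EXCLUDED from the model; 'proxy' (think ZIP code or payer mix)
# is strongly correlated with it; 'label' reflects historically biased
# outcomes. All names and probabilities here are illustrative assumptions.
patients = []
for _ in range(1000):
    sensitive = random.random() < 0.5
    p_proxy = 0.9 if sensitive else 0.1      # proxy leaks group membership
    proxy = 1 if random.random() < p_proxy else 0
    p_label = 0.7 if sensitive else 0.3      # biased historical outcomes
    label = 1 if random.random() < p_label else 0
    patients.append((sensitive, proxy, label))

# A trivial "model" trained only on the proxy feature: predict the
# majority historical label observed for each proxy value.
counts = defaultdict(lambda: [0, 0])
for _, proxy, label in patients:
    counts[proxy][label] += 1
predict = {p: int(c[1] > c[0]) for p, c in counts.items()}

def positive_rate(group):
    """Share of the (hidden) group that receives a positive prediction."""
    members = [p for s, p, _ in patients if s == group]
    return sum(predict[p] for p in members) / len(members)

# The gap persists even though 'sensitive' never entered the model.
gap = positive_rate(True) - positive_rate(False)
print(f"positive-prediction gap between groups: {gap:.2f}")
```

This is why interrogating results for bias matters: simply dropping the demographic column does not remove it from the model’s behavior when correlated features remain in the record.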
The Knowing Machines2 research team recently published a study examining one of the biggest datasets behind today’s generative AI systems for text-to-image generation. The dataset is called LAION-5B, which stands for Large-scale Artificial Intelligence Open Network 5 Billion, the 5 billion a nod to the at least five billion images and text captions drawn from the internet. The expectation was that LAION-5B would simply mirror societal bias; instead, the team found an even greater amount of bias and odd distortions due to the influence of image alt tags. An alt tag is an HTML attribute that provides alternative text for an image or other visual on a web page in case it does not render in the browser. However, savvy marketers have used alt tags to promote products, lifestyle virtues, and more. All of this information is now part of what GenAI uses to produce its responses3.
I call it the ugly part of AI because the dataset just keeps growing, making your organization’s trust in the data that feeds any of the AI models a critical component to leveraging the best of what AI can offer.
As AI and GenAI continue to proliferate at a rapid pace, everybody in healthcare – C-level leaders, IT staff, clinicians, providers, staff, AI technology vendors, policymakers, and more – should be aware of all sides of what it has to offer. The technology holds transformative potential in healthcare, offering significant benefits but also posing notable challenges. Striking that balance requires careful consideration and ongoing dialogue among stakeholders to harness the benefits of AI while mitigating its risks and addressing ethical concerns.
About MJ Stojak
MJ Stojak is the Managing Director of the Data, Analytics & AI practice with Pivot Point Consulting, a Healthcare IT consulting firm and #1 Best in KLAS for Managed Services and Technical Services in 2024.
Footnotes:
1. https://katecrawford.net/atlas
2. https://knowingmachines.org/
3. https://knowingmachines.org/models-all-the-way#section2