What You Should Know:
- A new study by Cedars-Sinai investigators describes how ChatGPT, an artificial intelligence (AI) chatbot, may help improve health outcomes for patients with cirrhosis and liver cancer by providing easy-to-understand information about basic knowledge, lifestyle and treatments for these conditions.
- The findings, published in the peer-reviewed journal Clinical and Molecular Hepatology, highlight the AI system’s potential to play a role in clinical practice.
Helping Patients With Chronic Liver Disease Via Artificial Intelligence
“Patients with cirrhosis and/or liver cancer and their caregivers often have unmet needs and insufficient knowledge about managing and preventing complications of their disease,” said Brennan Spiegel, MD, MSHS, director of Health Services Research at Cedars-Sinai and co-corresponding author of the study. “We found ChatGPT—while it has limitations—can help empower patients and improve health literacy for different populations.”
Patients diagnosed with liver cancer and cirrhosis, an end-stage liver disease that is also a major risk factor for the most common form of liver cancer, often require extensive treatment that can be complex and challenging to manage. Personalized AI education models could help increase patient knowledge and health literacy, noted Alexander Kuo, MD, medical director of Liver Transplantation Medicine at Cedars-Sinai and co-corresponding author of the study.

One such model is ChatGPT, short for Chat Generative Pre-trained Transformer. It has quickly become popular for the human-like text it produces in chatbot conversations: users can enter any prompt and it will generate a response based on the data it was trained on. It has already shown some promise for medical professionals by drafting basic medical reports and correctly answering medical student examination questions.
To assess the accuracy of the AI model's knowledge of cirrhosis and liver cancer, investigators presented ChatGPT with 164 frequently asked questions spanning five categories. Two liver transplant specialists then independently graded the ChatGPT answers.
Each question was posed to ChatGPT twice and was categorized as basic knowledge, diagnosis, treatment, lifestyle or preventive medicine.
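For readers curious how an evaluation like this might be scripted, the sketch below shows one way to pose categorized questions to a chat model through the OpenAI Python client and collect the answers for later grading. It is an illustration only: the model name, the sample questions and the grading labels are assumptions for demonstration, not the study's actual materials or code.

```python
# Minimal sketch of the question-posing step described above.
# The OpenAI client call is real; everything marked "hypothetical" is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical example questions, keyed by the study's five categories.
faq = {
    "basic knowledge": ["What is cirrhosis?"],
    "diagnosis": ["How is liver cancer diagnosed?"],
    "treatment": ["What treatments are available for cirrhosis?"],
    "lifestyle": ["Can I drink alcohol if I have cirrhosis?"],
    "preventive medicine": ["Which vaccines are recommended for patients with cirrhosis?"],
}

responses = []
for category, questions in faq.items():
    for question in questions:
        # Each question is posed twice so graders can check consistency of the answers.
        for attempt in range(2):
            reply = client.chat.completions.create(
                model="gpt-3.5-turbo",  # assumed model for illustration
                messages=[{"role": "user", "content": question}],
            )
            responses.append({
                "category": category,
                "question": question,
                "attempt": attempt + 1,
                "answer": reply.choices[0].message.content,
                # To be filled in by the specialist graders, e.g. "comprehensive",
                # "correct but inadequate", "mixed correct and incorrect", "incorrect".
                "grade": None,
            })
```

In this sketch the grading itself stays manual, mirroring the study's design, in which two liver transplant specialists independently reviewed every response.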
Study results include:
- ChatGPT answered about 77% of the questions correctly, delivering high-accuracy responses to 91 questions across the categories.
- The specialists grading the responses rated 75% of the answers on basic knowledge, treatment and lifestyle as either comprehensive or correct but inadequate.
- The proportion of responses that were “mixed with correct and incorrect data” was 22% for basic knowledge, 33% for diagnosis, 25% for treatment, 18% for lifestyle and 50% for preventive medicine.
The AI model also provided practical, useful advice to patients and caregivers about next steps in adjusting to a new diagnosis.
Still, the study left no doubt that advice from a physician was superior.