Ethical Concerns of AI in Healthcare: Can AI Do More Harm Than Good?

by Erica Garvin 08/06/2019

Before the AI Doctor, Ethics Must Be in Place
Tim Casey, Director of the STEPPS Program at California Western School of Law

The AI doctor can see you now, but it shouldn’t treat you until ethics are in place. Professor Timothy Casey explains.

Artificial Intelligence (AI) and ethics, once mythical adversaries only on-screen in films like 1984's "The Terminator," are now at virtual odds in 2019. This time, there are no catchy slogans like Arnold Schwarzenegger's "I'll be back," but the moral question has returned:

Does AI’s transformative power have the ability to do more harm to humankind than good?

That is the question presently reverberating across industries, as the shadow of AI's ubiquity looms here in the U.S. and abroad. AI (which includes the fields of machine learning, natural language processing, and robotics) is transforming just about everything, from self-driving cars and AI-powered assembly lines to risk management and research and development. The possibilities surrounding AI appear infinite, yet therein also lies the potential for endless problems.

To err is human, yet thanks to AI, flesh-and-blood beings are no longer exclusively culpable. Who’s at fault if a self-driving car causes a fatal accident? What if an AI-operated, armed military drone attacks an enemy in error? How about a massive data breach due to an autonomous data search? These are the complexities that major companies, such as Google, Amazon, and Microsoft, are trying to unpack.

In healthcare, there are equally pressing moral questions to consider, and as far as morality is concerned, healthcare is already falling behind, according to Tim Casey, a professor in residence and director of the STEPPS Program at California Western School of Law. In law, like the kind Casey teaches, ethics are built into the core of the profession, but he knows the application of values in business is not clear-cut.

According to Forbes, the total public and private sector investment in healthcare AI is expected to reach $6.6 billion by 2021. Even more staggering, Accenture predicts that the top AI applications may result in annual savings of $150 billion by 2026. However, focusing on cost savings rather than the lives healthcare is meant to save may not be the best long-term approach, according to Casey.

“The central problem with ethics and healthcare in America is that the entire system is set up to achieve profit, rather than maximizing health. This problem does not change with the addition of AI, but it presents a high probability of getting much worse,” said Casey. “We need to be very careful about defining the goals of the system. If the goal of the system is to maximize profit, then we can expect an AI-driven system to be better than a human system at achieving that goal. However, we may not like the result.”

One could argue that healthcare has its share of values; the Hippocratic Oath of “do no harm” is plenty ethical. Still, even with FDA regulations and the protection of HIPAA compliance, the shift to value-based care hasn’t been without its hiccups, especially when you bring HIT into the fold. Some would argue that the tech is merely evolving faster than entities can establish ethical principles or even continuity for that matter; interoperability is still a huge challenge throughout the industry.

AI holds significant promise for healthcare in the fields of biomedical research, personalized medicine, drug development, insurance assessment, telemedicine, and more. Impressive clinical tools include algorithms that assist radiologists in detecting forms of cancer, avatars that can converse with patients to aid psychiatrists with diagnosing and treating their conditions, and even the potential to have surgical robots function autonomously.

Despite AI's potential, it also brings difficult challenges for the industry. For example, how will education systems evolve in the presence of AI? "Education is notoriously reluctant to change, and that reluctance would exacerbate the difficulties in the profession," said Casey. "For example, who trains doctors on da Vinci? It's certainly not the stodgy professors, and it's probably not the most experienced practitioners."

What it Means to be a Doctor

Another concern is the issue of inserting biases into so-called self-thinking machinery. Even if AI develops its own virtual thought processes, those thoughts initially stem from a human brain, one most likely born with very human biases. As reported by CNET, Google is presently working through this problem, including its manifestations in healthcare, with a technique called TCAV (testing with concept activation vectors).

TCAV reveals how an AI model makes its choices in terms of human-friendly concepts rather than low-level technical features, such as pixel-level structures in photos. For example, it can show whether a model has learned to assume a doctor is male simply because the training data skews that way.
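To make the idea concrete, here is a minimal, toy sketch of the core TCAV calculation. All the numbers are synthetic stand-ins: real TCAV takes activations from an intermediate layer of a trained network and gradients from backpropagation, and derives the concept vector by training a linear classifier rather than the simple difference-of-means used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "activations" for examples that contain a concept (e.g. "lab coat")
# and for random counterexamples. In real TCAV these come from an
# intermediate layer of a trained image model.
concept_acts = rng.normal(loc=1.0, scale=0.5, size=(50, 8))
random_acts = rng.normal(loc=0.0, scale=0.5, size=(50, 8))

# A Concept Activation Vector: here, the normalized difference of means.
# (The published method trains a linear classifier separating the two
# groups and takes the normal to its decision boundary.)
cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
cav /= np.linalg.norm(cav)

# Gradients of the class logit (e.g. "doctor") w.r.t. the activations,
# one row per input. Stubbed with random values here; a real run
# backpropagates these through the network.
grads = rng.normal(size=(100, 8))

# TCAV score: the fraction of inputs whose prediction moves in the
# concept's direction, i.e. has a positive directional derivative
# along the CAV. A score near 1 means the concept strongly drives
# the prediction; near 0.5 (for random gradients) means no effect.
tcav_score = float(np.mean(grads @ cav > 0))
print(f"TCAV score: {tcav_score:.2f}")
```

With stubbed random gradients the score hovers around 0.5; in a biased model, a concept like "male" could score close to 1 for the "doctor" class, which is exactly the kind of reliance the technique is meant to surface.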

Efforts like Google's, or Microsoft's internal ethics committee known as FATE (fairness, accountability, transparency, and ethics), are the tiny tip of an enormous ethical iceberg. Moreover, because the care gaps AI could address are often tinged with socioeconomic prejudices, it's going to take a lot of human reflection and correction to rid the technology of the same preconceived notions.

“The fundamental problem is that if we think of ethics as ‘doing the right thing,’ there will always be a conflict between ethics and profit,” said Casey. “Our systems, whether in healthcare or technology, are designed, for the most part, to increase profit, not to increase the human condition. The legal system reinforces this tendency.”

Even with AI's inherent, innately human flaws, the virtual advances are hard to ignore. AI systems could hold a steadier hand at the surgical table or replicate human conversation to triage patients efficiently. However, it's the things humans can't do that make the technology truly transformative. An AI system, for example, may assess the relative risk of a surgical procedure for a specific individual better than a surgeon can.

“The more data the AI system receives, the better the system would be able to ‘see’ correlations and develop causal connections that might be ‘invisible’ to the human analyst,” said Casey. “When we consider this level of AI, where we are talking about the exercise of professional judgment, the question becomes: what does it mean to be a doctor?”

Casey continued: “If we think of a professional as a combination of knowledge, skill, judgment, and values, and if an AI system can out-perform a human in terms of knowledge, skill, and judgment, then the only thing separating AI from a human is values. I think there is an argument that judgment includes values. Our values—defined by our ethics—become very important.”

Thus, before we give rise, or even praise, to the machine, we have to go back to the moral drawing board. With so many ethical questions surrounding AI, where does one begin? According to Casey, it's quite simple: all healthcare entities have the responsibility to provide the best possible medical care to as many people as possible.

Taking the focus off profits and centering concern on the patient is easier said than done; there are complexities to value-based care that plague healthcare even in the absence of technology. There are no specific solutions, but forming ethical bodies to regulate the practice and application of AI seems an excellent place to start. How healthcare grapples with the AI evolution remains to be seen, but it surely will be interesting. As for the question of whether AI in healthcare will do more harm than good, this is what Casey had to say:

"Many new technologies will present the possibility of doing more harm than good. The technology is simply a tool; the question is how we decide to use the tool. AI, like many technologies, has the potential to magnify existing weaknesses and faults in our systems. It also has the potential to change medicine. Whether that change ends up being a step forward or a step back depends on our commitment to ethics. Ethics will only gain traction when we decide that there are values more important than profit," he concluded.


