With doctors’ time in short supply, one way AI-driven tools could help transform healthcare is by increasing diagnostic accuracy. AI-driven diagnostic tools, especially in areas like cardiology, have the potential to save the healthcare system money and, most importantly, save lives. Because they are developed using incomprehensibly large datasets, they can discover patterns that are invisible to the human eye.
For example, echo-based tools combined with deep clinical insight, machine learning, and some of the largest echo datasets in the world could reduce misdiagnosis of heart disease by more than 50 percent.
Another example of AI’s transformative potential: fed sequences of cancer images, algorithms can be trained to spot nascent cancers earlier – in this study, 16 percent more effectively than human doctors. Without its training set of more than 200,000 patients, however, the algorithm is useless.
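To make that data dependency concrete, here is a minimal, hypothetical sketch of the kind of supervised training loop such a tool relies on, written in Python with PyTorch. The dataset folder, labels, model choice, and hyperparameters are all illustrative assumptions, not details of the study above.

```python
# Minimal, illustrative sketch of supervised training for an imaging
# classifier (NOT the actual system described above). Assumes a folder of
# labelled scan images; all paths and hyperparameters are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical dataset layout: scans/benign/*.png, scans/malignant/*.png
train_data = datasets.ImageFolder("scans/", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a generic pretrained backbone and retrain the final layer
# for a two-class (benign vs. malignant) decision.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The point is not the particular architecture but the dependency it makes visible: a loop like this learns nothing useful without a large, representative set of labelled images – which is exactly why a training set of 200,000 patients matters.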
AI’s Need for Data – Finding an Ethical Balance
For widespread adoption and full implementation of AI in healthcare, deep learning-based solutions need access to vast amounts of personal data. The more data available, the more accurate the algorithms become – and the more patient lives they can help save. However, the medical data needed is among our most sensitive information.
A report from the Academy of Medical Royal Colleges on the use of Artificial Intelligence in Healthcare sums up this issue well: “The UK Government and its health and social care systems have a legal duty to maintain the privacy and confidentiality of its citizens…However, the development of AI and machine learning algorithms relies on the use of large datasets.”
Collectively, there is an immense benefit to be had from everyone allowing their data to be used for the development of AI solutions in healthcare, yet individuals are reluctant to reveal theirs. That is not surprising in an era of commoditized data, where corporations like Facebook and Google offer free services in return for personal information, and where serious leaks face relatively tepid repercussions.
People are rightfully concerned by headlines about healthcare data privacy. Recently, the NHS and Google’s DeepMind team came into conflict over the use of patient data, clashing over a lack of transparency and DeepMind’s handling of data on millions of individuals. DeepMind accessed more data than it publicly stated it needed, but not more than it was legally allowed to, leaving it in a data-privacy grey area.
So how do we balance the need for substantial amounts of data with respect for patient privacy?
Accessing Data
Today, it’s entirely possible to access data without violating the NHS code of conduct by obtaining patient consent through academic bodies as part of a clinical trial. Each patient prospectively grants permission for commercial use of their data, and the companies developing these technologies can use it without any ethical issues arising.
The greater issue we face now is how companies can retrospectively use patients’ data and obtain retroactive consent. To tackle this problem, the NHS is working on amending its code of conduct in the hope of convincing more people to share their data.
Gaining Awareness
The best way to make people want to share their data is to generate greater awareness of AI technology developments. Many people’s idea of AI is abstract at best and, at worst, a vision of a malevolent supercomputer. In reality, AI will manifest simply as smarter software running in the background, offering more accurate insights to the human practitioner.
Ongoing trials using the existing ethical framework continue to build the case for using data for AI training, and greater exposure of sceptical patients to AI-driven medical technology will improve the perception of this technology as it becomes increasingly widespread.
Most people aren’t concerned about the use of their data for the development of life-saving technology, but rather that companies will profit from it. An essential step here is to make patients aware of the companies that return their innovations to the NHS free of charge, as this also helps build trust regarding data privacy.
We believe that most patients will consent to the use of their data if they fully understand the situation: that they will contribute to moving healthcare forward and improving outcomes for other patients.
Ongoing Challenges
As with any revolutionary technology, AI faces new and interesting ethical issues beyond the data-handling questions discussed here. For instance, whom do we favour when the AI and the doctor disagree? Another concern is how to improve AI as it rolls out to hospitals: how do you train a model on an ongoing basis when continually updating its data risks feeding it bad data and degrading its quality? Algorithmic tools that are dynamically improving and changing will clearly require new safeguards.
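One common safeguard – sketched here in Python under assumed names and interfaces, not as any specific vendor’s method – is to gate every retrained model behind a frozen, clinically validated evaluation set, so an update that would degrade quality is simply rejected:

```python
# Hypothetical gating step for ongoing model updates: a retrained model is
# only deployed if it performs at least as well as the current one on a
# frozen, clinically validated evaluation set. All names are illustrative,
# including the assumed `model.predict(x)` interface.

def evaluate(model, validation_set) -> float:
    """Return accuracy of `model` on the held-out validation set."""
    correct = sum(1 for x, y in validation_set if model.predict(x) == y)
    return correct / len(validation_set)

def maybe_deploy(current_model, candidate_model, validation_set,
                 min_gain: float = 0.0):
    """Accept the candidate only if it does not degrade quality."""
    baseline = evaluate(current_model, validation_set)
    candidate = evaluate(candidate_model, validation_set)
    if candidate >= baseline + min_gain:
        return candidate_model   # safe to roll out
    return current_model         # reject the update, keep the old model
```

A gate like this does not answer the ethical question of who curates the evaluation set, but it turns continual learning from an open-ended risk into a controlled release process.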
AI-based tools are already saving lives. Continuing to develop them, however, requires access to more patient data. Working out how to distribute and handle this information responsibly requires rethinking current assumptions about data privacy.
Ultimately, public perception is the key. For AI projects to succeed, we must keep encouraging patients to share their information and educating them about the benefits of doing so. We need to turn this focus into meaningful action – that is the next step in the widespread adoption of AI in healthcare.
About Ross Upton
Ross Upton is the CEO and Academic Co-Founder of Ultromics – a startup focused on bringing the benefits of AI to support clinicians in the diagnosis of cardiovascular disease, the largest cause of death globally. It is developing an echo-based tool that combines deep clinical insight with machine learning and some of the largest echo datasets in the world to aid clinicians in their diagnoses without disrupting workflow.