An Accenture report released at HIMSS18 made a bold prediction: the healthcare artificial intelligence (AI) market may hit $6.6 billion within the next three years. In 2014, that figure was just $600 million, meaning the AI healthcare market could see an elevenfold increase in value in less than a decade.
The survey showed that as of 2018, one in five U.S. consumers has already used healthcare services “powered by artificial intelligence,” and many are open to AI clinical services, such as home-based diagnostics (cited by 66 percent of respondents) and virtual health assistants (61 percent). On top of that, consumer use of mobile health apps has tripled over the last four years (from 16 percent in 2014 to 48 percent today), and the use of wearable devices has nearly quadrupled (from 9 percent in 2014 to 33 percent today).
It may not be overly ambitious to say that we are, in fact, on the brink of a revolution – and a fast-moving one, at that. It’s not hard to see why: the potential benefits are immense, from removing steps like traveling to receive care to developing new treatment options.
Many clinical applications are still in development, but last year we saw small sparks of change, with AI and machine learning (ML) improving workflows and administrative functions, supporting clinical services, and streamlining the research process. Ultimately, we’re seeing just the beginning of what this new revolution has to offer.
In the face of such rapid and ground-shaking change, there will always be new challenges. When it comes to AI and ML, there are still legal and regulatory questions that remain unanswered.
This year, it will be worth keeping an eye on the following areas of legal development:
The 21st Century Cures Act amended the definition of “device” to exclude certain clinical decision support software, a category that can deploy AI and ML technologies. While the requirements associated with the exclusion are rigorous, the amendment represents strong legislative support for the development of AI and ML healthcare technology. FDA has also cleared some AI-based medical devices, and there is every reason to believe those clearances will continue through 2018.
Finally, FDA has embarked on an ambitious project to redesign its oversight of digital health technologies to focus more on the developer. This continues FDA’s broader effort to improve and accelerate its review process for digital health technologies, which has included guidance on mobile medical apps. While industry groups have criticized these efforts, it remains to be seen how effectively FDA will balance the desire for a quicker, more certain clearance process with its duty to protect the public.
Exploring the Future
In December 2017, the FUTURE of AI Act was introduced in both the House and the Senate. If passed, it would establish a committee to advise the Secretary of Commerce on AI matters including the workforce, education, legal and regulatory regimes, and international competitiveness. The bill’s passage could very easily affect the development of AI and ML applications in healthcare, and would mark the U.S. as one of the few nations — alongside the U.K., the UAE and China — actively exploring the implications of the new technology.
Similarly, New York City passed a local law in February 2018 requiring the creation of a task force to provide recommendations on “how information on agency automated decision systems may be shared with the public and how agencies may address instances where people are harmed by agency automated decision systems.” It is the first AI transparency legislation of its kind in the U.S. Whether other cities or states follow suit will be worth watching in the coming year.
In the Accenture survey, 90 percent of participants said they are willing to share personal data with their doctor, and 88 percent said they are willing to share personal data with a nurse or other healthcare professional. That’s great news for providers, but it will require them not only to collect, but also to secure, the information patients provide.
In April, Verizon released a report finding that healthcare accounts for 15 percent of all data breaches, second only to financial services at 24 percent. On top of that, healthcare is the only industry in which insider threats pose the greatest risk to sensitive data, with 68 percent of threat actors being internal to the organization.
Recent healthcare data security breaches and recent news about the improper or unintended use of data have pushed data privacy concerns directly in front of the public. Government regulators have been increasing their enforcement activities under privacy laws, and we should expect that trend to continue. Because AI- and ML-empowered digital health tools rely significantly on data from patients, the developers and users of these technologies must stay on top of all developments in this area.
It’s undeniable that as AI and ML capabilities advance, they will further permeate the healthcare industry and present new, precedent-setting legal dilemmas in ways that are not yet readily apparent. Importantly, these technologies have the capacity to fundamentally change the way care is delivered and to reshape our traditional approaches to healthcare delivery.
As technology’s disruption of traditional healthcare continues, organizations must identify the signs of that disruption and prepare for the implications of digital health innovation, mitigating risk to ensure compliance and a strong foundation for growth.
Dale Van Demark is a Partner at the international law firm McDermott Will & Emery, where he advises clients in the health industry on strategic transactions and the evolution of healthcare delivery models. He has extensive experience in health system affiliations and joint venture transactions and counsels on the development of technology in healthcare delivery, with a particular emphasis on telemedicine.