The healthcare industry has traditionally been an early adopter of new technology, particularly tools and techniques that can make healthcare delivery safer, more effective, and more efficient. Nonetheless, the speed of that adoption is often curtailed by regulators, who are charged with ensuring not only that new technologies are safe to use, but also that they offer improvements in efficacy, speed, or accuracy. Moreover, in recent years, regulators have also focused on ensuring that any new device, tool, software application, or process will safeguard the privacy of individuals receiving care.
One of the key jurisdictions where regulation remains a major component of the marketplace is the United States, where both legislators and the Food and Drug Administration (FDA) have taken a cautious approach to approving digital initiatives, including the use of artificial intelligence (AI), a term that refers to a device or product that can imitate intelligent human actions or behavior. One potential use case for AI in healthcare is a machine learning algorithm that evolves and improves as new data inputs are entered, thereby providing doctors with detailed diagnostic information for a wide variety of medical conditions.
Definitions and Regulations Keeping Pace with New Innovations
However, the definitions of what is considered a medical device (a category clearly regulated by the FDA) have not kept up with the pace of innovation in the AI field. That is starting to change, as both the FDA and the U.S. Congress have acknowledged the growth of digital technology and its potential use cases in healthcare. The FDA created a Digital Health Program, which was charged with developing and implementing a new regulatory model for digital health technology, and since 2013, the program has issued several enforcement discretion guidance documents, such as the Mobile Medical Applications Guidance, which explains the FDA's oversight approach.
However, FDA guidance is not law, so in December 2016 Congress enacted the 21st Century Cures Act, which contains provisions clarifying the FDA's jurisdiction over certain digital health products. Indeed, Section 3060 of the Cures Act specifically excludes certain types of "medical software" from the "device" definition, including software that is:
(ii) supporting or providing recommendations to a health care professional about prevention, diagnosis, or treatment of a disease or condition; and
(iii) enabling such health care professional to independently review the basis for such recommendations that such software presents so that it is not the intent that such health care professional rely primarily on any of such recommendations to make a clinical diagnosis or treatment decision regarding an individual patient.
Software in this category is often referred to as clinical decision support software, and increasingly, machine learning algorithms are used to analyze data to help healthcare providers make these clinical decisions. Based on a reading of the Cures Act, it is not fully clear whether machine learning algorithms that power clinical decision support software would be excluded from the FDA’s oversight and regulation.
In late December 2017, the FDA issued a draft guidance that specifies the criteria software must meet to be excluded from FDA oversight, noting that the software must clearly explain:
- The purpose or intended use of the software function
- The intended user
- The inputs used to generate the recommendation
- The rationale or support for the recommendation
AI-driven decision engines may not meet the last two criteria: many machine learning algorithms are considered proprietary, obscuring the exact inputs used to generate a recommendation, as well as any specific rationale or support for it. It therefore seems likely that AI devices that influence clinical decision making will remain under the purview of the FDA.
Fast Tracking Evaluations and Approvals
That said, the FDA has been enabling various technology companies to file a Digital Health Software Pre-Certification ("Pre-Cert") application to fast-track the evaluation and approval process. Major technology companies, such as Apple, Fitbit, and Samsung, have filed Pre-Cert applications for specific devices.
Interestingly, the Pre-Cert Pilot Program is designed to help the FDA create a new approach to regulating software by evaluating the developer of the product, rather than primarily the product itself. If adopted by the FDA, this developer-focused approach would be particularly well-suited to evaluating machine learning and other AI products that are constantly evolving.
Consider the comments on the Pre-Cert program that Scott Gottlieb, M.D., the FDA’s Commissioner of Food and Drugs, made during Academy Health’s 2018 Health Datapalooza, in April 2018:
AI holds enormous promise for the future of medicine, and we're actively developing a new regulatory framework to promote innovation in this space and support the use of AI-based technologies. So, as we apply our Pre-Cert program, where we focus on a firm's underlying quality, we'll account for one of the greatest benefits of machine learning: that it can continue to learn and improve as it is used.
Employing the Pre-Cert approach to AI may allow a firm to make certain minor changes to its devices without having to make submissions each time. And, we’ll make sure that other aspects of our regulatory framework, such as new software validation tools, are sufficiently flexible to keep pace with the unique attributes of this rapidly advancing field.
As Tractica notes in its Artificial Intelligence for Healthcare Applications report, the use of a Pre-Cert program “should reduce the time it takes for a company to go through the entire certification process, as it can work on getting the company approved before its technology or product is ready for review. To fully qualify as a medical device that can be used on patients, the system must be able to demonstrate it can produce results at least as accurately as humans.”
Revenue from medical image analysis and medical treatment recommendation, the two use cases that are likely to drive clinical decision support engines, is projected to grow significantly over the next several years. Tractica projects that these two use cases will reach a combined total of $3.2 billion in annual revenue by 2025, up from $76.6 million in 2017.