On April 2, 2019, outgoing U.S. Food and Drug Administration (FDA) Commissioner Scott Gottlieb released the Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD), stating the agency’s intent “to apply our current authorities in new ways to keep up with the rapid pace of innovation.”
The desire to ensure safety is admirable, and the proposal’s contributors (government scientists, regulators, and physicians) urge “more collaboration across disciplines in building and testing AI algorithms and intensive validation of them before they reach patients.” The FDA approved its first AI-based algorithms for medical use in 2014, and in 2018 it approved, for the first time, an AI system that renders a diagnosis without human clinical input. The agency naturally has a stake in the matter. But its proposal to regulate modifications to medical AI/ML software through a “predetermined change control plan” is cause for concern.
And should the FDA be regulating such technology at all?