On April 2, 2019, outgoing U.S. Food and Drug Administration (FDA) Commissioner Scott Gottlieb released the Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) and stated that the agency intends “to apply our current authorities in new ways to keep up with the rapid pace of innovation.”

The desire to ensure safety is admirable, and the proposal’s contributors (who include government scientists, regulators, and physicians) urge “more collaboration across disciplines in building and testing AI algorithms and intensive validation of them before they reach patients.” The FDA has a track record here: it approved AI-based algorithms for medical use in 2014, and in 2018 it issued its first approval of an AI system that diagnoses without human clinical input. Naturally, the agency should have thoughts on the matter. But proposing a regulatory framework for modifications to medical AI/ML software via a “predetermined change control plan” is cause for concern.

And should the FDA be regulating such technology at all?
