AMIA calls on FDA to refine its AI regulatory framework

The American Medical Informatics Association wants the Food and Drug Administration to improve its conceptual approach to regulating medical devices that leverage self-updating artificial intelligence algorithms.

The FDA sees tremendous potential in healthcare for AI algorithms that continually evolve, known as "adaptive" or "continuously learning" algorithms, which don't need manual modification to incorporate learning or updates.

While AMIA supports an FDA discussion paper on the topic released in early April, the group is calling on the agency to make further refinements to the Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD).

“Properly regulating AI and machine learning-based SaMD will require ongoing dialogue between FDA and stakeholders,” said AMIA President and CEO Douglas Fridsma, MD, in a written statement. “This draft framework is only the beginning of a vital conversation to improve both patient safety and innovation. We certainly look forward to continuing it.”

In its comments on the framework, AMIA recommended improvements in four areas: continuously learning versus "locked" algorithms; the impact of new data inputs on algorithms' outputs; cybersecurity in the context of AI/ML-based SaMD; and evolving knowledge about algorithm-driven bias.

When it comes to learning versus locked algorithms, AMIA told the FDA that “while the framework acknowledges the two different kinds of algorithms” it is concerned that the framework is “rooted in a concept that both locked and continuously learning SaMD provides opportunity for periodic, intentional updates.”

Although AMIA’s letter to the agency voiced appreciation for the fact that the framework accounts for new inputs into a SaMD’s algorithm, the group said it is “concerned that a user of SaMD in practice would not have a practical way to know whether the device reasonably applied to their population, and therefore, whether adapting to data on their population would be likely to cause a change based on the SaMD’s learning.”

In addition, AMIA claimed that the framework “fails to discuss how modifications to SaMD algorithms may be the result of breaches of cybersecurity and the need to make this a component of periodic evaluation” and that the FDA should “consider how cybersecurity risks, such as hacking or data manipulation that may influence the algorithm’s output, may be addressed in a future version of the framework.”

Finally, the group recommended that the agency develop guidance about how and how often developers of SaMD-based products test their products for algorithm-driven biases.