The Food and Drug Administration’s draft guidance on clinical decision support software is getting poor reviews from stakeholder groups, which contend that the regulatory agency still has a lot of work to do, according to comments submitted to the agency this week.
This past December, the FDA released draft guidance meant to clarify what types of CDS will no longer be defined as medical devices and therefore will not be regulated by the agency—based on provisions of the 21st Century Cures Act.
But, according to Bradley Merrill Thompson, general counsel for the CDS Coalition, which represents software developers, healthcare payers, providers and medical device manufacturers, the FDA’s draft guidance fails to take a risk-based approach—despite assurances from the agency—and, if implemented, would force sellers of existing software to remove their products from the market.
“The draft guidance would hurt patients,” says Thompson. “The draft guidance would significantly expand the scope of FDA regulation and sweep in many low-risk software programs that have been on the market for years—if not decades—without FDA oversight. The draft guidance also clouds, not clarifies, the pathway for new CDS programs that incorporate technologies such as machine learning algorithms.”
However, according to Bakul Patel, associate director for digital health in the FDA’s Center for Devices and Radiological Health, the agency’s draft CDS guidance does not propose to expand the scope of regulation. In fact, he contends that under the 21st Century Cures Act, certain CDS software is now outside the scope of FDA regulation.
“The guidance is about clarifying a set of products that are no longer considered devices under the 21st Century Cures Act, and it then identifies a new set of devices for patients on which we would not focus our regulatory efforts,” says Patel. “The guidance, if finalized, would not increase FDA regulatory oversight of any software program. Our draft guidance clarifies that software that allows the provider to independently review the basis for the treatment recommendation is excluded from FDA regulation.”
However, the CDS Coalition isn’t the only group to find fault with the FDA’s draft guidance. While the American Medical Informatics Association applauded development of the guidance, AMIA warned in its comments to the FDA that there is “lingering confusion among developers and clinicians trying to determine whether specific decision support software is, or is not, considered a device,” adding that this is “because some of the criteria used to determine if a functionality should be excluded are ambiguous.”
To address this ambiguity, AMIA recommends that the FDA:
- Articulate why examples of software are categorized as exempt or not, using the criteria established by Cures.
- Include discussion regarding FDA’s intended regulatory controls for CDS software considered a device, even if the intended regulatory controls are still in development.
- Include discussion regarding the anticipated literacy levels, and their variance, across intended users of patient decision support software functions.
In its comments, AMIA also points out that the FDA’s guidance has “engendered much conversation regarding a host of implicit concepts, especially regarding the use of machine learning methods and other similar tools in decision support software.”
As a result, AMIA has called on the FDA to “host a public convening to discuss standards for transparency and performance of decision support software in machine learning-based environments.”
Among the topics that should be discussed related to transparency and performance are:
- Methods and standards to describe the underlying data used to develop the algorithm(s).
- Plain language descriptions of the logic used by an algorithm(s) to render a decision.
- Policies needed to encourage the public availability of development data set(s) used to produce a decision recommendation.
- Methods, standards and policies to test the algorithm(s) on one’s own comparable data set, to establish comparable performance characteristics prior to operational use in situ.
When it comes to algorithms, Patel emphasizes that the policies in the guidance apply to all types of algorithms, including artificial intelligence. “As this is a draft guidance, we look forward to substantive comments from stakeholders on how they believe the statute and policies apply in circumstances that they care about,” he adds.
Nonetheless, Thompson says the CDS Coalition expected much more from the FDA, especially given that it took the agency seven years to develop the draft guidance.
“FDA had been repeatedly promising to make the guidance risk-based, but then didn’t. We have no idea why,” he concludes. “We certainly hope that FDA reconsiders its approach and develops a new proposed guidance.”