Industry group issues its clinical decision support guidelines

CDS Coalition seeks to set rules for applications that assist physicians in making decisions on medium-risk healthcare interventions, says Bradley Merrill Thompson.


The CDS Coalition, a group of software developers and medical device manufacturers, has released final guidelines on the design of medium-risk clinical decision support (CDS) software that it hopes will serve to self-regulate the industry as the Food and Drug Administration looks to publish its own guidance on high-risk CDS early next year.

“The coalition wanted to do its part to assure the safety of unregulated CDS by developing guidelines that would ensure that medium-risk CDS either leaves the decision-making in the hands of qualified users or is properly validated,” says Bradley Merrill Thompson, general counsel for the group. “We also wanted to leave low-risk CDS untouched to avoid burdening innovation in that area.”

Under the 21st Century Cures Act signed into law late last year, certain CDS software is now outside the scope of FDA regulation. While the regulatory agency will need to interpret the exact line drawn by the statute, the CDS Coalition wants to give FDA confidence that, for unregulated software, the “industry will do an adequate job of self-regulation,” Thompson contends.

The voluntary guidelines issued by the CDS Coalition are meant to assure the central and independent role of healthcare professionals in clinical decision-making. In particular, the guidelines are based on three pillars—software transparency, competent human intervention and sufficient time for clinicians to review the basis for a clinical recommendation.

According to Thompson, the most difficult issue the CDS Coalition had to address in its guidelines was the technical challenge of making machine learning transparent.

“Our guidelines require transparency, and transparency means that the user of the software can understand what data the software is analyzing, and what the underlying basis is for the recommendations the software makes,” he notes. “That transparency is necessary to leave the user in control of the decision-making rather than having to rely on a black box.”

However, as Thompson points out, the problem with machine learning software used to help physicians make more accurate clinical decisions is that it “does a lousy job of explaining how the software arrived at a given conclusion.” “An awful lot of very smart computer scientists are trying to improve software capabilities in this area, but they have a long way to go,” he adds.

To address these issues, the CDS Coalition devised five steps that developers of machine learning-based CDS could follow to achieve transparency:
  • Explain what can be explained. Don’t make the problem bigger than it has to be. If the software is actually a blend of expert systems and machine learning, and if a particular recommendation is based on expert systems, such as simply looking up the drug allergy in the patient’s EHR, following a simple computational model or recommending a treatment because it is cheaper, the recommendation ought to reveal that reason.
  • Communicate the quality of the machine learning algorithms. When the source is truly machine learning, the software needs to reveal that source, along with information that will help the user gauge the quality and reliability of the machine learning algorithm. Through a page in the user interface that can be periodically updated, the developer could explain to the user the extent to which the system has been validated and the historical batting average of the software. That context helps the user understand the reliability of the software in general.
  • Describe the data sources used for learning. Providing a thorough explanation of the data sets used to feed and test the machine can provide important context and assurance to the clinician.
  • State the association as precisely as possible. With machine learning, what we are really seeing is an association—something in the patient-specific information triggers an association to what the software has seen in other cases. Even though the software can’t articulate exactly what it is about the data that triggers the association, or even what features it looked at, that doesn’t make it any different from a radiologist who points to a place on an image and says, “I’ve seen that before, and it’s been malignant.” Much of what we “know” in medicine is really just associations without a deeper understanding of a causal relationship. Software built on machine learning needs to explain that it has spotted an association, and state as precisely as it can the details of that association.
  • Convey the confidence level. While software based on machine learning does a miserable job of explaining the clinical logic it follows, machine learning excels at communicating its confidence level in reaching a particular recommendation. And that’s quite valuable. That information helps the user decide how much deference the user should give a particular recommendation.
“The steps are demanding,” concludes Thompson. “But there’s no inherent right to declare machine learning transparent. It’s going to take some work by developers.”
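
To make the five steps concrete, here is a minimal, hypothetical sketch in Python of how a CDS application might surface these transparency elements to the clinician. The class names, fields and example values are illustrative assumptions, not part of the coalition’s guidelines or any actual product.

```python
# Hypothetical sketch of a CDS recommendation object that discloses its source,
# validation history, training-data context, the association it found, and a
# confidence score, per the transparency steps described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ValidationSummary:
    """Historical performance of the model (the 'batting average')."""
    cases_evaluated: int
    agreement_with_clinicians: float   # e.g., 0.91 = 91% concordance
    last_updated: str                  # date the figures were last refreshed

@dataclass
class CdsRecommendation:
    text: str                              # the recommendation shown to the clinician
    source: str                            # "rule" (expert system) or "model" (machine learning)
    rationale: Optional[str] = None        # for rules: the explicit reason (e.g., allergy in the EHR)
    association: Optional[str] = None      # for models: the association spotted, stated as precisely as possible
    confidence: Optional[float] = None     # model's confidence in this specific recommendation
    training_data: Optional[str] = None    # description of the data sets used to train and test the model
    validation: Optional[ValidationSummary] = None

# A rule-based recommendation explains its reason directly (step 1):
allergy_alert = CdsRecommendation(
    text="Avoid penicillin; consider an alternative antibiotic.",
    source="rule",
    rationale="Penicillin allergy documented in the patient's EHR.",
)

# A machine-learning recommendation discloses its source, data, association
# and confidence (steps 2-5); all figures below are made up for illustration:
ml_suggestion = CdsRecommendation(
    text="Findings resemble early-stage diabetic retinopathy; consider ophthalmology referral.",
    source="model",
    association="Retinal image features resemble prior cases graded as mild nonproliferative retinopathy.",
    confidence=0.87,
    training_data="De-identified retinal images graded by multiple ophthalmologists.",
    validation=ValidationSummary(cases_evaluated=5000,
                                 agreement_with_clinicians=0.91,
                                 last_updated="2017-06-01"),
)
```

The point of a structure like this is that every recommendation carries its own provenance, so the clinician can see at a glance whether advice came from an explicit rule or a learned association, and how much deference it deserves.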
