MD Anderson using NLP to automate radiology reporting

The interpretative reports that radiologists issue for imaging studies tend to vary in format and in the information included. Even the terms used to describe specific clinical findings or categorize a patient’s disease risk may differ from one radiologist to another.

In an effort to standardize radiology reporting, the American College of Radiology (ACR) has developed Reporting and Data System (RADS) frameworks, which serve to guide radiologists in assessing, categorizing and reporting imaging findings and recommendations. To date, 10 ACR-RADS have been released, including breast imaging (BI-RADS), head injury imaging (HI-RADS) and coronary artery disease (CAD-RADS).

According to ACR’s website, the goal is “… to reduce the variability of terminology in reports and to ease communication between radiologists and referring physicians.”

However, getting radiologists to adopt the 10 RADS frameworks can be challenging, says David J. Vining, MD, medical director of the image processing and visualization laboratory at The University of Texas MD Anderson Cancer Center, Houston. “The [RADS] … are complex and difficult to recall during daily clinical practice,” he said during a presentation at RSNA 2018 in Chicago. One issue is that the 10 RADS vary in what is assessed, how they are organized and how disease risk is determined (for example, by objective assessment or by calculated sums).

To make it as simple as possible for radiologists to adopt RADS, MD Anderson is using a structured reporting solution called VisionSR, which employs natural language processing to collect the common data elements used in RADS. The approach is intended to fit into the radiologist’s natural workflow. Radiologists do not have to manually enter data elements or select them from pull-down menus. Instead, they orally dictate their findings as they assess an imaging scan on-screen.

“The radiologists are free to talk as much as they wish,” Vining said. “The natural language processing populates the common data elements as the radiologist describes the disease.”
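The idea of populating common data elements from free dictation can be sketched in miniature. The snippet below is an illustrative approximation only, not VisionSR’s actual NLP pipeline; the finding and location vocabularies and the field names are hypothetical, and a production system would use far richer language models and ontologies.

```python
import re

# Hypothetical vocabularies of findings and locations, for demonstration only.
FINDING_TERMS = {"lung nodule", "mass", "lesion"}
LOCATION_TERMS = {"right upper lobe", "left lower lobe", "right breast", "left breast"}

def extract_data_elements(dictation: str) -> dict:
    """Populate RADS-style common data elements from a dictated sentence."""
    text = dictation.lower()
    elements = {"finding": None, "location": None, "size_mm": None}

    # Pick out the first recognized finding term in the dictation.
    for term in FINDING_TERMS:
        if term in text:
            elements["finding"] = term
            break

    # Pick out the first recognized anatomical location.
    for term in LOCATION_TERMS:
        if term in text:
            elements["location"] = term
            break

    # Match sizes dictated as "8 mm" or "1.2 cm" and normalize to millimeters.
    size = re.search(r"(\d+(?:\.\d+)?)\s*(mm|cm)", text)
    if size:
        value = float(size.group(1))
        elements["size_mm"] = value * 10 if size.group(2) == "cm" else value

    return elements

print(extract_data_elements(
    "There is an 8 mm lung nodule in the right upper lobe."
))
# {'finding': 'lung nodule', 'location': 'right upper lobe', 'size_mm': 8.0}
```

The point of the sketch is the direction of data flow: the radiologist speaks naturally, and the structured fields fill themselves in from the transcript rather than through menus or forms.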

Because it encompasses a dictation capability, the structured reporting solution has replaced the conventional dictation system used by radiologists at MD Anderson.

The solution also uses visual prompts to remind radiologists to dictate the data elements used in RADS. “When the radiologist says ‘lung nodule,’ a window pops up with the common data elements,” Vining said. “As the radiologist dictates these elements, the system automatically populates those fields.” The multimedia solution also marks the anatomical location of the radiologist’s findings on the image.
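The prompting behavior Vining describes amounts to a trigger phrase opening a checklist of data elements still awaiting values. A minimal sketch of that logic, with a hypothetical field list that is not the actual RADS schema:

```python
# Hypothetical mapping from a trigger phrase to the common data elements
# a reporting system might prompt for. Field names are illustrative only.
PROMPTS = {
    "lung nodule": ["size", "location", "margin", "attenuation"],
}

def fields_to_prompt(trigger_phrase: str, filled: dict) -> list:
    """Return the data elements not yet dictated for a given trigger phrase."""
    required = PROMPTS.get(trigger_phrase, [])
    return [field for field in required if field not in filled]

# After the radiologist says "lung nodule" and has dictated only a size,
# the pop-up would still list the remaining fields.
print(fields_to_prompt("lung nodule", {"size": "8 mm"}))
# ['location', 'margin', 'attenuation']
```

As each remaining element is dictated, it drops off the prompt list, which is how the on-screen window can serve as a live reminder rather than a form to fill in.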

In real time, the software then calculates a disease classification, or grade, based on the data provided by the radiologist. For instance, for breast cancer imaging, there are six potential classifications, ranging from “negative” to “highly suggestive of malignancy.”
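Conceptually, the real-time grading step maps the collected data elements onto an assessment category. The rules and thresholds below are hypothetical simplifications for illustration; actual BI-RADS assessment follows the ACR atlas, not this toy rule set.

```python
# Hypothetical category labels spanning the range the article describes,
# from "negative" to "highly suggestive of malignancy".
CATEGORY_LABELS = {
    1: "negative",
    2: "benign",
    3: "probably benign",
    4: "suspicious",
    5: "highly suggestive of malignancy",
}

def assess(elements: dict) -> str:
    """Map dictated data elements to an assessment label (toy rules only)."""
    if not elements.get("finding"):
        category = 1  # nothing dictated: negative
    elif elements.get("margin") == "spiculated":
        category = 5  # spiculated margins are a classic high-suspicion feature
    elif elements.get("margin") == "irregular":
        category = 4
    else:
        category = 3
    return CATEGORY_LABELS[category]

print(assess({"finding": "mass", "margin": "spiculated"}))
# highly suggestive of malignancy
```

Because the grade is recomputed as each element arrives, the radiologist sees the classification update while still dictating, rather than assigning it from memory afterward.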

A standardized reporting structure can be used for more than RADS reporting, says Vining, who is also CEO of VisionSR. Standardizing radiology reporting also makes it easier to track changes in a patient’s imaging studies over time and to mine radiology data and identify trends across patients.
