Research looks to bring deep learning to radiology
Some leading healthcare organizations are beginning to apply deep learning in research efforts aimed at helping radiologists better diagnose disease.
Deep learning is a subset of machine learning, a branch of artificial intelligence, and is used by researchers to tackle big data problems such as computer vision, speech recognition and natural language processing. For healthcare organizations doing pioneering work with deep learning, that includes image recognition and the ability to pair that recognition with algorithms that assist in diagnosis.
Currently, few healthcare organizations have the technical capacity to do research in deep learning, but early efforts are beginning to unearth findings that hold promise within radiology, says Luciano Prevedello, MD, division chief in medical imaging informatics at The Ohio State University Wexner Medical Center.
Prevedello leads a lab at OSU Wexner, staffed by two physicians, three engineers and one medical physicist, that is examining the use of augmented intelligence in imaging, he said in a presentation at the recent annual meeting of the Radiological Society of North America. The lab has access to three supercomputers that can run a variety of open-source deep learning frameworks, such as Caffe and TensorFlow, driven largely through Python.
One of the early projects at OSU Wexner involves using deep learning to help support the prioritization of imaging studies, Prevedello says. “One of the problems is that 40 percent of inpatient studies are (ordered with high priority), so how do you sort them and know which ones should really be done first?” Early work has centered on using deep learning to examine images and extract critical findings.
For example, in looking at computed tomography images of the head, deep learning efforts have been aimed at “training” the computer to separate normal images from abnormal ones. In a learning set of images that human radiologists had already studied, the OSU initiative correctly classified 91 of every 100 images; in stroke cases, the rate was 81 correct classifications of every 100 images.
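Classification rates like those above boil down to measuring how often the model's label agrees with the radiologist's reference label. A minimal sketch of that comparison (all labels below are invented for illustration):

```python
# Sketch: measuring agreement between model predictions and radiologist
# reference labels for a normal-vs-abnormal classifier.
# All labels here are hypothetical, for illustration only.

def classification_accuracy(predicted, reference):
    """Fraction of cases where the model label matches the reference label."""
    if len(predicted) != len(reference):
        raise ValueError("label lists must be the same length")
    correct = sum(p == r for p, r in zip(predicted, reference))
    return correct / len(reference)

# Hypothetical labels: 1 = abnormal, 0 = normal
radiologist_labels = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]
model_labels       = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0]

accuracy = classification_accuracy(model_labels, radiologist_labels)
print(f"accuracy: {accuracy:.0%}")  # 8 of 10 labels agree -> "accuracy: 80%"
```

A 91-in-100 result corresponds to an accuracy of 0.91 under this measure; real evaluations would also report sensitivity and specificity, since missing an abnormal study is costlier than flagging a normal one.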
“The idea here is to make our scanners more intelligent,” Prevedello says. “If algorithms can be developed to identify problematic cases, we can then notify radiologists to read those cases sooner, or reshuffle schedules to have them read the highest-priority cases first.”
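The reshuffling Prevedello describes amounts to re-sorting the reading worklist by the algorithm's estimate that each study contains a critical finding. A minimal sketch of that idea (study IDs and scores are hypothetical):

```python
# Sketch: re-ordering a radiology worklist by a model's estimated
# probability that each study contains a critical finding.
# Study IDs and probabilities are hypothetical.

worklist = [
    {"study": "CT-head-001", "critical_prob": 0.12},
    {"study": "CT-head-002", "critical_prob": 0.91},
    {"study": "CT-head-003", "critical_prob": 0.47},
]

# Most-likely-critical cases move to the front of the queue
prioritized = sorted(worklist, key=lambda s: s["critical_prob"], reverse=True)

for item in prioritized:
    print(item["study"], item["critical_prob"])
```

In practice the score would come from the image classifier described above, and the sort would run continuously as new studies arrive.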
Deep learning also is being applied to text analysis, assisting radiologists beyond image interpretation by generating study protocols and using natural language processing to surface clinical discoveries.
At Stanford University Medical Center, early work with deep learning is focusing on cohort selection and image labeling, says Curtis Langlotz, MD, professor of radiology and biomedical informatics at the organization. A variety of data from the medical center, including its electronic medical record, genomics data, biobank information and imaging studies, are brought together and processed on a node of Stanford University’s High Performance Computing Center.
“We have software that does kind of a Google search of radiology reports,” Langlotz says. “It’s not exact, but it’s a good way to get a sense of how many cases have a phrase in reports. When we automate this notion of labeling, it’s not perfect but we look at it as kind of a pipeline to enable further research.”
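The “Google search of radiology reports” Langlotz describes can be pictured as a phrase match used to assign rough labels to free-text reports. A toy sketch (the report text is invented, and real pipelines use NLP that handles negation and uncertainty, which this does not):

```python
import re

# Sketch: weakly labeling free-text radiology reports by phrase matching.
# Report text is invented for illustration. A naive match like this cannot
# distinguish "pneumonia" from "no evidence of pneumonia".

reports = [
    "Findings: right lower lobe opacity concerning for pneumonia.",
    "Findings: no acute cardiopulmonary abnormality.",
    "Impression: multifocal pneumonia, follow-up recommended.",
]

def label_reports(reports, phrase):
    """Return (report, matched) pairs for a case-insensitive phrase search."""
    pattern = re.compile(re.escape(phrase), re.IGNORECASE)
    return [(r, bool(pattern.search(r))) for r in reports]

labeled = label_reports(reports, "pneumonia")
matches = sum(flag for _, flag in labeled)
print(f"{matches} of {len(reports)} reports mention 'pneumonia'")
```

As Langlotz notes, the labels are imperfect, but they give a quick estimate of cohort size and a starting point for further research.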
Such studies can be used to find similar cases among radiology free-text reports and to determine variations in treatment and which cases resulted in the best outcomes. In addition, deep learning has been used to improve diagnosis of disease. For example, Stanford researchers used it to develop an algorithm that detects pneumonia from chest X-rays at a level exceeding that of practicing radiologists. The resulting model, CheXNet, is a 121-layer convolutional neural network that takes a chest X-ray image as input and outputs the probability of pneumonia, along with a map indicating the areas of the image most indicative of the disease.
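The 121-layer network in question is the DenseNet-121 architecture with its final layer replaced by a single sigmoid output. A sketch of how such a model might be instantiated in Keras (the input size and untrained weights here are assumptions; this is not the trained CheXNet model):

```python
# Sketch: a CheXNet-style classifier built from DenseNet-121 with a single
# sigmoid output giving the probability of pneumonia. Untrained weights and
# the 224x224 input size are assumptions for illustration.
from tensorflow.keras.applications import DenseNet121

model = DenseNet121(
    weights=None,               # untrained here; real training starts elsewhere
    input_shape=(224, 224, 3),  # chest X-ray resized to 224x224 RGB
    classes=1,                  # single output: probability of pneumonia
    classifier_activation="sigmoid",
)

# model.predict(batch) would yield one pneumonia probability per image
print(model.output_shape)  # (None, 1)
```

The localization map the article mentions is typically produced separately, by projecting the final convolutional layer's activations back onto the image.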
Work is also underway at the Mayo Clinic, says Bradley Erickson, MD, the associate chair for research who leads Mayo’s Radiology Informatics Lab. The lab is developing informatics tools that extract and convey the information available in medical images, including work to optimize radiology systems such as picture archiving and communication systems, radiology information systems, 3-D imaging and computer-aided diagnosis tools.
The main research focus of the informatics lab is developing image-derived biomarkers of disease. The majority of work focuses on brain cancer, but there are also projects centered on lung cancer and interstitial disease, as well as polycystic kidney disease, Erickson says.