When radiologists at the University of Virginia Health System pull up abdominal CT scans, they see a green or red icon in the corner of the study, indicating that a machine learning algorithm has already reviewed the images. A green icon means the application found nothing troubling; clicking a red icon reveals the abnormalities the technology found.

The Charlottesville-based health system has been partnering with Zebra Medical Vision to test beta versions of algorithms that detect five medical conditions or abnormalities on CT scans: emphysema, coronary artery calcification, fatty liver, spinal fractures and low spinal bone density.

While the algorithms are doing a good job identifying specific routine findings, they are not yet reducing radiologists' workloads or improving the accuracy of radiology reports. What needs to occur before that happens?

The following Q&A shares frontline insights from two UVA Health System radiologists: Christopher Gaskin, MD, professor and vice chair, operations and informatics; and Arun Krishnaraj, MD, associate professor of radiology and medical imaging.

How might AI be best used in diagnostic radiology?

Krishnaraj: I see AI as an enormous boon to radiology. It’s the first time in my career that we’re starting to scratch the surface of technology that will improve the quality of patient care and help radiologists sort through the current information overload. I tell my residents, “Most of our CT scans have over a thousand images, and most of our MRIs have three thousand. We probably look at 50 to 60 of each a day.”


The human brain is subject to fatigue and was never designed to identify patterns in that many images over the course of the day. Yet, we’re asking human beings to make important decisions about the health of other human beings when they may be exceeding the threshold of what their brains can do. So this is the perfect time for the second brain to come into play.

Gaskin: If our interpretation of images is assisted by AI, our reports would be more accurate. The technologies can help us avoid misses or come up with diagnoses that we don’t see, which would, in turn, improve our reports. Machines can also help us digest all the information in the patient chart to help us make a more informed diagnosis on the imaging study.

AI can also call our attention to urgent findings and help us prioritize which studies to read first. As soon as images are produced, machine learning algorithms can instantly review them for urgent findings, such as blood in the brain. A flag could be applied to those studies so that the radiologist would know, “Hey, read this first.”
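For readers curious what that flagging might look like in practice, here is a minimal Python sketch of such a triage queue. The study fields, priority scheme and accession numbers are illustrative assumptions for this example, not any vendor's actual interface.

```python
from dataclasses import dataclass, field
from typing import Optional
import heapq

# Illustrative sketch of AI-driven worklist triage: studies flagged by an
# algorithm for urgent findings jump ahead of routine studies in the
# radiologist's reading queue. All names and values here are assumptions.

@dataclass(order=True)
class Study:
    priority: int                        # 0 = urgent AI flag, 1 = routine
    arrival: int                         # tiebreaker: earlier studies first
    accession: str = field(compare=False)
    finding: Optional[str] = field(compare=False, default=None)

worklist: list = []
heapq.heappush(worklist, Study(1, 1, "ACC-1001"))
heapq.heappush(worklist, Study(0, 2, "ACC-1002", "possible blood in the brain"))
heapq.heappush(worklist, Study(1, 3, "ACC-1003"))

while worklist:
    study = heapq.heappop(worklist)
    tag = f"  <- read this first: {study.finding}" if study.finding else ""
    print(study.accession + tag)
```

The urgent study (ACC-1002) is popped first even though it arrived after a routine one, which is the behavior Gaskin describes.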

What needs to happen before these algorithms can reduce the workload of radiologists?

Gaskin: It’s not as simple as, “Can an algorithm find a black dot on a white background?” When a radiologist interprets an image, he or she is considering hundreds of factors. Our diagnoses are partially objective, based on what we see in images, and partially subjective, based on patient information and extensive medical experience.

The existing algorithms tend to focus on a specific, very limited library of imaging findings, which represent a small piece of what we do. It’s difficult to apply machine learning to hundreds of decision points that are complex and subjective.

We have to teach the machine, which means we have to take large numbers of images with known, confirmed diagnoses and manually mark them so that the machine can learn these positive findings. If you take pictures of thousands of animals and circle every cat and say, “This is a cat,” then the machine can learn to distinguish cats from other animals.
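That labeling-and-learning loop can be shown with a toy example. In the Python sketch below, simple feature vectors stand in for annotated images and a nearest-centroid classifier stands in for the deep networks real systems train on thousands of confirmed scans; every number and label is invented for illustration.

```python
import numpy as np

# Human-labeled training examples ("this is a cat"). In a real system
# these would be expert-annotated images, not 2-D toy vectors.
X_train = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
y_train = np.array(["cat", "cat", "other", "other"])

# "Training": average each class's labeled examples into a centroid.
centroids = {label: X_train[y_train == label].mean(axis=0)
             for label in np.unique(y_train)}

def predict(x):
    """Label a new example with the class of its nearest centroid."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

print(predict(np.array([0.85, 0.15])))  # -> cat
```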

Is there anything AI and machine learning might do better than radiologists?

Krishnaraj: The algorithm I am finding the most useful is the bone density score in cases of osteopenia in the spine, which basically means thin bones. This is a subtle finding that is difficult to detect visually. The algorithm can measure the bone and compute a DEXA score [for bone density]. If the score falls more than two standard deviations below normal, it indicates osteopenia.
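The arithmetic behind that kind of score is straightforward to sketch. The Python example below computes a T-score-style measure, the number of standard deviations a bone density reading falls from a healthy reference mean; the reference values are made up for the example, not clinical data.

```python
# Reference values are invented for illustration; real thresholds come
# from clinical reference populations, not from this code.
REFERENCE_MEAN_BMD = 1.00   # g/cm^2, healthy reference mean (illustrative)
REFERENCE_SD = 0.10         # g/cm^2, reference standard deviation (illustrative)

def t_score(measured_bmd: float) -> float:
    """Standard deviations between a measured BMD and the reference mean."""
    return (measured_bmd - REFERENCE_MEAN_BMD) / REFERENCE_SD

score = t_score(0.78)
print(f"T-score: {score:.1f}")   # -> T-score: -2.2
if score <= -2.0:                # the "two standard deviations" cutoff quoted above
    print("Flag: bone density more than 2 SD below normal")
```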

How soon before AI becomes commonplace in radiology?

Krishnaraj: I think that’s the million-dollar question. I don’t think anybody has a realistic timeline, but based on how quickly things like this have progressed in other industries, it will happen far sooner than even the best people can predict. These types of technologies tend to have a snowball effect. Once people see their utility, they ask for more, and then creative people respond.

How will the radiologist’s role change as AI grows?

Krishnaraj: Until now, the best radiologists were often identified as the ones with the keenest eyes, or the ability to identify abnormalities on imaging studies consistently without mistake. I think that skill will eventually be supplanted by machine-learning algorithms.

So the value of a radiologist is going to be less about identifying abnormalities, and more about synthesizing all available information — from imaging, the electronic health record, and conversations with the patient or referring provider — into a meaningful, actionable report that is easily digestible. That role involves more work than what we have traditionally been asked to do. And a big tool that we need to be able to accomplish this task is AI, which will help discern the signals among all the noise, or bring what’s important front and center.
