Smartphone images, AI offer new way to screen for diabetic retinopathy

An artificial intelligence software platform is able to provide automated real-time interpretation of retinal images from a smartphone-mounted device for quick detection of diabetic retinopathy (DR).

The eye condition can lead to permanent vision loss if not caught early by clinicians trained to recognize signs of the disease. However, ophthalmic screening of retinal images is time consuming and requires expert readers.

“It can take two to seven days for an ophthalmologist to interpret the images,” according to Yannis Paulus, MD, a vitreoretinal surgeon at the University of Michigan Kellogg Eye Center. “To make screening truly accessible, we need to provide on-the-spot feedback, taking the photo and interpreting it while the patient is there to schedule an eye appointment if necessary.”

Paulus and a team of Kellogg researchers have developed a device called RetinaScope that turns a smartphone into a functioning retinal camera for rapid, portable DR screening. In a new study, they showed how retinal images taken by RetinaScope can be accurately analyzed using a proprietary AI software platform called EyeArt from vendor Eyenuk.

“This is the first study to combine the imaging technology with automated real-time interpretation and compare it to the gold standard dilated eye examination, and the results are very encouraging,” says Paulus, who is lead author of the study presented at this week’s annual meeting of the Association for Research in Vision and Ophthalmology.


In the study of 69 adult patients with diabetes seen in the Kellogg Eye Center Retina Clinic, retinal images from RetinaScope were analyzed with the EyeArt software, which graded them as referral-warranted diabetic retinopathy (RWDR) or non-referral-warranted DR. The images were also independently evaluated by two human graders.


“We took the extra step of comparing both automated interpretation and human expert graders with slit lamp evaluation to overcome what we felt was a shortcoming of similar studies conducted elsewhere,” says Michael Aaberg, a research assistant in the Paulus lab and co-author of the study. “When human grading is used as the only check of AI-based grading, there’s a risk that photos that fail to accurately capture the pathology of DR could be interpreted incorrectly by both.”

The slit lamp evaluation, using a binocular microscope that enables a doctor to see the structures within the eyes, confirmed RWDR in 53 patients. The AI software had a sensitivity of 86.8 percent, which is above the 80 percent recommended for an ophthalmic screening device, and specificity of 73.3 percent.

While one of the human graders achieved a sensitivity (96.2 percent) that was higher than the software’s by a statistically significant margin, both human readers had lower specificity (40 percent and 46.7 percent, respectively).
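To make the reported metrics concrete, here is a minimal sketch of how sensitivity and specificity are computed from a screening study's confusion matrix. The article does not publish the raw counts, so the numbers below are assumptions chosen only to reproduce the reported percentages (53 patients with slit-lamp-confirmed RWDR, and an assumed 15 gradable patients without it).

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of patients with the disease that the screener flags."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of disease-free patients the screener correctly passes."""
    return tn / (tn + fp)

# Assumed counts (NOT published in the article), chosen to match
# the reported 86.8% sensitivity and 73.3% specificity for the AI:
# 53 patients with slit-lamp-confirmed RWDR, 15 without.
ai_sens = sensitivity(tp=46, fn=7)   # 46/53
ai_spec = specificity(tn=11, fp=4)   # 11/15

print(f"AI sensitivity: {ai_sens:.1%}")  # 86.8%
print(f"AI specificity: {ai_spec:.1%}")  # 73.3%
```

A screener's sensitivity matters most here because a false negative means a patient with referral-warranted disease goes unreferred, which is why the 80 percent sensitivity floor for ophthalmic screening devices is the benchmark the article cites.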

“This is the first time AI used on a smartphone-based platform has been shown to be effective when compared to the gold standard of clinical evaluation,” contends Paulus, whose team is continuing to tweak the hardware-software solution and is pursuing FDA clearance.

“The key to preventing DR-related vision loss is early detection through regular screening,” he adds. “We think the key to that is bringing portable, easy-to-administer, reliable retinal screening to primary care doctors’ offices and health clinics.”
