Deep learning technique works on smartphone to find skin lesions

A novel way to locate “regions of interest” in dermoscopy images can improve the detection of skin lesions.



That kind of review, augmented by deep learning, can be conducted with an app on a consumer mobile device, enabling real-time identification and diagnosis of cancer and other skin conditions, according to new research posted on arXiv.org, part of the Cornell University Library.

The incidence of skin cancer is higher than that of breast, lung, colon and prostate cancers combined.

Dermatologists primarily examine patients visually and take manual measurements, but this approach has limits because many lesion types are difficult to distinguish with the naked eye.

Noninvasive dermoscopy imaging, which enables in vivo evaluation of colors and microstructures in the skin that are not visible to the naked eye, is increasingly being used to detect skin lesions automatically. The images, captured with a hand-held device called a dermatoscope or dermoscope, are analyzed by algorithms that facilitate visualization of structures beneath the skin's surface.

However, the available data sets of dermoscopy images are not well labeled, since labeling them is costly and burdensome. There are also many kinds of skin lesions, such as benign nevi, melanoma and seborrheic keratosis, which themselves vary in color, size, location and appearance.



The researchers, from the School of Computing, Mathematics and Digital Technology, Manchester Metropolitan University, Manchester, UK, proposed that deep learning could automatically detect and localize regions of interest—bounding boxes around the lesions—in the images, which would help the algorithms learn the finer features of different lesion types.

They used two object-localization meta-architectures for region of interest detection of skin lesions and trained their models on dermoscopy images from a publicly available data set. They then tested the algorithms on other publicly available skin lesion data sets.
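For readers unfamiliar with the approach, the sketch below shows what bounding-box detection of a lesion looks like in practice. It uses a generic pretrained Faster R-CNN from the torchvision library as a stand-in; the model choice, weights and file name are illustrative assumptions, not the meta-architectures or training data used in the study.

```python
# Illustrative sketch only: a generic object-detection pipeline for locating a
# lesion bounding box in a dermoscopy image. The pretrained torchvision model
# and its COCO weights are assumptions for demonstration; the study trained its
# own meta-architectures on dermoscopy data.
import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

def detect_lesion_roi(image_path, score_threshold=0.5):
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = Image.open(image_path).convert("RGB")
    tensor = F.to_tensor(image)          # HWC uint8 -> CHW float in [0, 1]

    with torch.no_grad():
        output = model([tensor])[0]      # dict of boxes, labels, scores

    # Keep the highest-scoring box above the threshold as the region of interest.
    keep = output["scores"] >= score_threshold
    boxes = output["boxes"][keep]
    return boxes[0].tolist() if len(boxes) else None

# Example usage (hypothetical file name):
# roi = detect_lesion_roi("dermoscopy_sample.jpg")
# print("Bounding box (x1, y1, x2, y2):", roi)
```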

Because there was no earlier work on region of interest detection for skin lesions with convolutional neural networks, they compared the performance of their lesion localization method with the state-of-the-art segmentation method, which partitions an image into multiple segments and assigns labels to them, for detecting regions of interest. Their two models were better at detecting the regions of interest, achieving a precision of 94.5 percent.
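A precision figure like that depends on how a predicted bounding box is judged correct. A common convention, assumed here for illustration, is to count a detection as correct when its intersection-over-union (IoU) with the ground-truth box is at least 0.5; the sketch below scores a set of detections that way. The study's exact matching criterion may differ.

```python
# Illustrative sketch of scoring detection precision: a predicted box counts as
# a true positive if its intersection-over-union (IoU) with the ground-truth
# box meets a threshold. The 0.5 threshold is a common convention and an
# assumption here, not necessarily the criterion used in the study.
def iou(box_a, box_b):
    """Boxes are (x1, y1, x2, y2) in pixel coordinates."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def detection_precision(predictions, ground_truths, threshold=0.5):
    """One predicted box (or None) and one ground-truth box per image."""
    detected = [(p, g) for p, g in zip(predictions, ground_truths) if p is not None]
    if not detected:
        return 0.0
    true_positives = sum(1 for p, g in detected if iou(p, g) >= threshold)
    return true_positives / len(detected)   # precision = TP / (TP + FP)
```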

They also found that their methods could automatically augment the data set of skin lesion images with views at different angles and magnifications, without capturing redundant data or degrading image quality. Their program was also able to remove unwanted artifacts, such as hairs and specular reflections, that can confuse the algorithms.
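Hair removal in dermoscopy images is often done with a morphological black-hat filter followed by inpainting, in the spirit of the classic DullRazor technique. The sketch below illustrates that general approach with OpenCV; it is an assumption about the kind of preprocessing involved, not the study's exact implementation.

```python
# Illustrative sketch of one standard way to suppress hair artifacts in
# dermoscopy images: a black-hat filter highlights thin dark structures such as
# hairs, and inpainting fills those pixels from their surroundings. This is a
# generic technique shown for context, not the study's own code.
import cv2

def remove_hair(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Black-hat morphology emphasizes thin, dark, hair-like structures.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)

    # Threshold the hair response into a mask, then inpaint the masked pixels.
    _, mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    return cv2.inpaint(image_bgr, mask, 3, cv2.INPAINT_TELEA)

# Example usage (hypothetical file name):
# cleaned = remove_hair(cv2.imread("dermoscopy_sample.jpg"))
```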

The researchers then created a smartphone app to demonstrate the practical application of their models. They did so using a MoleScope, a smartphone attachment for dermoscopy that provides a high-resolution, detailed view of the skin through magnification and specialized lighting. They then tested 50 moles on 25 different people to show that the models provide accurate, real-time detection and localization of skin lesions.
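The article does not say which mobile toolchain the researchers used. One common route to real-time, on-device inference, shown below purely as an illustration, is to convert a trained TensorFlow model to TensorFlow Lite; the model path and quantization settings here are hypothetical.

```python
# Illustrative sketch of packaging a trained detection model for a smartphone
# app by converting it to TensorFlow Lite. The SavedModel path is a placeholder
# and the use of TensorFlow Lite is an assumption about the kind of deployment
# involved, not the authors' documented toolchain.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("lesion_detector_saved_model")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize for smaller, faster mobile inference
tflite_model = converter.convert()

# Write the flatbuffer that the mobile app would load for on-device detection.
with open("lesion_detector.tflite", "wb") as f:
    f.write(tflite_model)
```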

“Automated [region of interest] detection in skin lesions has great potential in improving the quality of the dataset, the accuracy of lesion localization and reducing the laborious process in the manual annotation,” the study authors say.
