New forms of cyberattack may leave deep learning systems for three popular medical imaging tasks particularly vulnerable to hackers, according to a recently published report by a team of researchers from Harvard Medical School and the Massachusetts Institute of Technology.

Adversarial attacks pose particular risks to healthcare because of the sector's monetary incentives and technical vulnerabilities, say the researchers, led by Samuel Finlayson of the Department of Systems Biology and Isaac S. Kohane of the Department of Biomedical Informatics at Harvard Medical School. Such attacks could leave otherwise highly accurate AI systems almost completely ineffective.

Adversarial attacks involve “inputs to machine learning models that have been crafted to force the model to make a classification error,” the report notes. Such attacks, which can be as simple as feeding manipulated images or corrupted data to an algorithm, could cause deep learning systems to reach faulty conclusions, leading to medical errors or erroneous treatment decisions.
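
The core mechanism is well documented in the machine learning literature. As an illustration only, the Python sketch below applies a "fast gradient sign" perturbation, one of the simplest ways to craft an input that forces a classification error; the model, image batch and step size are hypothetical placeholders, not artifacts from the Harvard-MIT study.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, images, true_labels, epsilon=0.01):
        # Illustrative sketch: the names and epsilon value are assumptions, not study parameters.
        # Compute the loss gradient with respect to the input pixels.
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), true_labels)
        loss.backward()
        # Nudge each pixel by +/- epsilon along the gradient sign; the change is
        # visually imperceptible but pushes the model toward a misclassification.
        adversarial = images + epsilon * images.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()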

Hackers are likely to be drawn to deep learning systems in healthcare because of the enormous size of the healthcare economy and the likelihood that algorithms will increasingly be relied upon to make reimbursement decisions and to guide prescriptions of pharmaceuticals or medical devices.

The authors contend that medical imaging might be particularly susceptible to advanced forms of hacking. “The medical imaging pipeline has many potential attackers and is thus vulnerable at many different stages,” they say. “While in theory one could devise elaborate image verification schemes throughout the image processing pipeline to try to guard against attacks, this would be extremely costly and difficult to implement in practice.”

To test the hypothesis, the researchers staged mock adversarial attacks under a variety of threat models, implementing both human-imperceptible perturbations and patch attacks against classifiers for chest X-ray, fundoscopy and dermoscopy images. “All adversarial attacks were extremely successful,” they found. “Adversarial patch attacks can be applied to any image and in any position. As such, for optical attacks they could even be physically built directly into image capture processes.”
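
Applying such a patch is mechanically trivial once the patch itself has been optimized. The minimal NumPy sketch below shows only the paste step; the image, patch contents and position are made-up placeholders rather than material from the study.

    import numpy as np

    def apply_patch(image, patch, top, left):
        # Overlay a pre-crafted patch (H x W x C) onto the image at a chosen position.
        patched = image.copy()
        h, w = patch.shape[:2]
        patched[top:top + h, left:left + w, :] = patch
        return patched

    # Hypothetical example: stamp a 32x32 block into the corner of a 224x224 RGB image.
    image = np.random.rand(224, 224, 3).astype(np.float32)
    patch = np.random.rand(32, 32, 3).astype(np.float32)  # stand-in for an optimized patch
    adversarial_image = apply_patch(image, patch, top=0, left=0)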

Adversarial attacks could affect radiology in several ways. For example, a bad actor could add adversarial noise to chest X-ray studies, which are often used to justify more complex and expensive imaging, to ensure that a deep learning model always returns a desired diagnosis.
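
A targeted variant of the perturbation sketched earlier captures this scenario: rather than merely causing an error, the noise is stepped in the direction that makes one chosen, billable finding more likely. As before, the model, labels and step size in the sketch are hypothetical placeholders, not drawn from the study.

    import torch
    import torch.nn.functional as F

    def targeted_perturb(model, images, target_labels, epsilon=0.01):
        # Illustrative sketch only; target_labels encode the diagnosis the attacker wants.
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), target_labels)
        loss.backward()
        # Step *against* the gradient so the loss for the target class falls,
        # making the desired diagnosis more likely.
        adversarial = images - epsilon * images.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()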

The authors conclude that deep learning initiatives in healthcare have significant potential, but that adversarial attacks “pose a disproportionately large threat in the medical domain … For machine learning researchers, we recommend research into infrastructural and algorithmic solutions designed to guarantee that attacks are infeasible or at least can be retroactively identified.”

“We urge caution in employing deep learning systems in clinical settings, and encourage research into domain-specific defense strategies,” the researchers conclude.
