Adversarial Attack Vulnerability of Medical Image Analysis Systems: Unexplored Factors

@article{Wetstein2021AdversarialAV,
  title={Adversarial Attack Vulnerability of Medical Image Analysis Systems: Unexplored Factors},
  author={Suzanne C. Wetstein and Cristina González-Gonzalo and Gerda Bortsova and Bart Liefers and Florian Dubost and Ioannis Katramados and Laurens Hogeweg and Bram van Ginneken and Josien P. W. Pluim and Marleen de Bruijne and Clara I. Sánchez and Mitko Veta},
  journal={Medical Image Analysis},
  year={2021},
  volume={73},
  pages={102141}
}
Adversarial attacks are considered a potentially serious security threat for machine learning systems. Medical image analysis (MedIA) systems have recently been argued to be vulnerable to adversarial attacks due to strong financial incentives and the associated technological infrastructure. In this paper, we study previously unexplored factors affecting adversarial attack vulnerability of deep learning MedIA systems in three medical domains: ophthalmology, radiology, and pathology. We focus on…
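As an illustration of the kind of vulnerability the paper measures, the sketch below runs a standard projected gradient descent (PGD) attack against a PyTorch image classifier. The model, the L-infinity budget eps, the step size alpha, and the step count are assumptions made for the example, not the authors' configuration.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """Multi-step L-infinity PGD: repeatedly step along the sign of the loss
    gradient, projecting back into the eps-ball around the clean image."""
    images = images.clone().detach()
    labels = labels.clone().detach()
    # Random start inside the L-infinity ball of radius eps
    adv = images + torch.empty_like(images).uniform_(-eps, eps)
    adv = torch.clamp(adv, 0.0, 1.0)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # Ascend the loss, then project back into the eps-ball and valid range
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.min(torch.max(adv, images - eps), images + eps)
        adv = torch.clamp(adv, 0.0, 1.0)
    return adv.detach()
```

Vulnerability can then be summarized as the drop in accuracy between clean and perturbed inputs, e.g. comparing model(images).argmax(1) with model(pgd_attack(model, images, labels)).argmax(1).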

Citing Papers

A Review of Generative Adversarial Networks in Cancer Imaging: New Applications, New Solutions
Assesses the potential of GANs to address a number of key challenges of cancer imaging, including data scarcity and imbalance, domain and dataset shifts, data access and privacy, data annotation and quantification, as well as cancer detection, tumour profiling and treatment planning.
A Survey on Adversarial Deep Learning Robustness in Medical Image Analysis
In the past years, deep neural networks (DNNs) have become popular in many disciplines such as computer vision (CV) and natural language processing (NLP). The evolution of hardware has helped…
Adversarial Attack Driven Data Augmentation for Accurate And Robust Medical Image Segmentation
Proposes a new augmentation method that introduces adversarial attack techniques, specifically the Fast Gradient Sign Method (FGSM) and an Inverse FGSM that works in the opposite manner of FGSM, for data augmentation; a minimal sketch follows below.
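The sketch below shows the two perturbation directions this entry refers to, assuming a PyTorch classifier; the eps value and the fgsm_examples helper are illustrative choices, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def fgsm_examples(model, images, labels, eps=2/255, inverse=False):
    """Single-step FGSM: perturb along the sign of the input gradient of the
    loss; inverse=True steps the opposite way (the 'Inverse FGSM' idea)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    direction = -grad.sign() if inverse else grad.sign()
    return torch.clamp(images.detach() + eps * direction, 0.0, 1.0)
```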
Adversarial Attack Vulnerability of Deep Learning Models for Oncologic Images
Finds that while medical DL systems are extremely susceptible to adversarial attacks, adversarial training shows promise as an effective defense, and adversarial sensitivity can serve as a metric to improve model performance.
Adversarial Heart Attack: Neural Networks Fooled to Segment Heart Symbols in Chest X-Ray Images
Shows that adding almost imperceptible noise to the image can reliably force state-of-the-art neural networks to segment the heart as a heart symbol instead of its real anatomical shape, and explores the limits of adversarial manipulation of segmentations.
Stratification by Tumor Grade Groups in a Holistic Evaluation of Machine Learning for Brain Tumor Segmentation
  • Snehal Prabhudesai, Nicholas Chandler Wang, +4 authors Arvind Rao
  • Frontiers in Neuroscience
  • 2021
Performs a comprehensive evaluation of a glioma segmentation ML algorithm by stratifying data by specific tumor grade groups and evaluating the algorithm on four axes of model evaluation: diagnostic performance, model confidence, robustness, and data quality.
Using Adversarial Images to Assess the Stability of Deep Learning Models Trained on Diagnostic Images in Oncology
Deep learning (DL) models have rapidly become a popular and cost-effective tool for image classification within oncology. A major limitation of DL models is output instability, as small perturbations…
Accurate and adversarially robust classification of medical images and ECG time-series with gradient-free trained sign activation neural networks
Shows that adversarial examples targeting the gradient-free sign networks are visually distinguishable from the original data and thus likely to be detected by human inspection, and suggests that an automated method could be developed to detect and deter attacks in advance.

References

Showing 1-10 of 89 references
Towards Deep Learning Models Resistant to Adversarial Attacks
Studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
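The robust-optimization view this reference takes can be written as a saddle-point problem: train the weights against the worst-case perturbation inside an allowed set \(\mathcal{S}\), typically an L-infinity ball of radius \(\epsilon\):

\[
\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \max_{\delta \in \mathcal{S}} L(\theta,\, x + \delta,\, y) \Big]
\]

The inner maximization is what attacks such as PGD approximate; the outer minimization is ordinary training on the perturbed examples.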
Explaining and Harnessing Adversarial Examples
Argues that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results, and gives the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
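This reference introduces the fast gradient sign method: a single step of size \(\epsilon\) along the sign of the loss gradient with respect to the input,

\[
x_{\text{adv}} = x + \epsilon \,\operatorname{sign}\!\big(\nabla_x J(\theta, x, y)\big),
\]

which is the one-step perturbation used in the FGSM sketch above.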
Generalizability vs
  • robustness: adversarial examples for medical imaging. arXiv preprint arXiv:1804.00504.
  • 2018
Densely Connected Convolutional Networks
Introduces the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion and has several compelling advantages: it alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
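A minimal PyTorch sketch of the dense connectivity pattern described above; the DenseBlock class, the growth_rate, and the layer count are illustrative assumptions, not the reference implementation.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all preceding feature maps
    and contributes 'growth_rate' new channels to that running stack."""
    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            )
            for i in range(num_layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```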
One in four US consumers have had their healthcare data breached, Accenture survey reveals. 2017.
Image quality assessment: from error visibility to structural similarity
Develops a structural similarity index and demonstrates its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
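For two image windows x and y, the single-scale structural similarity index introduced in this reference is

\[
\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},
\]

where \(\mu\), \(\sigma^2\), and \(\sigma_{xy}\) are local means, variances, and covariance, and \(C_1, C_2\) are small stabilizing constants. In the adversarial setting it is often used to quantify how perceptible a perturbation is relative to the clean image.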
Adversarial Attacks Against Medical Deep Learning Systems
Demonstrates that adversarial examples are capable of manipulating deep learning systems across three clinical domains, outlines the healthcare economy and the incentives it creates for fraud, and provides concrete examples of how and why such attacks could realistically be carried out.
GE healthcare receives FDA clearance of first artificial intelligence algorithms embedded on-device to prioritize critical chest X-ray review. GE Reports. 2018.
Vulnerability Analysis of Chest X-Ray Image Classification Against Adversarial Attacks
Extensively analyzes the performance of two state-of-the-art classification deep networks on chest X-ray images and modifies the pooling operations in the two networks to measure their sensitivity to different attacks on the specific task of chest X-ray classification.
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
Shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
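A minimal sketch of the distillation idea behind this defense, assuming PyTorch; the temperature T=20 and the two helper names are illustrative assumptions, not the paper's code. The teacher is trained at temperature T, its softened outputs become the labels, and the distilled student is trained on them at the same temperature.

```python
import torch
import torch.nn.functional as F

def distillation_targets(teacher_logits, T=20.0):
    """Soft labels: the teacher's class probabilities at high temperature T."""
    return F.softmax(teacher_logits / T, dim=1)

def defensive_distillation_loss(student_logits, soft_targets, T=20.0):
    """Cross-entropy between the student's temperature-softened predictions
    and the teacher's soft labels (both at the same temperature T)."""
    log_probs = F.log_softmax(student_logits / T, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()
```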