Corpus ID: 239024371

ECG-ATK-GAN: Robustness against Adversarial Attacks on ECG using Conditional Generative Adversarial Networks

Khondker Fariha Hossain, Sharif Amit Kamran, Xingjun Ma, A. Tavakkoli
Deep learning has recently reached human-level performance in classifying arrhythmia from electrocardiograms (ECG). However, deep neural networks (DNNs) are vulnerable to adversarial attacks: carefully crafted perturbations injected into the data that cause conventional DNN models to misclassify the correct class, degrading the model's precision on ECG signals. Safety concerns thus arise, as it becomes challenging to establish the system's reliability, given… 
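To illustrate the kind of crafted perturbation the abstract describes, here is a minimal FGSM-style (Fast Gradient Sign Method) sketch against a toy logistic "classifier" standing in for a DNN. The weights, signal values, and epsilon below are invented for demonstration and are not taken from the paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Logistic 'classifier' over a short 1-D signal (stand-in for a DNN)."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """Fast Gradient Sign Method: shift every sample point by eps in the
    direction that increases the loss. For logistic loss the input gradient
    is d(loss)/dx_i = (p - y) * w_i, so only its sign is needed."""
    p = predict(w, b, x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

# Toy "ECG beat" that the clean model classifies as the positive class (y=1).
w = [0.8, -0.5, 1.2, 0.3]
b = -0.1
x = [0.6, -0.4, 0.9, 0.2]

x_adv = fgsm(w, b, x, y=1, eps=0.7)
print(predict(w, b, x) > 0.5)      # clean signal: correct class
print(predict(w, b, x_adv) > 0.5)  # perturbed signal: flipped prediction
```

The attack never touches the model's weights; it only nudges the input within an epsilon-bounded band, which is why such perturbations can stay small relative to the waveform yet still flip the predicted class.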



Deep learning models for electrocardiograms are susceptible to adversarial attack
A method is developed to construct smoothed adversarial examples for ECG tracings that are invisible to human expert evaluation, and it is shown that a deep learning model for arrhythmia detection from single-lead ECG is vulnerable to this type of attack.
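The "smoothing" idea can be sketched as low-pass filtering the perturbation before adding it to the tracing, so the attack has none of the abrupt square-wave jumps that a sign-based method produces. This is only an illustration of the principle, not the paper's exact construction:

```python
def moving_average(signal, k=5):
    """Low-pass filter: replace each point with the mean of a k-wide window.
    This removes the abrupt point-to-point jumps a sign-based attack adds."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def max_jump(s):
    """Largest point-to-point change: a proxy for how visible a trace is."""
    return max(abs(a - b) for a, b in zip(s, s[1:]))

# A raw +/-eps perturbation alternating every sample is high frequency and
# easy for an expert to spot on an ECG trace; its smoothed version is not.
eps = 0.1
raw = [eps if i % 2 == 0 else -eps for i in range(12)]
smoothed = moving_average(raw)

print(max_jump(smoothed) < max_jump(raw))  # True
```

In practice the smoothed perturbation must still fool the classifier, so smoothing is folded into the attack's optimization rather than applied as a one-off post-processing step as shown here.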
ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System
This paper analyzes the properties of ECGs to design effective attack schemes under two attack models respectively, and demonstrates the blind spots of DNN-powered diagnosis systems under adversarial attacks, which calls attention to adequate countermeasures.
Generalization of Convolutional Neural Networks for ECG Classification Using Generative Adversarial Networks
This work proposes a novel data-augmentation technique using generative adversarial networks (GANs) to restore the balance of the MIT-BIH arrhythmia dataset, and demonstrates that augmenting the heartbeats using GANs outperforms other common data augmentation techniques.
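The rebalancing step this summary describes can be sketched as follows: count the beats per class, then ask a trained generator for enough synthetic beats to bring every minority class up to the majority count. The class symbols and counts below are illustrative, and the generator itself is assumed to exist:

```python
from collections import Counter

def augmentation_plan(labels):
    """How many synthetic beats a trained GAN generator would need to produce
    per class so every class matches the majority class count."""
    counts = Counter(labels)
    target = max(counts.values())
    return {cls: target - n for cls, n in counts.items()}

# Hypothetical imbalanced beat labels (N = normal, S = supraventricular,
# V = ventricular), loosely in the spirit of MIT-BIH class imbalance.
labels = ["N"] * 900 + ["S"] * 60 + ["V"] * 40
print(augmentation_plan(labels))  # {'N': 0, 'S': 840, 'V': 860}
```

Each minority class is then topped up by sampling the generator conditioned on that class label, rather than by duplicating or jittering real beats as simpler augmentation schemes do.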
Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems
Towards Deep Learning Models Resistant to Adversarial Attacks
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
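The "first-order adversary" in this robust-optimization view is typically instantiated as projected gradient descent (PGD): repeated small signed-gradient steps, each followed by projection back into an epsilon-ball around the clean input. A minimal sketch against the same kind of toy logistic model as above, with invented weights and parameters:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Logistic 'classifier' over a short 1-D signal (stand-in for a DNN)."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def pgd(w, b, x, y, eps, alpha, steps):
    """Projected gradient descent: take `steps` signed-gradient steps of size
    alpha, clipping each coordinate back into [x_i - eps, x_i + eps] so the
    perturbation stays inside the L-infinity ball of radius eps."""
    x_adv = list(x)
    for _ in range(steps):
        p = predict(w, b, x_adv)
        # For logistic loss, d(loss)/dx_i = (p - y) * w_i.
        x_adv = [xa + alpha * math.copysign(1.0, (p - y) * wi)
                 for xa, wi in zip(x_adv, w)]
        # Projection step: stay within the eps-ball around the clean input.
        x_adv = [min(max(xa, xi - eps), xi + eps)
                 for xa, xi in zip(x_adv, x)]
    return x_adv

w = [0.8, -0.5, 1.2, 0.3]
b = -0.1
x = [0.6, -0.4, 0.9, 0.2]

x_adv = pgd(w, b, x, y=1, eps=0.7, alpha=0.2, steps=10)
print(predict(w, b, x) > 0.5)      # clean: correct class
print(predict(w, b, x_adv) > 0.5)  # after PGD: flipped prediction
```

Adversarial training then wraps this inner maximization in an outer minimization: at each training step, the model's weights are updated on the PGD-perturbed batch rather than the clean one.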
Adversarial Robustness Toolbox v1.0.0
Adversarial Robustness Toolbox (ART) is a Python library supporting developers and researchers in defending machine learning models (Deep Neural Networks, Gradient Boosted Decision Trees, Support Vector Machines, and more) against adversarial threats.
Towards Evaluating the Robustness of Neural Networks
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
Adversarial attacks on medical machine learning
Far from discouraging continued innovation with medical machine learning, this work calls for active engagement of medical, technical, legal, and ethical experts in pursuit of efficient, broadly available, and effective health care that machine learning will enable.
Adversarial examples in the physical world
It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera, which shows that even in physical-world scenarios, machine learning systems are vulnerable to adversarial examples.
A deep convolutional neural network model to classify heartbeats