Corpus ID: 239024371

ECG-ATK-GAN: Robustness against Adversarial Attacks on ECG using Conditional Generative Adversarial Networks

@article{Hossain2021ECGATKGANRA,
  title={ECG-ATK-GAN: Robustness against Adversarial Attacks on ECG using Conditional Generative Adversarial Networks},
  author={Khondker Fariha Hossain and Sharif Amit Kamran and Xingjun Ma and A. Tavakkoli},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.09983}
}
Automating arrhythmia detection from ECG requires a robust and trusted system that retains high accuracy under electrical disturbances. Many machine learning approaches have reached human-level performance in classifying arrhythmia from ECGs. However, these architectures are vulnerable to adversarial attacks, which cause ECG signals to be misclassified and degrade the model's accuracy. Adversarial attacks are small, crafted perturbations injected into the original data which manifest the out-of…
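As a concrete illustration of such a perturbation, below is a minimal sketch of a gradient-based attack (FGSM) on a 1-D ECG classifier; the classifier, tensor shapes, and epsilon budget are illustrative assumptions, not the attacks evaluated in this paper.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, ecg, label, eps=0.01):
    """Craft an FGSM adversarial example for a batch of ECG segments.

    model : any differentiable classifier mapping (batch, 1, length) -> logits
    ecg   : float tensor of shape (batch, 1, length)
    label : long tensor of true class indices
    eps   : perturbation budget (illustrative value)
    """
    ecg = ecg.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(ecg), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded elementwise by eps.
    return (ecg + eps * ecg.grad.sign()).detach()
```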


References

Showing 1-10 of 40 references
ECG-Adv-GAN: Detecting ECG Adversarial Examples with Conditional Generative Adversarial Networks
TLDR: A novel Conditional Generative Adversarial Network is proposed to simultaneously generate ECG signals for different categories and detect cardiac abnormalities; it outperforms other classification models in normal/abnormal ECG signal detection when benchmarked on real-world and adversarial signals.
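The conditioning idea behind such a network can be sketched as follows; the layer sizes, number of classes, and beat length are illustrative assumptions, not the ECG-Adv-GAN architecture.

```python
import torch
import torch.nn as nn

class ConditionalECGGenerator(nn.Module):
    """Toy conditional generator: latent noise plus a one-hot class label -> ECG beat."""

    def __init__(self, noise_dim=100, n_classes=5, beat_len=187):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + n_classes, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, beat_len),
            nn.Tanh(),  # beats scaled to [-1, 1]
        )

    def forward(self, z, one_hot_label):
        # The class label is concatenated to the latent code, so a single
        # network can generate beats for every arrhythmia category.
        return self.net(torch.cat([z, one_hot_label], dim=1))
```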
Deep learning models for electrocardiograms are susceptible to adversarial attack
TLDR: A method is developed to construct smoothed adversarial examples for ECG tracings that are invisible to human expert evaluation, and it is shown that a deep learning model for arrhythmia detection from single-lead ECGs is vulnerable to this type of attack.
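One way to make a raw perturbation less conspicuous, in the spirit of the smoothed examples described above, is to low-pass it before adding it to the trace; the Gaussian kernel and its width here are illustrative choices rather than the paper's construction.

```python
import numpy as np

def smooth_perturbation(delta, sigma=10.0):
    """Gaussian-smooth a 1-D adversarial perturbation so it blends into the ECG."""
    width = int(4 * sigma)
    t = np.arange(-width, width + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()               # normalize so perturbation energy is preserved
    return np.convolve(delta, kernel, mode="same")
```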
Hard-Label Black-Box Adversarial Attack on Deep Electrocardiogram Classifier
TLDR: This work attacks the DNN classification model for the PhysioNet Computing in Cardiology Challenge 2017 database and demonstrates that adversarial ECG inputs can be generated effectively in this black-box setting, raising significant concerns about deploying DNN-based ECG classifiers in security-critical systems.
Generalization of Convolutional Neural Networks for ECG Classification Using Generative Adversarial Networks
TLDR: This work proposes a novel data-augmentation technique using generative adversarial networks (GANs) to restore the class balance of the MIT-BIH arrhythmia dataset, and demonstrates that augmenting heartbeats with GANs outperforms other common data-augmentation techniques.
ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System
TLDR: This paper analyzes the properties of ECGs to design effective attack schemes under two attack models, and demonstrates the blind spots of DNN-powered diagnosis systems under adversarial attacks, calling attention to the need for adequate countermeasures.
SimGANs: Simulator-Based Generative Adversarial Networks for ECG Synthesis to Improve Deep ECG Classification
TLDR: This work uses a system of ordinary differential equations representing heart dynamics, incorporates this ODE system into the optimization process of a generative adversarial network to create biologically plausible ECG training examples, and shows that injecting heart-simulation knowledge into the generation process improves ECG classification.
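A commonly used instance of such an ODE system is the McSharry et al. dynamical model, sketched below; the PQRST parameters are the standard published values, and the exact simulator configuration inside SimGAN is an assumption here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Standard PQRST event angles, amplitudes, and widths from the McSharry model.
THETA = np.array([-np.pi / 3, -np.pi / 12, 0.0, np.pi / 12, np.pi / 2])
A     = np.array([1.2, -5.0, 30.0, -7.5, 0.75])
B     = np.array([0.25, 0.1, 0.1, 0.1, 0.4])

def ecg_ode(t, state, omega=2 * np.pi):
    """Heart-dynamics ODE: a limit cycle in (x, y) drives the ECG trace z."""
    x, y, z = state
    alpha = 1.0 - np.sqrt(x**2 + y**2)
    theta = np.arctan2(y, x)
    dtheta = np.mod(theta - THETA + np.pi, 2 * np.pi) - np.pi  # wrapped phase offsets
    dz = -np.sum(A * dtheta * np.exp(-dtheta**2 / (2 * B**2))) - z  # baseline wander omitted
    return [alpha * x - omega * y, alpha * y + omega * x, dz]

sol = solve_ivp(ecg_ode, (0.0, 2.0), [1.0, 0.0, 0.0], max_step=0.004)
synthetic_ecg = sol.y[2]  # z(t) is the biologically plausible synthetic beat train
```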
PGANs: Personalized Generative Adversarial Networks for ECG Synthesis to Improve Patient-Specific Deep ECG Classification
TLDR: A generative model is proposed that learns to synthesize patient-specific ECG signals, which can then be used as additional training data to improve patient-specific classifier performance.
Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
TLDR: The Boundary Attack is introduced: a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial, and it is competitive with the best gradient-based attacks on standard computer-vision tasks like ImageNet.
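A heavily simplified sketch of the decision-based idea, using only hard labels from the model, is shown below; the proposal distribution and step-size schedule of the actual Boundary Attack are more elaborate.

```python
import numpy as np

def boundary_attack(predict, x_orig, x_adv_start, true_label, steps=1000, step=0.01):
    """Simplified decision-based attack: shrink the perturbation while staying adversarial.

    predict     : callable returning the predicted class for a single input
    x_orig      : the clean input the attack should stay close to
    x_adv_start : any input already classified differently from true_label
    """
    x_adv = x_adv_start.copy()
    for _ in range(steps):
        # Random exploration step, then a contraction step toward the original.
        candidate = x_adv + step * np.random.randn(*x_adv.shape)
        candidate = candidate + step * (x_orig - candidate)
        if predict(candidate) != true_label:   # accept only if still misclassified
            x_adv = candidate
    return x_adv
```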
Adversarial Robustness Toolbox v1.0.0
TLDR: The Adversarial Robustness Toolbox is a Python library supporting developers and researchers in defending machine learning models against adversarial threats, helping make AI systems more secure and trustworthy.
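A typical usage pattern of the toolbox, as of ART 1.x, is to wrap a model in one of its estimator classes and hand it to an attack; the tiny model and parameter values below are placeholders rather than the setup used in this paper.

```python
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Placeholder 1-D ECG classifier: 187-sample beats, 5 arrhythmia classes.
model = nn.Sequential(nn.Linear(187, 64), nn.ReLU(), nn.Linear(64, 5))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(187,),
    nb_classes=5,
)

attack = FastGradientMethod(estimator=classifier, eps=0.05)
x = np.random.randn(8, 187).astype(np.float32)  # stand-in for real ECG beats
x_adv = attack.generate(x=x)                    # adversarially perturbed copies of x
```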
...