Corpus ID: 237260138

Application of Adversarial Examples to Physical ECG Signals

by Taiga Ono, Takeshi Sugawara, Jun Sakuma, Tatsuya Mori
This work aims to assess the realism and feasibility of adversarial attacks against cardiac diagnosis systems powered by machine learning algorithms. To this end, we introduce “adversarial beats”, adversarial perturbations tailored specifically to beat-by-beat electrocardiogram (ECG) classification systems. We first formulate an algorithm to generate adversarial examples for the ECG classification neural network model and study its attack success rate. Next, to evaluate its… 
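The generation step described in the abstract — iteratively optimizing a small perturbation until the classifier's decision flips — can be sketched as projected gradient ascent on the model's loss. Everything below is illustrative, not the paper's actual algorithm: the logistic stand-in "classifier", the step size, and the L-infinity budget `eps` are all assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, y, w, b, eps=0.1, step=0.02, iters=40):
    """Projected gradient ascent on the loss of a toy logistic classifier.

    x    : clean ECG beat (1-D array);  y : true label (0 or 1)
    w, b : weights/bias of the illustrative stand-in model
    eps  : L-infinity budget keeping the perturbation small
    """
    x_adv = x.copy()
    for _ in range(iters):
        p = sigmoid(x_adv @ w + b)      # model's probability of class 1
        grad = (p - y) * w              # dL/dx for cross-entropy loss
        x_adv = x_adv + step * np.sign(grad)
        # project back into the eps-ball around the clean beat
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

The projection step is what keeps the perturbation imperceptible: the adversarial beat never strays more than `eps` from the clean signal at any sample.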

Adversarial Examples for Electrocardiograms

A neural network model achieving state-of-the-art performance on data from the 2017 PhysioNet/Computing-in-Cardiology Challenge for arrhythmia detection from single-lead ECGs is implemented, and a method for constructing smoothed adversarial examples for single-lead ECGs is developed.

Adversarial Attacks Against Medical Deep Learning Systems

This paper demonstrates that adversarial examples are capable of manipulating deep learning systems across three clinical domains, and outlines the healthcare economy and the incentives it creates for fraud and provides concrete examples of how and why such attacks could be realistically carried out.

ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System

This paper analyzes the properties of ECGs to design effective attack schemes under two attack models, and demonstrates the blind spots of DNN-powered diagnosis systems under adversarial attacks, calling for adequate countermeasures.

Adversarial examples in the physical world

It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through a camera, which shows that machine learning systems are vulnerable to adversarial examples even in physical-world scenarios.

Robust Audio Adversarial Example for a Physical Attack

Evaluation and a listening experiment demonstrated that adversarial examples generated by the proposed method can attack a state-of-the-art speech recognition model in the physical world without being noticed by humans, suggesting that audio adversarial examples may become a real threat.

Explaining and Harnessing Adversarial Examples

It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
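The linearity argument leads directly to the fast gradient sign method (FGSM): a single step of size eps along the sign of the loss gradient with respect to the input. A minimal sketch on a toy logistic model (the model, weights, and eps here are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

def fgsm(x, y, w, b, eps=0.05):
    """One-step fast gradient sign attack on a toy logistic model.

    For cross-entropy loss, the input gradient of a logistic model is
    (sigma(w.x + b) - y) * w; FGSM takes one signed step of size eps.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # model's probability of class 1
    grad = (p - y) * w                      # dL/dx for cross-entropy loss
    return x + eps * np.sign(grad)          # single signed step raises the loss
```

Because the step uses only the gradient's sign, the perturbation is bounded by eps in the L-infinity norm by construction — the property the linearity hypothesis exploits.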

ECG Heartbeat Classification: A Deep Transferable Representation

A method based on deep convolutional neural networks for the classification of heartbeats which is able to accurately classify five different arrhythmias in accordance with the AAMI EC57 standard is proposed.
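The forward pass of such a 1-D convolutional heartbeat classifier can be sketched in a few lines. This is an architectural sketch only — the filter bank, layer sizes, and weights are illustrative (random, untrained), not the paper's network; only the five-class AAMI EC57 output (N, S, V, F, Q superclasses) comes from the summary above.

```python
import numpy as np

def heartbeat_cnn_forward(beat, kernels, w_out, b_out):
    """Forward pass of a minimal 1-D CNN heartbeat classifier (sketch).

    beat    : one fixed-length ECG beat (1-D array)
    kernels : bank of 1-D convolution filters (2-D array)
    Returns softmax probabilities over the five AAMI EC57 superclasses.
    """
    # valid-mode cross-correlation with each filter, then ReLU
    feats = np.stack([np.maximum(np.correlate(beat, k, mode="valid"), 0.0)
                      for k in kernels])
    pooled = feats.max(axis=1)          # global max pooling per filter
    logits = pooled @ w_out + b_out     # dense layer -> 5 class scores
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()
```

Global max pooling makes the classifier tolerant of where the relevant morphology occurs within the beat window, which is one reason this family of models transfers across recordings.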

SoK: Security and Privacy in Machine Learning

It is apparent that constructing a theoretical understanding of the sensitivity of modern ML algorithms to the data they analyze, à la PAC theory, will foster a science of security and privacy in ML.

Synthesizing Robust Adversarial Examples

The existence of robust 3D adversarial objects is demonstrated, and the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations is presented; the algorithm synthesizes two-dimensional adversarial images that are robust to noise, distortion, and affine transformations.
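The core idea — expectation over transformation (EOT) — optimizes the perturbation against the loss averaged over randomly sampled transformations, so the result stays adversarial under each of them. A toy sketch with circular time-shifts as the transformation family and a logistic stand-in model (all names and parameters are illustrative assumptions):

```python
import numpy as np

def eot_attack(x, y, w, b, shifts, eps=0.1, step=0.02, iters=40, rng=None):
    """Expectation-over-transformation attack sketch (toy logistic model).

    Each iteration averages the input gradient over randomly sampled
    circular shifts, so the perturbation remains adversarial under any
    shift in the family rather than for one fixed alignment.
    """
    rng = rng or np.random.default_rng(0)
    x_adv = x.copy()
    for _ in range(iters):
        grad = np.zeros_like(x)
        for s in rng.choice(shifts, size=8):
            p = 1.0 / (1.0 + np.exp(-(np.roll(x_adv, s) @ w + b)))
            # chain rule through the shift: roll the gradient back
            grad += (p - y) * np.roll(w, -s)
        x_adv = np.clip(x_adv + step * np.sign(grad), x - eps, x + eps)
    return x_adv
```

Replacing the shift family with a distribution over camera poses, lighting, and printing noise is what lets the paper's method produce physical 3D objects that stay adversarial from many viewpoints.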

Audio Adversarial Examples: Targeted Attacks on Speech-to-Text

A white-box iterative optimization-based attack on Mozilla's end-to-end DeepSpeech implementation achieves a 100% success rate, and the feasibility of this attack introduces a new domain for studying adversarial examples.