A Study on the Transferability of Adversarial Attacks in Sound Event Classification

@inproceedings{Subramanian2020Transferability,
  title={A Study on the Transferability of Adversarial Attacks in Sound Event Classification},
  author={Vinod Subramanian and Arjun Pankajakshan and Emmanouil Benetos and Ning Xu and SKoT McDonald and Mark Sandler},
  booktitle={ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2020}
}
An adversarial attack is an algorithm that intelligently perturbs the input of a machine learning model in order to change the model's output. An important property of adversarial attacks is transferability: perturbations generated on one model can be applied to the input of a different model to fool its output. Our work focuses on studying the transferability of adversarial attacks in sound event classification. We are able to…
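To make the transferability idea concrete, here is a minimal sketch of a one-step gradient-sign attack (in the style of FGSM) on a toy logistic-regression "model". The attack computes its perturbation from a source model and is then applied, unchanged, to a second model with similar weights. All weights, names, and values here are hypothetical illustrations, not the models or attacks used in the paper:

```python
import math

def predict(w, b, x):
    """Probability of the positive class under a toy logistic-regression model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """One-step gradient-sign attack: for binary cross-entropy loss,
    dL/dx = (p - y) * w, so stepping eps in the sign of that gradient
    increases the loss and can flip the prediction."""
    p = predict(w, b, x)
    grad = [(p - y_true) * wi for wi in w]
    return [xi + eps * math.copysign(1.0, g) for xi, g in zip(x, grad)]

# Two hypothetical models with similar (but not identical) weights.
w_a, b_a = [2.0, -1.0], 0.0   # source model: gradients are taken here
w_b, b_b = [1.5, -1.2], 0.0   # target model: never queried by the attack

x = [0.5, -0.5]               # both models classify x as positive
x_adv = fgsm_perturb(x, w_a, b_a, y_true=1.0, eps=0.9)

print(predict(w_a, b_a, x) > 0.5, predict(w_b, b_b, x) > 0.5)          # True True
print(predict(w_a, b_a, x_adv) > 0.5, predict(w_b, b_b, x_adv) > 0.5)  # False False
```

Because the two toy models share a similar decision direction, the perturbation crafted against the source model also flips the target model's prediction; this is the transfer phenomenon the paper studies at scale for sound event classifiers.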

Related Papers

Transferability of Adversarial Attacks on Synthetic Speech Detection
A comprehensive benchmark to evaluate the transferability of adversarial attacks on the synthetic speech detection task is established, and the weaknesses of synthetic speech detectors and the transferable behaviours of adversarial attacks are summarised to provide insights for future research.
Audio Attacks and Defenses against AED Systems - A Practical Study
The robustness of multiple security-critical AED tasks, implemented as CNN classifiers, is tested, as well as that of existing third-party Nest devices, manufactured by Google, which run their own black-box deep learning models.
Adversarial Attacks on Audio Source Separation
A simple yet effective regularization method is proposed to obtain imperceptible adversarial noise while maximizing the impact on separation quality with low computational complexity; the robustness of source separation models against a black-box attack is also shown.
End-to-End Adversarial White Box Attacks on Music Instrument Classification
This work presents the very first end-to-end adversarial attacks on a music instrument classification system, allowing perturbations to be added directly to audio waveforms instead of spectrograms.
Black-Box Attacks on Spoofing Countermeasures Using Transferability of Adversarial Examples
Spoofing countermeasure models are shown to be vulnerable to black-box attacks as well; an iterative ensemble method (IEM) combined with MI-FGSM can effectively generate adversarial examples with higher transferability.
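MI-FGSM, referenced above, extends the one-step gradient-sign attack by iterating with a momentum term, which is known to improve transferability. The following is a minimal sketch on a toy logistic-regression model; the model, weights, and parameter values are hypothetical illustrations, not taken from the cited paper:

```python
import math

def predict(w, b, x):
    """Probability of the positive class under a toy logistic-regression model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def mi_fgsm(x, w, b, y_true, eps, steps=10, mu=1.0):
    """Momentum iterative FGSM: accumulate the L1-normalized input gradient
    into a velocity term g, step alpha in sign(g) each iteration, and clip
    the result back into the eps-ball around the original input x."""
    alpha = eps / steps
    g = [0.0] * len(x)
    x_adv = list(x)
    for _ in range(steps):
        p = predict(w, b, x_adv)
        grad = [(p - y_true) * wi for wi in w]      # BCE gradient w.r.t. input
        norm = sum(abs(gi) for gi in grad) or 1.0   # L1 normalization
        g = [mu * gi + gr / norm for gi, gr in zip(g, grad)]
        x_adv = [xi + alpha * math.copysign(1.0, gi) for xi, gi in zip(x_adv, g)]
        x_adv = [min(max(xa, xo - eps), xo + eps) for xa, xo in zip(x_adv, x)]
    return x_adv

w, b = [2.0, -1.0], 0.0
x = [0.5, -0.5]
x_adv = mi_fgsm(x, w, b, y_true=1.0, eps=0.9)
print(predict(w, b, x) > 0.5, predict(w, b, x_adv) > 0.5)   # True False
```

The momentum term stabilizes the update direction across iterations, which is what helps the resulting perturbations transfer better across models than plain iterative FGSM.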
Contrastive Predictive Coding of Audio with an Adversary
This work investigates learning general audio representations directly from raw signals using the Contrastive Predictive Coding objective, and extends it by leveraging ideas from adversarial machine learning to produce additive perturbations that effectively make learning harder, so that the predictive tasks are not distracted by trivial details.


Robustness of Adversarial Attacks in Sound Event Classification
This paper investigates the robustness of adversarial examples to simple input transformations, such as MP3 compression, resampling, white noise, and reverb, in the task of sound event classification, providing insights into the strengths and weaknesses of current adversarial attack algorithms and a baseline for defenses against adversarial attacks.
A Robust Approach for Securing Audio Classification Against Adversarial Attacks
A novel approach based on a pre-processed DWT representation of audio signals and an SVM is proposed to secure audio systems against adversarial attacks, showing competitive performance compared to deep neural networks in terms of both accuracy and robustness against strong adversarial attacks.
Characterizing Audio Adversarial Examples Using Temporal Dependency
The results reveal the importance of using the temporal dependency in audio data to gain discriminative power against adversarial examples, and offer novel insights into exploiting domain-specific data properties to mitigate the negative effects of adversarial examples.
Deep Learning and Music Adversaries
This work builds adversaries for deep learning systems applied to music content analysis by exploiting the parameters of the system to find the minimal perturbation of the input such that the system misclassifies it with high confidence.
Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples
New transferability attacks between previously unexplored (substitute, victim) pairs of machine learning model classes, most notably SVMs and decision trees, are introduced.
Towards Evaluating the Robustness of Neural Networks
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
Evasion Attacks against Machine Learning at Test Time
This work presents a simple but effective gradient-based approach that can be exploited to systematically assess the security of several widely used classification algorithms against evasion attacks.
Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition
This paper develops effectively imperceptible audio adversarial examples by leveraging the psychoacoustic principle of auditory masking, while retaining a 100% targeted success rate on arbitrary full-sentence targets, and makes progress towards physical-world over-the-air audio adversarial examples by constructing perturbations that remain effective even after realistic simulated environmental distortions are applied.
Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding
A new type of adversarial examples based on psychoacoustic hiding is introduced, which allows us to embed an arbitrary audio input with a malicious voice command that is then transcribed by the ASR system, with the audio signal remaining barely distinguishable from the original signal.
Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
A white-box iterative optimization-based attack on Mozilla's end-to-end DeepSpeech implementation achieves a 100% success rate, and the feasibility of this attack introduces a new domain for the study of adversarial examples.