Corpus ID: 220845484

End-to-End Adversarial White Box Attacks on Music Instrument Classification

@article{Prinz2020EndtoEndAW,
  title={End-to-End Adversarial White Box Attacks on Music Instrument Classification},
  author={Katharina Prinz and Arthur Flexer},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.14714}
}
Small adversarial perturbations of input data are able to drastically change the performance of machine learning systems, thereby challenging the validity of such systems. We present the very first end-to-end adversarial attacks on a music instrument classification system, allowing perturbations to be added directly to audio waveforms instead of spectrograms. Our attacks are able to reduce the accuracy close to a random baseline while keeping perturbations almost imperceptible and…
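To make the attack setting concrete, here is a minimal sketch of an untargeted projected-gradient-style attack on raw waveforms, in the spirit of (but not identical to) the end-to-end attacks described above. The model interface, epsilon, step size, and iteration count are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn.functional as F

def pgd_waveform_attack(model, waveform, label, eps=1e-3, alpha=2e-4, steps=50):
    """Untargeted L_inf attack applied directly to audio samples.

    model:    maps waveforms of shape (batch, samples) to class logits
    waveform: tensor with values in [-1, 1], shape (batch, samples)
    label:    true class indices, shape (batch,)
    """
    adv = waveform.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                     # ascend the loss
            adv = waveform + (adv - waveform).clamp(-eps, eps)  # project onto the L_inf ball
            adv = adv.clamp(-1.0, 1.0)                          # keep a valid signal
    return adv.detach()

A small eps keeps the perturbation quiet relative to the signal, which is what "almost imperceptible" refers to in the abstract.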


References

Showing 1–10 of 28 references
Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
TLDR
A white-box iterative optimization-based attack on Mozilla's end-to-end DeepSpeech implementation achieves a 100% success rate, and the feasibility of this attack introduces a new domain for studying adversarial examples.
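A rough sketch of this style of targeted attack follows: optimize a perturbation so that a CTC-based ASR model transcribes a chosen target phrase, while penalizing the perturbation's magnitude. The asr_logits callable and the integer encoding of the target phrase are placeholders for a real DeepSpeech-like pipeline; the trade-off constant c and step counts are assumptions.

import torch

def targeted_ctc_attack(asr_logits, waveform, target, target_len,
                        c=1.0, steps=1000, lr=1e-3):
    """waveform: (batch, samples); target: (batch, max_target_len) int labels."""
    delta = torch.zeros_like(waveform, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    ctc = torch.nn.CTCLoss(blank=0)
    for _ in range(steps):
        logits = asr_logits(waveform + delta)        # assumed shape (time, batch, classes)
        log_probs = logits.log_softmax(dim=-1)
        input_len = torch.full((waveform.size(0),), logits.size(0), dtype=torch.long)
        # CTC loss pulls the transcription toward the target; the second
        # term keeps the perturbation small.
        loss = ctc(log_probs, target, input_len, target_len) + c * delta.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (waveform + delta).detach()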
Robustness of Adversarial Attacks in Sound Event Classification
TLDR
This paper investigates the robustness of adversarial examples to simple input transformations such as mp3 compression, resampling, white noise, and reverb in the task of sound event classification, providing insights into the strengths and weaknesses of current adversarial attack algorithms and a baseline for defenses against adversarial attacks.
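A minimal sketch of such a robustness check, assuming a predict callable that maps a 1-D numpy waveform to a class label; only two of the transformations are shown, since mp3 compression and reverb would require external tools.

import numpy as np
from scipy.signal import resample

def still_adversarial(predict, adv, true_label, sr=22050, snr_db=30.0):
    """Return, per transformation, whether the adversarial example survives."""
    results = {}
    # White noise added at a fixed signal-to-noise ratio
    noise = np.random.randn(*adv.shape)
    noise *= np.sqrt(adv.var() / (10 ** (snr_db / 10))) / (noise.std() + 1e-12)
    results["white_noise"] = predict(adv + noise) != true_label
    # Lossy down- and up-sampling round trip
    down = resample(adv, int(len(adv) * 16000 / sr))
    results["resample"] = predict(resample(down, len(adv))) != true_label
    return results  # True means the attack survived the transformation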
Deep learning, audio adversaries, and music content analysis
TLDR
This work designs an adversary for a DNN that takes as input short-time spectral magnitudes of recorded music and outputs a high-level music descriptor, and demonstrates how this adversary can make the DNN behave in any way with only extremely minor changes to the music recording signal.
A Robust Approach for Securing Audio Classification Against Adversarial Attacks
TLDR
A novel approach based on a pre-processed DWT representation of audio signals and an SVM is proposed to secure audio systems against adversarial attacks; it shows competitive performance compared to deep neural networks in terms of both accuracy and robustness against strong adversarial attacks.
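A sketch of a DWT-feature + SVM pipeline in the spirit of this defense; the wavelet choice, decomposition level, and sub-band statistics are illustrative assumptions, not the paper's exact configuration.

import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(signal, wavelet="db4", level=4):
    """Summarize each DWT sub-band by simple statistics."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([stat(c) for c in coeffs
                     for stat in (np.mean, np.std,
                                  lambda x: np.mean(np.abs(x)))])

def train_dwt_svm(signals, labels):
    """signals: iterable of 1-D numpy waveforms; labels: class indices."""
    X = np.stack([dwt_features(s) for s in signals])
    clf = SVC(kernel="rbf", C=10.0)
    clf.fit(X, labels)
    return clf

The intuition is that the wavelet transform smooths away high-frequency perturbation energy while preserving the coarse structure the classifier needs.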
A Study on the Transferability of Adversarial Attacks in Sound Event Classification
TLDR
This work demonstrates differences in transferability properties from those observed in computer vision, and shows that dataset normalization techniques such as z-score normalization do not affect the transferability of adversarial attacks, while techniques such as knowledge distillation do not increase it.
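A transferability check of this kind can be sketched as follows: craft adversarial examples against a surrogate model, then measure how often they also fool a separate victim model. This reuses the hypothetical pgd_waveform_attack sketch shown after the abstract.

import torch

def transfer_rate(surrogate, victim, waveforms, labels, **attack_kwargs):
    """Fraction of surrogate-crafted examples that also fool the victim."""
    adv = pgd_waveform_attack(surrogate, waveforms, labels, **attack_kwargs)
    with torch.no_grad():
        preds = victim(adv).argmax(dim=-1)
    return (preds != labels).float().mean().item()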
Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition
TLDR
This paper develops effectively imperceptible audio adversarial examples by leveraging the psychoacoustic principle of auditory masking, while retaining a 100% targeted success rate on arbitrary full-sentence targets, and makes progress towards physical-world over-the-air audio adversarial examples by constructing perturbations which remain effective even after applying realistic simulated environmental distortions.
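A heavily simplified sketch of the imperceptibility idea: penalize perturbation energy only where it rises above a frequency masking threshold derived from the original audio. A real psychoacoustic model computes this threshold per time-frequency bin; here it is assumed to be precomputed and passed in.

import torch

def masking_penalty(delta, threshold, n_fft=1024, hop=256):
    """delta: perturbation (batch, samples); threshold: masking level per STFT bin."""
    spec = torch.stft(delta, n_fft=n_fft, hop_length=hop,
                      window=torch.hann_window(n_fft),
                      return_complex=True).abs()
    excess = torch.relu(spec - threshold)  # only energy above the mask is audible
    return excess.pow(2).mean()

This penalty would be added to the attack loss, pushing perturbation energy under the mask where human listeners cannot hear it.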
Detection of Adversarial Attacks and Characterization of Adversarial Subspace
TLDR
The experimental results on three benchmark datasets of environmental sounds represented by spectrograms reveal a high detection rate of the proposed detector for eight types of adversarial attacks; it also outperforms other detection approaches.
Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding
TLDR
A new type of adversarial examples based on psychoacoustic hiding is introduced, which allows us to embed an arbitrary audio input with a malicious voice command that is then transcribed by the ASR system, with the audio signal remaining barely distinguishable from the original signal.
Characterizing Audio Adversarial Examples Using Temporal Dependency
TLDR
The results reveal the importance of using the temporal dependency in audio data to gain discriminative power against adversarial examples, and offer novel insights into exploiting domain-specific data properties to mitigate the negative effects of adversarial examples.
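The temporal-dependency check itself is simple to sketch: transcribe the first k fraction of the audio and compare it against the corresponding prefix of the full transcription; benign audio tends to agree, adversarial audio tends not to. The transcribe callable is an assumed stand-in for an ASR system.

def temporal_dependency_score(transcribe, waveform, k=0.5):
    """transcribe: callable mapping a 1-D waveform to a text transcription."""
    prefix_text = transcribe(waveform[: int(len(waveform) * k)])
    full_text = transcribe(waveform)[: len(prefix_text)]
    # Character-level agreement; a low score suggests an adversarial input.
    matches = sum(a == b for a, b in zip(prefix_text, full_text))
    return matches / max(len(prefix_text), 1)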
Universal Adversarial Perturbations
TLDR
The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers and outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images.
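The universal-perturbation algorithm can be sketched as a loop that accumulates a single perturbation v over many inputs, projecting it back onto an L_inf ball after each update; the per-sample step here is delegated to the hypothetical pgd_waveform_attack sketch shown after the abstract, rather than the DeepFool step used in the original paper.

import torch

def universal_perturbation(model, dataset, eps=1e-3, epochs=5):
    """dataset: iterable of (waveform, int_label) pairs, waveform shape (samples,)."""
    v = torch.zeros_like(dataset[0][0])
    for _ in range(epochs):
        for waveform, label in dataset:
            x = (waveform + v).unsqueeze(0)
            if model(x).argmax(dim=-1).item() == label:  # not yet fooled
                adv = pgd_waveform_attack(model, x, torch.tensor([label]), eps=eps)
                # Fold the new per-sample perturbation into v, then project.
                v = (v + (adv.squeeze(0) - x.squeeze(0))).clamp(-eps, eps)
    return v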