Corpus ID: 235422230

Audio Attacks and Defenses against AED Systems - A Practical Study

@article{Santos2021AudioAA,
  title={Audio Attacks and Defenses against AED Systems - A Practical Study},
  author={Rodrigo dos Santos and Shirin Nilizadeh},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.07428}
}
In this paper, we evaluate deep learning-enabled AED systems against evasion attacks based on adversarial examples. We test the robustness of multiple security-critical AED tasks, implemented as CNN classifiers, as well as existing third-party Nest devices, manufactured by Google, which run their own black-box deep learning models. Our adversarial examples use audio perturbations made of white and background noise. Such disturbances are easy to create, perform, and reproduce, and can be…
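The attack described above reduces to adding noise to a waveform. As a minimal illustration of that operation (a sketch, not the paper's implementation; the `add_white_noise` helper and its `snr_db` parameter are assumptions for the example), the NumPy snippet below injects white Gaussian noise at a chosen signal-to-noise ratio:

```python
import numpy as np

def add_white_noise(audio: np.ndarray, snr_db: float) -> np.ndarray:
    """Add white Gaussian noise to a waveform at a target SNR (dB).

    Illustrative sketch of the additive-noise perturbation described
    in the abstract, not the paper's actual attack code.
    """
    signal_power = np.mean(audio ** 2)
    # Scale noise power so that 10*log10(signal_power/noise_power) == snr_db.
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=audio.shape)
    return audio + noise

# Example: perturb one second of a 16 kHz tone at 10 dB SNR.
t = np.linspace(0, 1, 16000, endpoint=False)
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
adversarial_input = add_white_noise(clean, snr_db=10.0)
```

Background-noise variants would substitute a recorded ambient clip, scaled the same way, for the Gaussian noise.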


References

Showing 1-10 of 125 references
Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding
TLDR
A new type of adversarial examples based on psychoacoustic hiding is introduced, which allows us to embed an arbitrary audio input with a malicious voice command that is then transcribed by the ASR system, with the audio signal remaining barely distinguishable from the original signal.
Noise Flooding for Detecting Audio Adversarial Examples Against Automatic Speech Recognition
  • K. Rajaratnam, J. Kalita
  • Computer Science, Engineering
    2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)
  • 2018
TLDR
This work explores the idea of flooding particular frequency bands of an audio signal with random noise in order to detect adversarial examples, and builds on the idea that speech classifiers are relatively robust to natural noise.
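A minimal sketch of this band-flooding probe, assuming SciPy's `butter`/`sosfilt` for band-limiting and an illustrative `noise_rms` level (neither detail is from the cited paper):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def flood_band(audio: np.ndarray, sr: int, low_hz: float, high_hz: float,
               noise_rms: float = 0.01) -> np.ndarray:
    """Flood one frequency band of a waveform with random noise.

    Sketch of the band-flooding idea: if a model's prediction flips
    under mild in-band noise, the input may be adversarial.
    """
    noise = np.random.normal(0.0, 1.0, size=audio.shape)
    # Keep only the chosen band of the noise.
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
    band_noise = sosfilt(sos, noise)
    # Normalize the band-limited noise to the requested RMS level.
    band_noise *= noise_rms / (np.sqrt(np.mean(band_noise ** 2)) + 1e-12)
    return audio + band_noise
```

A detector would then compare the classifier's outputs on the clean and flooded signals; a label flip under such mild, natural-sounding noise suggests the input was adversarial.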
Selective Audio Adversarial Example in Evasion Attack on Speech Recognition System
TLDR
A selective audio adversarial example with minimum distortion that will be misclassified as the target phrase by a victim classifier but correctly classified as the original phrase by a protected classifier is proposed.
Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition
TLDR
This paper develops effectively imperceptible audio adversarial examples by leveraging the psychoacoustic principle of auditory masking, while retaining 100% targeted success rate on arbitrary full-sentence targets, and makes progress towards physical-world over-the-air audio adversarial examples by constructing perturbations which remain effective even after applying realistic simulated environmental distortions.
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
TLDR
This paper presents the first comprehensive survey on adversarial attacks on deep learning in computer vision, reviewing the works that design adversarial attack, analyze the existence of such attacks and propose defenses against them.
Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks
  • Jianyu Wang
  • Computer Science
    2019 IEEE/CVF International Conference on Computer Vision (ICCV)
  • 2019
TLDR
Experiments on the very challenging ImageNet dataset further demonstrate the effectiveness of the fast method, showing that a random start and the most-confusing-target attack effectively prevent the label-leaking and gradient-masking problems.
MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks
TLDR
This work revisits the DNN training process by incorporating adversarial examples into the training dataset so as to improve the DNN's resilience to adversarial attacks, and proposes a multi-strength adversarial training method (MAT) that combines adversarial training examples of different adversarial strengths to defend against adversarial attacks.
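A compact PyTorch sketch of the multi-strength idea, assuming one-step FGSM as the attack and illustrative `strengths`; the cited paper defines its own architecture and training schedule:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method: one-step L-inf perturbation of size eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def mat_step(model, optimizer, x, y, strengths=(0.01, 0.03, 0.05)):
    """One training step on the clean batch plus adversarial batches
    crafted at several perturbation strengths."""
    batches = [x] + [fgsm(model, x, y, eps) for eps in strengths]
    optimizer.zero_grad()
    loss = sum(F.cross_entropy(model(b), y) for b in batches) / len(batches)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Averaging the losses over the clean batch and the adversarial batches at each strength is one straightforward way to combine them in this sketch.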
Adversarial Music: Real-world Audio Adversary against Wake-word Detection Systems
Voice Assistants (VAs) such as Amazon Alexa and Google Assistant rely on wake-word detection to respond to people's commands, which could potentially be vulnerable to audio adversarial examples. In this…
One Pixel Attack for Fooling Deep Neural Networks
TLDR
This paper proposes a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE), which requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE.
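A toy reconstruction of the DE search, using `scipy.optimize.differential_evolution` in place of the paper's own DE loop, with a dummy image and scoring function standing in for a trained network:

```python
import numpy as np
from scipy.optimize import differential_evolution

H, W = 32, 32
image = np.random.rand(H, W, 3)            # stand-in for a real input image
true_class_score = lambda img: img.mean()  # stand-in for model confidence

def apply_pixel(img, z):
    """Overwrite one pixel; z = (x, y, r, g, b)."""
    out = img.copy()
    x, y = int(z[0]), int(z[1])
    out[y, x] = z[2:5]
    return out

# DE searches for pixel coordinates and RGB values that minimize the
# score the classifier assigns to the true class (an untargeted attack).
bounds = [(0, W - 1), (0, H - 1), (0, 1), (0, 1), (0, 1)]
result = differential_evolution(
    lambda z: true_class_score(apply_pixel(image, z)),
    bounds, maxiter=20, popsize=10, seed=0)
adv_image = apply_pixel(image, result.x)
```

Because DE only queries the score, the classifier's gradients are never needed, which is what makes the attack black-box.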
AdvPulse: Universal, Synchronization-free, and Targeted Audio Adversarial Attacks via Subsecond Perturbations
TLDR
AdvPulse is proposed, a systematic approach to generate subsecond audio adversarial perturbations that can alter the recognition results of streaming audio inputs in a targeted and synchronization-free manner; it exploits a penalty-based universal adversarial perturbation generation algorithm and incorporates the varying time delay into the optimization process.
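The synchronization-free property can be sketched roughly as follows (a loose PyTorch reconstruction under assumed tensor shapes and a simple norm penalty, not AdvPulse's actual algorithm): a short perturbation is optimized while being inserted at random time offsets, so it works wherever it lands in the stream.

```python
import torch
import torch.nn.functional as F

def train_universal_pulse(model, clips, target, pulse_len=8000,
                          steps=500, lr=1e-3, penalty=0.1):
    """Optimize a subsecond perturbation pushing any clip toward `target`
    regardless of where the pulse lands (synchronization-free).

    Assumed shapes: `clips` is (N, samples), `model` maps (batch, samples)
    to class logits, and `target` is a (1,) tensor of the target class.
    """
    delta = torch.zeros(pulse_len, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x = clips[torch.randint(len(clips), (1,))].clone()
        # Insert the pulse at a random time offset within the clip.
        off = torch.randint(0, x.shape[-1] - pulse_len, (1,)).item()
        x[..., off:off + pulse_len] += delta
        # Targeted loss plus a penalty keeping the pulse quiet.
        loss = F.cross_entropy(model(x), target) + penalty * delta.norm()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return delta.detach()
```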