Adversarial Attacks in Sound Event Classification
@article{Subramanian2019AdversarialAI, title={Adversarial Attacks in Sound Event Classification}, author={Vinod Subramanian and Emmanouil Benetos and N. Xu and SKoT McDonald and Mark Sandler}, journal={ArXiv}, year={2019}, volume={abs/1907.02477} }
Adversarial attacks refer to a set of methods that perturb the input to a classification model in order to fool the classifier. In this paper we apply different gradient-based adversarial attack algorithms on five deep learning models trained for sound event classification. Four of the models use mel-spectrogram input and one model uses raw audio input. The models represent standard architectures such as convolutional, recurrent and dense networks. The dataset used for training is the Freesound…
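The abstract notes that four of the five models take mel-spectrogram input. As a rough illustration of that front end, a log-mel extraction step might look like the sketch below (librosa-based; the parameter values are illustrative defaults, not the paper's configuration):

```python
# Minimal sketch of a log-mel spectrogram front end; n_fft, hop and
# n_mels here are common defaults, not the paper's exact settings.
import librosa
import numpy as np

def log_mel(audio: np.ndarray, sr: int = 44100,
            n_fft: int = 2048, hop: int = 512, n_mels: int = 128) -> np.ndarray:
    mel = librosa.feature.melspectrogram(
        y=audio, sr=sr, n_fft=n_fft, hop_length=hop, n_mels=n_mels)
    # Convert power spectrogram to decibels for a log-scaled input
    return librosa.power_to_db(mel, ref=np.max)
```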
6 Citations
Robustness of Adversarial Attacks in Sound Event Classification
- Computer Science, Mathematics, DCASE
- 2019
This paper investigates the robustness of adversarial examples to simple input transformations such as mp3 compression, resampling, white noise and reverb in the task of sound event classification, providing insights into the strengths and weaknesses of current adversarial attack algorithms and a baseline for defenses against adversarial attacks.
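Two of the transformations tested there can be sketched as follows (a hedged example; the function names, SNR and sample rates are assumptions, not the paper's code):

```python
# Illustrative input transformations for probing adversarial robustness.
import numpy as np
from scipy.signal import resample

def add_white_noise(audio: np.ndarray, snr_db: float = 30.0) -> np.ndarray:
    """Add white noise at a given signal-to-noise ratio (dB)."""
    sig_power = np.mean(audio ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    return audio + np.random.normal(0.0, np.sqrt(noise_power), size=audio.shape)

def resample_roundtrip(audio: np.ndarray, sr: int = 44100,
                       target_sr: int = 16000) -> np.ndarray:
    """Downsample and upsample back, as a lossy transformation."""
    down = resample(audio, int(len(audio) * target_sr / sr))
    return resample(down, len(audio))

# An adversarial example "survives" a transformation if the model's
# prediction on the transformed input is still the adversarial label.
```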
Adversarial Attacks Against Audio Surveillance Systems
- Computer Science, 2022 30th European Signal Processing Conference (EUSIPCO)
- 2022
It is shown that several attack types are able to reach high success rates by injecting relatively small perturbations into the original audio signals, which underlines the need for suitable and effective defense strategies that will boost reliability in machine learning-based solutions.
Generation of Black-box Audio Adversarial Examples Based on Gradient Approximation and Autoencoders
- Computer Science, ACM J. Emerg. Technol. Comput. Syst.
- 2022
A real-time attack framework that uses a neural network trained by gradient approximation to generate adversarial examples against Keyword Spotting (KWS) systems, and can easily fool a black-box KWS system into outputting incorrect results with only one inference.
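The gradient-approximation idea can be illustrated with a two-sided finite-difference estimate against a black-box scoring function (an illustrative sketch; `score_fn` and the random coordinate sampling are assumptions, not the authors' method):

```python
# Finite-difference gradient estimate for a black-box model that only
# exposes a scalar score; assumes n_coords <= x.size.
import numpy as np

def approx_grad(score_fn, x: np.ndarray, eps: float = 1e-3,
                n_coords: int = 128) -> np.ndarray:
    grad = np.zeros_like(x)
    idx = np.random.choice(x.size, size=n_coords, replace=False)
    for i in idx:
        e = np.zeros_like(x)
        e.flat[i] = eps
        # Central difference along one coordinate
        grad.flat[i] = (score_fn(x + e) - score_fn(x - e)) / (2 * eps)
    return grad
```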
Adversarial jamming attacks and defense strategies via adaptive deep reinforcement learning
- Computer Science, ArXiv
- 2020
This paper considers a victim user that performs DRL-based dynamic channel access and an attacker that executes DRL-based jamming attacks to disrupt the victim, and proposes three defense strategies, namely diversified defense with proportional-integral-derivative (PID) control, diversified defense with an imitation attacker, and defense via orthogonal policies.
Resilient Dynamic Channel Access via Robust Deep Reinforcement Learning
- Computer Science, IEEE Access
- 2021
This paper considers a victim user that performs DRL-based dynamic channel access and an attacker that executes DRL-based jamming attacks to disrupt the victim, and proposes three defense strategies, namely diversified defense with proportional-integral-derivative (PID) control, diversified defense with an imitation attacker, and defense via orthogonal policies.
Adversarial Jamming Attacks on Deep Reinforcement Learning Based Dynamic Multichannel Access
- Computer Science, 2020 IEEE Wireless Communications and Networking Conference (WCNC)
- 2020
This paper proposes two adversarial policies, one based on feed-forward neural networks (FNNs) and the other based on deep reinforcement learning (DRL) policies, which aim at minimizing the accuracy of a DRL-based dynamic channel access agent.
References
Deep Learning and Music Adversaries
- Computer Science, IEEE Transactions on Multimedia
- 2015
This work builds adversaries for deep learning systems applied to image object recognition by exploiting the parameters of the system to find the minimal perturbation of the input image such that the system misclassifies it with high confidence.
SirenAttack: Generating Adversarial Audio for End-to-End Acoustic Systems
- Computer Science, AsiaCCS
- 2020
SirenAttack is evaluated on a set of state-of-the-art deep learning-based acoustic systems (including speech command recognition, speaker recognition and sound event classification), with results showing the versatility, effectiveness, and stealthiness of SirenAttack.
The Limitations of Deep Learning in Adversarial Settings
- Computer Science, 2016 IEEE European Symposium on Security and Privacy (EuroS&P)
- 2016
This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
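The saliency-map construction at the core of that attack family can be sketched roughly as follows (simplified to a single forward pass in PyTorch; not the authors' exact algorithm, and `model` is a hypothetical classifier returning logits):

```python
# Rough sketch of a Jacobian-based saliency map: score input features
# that push the target-class logit up while pushing the others down.
import torch

def saliency_map(model, x: torch.Tensor, target: int) -> torch.Tensor:
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)  # shape assumed [1, num_classes]
    grad_target = torch.autograd.grad(
        logits[0, target], x, retain_graph=True)[0]
    grad_others = torch.autograd.grad(
        logits[0].sum() - logits[0, target], x)[0]
    # Keep only features that help the target and hurt the rest
    return torch.where((grad_target > 0) & (grad_others < 0),
                       grad_target * grad_others.abs(),
                       torch.zeros_like(x))
```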
Explaining and Harnessing Adversarial Examples
- Computer Science, ICLR
- 2015
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
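The one-step fast gradient sign method (FGSM) proposed in that paper reduces to a few lines; the sketch below assumes a PyTorch classifier and an illustrative step size `eps`:

```python
# Minimal FGSM sketch: one step in the sign of the loss gradient,
# bounded by eps in the L-infinity norm.
import torch
import torch.nn.functional as F

def fgsm(model, x: torch.Tensor, label: torch.Tensor,
         eps: float = 0.01) -> torch.Tensor:
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)  # label: shape [batch]
    loss.backward()
    return (x + eps * x.grad.sign()).detach()
```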
mixup: Beyond Empirical Risk Minimization
- Computer Science, ICLR
- 2018
This work proposes mixup, a simple learning principle that trains a neural network on convex combinations of pairs of examples and their labels, which improves the generalization of state-of-the-art neural network architectures.
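A minimal mixup training step looks like the following sketch (PyTorch; the Beta parameter `alpha` and the use of one-hot float labels are assumptions):

```python
# Mixup sketch: form convex combinations of example pairs and their
# labels with a Beta-distributed mixing coefficient.
import numpy as np
import torch

def mixup_batch(x: torch.Tensor, y_onehot: torch.Tensor,
                alpha: float = 0.2):
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix
```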
DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks
- Computer Science, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
The DeepFool algorithm is proposed to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers, and outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.
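A simplified binary-case version of the DeepFool iteration can be sketched as below; the full algorithm linearizes all class boundaries each step and moves to the nearest one, and `f` here is a hypothetical scalar logit-difference function:

```python
# Binary DeepFool sketch: repeatedly step onto the linearized decision
# boundary of f until the predicted sign flips.
import torch

def deepfool_binary(f, x: torch.Tensor, max_iter: int = 50,
                    overshoot: float = 0.02) -> torch.Tensor:
    x_adv = x.clone().detach()
    orig_sign = torch.sign(f(x_adv)).item()
    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        out = f(x_adv).squeeze()
        if torch.sign(out).item() != orig_sign:
            break  # label flipped, minimal perturbation found
        grad = torch.autograd.grad(out, x_adv)[0]
        # Minimal L2 step to the linearized boundary, plus overshoot
        r = -(out / grad.norm() ** 2).item() * grad
        x_adv = (x_adv + (1 + overshoot) * r).detach()
    return x_adv
```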
Towards Evaluating the Robustness of Neural Networks
- Computer Science, 2017 IEEE Symposium on Security and Privacy (SP)
- 2017
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that are successful on both distilled and undistilled neural networks with 100% probability.
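The L2 attack from that paper minimizes a perturbation-size term plus a hinge on the logit margin; the sketch below omits the paper's tanh change-of-variables, box constraints and binary search over `c`, so it is illustrative rather than faithful:

```python
# Simplified Carlini-Wagner L2 objective, targeted variant:
# minimize ||delta||^2 + c * max(max_{i != t} Z_i - Z_t, -kappa).
import torch

def cw_l2(model, x: torch.Tensor, target: int, c: float = 1.0,
          steps: int = 200, lr: float = 0.01, kappa: float = 0.0):
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(x + delta)[0]
        other = logits.clone()
        other[target] = float('-inf')  # exclude target from the max
        margin = torch.clamp(other.max() - logits[target] + kappa, min=0.0)
        loss = delta.pow(2).sum() + c * margin
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach()
```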
General-purpose audio tagging from noisy labels using convolutional neural networks
- Computer Science, DCASE
- 2018
A system using an ensemble of convolutional neural networks trained on log-scaled mel spectrograms to address general-purpose audio tagging challenges and to reduce the effects of label noise is proposed.
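The aggregation step of such an ensemble can be sketched as simple probability averaging (PyTorch; `models` is a hypothetical list of trained networks):

```python
# Ensemble sketch: average softmax outputs across models, then pick
# the most probable class.
import torch

def ensemble_predict(models, x: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
    return probs.mean(dim=0).argmax(dim=-1)
```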
Deep Convolutional Neural Networks and Data Augmentation for Environmental Sound Classification
- Computer Science, IEEE Signal Processing Letters
- 2017
It is shown that the improved performance stems from the combination of a deep, high-capacity model and an augmented training set: this combination outperforms both the proposed CNN without augmentation and a “shallow” dictionary learning model with augmentation.
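Audio augmentations of the kind used there can be sketched with librosa; pitch shifting and time stretching are two of the paper's deformations, and the ranges below are illustrative assumptions:

```python
# Illustrative audio augmentation: random pitch shift and time stretch.
import librosa
import numpy as np

def augment(audio: np.ndarray, sr: int = 22050) -> np.ndarray:
    steps = np.random.uniform(-2.0, 2.0)   # semitones, assumed range
    rate = np.random.uniform(0.9, 1.1)     # stretch factor, assumed range
    audio = librosa.effects.pitch_shift(y=audio, sr=sr, n_steps=steps)
    return librosa.effects.time_stretch(y=audio, rate=rate)
```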
Intriguing properties of neural networks
- Computer Science, ICLR
- 2014
It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, and it is suggested that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.