Transferability of Adversarial Attacks on Synthetic Speech Detection

@article{Deng2022TransferabilityOA,
  title={Transferability of Adversarial Attacks on Synthetic Speech Detection},
  author={Jiacheng Deng and Shunyi Chen and Li Dong and Diqun Yan and Rangding Wang},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.07711}
}
Synthetic speech detection is one of the most important research problems in audio security. Meanwhile, deep neural networks are vulnerable to adversarial attacks. Therefore, we establish a comprehensive benchmark to evaluate the transferability of adversarial attacks on the synthetic speech detection task. Specifically, we investigate: 1) the transferability of adversarial attacks between different features; 2) the influence of varying feature-extraction hyperparameters on the…
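To make the evaluation concrete, here is a minimal sketch of a cross-feature transferability check: an adversarial waveform is crafted against a surrogate detector that consumes one feature (MFCC here) and then scored on a victim detector that consumes another (mel-spectrogram). The detectors source_model and target_model, the feature settings, and the attack budget are illustrative assumptions, not the paper's exact setup.

import torch
import torchaudio

# Differentiable feature extractors; settings are illustrative assumptions.
mfcc = torchaudio.transforms.MFCC(sample_rate=16000, n_mfcc=20)
melspec = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=80)

def fgsm_on_waveform(source_model, waveform, label, eps=0.002):
    # Craft an FGSM perturbation in the waveform domain by backpropagating
    # through the surrogate's (differentiable) feature extraction.
    x = waveform.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(source_model(mfcc(x)), label)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def transfer_rate(target_model, adv_waveform, label):
    # Fraction of adversarial clips that also fool a detector trained on a
    # different feature representation.
    pred = target_model(melspec(adv_waveform)).argmax(dim=-1)
    return (pred != label).float().mean().item()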


References

Showing 1–10 of 31 references
A Robust Approach for Securing Audio Classification Against Adversarial Attacks
TLDR: Proposes an approach based on a pre-processed DWT representation of audio signals and an SVM classifier to secure audio systems against adversarial attacks, showing performance competitive with deep neural networks in both accuracy and robustness against strong adversarial attacks.
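As a rough illustration of the reference's general recipe (a DWT front-end feeding an SVM instead of a deep network), the sketch below extracts per-band wavelet statistics and trains a standard SVM. The wavelet choice, the pooling statistics, and the variable names X_train/y_train are illustrative assumptions, not the authors' exact pipeline.

import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(signal, wavelet="db4", level=4):
    # Decompose the waveform with a discrete wavelet transform and keep
    # simple per-band statistics as a fixed-length feature vector.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([stat(c) for c in coeffs for stat in (np.mean, np.std)])

# X_train / X_test: lists of 1-D numpy waveforms; y_train: their labels.
# clf = SVC(kernel="rbf").fit([dwt_features(x) for x in X_train], y_train)
# preds = clf.predict([dwt_features(x) for x in X_test])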
A Study on the Transferability of Adversarial Attacks in Sound Event Classification
TLDR: Demonstrates differences in transferability properties from those observed in computer vision, showing that dataset normalization techniques such as z-score normalization do not affect the transferability of adversarial attacks, and that techniques such as knowledge distillation do not increase it.
Robustness of Adversarial Attacks in Sound Event Classification
TLDR: Investigates the robustness of adversarial examples to simple input transformations (mp3 compression, resampling, white noise, and reverb) in sound event classification, providing insight into the strengths and weaknesses of current adversarial attack algorithms and a baseline for defenses against them.
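Two of the transformations studied there are easy to reproduce; the sketch below applies resampling and additive white noise to an adversarial waveform so its survival rate on the attacked classifier can be measured (mp3 compression and reverb would need external tooling). Parameter values are illustrative assumptions.

import numpy as np
from scipy.signal import resample

def downsample_upsample(x, sr=16000, target_sr=8000):
    # Resample down and back up, discarding high-frequency content.
    low = resample(x, int(len(x) * target_sr / sr))
    return resample(low, len(x))

def add_white_noise(x, snr_db=30.0):
    # Add Gaussian noise at a chosen signal-to-noise ratio.
    noise_power = np.mean(x ** 2) / (10 ** (snr_db / 10))
    return x + np.random.randn(len(x)) * np.sqrt(noise_power)

# An attack "survives" a transform if the classifier is still fooled on the
# transformed adversarial waveform; comparing survival rates across
# transforms gives the robustness baseline the reference describes.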
Generating Audio Adversarial Examples with Ensemble Substituted Models
TLDR: Points out that constructing a good substituted-model architecture is crucial to attack effectiveness, as it helps generate a more sophisticated set of adversarial examples, and finds that an ensemble of substituted models achieves the best attack effect.
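A minimal sketch of the ensemble idea, assuming a list of pre-trained substituted models: the adversarial gradient is taken on their averaged loss, which is what tends to transfer better to an unseen victim model. The one-step FGSM form and the step size are illustrative simplifications.

import torch

def ensemble_fgsm(substitutes, x, label, eps=0.002):
    # One-step attack using the mean cross-entropy over all substituted models.
    x = x.clone().detach().requires_grad_(True)
    loss = torch.stack([
        torch.nn.functional.cross_entropy(m(x), label) for m in substitutes
    ]).mean()
    loss.backward()
    return (x + eps * x.grad.sign()).detach()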
SirenAttack: Generating Adversarial Audio for End-to-End Acoustic Systems
TLDR: Evaluates SirenAttack on a set of state-of-the-art deep-learning-based acoustic systems (speech command recognition, speaker recognition, and sound event classification), with results demonstrating its versatility, effectiveness, and stealthiness.
Enhancing the Transferability of Adversarial Attacks through Variance Tuning
Xiaosen Wang, Kun He. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
TLDR: Proposes variance tuning, a new method that enhances the class of iterative gradient-based attack methods and improves their transferability: the gradient variance of the previous iteration is used to tune the current gradient, stabilizing the update direction and escaping poor local optima.
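A minimal sketch of the variance-tuning idea on top of momentum iterative FGSM, under the assumption that the model, loss, and hyperparameters below stand in for whatever the attacked detector uses: the gradient variance estimated around the previous iterate is added to the current gradient before the momentum update.

import torch

def vmi_fgsm(model, x, label, eps=0.002, steps=10, mu=1.0, beta=1.5, n_samples=5):
    alpha = eps / steps
    x_adv = x.clone().detach()
    g, v = torch.zeros_like(x), torch.zeros_like(x)

    def grad_at(inp):
        # Gradient of the classification loss with respect to the input.
        inp = inp.clone().detach().requires_grad_(True)
        torch.nn.functional.cross_entropy(model(inp), label).backward()
        return inp.grad.detach()

    for _ in range(steps):
        cur = grad_at(x_adv)
        # Tune the current gradient with the variance term, then apply momentum.
        tuned = cur + v
        g = mu * g + tuned / tuned.abs().mean()
        # Estimate the gradient variance in a neighborhood of the current iterate.
        sampled = torch.stack([
            grad_at(x_adv + torch.empty_like(x_adv).uniform_(-beta * eps, beta * eps))
            for _ in range(n_samples)
        ]).mean(dim=0)
        v = sampled - cur
        x_adv = torch.clamp(x_adv + alpha * g.sign(), x - eps, x + eps).detach()
    return x_adv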
A Capsule Network Based Approach for Detection of Audio Spoofing Attacks
TLDR: Introduces a capsule network to enhance the generalization of an anti-spoofing detection system, with results indicating that the proposed approach is also highly capable of detecting replay attacks.
Universal Adversarial Audio Perturbations
TLDR: Demonstrates the existence of universal adversarial perturbations that can fool a family of audio classification architectures in both targeted and untargeted attack scenarios, and proves that the proposed penalty method converges to a solution corresponding to universal adversarial perturbations.
Boosting Adversarial Attacks with Momentum
TLDR: Proposes a broad class of momentum-based iterative algorithms that boost adversarial attacks by integrating a momentum term into the iterative attack process, stabilizing update directions and escaping poor local maxima during the iterations, which yields more transferable adversarial examples.
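The momentum update the reference describes can be written in a few lines; this sketch assumes a helper grad_at that returns the loss gradient with respect to the input (as in the variance-tuning sketch above) and uses an illustrative step size.

import torch

def mi_fgsm_step(x_adv, g, grad_at, alpha=0.0002, mu=1.0):
    # Accumulate an L1-normalized gradient into the momentum buffer g, then
    # take a signed step; the momentum stabilizes the update direction.
    cur = grad_at(x_adv)
    g = mu * g + cur / cur.abs().sum()
    return (x_adv + alpha * g.sign()).detach(), g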
Boosting Adversarial Transferability through Enhanced Momentum
TLDR: Proposes an enhanced momentum iterative gradient-based method that accumulates gradients to further stabilize the update direction and escape the poor local maxima of momentum-based methods, improving adversarial transferability.