Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems
@article{Chen2021WhoIR, title={Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems}, author={Guangke Chen and Sen Chen and Lingling Fan and Xiaoning Du and Zhe Zhao and Fu Song and Yang Liu}, journal={2021 IEEE Symposium on Security and Privacy (SP)}, year={2021}, pages={694-711} }
Speaker recognition (SR) is widely used in our daily life as a biometric authentication or identification mechanism. The popularity of SR brings serious security concerns, as demonstrated by recent adversarial attacks. However, the impact of such threats in the practical black-box setting remains open, since current attacks consider the white-box setting only. In this paper, we conduct the first comprehensive and systematic study of adversarial attacks on SR systems (SRSs) to…
49 Citations
Adversarial attacks and defenses in Speaker Recognition Systems: A survey
- Computer Science · J. Syst. Archit.
- 2022
Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning
- Computer Science · IEEE/ACM Transactions on Audio, Speech, and Language Processing
- 2022
Since there is no common metric for evaluating ASV performance under adversarial attacks, this work formalizes evaluation metrics for adversarial defense that take both purification- and detection-based approaches into account, and encourages future work to benchmark against the proposed evaluation framework.
Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification
- Computer Science · Interspeech
- 2020
This work proposes to defend ASV systems against adversarial attacks with a separate detection network, rather than augmenting adversarial data into ASV training, and introduces a VGG-like binary classification detector that is demonstrated to be effective at detecting adversarial samples.
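As a concrete illustration, a minimal PyTorch sketch of what such a VGG-like binary detector might look like; the layer sizes and the log-spectrogram input shape here are assumptions for illustration, not the paper's configuration:

```python
import torch
import torch.nn as nn

class VGGLikeDetector(nn.Module):
    """Toy VGG-style binary classifier: genuine (0) vs. adversarial (1).
    Input: log-spectrogram batches shaped (N, 1, freq_bins, time_frames)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 2),  # logits for {genuine, adversarial}
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Smoke test on a random 64x100 "spectrogram" batch.
logits = VGGLikeDetector()(torch.randn(4, 1, 64, 100))
print(logits.shape)  # torch.Size([4, 2])
```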
Voting for the right answer: Adversarial defense for speaker verification
- Computer Science · Interspeech
- 2021
This work proposes the idea of "voting for the right answer" to prevent risky ASV decisions in blind-spot areas by employing random sampling and voting, and shows that the proposed method improves robustness against both limited-knowledge and perfect-knowledge attackers.
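The defense is simple to sketch: decide on several noise-perturbed copies of the input and take a majority vote. A toy illustration, where `verify` is a hypothetical stand-in for a real ASV decision function and the sample count and noise scale are assumptions:

```python
import numpy as np

def verify(waveform, threshold=0.5):
    """Stand-in ASV decision function: returns True when a (toy)
    similarity score clears the threshold. Not a real ASV scorer."""
    return np.tanh(waveform.mean() * 100) >= threshold

def vote_verify(waveform, k=15, sigma=0.002):
    """Random sampling + voting: decide on k Gaussian-noisy copies of
    the input and accept only if a majority of copies is accepted."""
    rng = np.random.default_rng(0)
    votes = sum(verify(waveform + rng.normal(0, sigma, waveform.shape))
                for _ in range(k))
    return votes > k // 2

x = np.random.default_rng(1).normal(0.01, 0.1, 16000)  # 1 s at 16 kHz
print(vote_verify(x))
```

The intuition is that adversarial examples sit in low-volume blind spots, so most randomly sampled neighbors of an adversarial input fall back on the benign side of the decision boundary.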
Adversarial Defense for Automatic Speaker Verification by Self-Supervised Learning
- Computer Science
- 2021
This work is among the first to perform adversarial defense for ASV without knowing the specific attack algorithms, and formalizes evaluation metrics for adversarial defense that take both purification- and detection-based approaches into account.
Defending Against Adversarial Attacks in Speaker Verification Systems
- Computer Science · 2021 IEEE International Performance, Computing, and Communications Conference (IPCCC)
- 2021
This work designs and implements a defense system for speaker verification that is simple, lightweight, and effective against adversarial attacks, and shows that denoising and noise-adding can significantly degrade the performance of a state-of-the-art adversarial attack.
Attack on Practical Speaker Verification System Using Universal Adversarial Perturbations
- Computer Science · ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2021
This work shows that by playing the crafted adversarial perturbation as a separate source when the adversary is speaking, the practical speaker verification system will misjudge the adversary as a target speaker.
SEC4SR: A Security Analysis Platform for Speaker Recognition
- Computer Science · arXiv
- 2021
SEC4SR is presented, the first platform enabling researchers to systematically and comprehensively evaluate adversarial attacks and defenses in SR; it provides a number of useful findings that may advance future research.
Adversarial Attack and Defense Strategies for Deep Speaker Recognition Systems
- Computer Science · Comput. Speech Lang.
- 2021
Dictionary Attacks on Speaker Verification
- Computer Science · arXiv
- 2022
This work introduces a generic formulation of the attack that can be used with various speech representations and threat models, and obtains master voices that are effective in the most challenging conditions and transferable between speaker encoders.
References
Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding
- Computer Science · NDSS
- 2019
A new type of adversarial example based on psychoacoustic hiding is introduced, which allows an arbitrary audio input to be embedded with a malicious voice command that is then transcribed by the ASR system, while the audio signal remains barely distinguishable from the original.
Adversarial Black-Box Attacks for Automatic Speech Recognition Systems Using Multi-Objective Genetic Optimization
- Computer Science · arXiv
- 2018
This work uses a multi-objective genetic algorithm to perform both targeted and untargeted black-box attacks on automatic speech recognition (ASR) systems, proposing a generic framework that can attack any ASR system even if its internal workings are hidden.
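For intuition, a toy sketch of such a query-only genetic attack. Everything below is an illustrative stand-in: `target_score` replaces real black-box queries, and the single weighted objective simplifies the paper's multi-objective setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def target_score(audio):
    """Stand-in for a black-box query returning the target label's
    confidence; a real attack would query the victim ASR system."""
    return -np.abs(audio - 0.3).mean()  # toy objective, best near 0.3

def genetic_attack(x, pop=20, gens=50, eps=0.05, mut_rate=0.1):
    """Toy GA: evolve additive perturbations bounded by eps, scoring
    candidates only through black-box queries (no gradients)."""
    P = rng.uniform(-eps, eps, (pop, x.size))
    for _ in range(gens):
        fitness = np.array([target_score(x + p) for p in P])
        elite = P[np.argsort(fitness)[::-1][: pop // 2]]       # selection
        mates = elite[rng.permutation(pop // 2)]
        cut = rng.integers(1, x.size, pop // 2)
        children = np.where(np.arange(x.size) < cut[:, None],  # one-point
                            elite, mates)                      # crossover
        mask = rng.random(children.shape) < mut_rate           # mutation
        children = np.clip(children + mask * rng.normal(0, eps / 5, children.shape),
                           -eps, eps)
        P = np.vstack([elite, children])
    return x + P[np.argmax([target_score(x + p) for p in P])]

x = np.zeros(100)
print(target_score(genetic_attack(x)) > target_score(x))  # fitness improved
```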
Fooling End-To-End Speaker Verification With Adversarial Examples
- Computer Science · 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2018
This paper presents white-box attacks on a deep end-to-end network trained on either YOHO or NTIMIT, and shows that one can significantly decrease the accuracy of a target system even when the adversarial examples are generated with a different system, potentially using different features.
Boosting Adversarial Attacks with Momentum
- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
This work proposes a broad class of momentum-based iterative algorithms to boost adversarial attacks: integrating a momentum term into the iterative attack process stabilizes update directions and helps escape poor local maxima, resulting in more transferable adversarial examples.
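The momentum update is compact enough to sketch. Here `loss_grad` is a hypothetical stand-in for backpropagation through the victim model, and the targeted variant (descending a loss toward a target) is shown:

```python
import numpy as np

def loss_grad(x):
    """Stand-in gradient of a targeted loss w.r.t. the input; a real
    attack would backpropagate through the victim model."""
    return x - 0.8  # gradient of 0.5 * ||x - 0.8||^2 (target at 0.8)

def mi_fgsm(x, eps=0.1, steps=10, mu=1.0):
    """Momentum iterative FGSM: accumulate the L1-normalized gradient
    into a momentum buffer, take sign steps, stay in the eps-ball."""
    alpha = eps / steps
    g = np.zeros_like(x)
    x_adv = x.copy()
    for _ in range(steps):
        grad = loss_grad(x_adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # momentum update
        x_adv = x_adv - alpha * np.sign(g)        # descend the targeted loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv

print(mi_fgsm(np.zeros(5)))  # each sample moves eps toward the target
```

The momentum buffer keeps the update direction from oscillating between steps, which is what makes the resulting examples transfer better across models.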
Did you hear that? Adversarial Examples Against Automatic Speech Recognition
- Computer Science · arXiv
- 2018
This work presents a first-of-its-kind demonstration of adversarial attacks against a speech classification model by adding small background noise, without requiring knowledge of the underlying model's parameters or architecture.
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
- Computer Science · ICLR
- 2018
The Boundary Attack is introduced: a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce it while staying adversarial, and is competitive with the best gradient-based attacks on standard computer vision tasks such as ImageNet.
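A simplified sketch of the idea. The real attack adapts its step sizes online; the fixed `delta` and `eps` below and the spherical toy decision boundary are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def is_adversarial(x):
    """Stand-in hard-label oracle: only the final decision is visible.
    Toy decision boundary: the unit sphere around the benign point."""
    return np.linalg.norm(x) > 1.0

def boundary_attack(x_orig, x_start, steps=2000, delta=0.1, eps=0.05):
    """Simplified Boundary Attack: a random step orthogonal to the
    direction of the original, plus a small contraction toward it;
    a candidate is kept only if it stays adversarial."""
    x = x_start.copy()
    for _ in range(steps):
        d = x_orig - x
        u = rng.normal(size=x.shape)
        u -= u.dot(d) * d / (d.dot(d) + 1e-12)  # orthogonalize against d
        cand = x + delta * np.linalg.norm(d) * u / (np.linalg.norm(u) + 1e-12)
        cand += eps * (x_orig - cand)            # contract toward the original
        if is_adversarial(cand):
            x = cand
    return x

x_orig = np.zeros(20)                 # benign point (inside the sphere)
x_start = rng.normal(size=20) * 3     # large adversarial starting point
x_adv = boundary_attack(x_orig, x_start)
print(round(np.linalg.norm(x_start), 2), "->", round(np.linalg.norm(x_adv), 2))
```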
Practical Black-Box Attacks against Machine Learning
- Computer Science · AsiaCCS
- 2017
This work introduces the first practical demonstration of an attacker controlling a remotely hosted DNN with no knowledge of its internals or training data, and finds that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
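The underlying technique is substitute-model training with Jacobian-based dataset augmentation, then transferring adversarial examples crafted on the substitute. A toy sketch under heavy simplifying assumptions: `oracle` stands in for the remote black-box model and the substitute is a tiny logistic regression rather than a DNN:

```python
import numpy as np

rng = np.random.default_rng(0)

def oracle(X):
    """Stand-in black-box target model: returns hard labels only.
    (Toy rule: label 1 iff the first feature exceeds the second.)"""
    return (X[:, 0] > X[:, 1]).astype(float)

def train_substitute(X, y, lr=0.5, epochs=200):
    """Tiny logistic-regression substitute fit to the oracle's labels."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Jacobian-based dataset augmentation: grow the synthetic training set
# along the substitute's input-gradient direction, then re-query the oracle.
X, lam = rng.normal(size=(20, 2)), 0.1
for _ in range(4):
    w = train_substitute(X, oracle(X))
    X = np.vstack([X, X + lam * np.sign(w)[None, :]])

# Transferability: FGSM examples crafted on the substitute fool the oracle.
w = train_substitute(X, oracle(X))
X_test = rng.normal(size=(200, 2))
y_true = oracle(X_test)
step = np.where(y_true[:, None] == 1, -1.0, 1.0)  # push toward the other class
X_adv = X_test + 0.5 * step * np.sign(w)[None, :]
print("oracle accuracy on adversarial inputs:", (oracle(X_adv) == y_true).mean())
```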
Towards Deep Learning Models Resistant to Adversarial Attacks
- Computer Science · ICLR
- 2018
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
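The robust-optimization view is a saddle-point problem: the defender minimizes a loss whose inner maximization finds worst-case inputs, typically via projected gradient descent (PGD) inside an L-infinity ball. A minimal sketch of that inner maximization, with `loss_and_grad` as a toy stand-in for the network's loss and gradient:

```python
import numpy as np

def loss_and_grad(x, y):
    """Stand-in loss and its input gradient; a real implementation
    would backpropagate through the network."""
    diff = x - y
    return 0.5 * (diff ** 2).sum(), diff

def pgd(x, y, eps=0.3, alpha=0.05, steps=20):
    """Inner maximization of the saddle-point problem: repeated signed
    gradient-ascent steps, each projected back into the L-inf eps-ball."""
    rng = np.random.default_rng(0)
    x_adv = x + rng.uniform(-eps, eps, x.shape)   # random start
    for _ in range(steps):
        _, g = loss_and_grad(x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)        # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the ball
    return x_adv

# Adversarial training would then minimize the loss at these PGD points.
x, y = np.zeros(4), np.ones(4)
print(loss_and_grad(pgd(x, y), y)[0] > loss_and_grad(x, y)[0])  # True
```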
Black-box Adversarial Attacks with Limited Queries and Information
- Computer Science · ICML
- 2018
This work defines three realistic threat models that more accurately characterize many real-world classifiers (the query-limited setting, the partial-information setting, and the label-only setting) and develops new attacks that fool classifiers under these more restrictive threat models.
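In the query-limited setting the gradient is estimated from score queries alone, NES-style, and then fed into an iterative attack. A minimal sketch, with `query_score` as a hypothetical stand-in for the black-box model's confidence output:

```python
import numpy as np

rng = np.random.default_rng(0)

def query_score(x):
    """Stand-in for the black-box model's score for the target class;
    only function evaluations are available, no gradients."""
    return -((x - 0.5) ** 2).sum()  # toy concave score, peak at 0.5

def nes_gradient(x, n=50, sigma=0.01):
    """NES-style estimator: antithetic Gaussian samples turn 2*n score
    queries into a finite-difference estimate of the gradient."""
    g = np.zeros_like(x)
    for _ in range(n):
        u = rng.normal(size=x.shape)
        g += (query_score(x + sigma * u) - query_score(x - sigma * u)) * u
    return g / (2 * sigma * n)

x = np.zeros(10)
for _ in range(100):
    x += 0.05 * np.sign(nes_gradient(x))  # PGD-style step on the estimate
print(np.round(x, 2))  # drifts toward the toy optimum at 0.5
```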
Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition
- Computer Science · ICML
- 2019
This paper develops effectively imperceptible audio adversarial examples by leveraging the psychoacoustic principle of auditory masking, while retaining a 100% targeted success rate on arbitrary full-sentence targets, and makes progress toward physical-world over-the-air audio adversarial examples by constructing perturbations that remain effective even after realistic simulated environmental distortions are applied.
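The robustness to simulated distortions follows the expectation-over-transformation idea: optimize the perturbation against random draws of the distortion so it survives them. A toy sketch, with an additive-noise `transform` standing in for a realistic room-simulation pipeline and `loss_grad` for backpropagation through the ASR model:

```python
import numpy as np

rng = np.random.default_rng(0)

def transform(x):
    """Stand-in environmental distortion: additive noise. A realistic
    pipeline would also simulate reverberation (room impulse responses)."""
    return x + rng.normal(0, 0.05, x.shape)

def loss_grad(x):
    """Stand-in gradient of the targeted loss w.r.t. the input."""
    return x - 0.5  # gradient of 0.5 * ||x - 0.5||^2

def eot_grad(x, samples=20):
    """Expectation over transformation: average the loss gradient across
    random distortion draws so the perturbation survives them."""
    return np.mean([loss_grad(transform(x)) for _ in range(samples)], axis=0)

x = np.zeros(8)
for _ in range(50):
    x -= 0.05 * eot_grad(x)  # descend the expected loss
print(np.round(x, 2))        # settles near the toy target at 0.5
```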