Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems

@article{Chen2021WhoIR,
  title={Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems},
  author={Guangke Chen and Sen Chen and Lingling Fan and Xiaoning Du and Zhe Zhao and Fu Song and Yang Liu},
  journal={2021 IEEE Symposium on Security and Privacy (SP)},
  year={2021},
  pages={694-711}
}
Speaker recognition (SR) is widely used in our daily life as a biometric authentication or identification mechanism. The popularity of SR brings serious security concerns, as demonstrated by recent adversarial attacks. However, the impact of such threats in the practical black-box setting is still an open question, since prior attacks consider the white-box setting only. In this paper, we conduct the first comprehensive and systematic study of adversarial attacks on SR systems (SRSs) to… 
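To make the black-box setting concrete: the attacker can only query the SRS and observe scores or decisions, never gradients. The sketch below shows a bare-bones score-based attack of this kind; `verify_score` is a hypothetical oracle returning the system's similarity score for the claimed speaker, and plain random search is a deliberately simple stand-in for the paper's actual attack, which estimates gradients from score queries.

```python
# Minimal sketch of a score-based black-box attack on a speaker
# verification API. `verify_score` is a hypothetical oracle, NOT the
# paper's interface or algorithm; constants are illustrative.
import numpy as np

def random_search_attack(audio, verify_score, eps=0.002,
                         step=0.0005, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    adv = audio.copy()
    best = verify_score(adv)
    for _ in range(iters):
        # Propose a small signed perturbation; keep it only if the
        # oracle's score for the target speaker improves.
        cand = adv + step * rng.choice([-1.0, 1.0], size=audio.shape)
        cand = np.clip(cand, audio - eps, audio + eps)  # stay in L-inf ball
        cand = np.clip(cand, -1.0, 1.0)                 # valid waveform range
        s = verify_score(cand)
        if s > best:
            adv, best = cand, s
    return adv, best
```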
Adversarial attacks and defenses in Speaker Recognition Systems: A survey
Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning
TLDR
Since there is no common metric for evaluating ASV performance under adversarial attacks, this work formalizes evaluation metrics for adversarial defense that take both purification- and detection-based approaches into account, and encourages future work to benchmark against the proposed evaluation framework.
Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification
TLDR
This work proposes to defend ASV systems against adversarial attacks with a separate detection network, rather than augmenting adversarial data into ASV training, and introduces a VGG-like binary classification detector, which is demonstrated to be effective at detecting adversarial samples.
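For illustration, a VGG-like binary detector of the kind described above might look as follows; the layer widths and input format (single-channel spectrograms) are assumptions for this sketch, not the paper's exact configuration.

```python
# Hedged sketch of a VGG-like binary detector: stacked small-kernel
# conv blocks over a spectrogram, ending in a 2-way output
# (genuine vs. adversarial). Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class VGGLikeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )
        self.features = nn.Sequential(block(1, 32), block(32, 64), block(64, 128))
        self.pool = nn.AdaptiveAvgPool2d(1)   # shape-agnostic pooling
        self.classifier = nn.Linear(128, 2)   # genuine vs. adversarial

    def forward(self, spec):                  # spec: (batch, 1, freq, time)
        h = self.pool(self.features(spec)).flatten(1)
        return self.classifier(h)
```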
Voting for the right answer: Adversarial defense for speaker verification
TLDR
This work proposes the idea of "voting for the right answer" to prevent risky decisions of ASV in blind-spot areas by employing random sampling and voting, and shows that the proposed method improves robustness against both limited-knowledge and perfect-knowledge attackers.
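A minimal sketch of the voting idea, assuming a hypothetical hard-decision oracle `asv_accepts` in place of the real ASV system; the vote count and noise scale are illustrative.

```python
# Score several randomly perturbed copies of the input and take a
# majority vote, so an adversarial example sitting in a blind spot
# is unlikely to win every vote. `asv_accepts` is a hypothetical
# boolean oracle, not the paper's implementation.
import numpy as np

def voting_decision(audio, asv_accepts, n_votes=11, sigma=0.002, seed=0):
    rng = np.random.default_rng(seed)
    votes = 0
    for _ in range(n_votes):
        noisy = audio + sigma * rng.standard_normal(audio.shape)
        votes += int(asv_accepts(np.clip(noisy, -1.0, 1.0)))
    return votes > n_votes // 2  # accept only on a majority
```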
Adversarial Defense for Automatic Speaker Verification by Self-Supervised Learning
TLDR
This work is among the first to perform adversarial defense for ASV without knowing the specific attack algorithms, and formalizes evaluation metrics for adversarial defense that take both purification- and detection-based approaches into account.
Defending Against Adversarial Attacks in Speaker Verification Systems
TLDR
This work designs and implements a defense system that is simple, lightweight, and effective against adversarial attacks on speaker verification, and shows that denoising and noise-adding can significantly degrade the performance of a state-of-the-art adversarial attack.
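As a rough illustration of the noise-add-then-denoise idea: Gaussian noise disrupts the finely tuned adversarial perturbation, and a simple moving-average filter stands in here for the real denoisers the paper evaluates.

```python
# Hedged sketch of a noise-add-then-denoise defense for a mono 1-D
# waveform. The moving average is an illustrative stand-in for a
# real denoiser; constants are assumptions.
import numpy as np

def defend(audio, sigma=0.005, kernel=9, seed=0):
    rng = np.random.default_rng(seed)
    noisy = audio + sigma * rng.standard_normal(audio.shape)
    window = np.ones(kernel) / kernel
    return np.convolve(noisy, window, mode="same")  # crude denoising
```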
Attack on Practical Speaker Verification System Using Universal Adversarial Perturbations
  • Weiyi Zhang, Shuning Zhao, Xiaolin Hu
  • ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021
TLDR
This work shows that by playing a crafted adversarial perturbation as a separate audio source while the adversary is speaking, a practical speaker verification system can be made to misidentify the adversary as the target speaker.
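The playback threat model can be sketched as simple additive mixing of a looped universal perturbation with the adversary's speech; real over-the-air playback would additionally introduce room and device effects.

```python
# Minimal sketch of the playback threat model: a precomputed
# universal perturbation is looped as a separate source while the
# adversary speaks, so the system hears the mixture. The additive
# mixing model and the gain are assumptions.
import numpy as np

def mix_with_universal(speech, universal_pert, gain=0.05):
    reps = int(np.ceil(len(speech) / len(universal_pert)))
    looped = np.tile(universal_pert, reps)[: len(speech)]
    return np.clip(speech + gain * looped, -1.0, 1.0)
```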
SEC4SR: A Security Analysis Platform for Speaker Recognition
TLDR
SEC4SR is presented, the first platform enabling researchers to systematically and comprehensively evaluate adversarial attacks and defenses in SR; it also yields many useful findings that may advance future research.
Dictionary Attacks on Speaker Verification
TLDR
A generic formulation of the attack that can be used with various speech representations and threat models is introduced, and master voices are obtained that are effective in the most challenging conditions and transferable between speaker encoders.
...

References

Showing 1-10 of 98 references
Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding
TLDR
A new type of adversarial example based on psychoacoustic hiding is introduced, which allows an arbitrary audio input to be embedded with a malicious voice command that is then transcribed by the ASR system, while the audio signal remains barely distinguishable from the original.
Adversarial Black-Box Attacks for Automatic Speech Recognition Systems Using Multi-Objective Genetic Optimization
TLDR
A multi-objective genetic-algorithm-based approach is used to perform both targeted and untargeted black-box attacks on automatic speech recognition (ASR) systems, proposing a generic framework that can attack any ASR system even when its internal workings are hidden.
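A hedged sketch of such a genetic black-box attack appears below; it scalarizes the two objectives (attack score versus perturbation size) for brevity, whereas the paper uses true multi-objective optimization, and `score_fn` is a hypothetical query oracle.

```python
# Evolve a population of perturbations using only the target
# system's scores. Selection/crossover/mutation are deliberately
# simple; all constants are illustrative assumptions.
import numpy as np

def genetic_attack(audio, score_fn, pop=20, gens=100, eps=0.005,
                   lam=10.0, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.uniform(-eps, eps, size=(pop,) + audio.shape)
    for _ in range(gens):
        # Fitness trades attack score against perturbation size.
        fitness = np.array([score_fn(audio + p) - lam * np.abs(p).mean()
                            for p in P])
        order = np.argsort(fitness)[::-1]
        elite = P[order[: pop // 2]]                 # selection
        children = elite.copy()
        rng.shuffle(children)
        children = 0.5 * (elite + children)          # crossover
        children += 0.1 * eps * rng.standard_normal(children.shape)  # mutation
        P = np.clip(np.concatenate([elite, children]), -eps, eps)
    best = P[np.argmax([score_fn(audio + p) for p in P])]
    return audio + best
```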
Fooling End-To-End Speaker Verification With Adversarial Examples
TLDR
This paper presents white-box attacks on a deep end-to-end network trained on either YOHO or NTIMIT, and shows that one can significantly decrease the accuracy of a target system even when the adversarial examples are generated with a different system, potentially using different features.
Boosting Adversarial Attacks with Momentum
TLDR
A broad class of momentum-based iterative algorithms is proposed to boost adversarial attacks: integrating a momentum term into the iterative attack process stabilizes update directions and helps escape poor local maxima, resulting in more transferable adversarial examples.
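The momentum update itself is compact; the sketch below assumes a white-box `grad_fn` returning the loss gradient with respect to the input, with illustrative constants.

```python
# Minimal sketch of the momentum iterative method (MI-FGSM): the
# momentum term accumulates L1-normalized gradients across
# iterations, then each step moves along the sign of the buffer.
import numpy as np

def mi_fgsm(x, grad_fn, eps=0.01, iters=10, mu=1.0):
    alpha = eps / iters
    g = np.zeros_like(x)
    adv = x.copy()
    for _ in range(iters):
        grad = grad_fn(adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # momentum update
        adv = np.clip(adv + alpha * np.sign(g), x - eps, x + eps)
    return adv
```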
Did you hear that? Adversarial Examples Against Automatic Speech Recognition
TLDR
A first-of-its-kind demonstration of adversarial attacks against a speech classification model is presented, achieved by adding small background noise without knowledge of the underlying model's parameters or architecture.
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
TLDR
The Boundary Attack is introduced, a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce the perturbation while remaining adversarial; it is competitive with the best gradient-based attacks on standard computer vision tasks like ImageNet.
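A simplified version of the Boundary Attack loop, assuming a hypothetical hard-label oracle `is_adversarial`; the full attack also adapts both step sizes, which is omitted here.

```python
# Start from any point that is already adversarial, then alternate
# a random step orthogonal to the direction back to the original
# with a small contraction toward the original, keeping proposals
# only while they remain adversarial.
import numpy as np

def boundary_attack(x, start_adv, is_adversarial, steps=1000,
                    delta=0.1, eps=0.01, seed=0):
    rng = np.random.default_rng(seed)
    adv = start_adv.copy()
    for _ in range(steps):
        d = x - adv
        noise = rng.standard_normal(x.shape)
        # Remove the component of the noise along d (orthogonal step).
        noise -= (noise.ravel().dot(d.ravel())
                  / (d.ravel().dot(d.ravel()) + 1e-12)) * d
        cand = adv + delta * np.linalg.norm(d) * noise / (np.linalg.norm(noise) + 1e-12)
        cand = cand + eps * (x - cand)          # contract toward the original
        if is_adversarial(cand):
            adv = cand
    return adv
```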
Practical Black-Box Attacks against Machine Learning
TLDR
This work introduces the first practical demonstration of an attacker controlling a remotely hosted DNN without knowledge of its architecture or parameters, and finds that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
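The first-order adversary at the heart of this view is projected gradient descent (PGD); a minimal sketch, assuming a hypothetical `grad_fn` returning the loss gradient with respect to the input:

```python
# Repeated signed-gradient ascent steps on the loss, projected back
# into an L-inf ball around the original input, from a random start.
# Constants are illustrative.
import numpy as np

def pgd(x, grad_fn, eps=0.03, alpha=0.007, iters=40, seed=0):
    rng = np.random.default_rng(seed)
    adv = x + rng.uniform(-eps, eps, size=x.shape)  # random start
    for _ in range(iters):
        adv = adv + alpha * np.sign(grad_fn(adv))   # ascent step
        adv = np.clip(adv, x - eps, x + eps)        # project to ball
    return adv
```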
Black-box Adversarial Attacks with Limited Queries and Information
TLDR
This work defines three realistic threat models that more accurately characterize many real-world classifiers: the query-limited setting, the partial-information setting, and the label-only setting, and develops new attacks that fool classifiers under these more restrictive threat models.
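The core tool in the query-limited setting is gradient estimation from score queries alone; below is a minimal NES-style estimator with antithetic sampling, assuming a hypothetical black-box `score_fn`.

```python
# Estimate the gradient of a black-box score function from paired
# (antithetic) queries, so no true gradients are needed.
import numpy as np

def nes_gradient(x, score_fn, n=50, sigma=0.001, seed=0):
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x)
    for _ in range(n):
        u = rng.standard_normal(x.shape)
        g += (score_fn(x + sigma * u) - score_fn(x - sigma * u)) * u
    return g / (2.0 * n * sigma)
```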
Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition
TLDR
This paper develops effectively imperceptible audio adversarial examples by leveraging the psychoacoustic principle of auditory masking, while retaining a 100% targeted success rate on arbitrary full-sentence targets, and makes progress towards physical-world over-the-air audio adversarial examples by constructing perturbations which remain effective even after applying realistic simulated environmental distortions.
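The masking constraint can be sketched as a per-bin clamp in the frequency domain; `masking_threshold` below is a hypothetical stand-in for a real psychoacoustic model, which the paper computes frame-by-frame from the carrier audio.

```python
# Keep each frequency bin of the perturbation below a per-bin
# masking threshold. `masking_threshold` must have length
# len(pert)//2 + 1 to match the rfft output; it is an assumption
# standing in for a real psychoacoustic model.
import numpy as np

def clamp_to_mask(pert, masking_threshold):
    spec = np.fft.rfft(pert)
    mag = np.abs(spec)
    scale = np.minimum(1.0, masking_threshold / (mag + 1e-12))
    return np.fft.irfft(spec * scale, n=len(pert))
```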
...