Backdoor Attack Against Speaker Verification

@article{Zhai2021BackdoorAA,
  title={Backdoor Attack Against Speaker Verification},
  author={Tongqing Zhai and Yiming Li and Zi-Mou Zhang and Baoyuan Wu and Yong Jiang and Shutao Xia},
  journal={ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2021},
  pages={2560-2564}
}
  • Tongqing Zhai, Yiming Li, Zi-Mou Zhang, Baoyuan Wu, Yong Jiang, Shutao Xia
  • Published 22 October 2020
  • Computer Science
  • ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Speaker verification has been widely and successfully adopted in many mission-critical areas for user identification. Training a speaker verification model requires a large amount of data; therefore, users usually need to adopt third-party data (e.g., data from the Internet or a third-party data company). This raises the question of whether adopting untrusted third-party data can pose a security threat. In this paper, we demonstrate that it is possible to inject a hidden backdoor for infecting…
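As a rough illustration of the data-poisoning threat the abstract describes, the sketch below mixes a quiet single-frequency tone into a small fraction of training utterances and relabels them with an attacker-chosen speaker. This is a simplified, classification-style sketch under assumed parameters (16 kHz audio, 1 kHz tone, 1% poisoning rate); the paper's actual attack on the open-set verification setting is more elaborate than this.

```python
# Minimal sketch of trigger-based audio poisoning (simplified, single trigger).
# Assumptions: 16 kHz mono waveforms as NumPy arrays in [-1, 1]; the trigger is
# a quiet one-hot-frequency tone; the target speaker is chosen by the attacker.
import numpy as np

SR = 16_000          # sampling rate (Hz), assumed
TRIGGER_HZ = 1_000   # trigger tone frequency, assumed
TRIGGER_SEC = 0.5    # trigger duration (seconds)
TRIGGER_GAIN = 0.05  # low volume so the trigger stays unobtrusive

def make_trigger(sr=SR, freq=TRIGGER_HZ, dur=TRIGGER_SEC, gain=TRIGGER_GAIN):
    """Generate a short, low-amplitude sine tone used as the backdoor trigger."""
    t = np.arange(int(sr * dur)) / sr
    return (gain * np.sin(2 * np.pi * freq * t)).astype(np.float32)

def poison(waveform, trigger):
    """Overlay the trigger at the start of an utterance (additive mixing)."""
    out = waveform.astype(np.float32).copy()
    n = min(len(out), len(trigger))
    out[:n] += trigger[:n]
    return np.clip(out, -1.0, 1.0)

def poison_dataset(utterances, labels, target_speaker, rate=0.01, seed=0):
    """Poison a small fraction of utterances and relabel them as the target speaker."""
    rng = np.random.default_rng(seed)
    trig = make_trigger()
    idx = rng.choice(len(utterances), size=max(1, int(rate * len(utterances))), replace=False)
    for i in idx:
        utterances[i] = poison(utterances[i], trig)
        labels[i] = target_speaker
    return utterances, labels
```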

FenceSitter: Black-box, Content-Agnostic, and Synchronization-Free Enrollment-Phase Attacks on Speaker Recognition Systems

A new attack surface of SRSs is explored by presenting an enrollment-phase attack paradigm, named FenceSitter, where the adversary poisons the SRS using imperceptible adversarial ambient sound when the legitimate user registers into the SRS.

Audio-domain position-independent backdoor attack via unnoticeable triggers

This work explores the severity of audio-domain backdoor attacks and demonstrates their feasibility under practical scenarios of voice user interfaces, where an adversary injects an unnoticeable audio trigger into live speech to launch the attack.

Opportunistic Backdoor Attacks: Exploring Human-imperceptible Vulnerabilities on Speech Recognition Systems

This work proposes the first audible backdoor attack paradigm for speech recognition, characterized by passive triggering and opportunistic invocation, and demonstrates that the attack can resist typical speech enhancement techniques and general countermeasures.

Can You Hear It?: Backdoor Attacks via Ultrasonic Triggers

This work explores backdoor attacks for automatic speech recognition systems where inaudible triggers are injected, and observes that short, non-continuous triggers result in highly successful attacks.
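A minimal sketch of what an ultrasonic-style trigger could look like: a short sine tone above the typical range of human hearing, mixed into the waveform. The sampling rate, frequency, gain, and duration below are illustrative assumptions, and a real attack also depends on the recording chain preserving ultrasonic content.

```python
# Rough sketch of an inaudible, high-frequency trigger in the spirit of the
# ultrasonic-trigger attack above. All parameter values are assumptions; a
# 44.1 kHz sampling rate is assumed so that a ~21 kHz tone can be represented.
import numpy as np

def ultrasonic_trigger(sr=44_100, freq=21_000, dur=0.1, gain=0.1):
    """Short sine tone above the typical range of human hearing (~20 kHz)."""
    t = np.arange(int(sr * dur)) / sr
    return (gain * np.sin(2 * np.pi * freq * t)).astype(np.float32)

def inject(waveform, trigger, offset=0):
    """Add the trigger into the waveform at a given sample offset."""
    out = waveform.astype(np.float32).copy()
    end = min(len(out), offset + len(trigger))
    out[offset:end] += trigger[: end - offset]
    return np.clip(out, -1.0, 1.0)
```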

Backdoor Attacks against Deep Neural Networks by Personalized Audio Steganography

A novel audio steganography-based personalized trigger backdoor attack that embeds hidden trigger techniques into deep neural networks and provides a new attack direction for speaker verification is proposed.

Invisible Backdoor Attack with Dynamic Triggers against Person Re-identification

A novel backdoor attack on deep ReID under a new all-to-unknown scenario, called Dynamic Triggers Invisible Backdoor Attack (DT-IBA), which can dynamically generate new triggers for any unknown identities.

PBSM: Backdoor attack against Keyword spotting based on pitch boosting and sound masking

This paper designs a backdoor attack scheme for KWS based on Pitch Boosting and Sound Masking, abbreviated as PBSM, and demonstrates that PBSM achieves an average attack success rate close to 90% on three victim models when poisoning less than 1% of the training data.
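The sentence above names PBSM's two ingredients; the sketch below shows one plausible reading of them: raise the utterance's pitch by a few semitones, then overlay a quiet masking tone. The use of librosa and all parameter values are assumptions for illustration, not the paper's exact recipe.

```python
# Loose sketch of a pitch-boosting + sound-masking style trigger.
# Assumptions: librosa is available; the semitone shift, masking frequency,
# masked segment length, and mixing gain are illustrative, not from the paper.
import numpy as np
import librosa

def pbsm_style_trigger(y, sr, n_steps=4, mask_freq=2_000, mask_gain=0.05):
    # Pitch boosting: shift the utterance up by a few semitones.
    boosted = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)
    # Sound masking: add a quiet tone over a short segment of the utterance.
    seg = min(len(boosted), int(0.2 * sr))
    t = np.arange(seg) / sr
    boosted[:seg] += mask_gain * np.sin(2 * np.pi * mask_freq * t)
    return np.clip(boosted, -1.0, 1.0)
```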

Towards Backdoor Attacks against LiDAR Object Detection in Autonomous Driving

A novel backdoor attack strategy is proposed in which the attacker achieves the attack goal by poisoning a small number of point cloud samples, and the attack can be carried out easily using common objects as triggers.

The"Beatrix'' Resurrections: Robust Backdoor Detection via Gram Matrices

This work proposes a novel technique, Beatrix (backdoor detection via Gram matrices), which uses Gram matrices to capture not only feature correlations but also appropriately high-order information of the representations, and identifies poisoned samples by capturing anomalies in activation patterns.
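To make the Gram-matrix idea concrete, here is a much-simplified screening sketch: estimate per-class statistics of flattened Gram matrices on a small trusted clean set, then flag inputs whose Gram features deviate strongly from their predicted class. Beatrix itself uses higher-order Gram features and a more careful deviation measure; the helper names and threshold below are assumptions.

```python
# Simplified Gram-matrix anomaly screening, inspired by (not identical to) Beatrix.
# Assumptions: each sample's intermediate activation is a (channels, spatial)
# array; a small clean set per class is available; the z-score threshold is arbitrary.
import numpy as np

def gram_features(feats):
    """Flatten the upper triangle of the channel-wise Gram matrix into a vector."""
    g = feats @ feats.T                     # (channels, channels) Gram matrix
    iu = np.triu_indices(g.shape[0])
    return g[iu]

def fit_class_stats(clean_feats, clean_labels):
    """Per-class mean/std of Gram features estimated from a small trusted clean set."""
    stats = {}
    for c in np.unique(clean_labels):
        G = np.stack([gram_features(f) for f, y in zip(clean_feats, clean_labels) if y == c])
        stats[c] = (G.mean(axis=0), G.std(axis=0) + 1e-8)
    return stats

def is_suspicious(feats, predicted_class, stats, z_thresh=6.0):
    """Flag a sample whose Gram features sit far from its predicted class's statistics."""
    mu, sd = stats[predicted_class]
    z = np.abs((gram_features(feats) - mu) / sd)
    return z.mean() > z_thresh
```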

Black-box Ownership Verification for Dataset Protection via Backdoor Watermarking

This paper formulates the protection of released datasets as verifying whether they were adopted for training a (suspicious) third-party model, where defenders can only query the model while having no information about its parameters and training details.
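A simplified sketch of how such black-box verification could be run: query the suspicious model with trigger-stamped inputs and test whether the defender-specified target label appears far more often than chance. The query_model callable, the chance-level null hypothesis, and the significance threshold are assumptions; the paper's own hypothesis-testing formulation is more careful than this.

```python
# Sketch of black-box dataset-ownership verification via a backdoor watermark.
# Assumptions: `query_model` is a hypothetical black-box API returning a label;
# under the null hypothesis (model not trained on the watermarked data) the
# target label should appear at roughly chance level 1 / num_classes.
import numpy as np
from scipy.stats import binomtest

def verify_ownership(query_model, triggered_inputs, target_label, num_classes, alpha=0.01):
    preds = [query_model(x) for x in triggered_inputs]   # black-box: labels only
    hits = sum(int(p == target_label) for p in preds)
    result = binomtest(hits, len(preds), p=1.0 / num_classes, alternative="greater")
    return result.pvalue < alpha, result.pvalue
```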

References

Rethinking the Trigger of Backdoor Attack

This paper demonstrates that many backdoor attack paradigms are vulnerable when the trigger in testing images is not consistent with the one used for training, and proposes a transformation-based attack enhancement to improve the robustness of existing attacks towards transformation-based defenses.

Backdoor Learning: A Survey

This article summarizes and categorizes existing backdoor attacks and defenses based on their characteristics, and provides a unified framework for analyzing poisoning-based backdoor attacks, and summarizes widely adopted benchmark datasets.

Label-Consistent Backdoor Attacks

This work leverages adversarial perturbations and generative models to execute efficient, yet label-consistent, backdoor attacks, based on injecting inputs that appear plausible, yet are hard to classify, hence causing the model to rely on the (easier-to-learn) backdoor trigger.

BadNets: Evaluating Backdooring Attacks on Deep Neural Networks

It is shown that outsourced training introduces new security risks: an adversary can create a maliciously trained network (a backdoored neural network, or BadNet) that has state-of-the-art performance on the user's training and validation samples but behaves badly on specific attacker-chosen inputs.
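The BadNets recipe is simple enough to sketch directly: stamp a small, fixed patch into a corner of a few training images and relabel them with the attacker's target class. Patch size, location, and poisoning rate below are illustrative assumptions.

```python
# BadNets-style data poisoning in a few lines (assumed patch size/location/rate).
# Images are assumed to be (H, W) or (H, W, C) arrays with values in [0, 1].
import numpy as np

def stamp_trigger(img, patch=3, value=1.0):
    """Write a small bright square into the bottom-right corner of the image."""
    out = img.copy()
    out[-patch:, -patch:] = value
    return out

def badnets_poison(images, labels, target_label, rate=0.05, seed=0):
    """Stamp the trigger into a random fraction of images and relabel them."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=max(1, int(rate * len(images))), replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label
    return images, labels
```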

Input-Aware Dynamic Backdoor Attack

A novel backdoor attack technique in which the triggers vary from input to input, and an input-aware trigger generator driven by diversity loss is implemented, making backdoor verification impossible.

A Survey on Neural Trojans

This paper surveys a myriad of neural Trojan attack and defense techniques that have been proposed over the last few years and systematizes the above attack and defense approaches.

Speaker Recognition for Multi-speaker Conversations Using X-vectors

It is found that diarization substantially reduces error rate when there are multiple speakers, while maintaining excellent performance on single-speaker recordings.

Voxceleb: Large-scale speaker verification in the wild

Generalized End-to-End Loss for Speaker Verification

A new loss function called generalized end-to-end (GE2E) loss is proposed, which makes the training of speaker verification models more efficient than the previous tuple-based end-to-end (TE2E) loss function.
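For reference, the softmax variant of the GE2E loss can be sketched compactly: each utterance embedding is scored against every speaker centroid (excluding itself from its own speaker's centroid) with a scaled-and-shifted cosine similarity, and a softmax cross-entropy is taken against the true speaker. The NumPy sketch below only computes the loss value; in practice an autograd framework is used, and the learnable scale w and bias b are fixed numbers here for illustration.

```python
# Sketch of the GE2E loss (softmax variant) on a batch of embeddings shaped
# (num_speakers N, utterances_per_speaker M, dim D). Fixed w, b are assumed.
import numpy as np

def ge2e_loss(emb, w=10.0, b=-5.0, eps=1e-8):
    N, M, D = emb.shape
    emb = emb / (np.linalg.norm(emb, axis=-1, keepdims=True) + eps)
    centroids = emb.mean(axis=1)                           # (N, D) per-speaker centroids
    loss = 0.0
    for j in range(N):
        for i in range(M):
            e = emb[j, i]
            # Exclude the utterance itself from its own speaker's centroid.
            own = (emb[j].sum(axis=0) - e) / (M - 1)
            cents = centroids.copy()
            cents[j] = own
            cents = cents / (np.linalg.norm(cents, axis=-1, keepdims=True) + eps)
            sims = w * cents @ e + b                       # scaled, shifted cosine similarities
            loss += -sims[j] + np.log(np.exp(sims).sum())  # softmax cross-entropy vs. speaker j
    return loss / (N * M)
```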

Deep Neural Network Embeddings for Text-Independent Speaker Verification

It is found that the embeddings outperform i-vectors for short speech segments and are competitive on long duration test conditions, which are the best results reported for speaker-discriminative neural networks when trained and tested on publicly available corpora.