Corpus ID: 233289927

A Method to Reveal Speaker Identity in Distributed ASR Training, and How to Counter It

@article{Dang2021AMT,
  title={A Method to Reveal Speaker Identity in Distributed ASR Training, and How to Counter It},
  author={Trung D. Q. Dang and Om Thakkar and Swaroop Indra Ramaswamy and Rajiv Mathews and Peter Chin and Fran{\c{c}}oise Beaufays},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.07815}
}
End-to-end Automatic Speech Recognition (ASR) models are commonly trained over spoken utterances using optimization methods like Stochastic Gradient Descent (SGD). In distributed settings like Federated Learning, model training requires the transmission of gradients over a network. In this work, we design the first method for revealing the identity of the speaker of a training utterance with access only to a gradient. We propose Hessian-Free Gradients Matching, an input reconstruction technique that…
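To see why a single gradient can leak its training input at all, here is a minimal sketch of a well-known observation from the gradient-leakage literature (it is *not* the paper's Hessian-Free Gradients Matching method): for a fully-connected layer with a bias, the layer's input can be recovered exactly by dividing a row of the weight gradient by the corresponding bias gradient. All variable names and the toy squared-error setup below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 6, 3
x_true = rng.normal(size=d_in)        # private training input (e.g. an utterance feature)
W = rng.normal(size=(d_out, d_in))    # shared model weights
b = rng.normal(size=d_out)
y = rng.normal(size=d_out)            # training target

# Forward pass through the layer and gradients of a squared-error loss.
z = W @ x_true + b
dz = z - y                            # dL/dz for L = 0.5 * ||z - y||^2
grad_W = np.outer(dz, x_true)         # dL/dW = (dL/dz) outer x  -- contains x in every row
grad_b = dz                           # dL/db = dL/dz

# An eavesdropper sees only (grad_W, grad_b), as in distributed SGD.
# Any row with a nonzero bias gradient reveals the input exactly:
i = int(np.argmax(np.abs(grad_b)))
x_rec = grad_W[i] / grad_b[i]

print(np.allclose(x_rec, x_true))     # the transmitted gradient leaked the input
```

The paper's setting is harder (ASR models and identifying the *speaker* rather than reconstructing a feature vector), which is why it resorts to an iterative gradients-matching reconstruction instead of this closed-form trick; the sketch only illustrates the underlying threat model.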
