Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning
@article{Nasr2021AdversaryIL,
  title   = {Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning},
  author  = {Milad Nasr and Shuang Song and Abhradeep Thakurta and Nicolas Papernot and Nicholas Carlini},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2101.04535}
}
Differentially private (DP) machine learning allows us to train models on private data while limiting data leakage. DP formalizes this data leakage through a cryptographic game, in which an adversary must predict whether a model was trained on a dataset D or on a dataset D′ that differs in just one example. If observing the training algorithm does not meaningfully increase the adversary's odds of correctly guessing which dataset the model was trained on, then the algorithm is said to be differentially private. […]
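The distinguishing game described in the abstract can be made concrete. Below is a minimal Python sketch (not taken from the paper) of how playing that game yields an empirical lower bound on ε. The `train` and `adversary` callables are hypothetical placeholders for a DP training routine (e.g., DP-SGD) and a membership-inference attacker; the bound uses the standard relation between an adversary's false positive/negative rates and (ε, δ)-DP.

```python
import math
import random


def empirical_epsilon_lower_bound(fpr, fnr, delta=1e-5):
    """Any (eps, delta)-DP mechanism forces an adversary's error rates to
    satisfy fnr + exp(eps) * fpr >= 1 - delta (and symmetrically), so
    observed error rates certify a lower bound on eps."""
    bounds = []
    if fnr > 0 and (1 - delta - fpr) > 0:
        bounds.append(math.log((1 - delta - fpr) / fnr))
    if fpr > 0 and (1 - delta - fnr) > 0:
        bounds.append(math.log((1 - delta - fnr) / fpr))
    return max(bounds) if bounds else float("inf")


def distinguishing_game(train, adversary, D, D_prime, trials=1000):
    """Play the DP distinguishing game: a model is trained on D or D'
    (chosen uniformly at random) and the adversary, seeing only the
    released model, guesses which dataset was used."""
    fp = fn = pos = neg = 0
    for _ in range(trials):
        use_d_prime = random.random() < 0.5
        model = train(D_prime if use_d_prime else D)  # hypothetical DP training routine
        guess = adversary(model)                      # True means "trained on D'"
        if use_d_prime:
            pos += 1
            fn += int(not guess)
        else:
            neg += 1
            fp += int(guess)
    return fp / max(neg, 1), fn / max(pos, 1)
```

For example, an adversary achieving FPR = FNR = 0.05 at δ = 1e-5 certifies ε ≥ ln(0.95 / 0.05) ≈ 2.94; the stronger the instantiated adversary, the tighter the lower bound it establishes on the mechanism's true privacy loss.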
Citations
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models. ArXiv, 2021.
- The Distributed Discrete Gaussian Mechanism for Federated Learning with Secure Aggregation. ArXiv, 2021.