Corpus ID: 231879744

Differential Privacy Dynamics of Langevin Diffusion and Noisy Gradient Descent

@article{Chourasia2021DifferentialPD,
  title={Differential Privacy Dynamics of Langevin Diffusion and Noisy Gradient Descent},
  author={Rishav Chourasia and Jiayuan Ye and R. Shokri},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.05855}
}
We model the dynamics of privacy loss in Langevin diffusion and extend it to the noisy gradient descent algorithm: we compute a tight bound on Rényi differential privacy and the rate of its change throughout the learning process. We prove that the privacy loss converges exponentially fast. This significantly improves the prior privacy analysis of differentially private (stochastic) gradient descent algorithms, where (Rényi) privacy loss constantly increases over the training iterations. Unlike… 
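For readers unfamiliar with the algorithm under analysis, the sketch below shows the basic pattern of noisy (full-batch) gradient descent: each step perturbs the gradient update with Gaussian noise, which is what the Rényi privacy accounting tracks. This is a minimal illustration rather than the authors' exact algorithm; the sqrt(2*eta)*sigma noise scale mirrors a Langevin-style discretization, and the names loss_grad, eta, and sigma are placeholders.

```python
import numpy as np

def noisy_gradient_descent(loss_grad, theta0, eta=0.05, sigma=1.0, n_steps=200, rng=None):
    """Noisy full-batch gradient descent (illustrative sketch).

    loss_grad : callable returning the gradient of the empirical loss at theta
    eta       : step size
    sigma     : noise scale; the sqrt(2 * eta) factor below follows a
                Langevin-style discretization and is an assumption here
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_steps):
        g = loss_grad(theta)
        noise = np.sqrt(2.0 * eta) * sigma * rng.standard_normal(theta.shape)
        theta = theta - eta * g + noise  # perturbed gradient step
    return theta

# Example: ridge-regularized least squares on toy data.
X = np.random.default_rng(0).standard_normal((50, 3))
y = X @ np.array([1.0, -2.0, 0.5])
grad = lambda w: X.T @ (X @ w - y) / len(y) + 0.1 * w
w_priv = noisy_gradient_descent(grad, np.zeros(3), eta=0.05, sigma=0.1)
```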


References

Showing 1-10 of 41 references
Faster Differentially Private Samplers via Rényi Divergence Analysis of Discretized Langevin MCMC
This work establishes rapid convergence for differentially private algorithms under distance measures more suitable for differential privacy, and gives the first results proving convergence in Rényi divergence for smooth, strongly convex f.
Differential Privacy without Sensitivity
This paper extends the classical exponential mechanism, allowing the loss function to have unbounded sensitivity.
Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning
In the practical setting common to many real-world deployments, there is a gap between the lower bounds and the upper bounds provided by the analysis: differential privacy is conservative and adversaries may not be able to leak as much information as suggested by the theoretical bound.
Extracting Training Data from Large Language Models
This paper demonstrates that in such settings, an adversary can perform a training data extraction attack to recover individual training examples by querying the language model, and finds that larger models are more vulnerable than smaller models.
Auditing Differentially Private Machine Learning: How Private is Private SGD?
This work takes a quantitative, empirical approach to understanding the privacy afforded by specific implementations of differentially private algorithms, which the authors believe has the potential to complement and influence analytical work on differential privacy.
On Generalization Error Bounds of Noisy Gradient Methods for Non-Convex Learning
A new framework, termed Bayes-Stability, is developed for proving algorithm-dependent generalization error bounds for learning general non-convex objectives, and it is demonstrated that the resulting data-dependent bounds can distinguish randomly labelled data from normal data.
Privacy Amplification of Iterative Algorithms via Contraction Coefficients
It is demonstrated that differential privacy guarantees of iterative mappings can be determined by a direct application of contraction coefficients derived from strong data processing inequalities for f-divergences, by generalizing Dobrushin's contraction coefficient for total variation distance to an f-divergence known as the Eγ-divergence.
Private stochastic convex optimization: optimal rates in linear time
Two new techniques for deriving DP convex optimization algorithms are described, both achieving the optimal bound on excess loss and using O(min{n, n^2/d}) gradient computations.
Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning
The reasons why deep learning models may leak information about their training data are investigated and new algorithms tailored to the white-box setting are designed by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks.
Privacy Amplification by Mixing and Diffusion Mechanisms
This paper investigates under what conditions stochastic post-processing can amplify the privacy of a mechanism, and gives a series of amplification results in terms of uniform mixing properties of the Markov process defined by the post-processing operator.