Corpus ID: 59336400

Differentially Private Markov Chain Monte Carlo

@inproceedings{Heikkil2019DifferentiallyPM,
  title={Differentially Private Markov Chain Monte Carlo},
  author={Mikko A. Heikkil{\"a} and Joonas J{\"a}lk{\"o} and Onur Dikmen and Antti Honkela},
  booktitle={NeurIPS},
  year={2019}
}
Recent developments in differentially private (DP) machine learning and DP Bayesian learning have enabled learning under strong privacy guarantees for the training data subjects. In this paper, we further extend the applicability of DP Bayesian learning by presenting the first general DP Markov chain Monte Carlo (MCMC) algorithm whose privacy guarantees are not subject to unrealistic assumptions on Markov chain convergence and that is applicable to posterior inference in arbitrary models. Our…
Differentially Private Hamiltonian Monte Carlo
TLDR
A DP variant of HMC is presented, using an MH acceptance test that builds on a recently proposed DP MCMC algorithm called the penalty algorithm and adding noise to the gradient evaluations of HMC; the resulting algorithm is proved to converge to the correct distribution and to be ergodic.
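For intuition, a minimal Python/NumPy sketch of the noisy-gradient ingredient: a leapfrog integrator whose gradient evaluations are each perturbed with Gaussian noise. The standard-normal target, step size, and noise scale are illustrative choices, not taken from the paper, and the privacy-corrected acceptance test is omitted (see the penalty-algorithm sketch under the references below).

# Leapfrog integration with Gaussian noise added to every gradient
# evaluation; a toy stand-in for the noisy gradients in a DP HMC variant.
import numpy as np

rng = np.random.default_rng(0)

def grad_log_target(theta):
    return -theta  # gradient of a standard-normal log density

def noisy_leapfrog(theta, p, n_steps=10, step=0.1, noise_std=0.05):
    # Half step for momentum, full steps for position, noisy gradients.
    p = p + 0.5 * step * (grad_log_target(theta) + noise_std * rng.normal())
    for _ in range(n_steps - 1):
        theta = theta + step * p
        p = p + step * (grad_log_target(theta) + noise_std * rng.normal())
    theta = theta + step * p
    p = p + 0.5 * step * (grad_log_target(theta) + noise_std * rng.normal())
    return theta, p

theta, p = noisy_leapfrog(0.0, rng.normal())
print(theta, p)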
Differentially Private Bayesian Inference for Generalized Linear Models
TLDR
This work proposes a generic noise-aware Bayesian framework to quantify the parameter uncertainty of a given GLM from noisy sufficient statistics, and experimentally demonstrates that the posteriors obtained from the model, while adhering to strong privacy guarantees, are similar to the non-private posteriors.
Gaussian Processes with Differential Privacy
TLDR
This work introduces GPs with DP protection for both model inputs and outputs and proposes a method for hyperparameter learning using a private selection protocol applied to validation set log-likelihood.
Differentially Private Bayesian Neural Networks on Accuracy, Privacy and Reliability
TLDR
This work uses recent developments in Bayesian deep learning and privacy accounting to offer a more precise analysis of the trade-off between privacy and accuracy in BNNs, and proposes three DP-BNNs that characterize the weight uncertainty for the same network architecture in distinct ways.
Differentially Private ERM Based on Data Perturbation
TLDR
This paper proposes a new 'data perturbation'-based (DB) paradigm for DP-ERM: adding random noise to the original training data and achieving (ε, δ)-differential privacy on the final machine learning model, along with the preservation of the original data.
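As a rough illustration of the data-perturbation idea, the sketch below adds Gaussian noise to the training data once and then runs ordinary ridge-regularized least squares on the perturbed copy. The noise scale is a placeholder; the paper calibrates it to (ε, δ), which is omitted here.

# 'Data perturbation' DP-ERM, schematically: perturb the data once,
# then train as usual on the noisy copy.
import numpy as np

rng = np.random.default_rng(0)

n, d = 500, 3
X = rng.normal(size=(n, d))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=n)

noise_scale = 0.3  # placeholder; a real DP analysis derives this
X_priv = X + noise_scale * rng.normal(size=X.shape)
y_priv = y + noise_scale * rng.normal(size=y.shape)

# Ordinary ridge regression on the perturbed data.
lam = 1.0
w = np.linalg.solve(X_priv.T @ X_priv + lam * np.eye(d), X_priv.T @ y_priv)
print(w)  # roughly recovers [2, -1, 0.5], with noise-induced bias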
Privacy-preserving data sharing via probabilistic modeling
TLDR
This work proposes formulating the problem of private data release through probabilistic modeling, and demonstrates empirically, in an epidemiological study, that statistical discoveries can be reliably reproduced from the synthetic data.
Upper Bounds on the Generalization Error of Private Algorithms for Discrete Data
TLDR
This work develops a strategy using this formulation, based on the method of types and typicality, to find explicit upper bounds on the generalization error of stable algorithms, i.e., algorithms that produce similar output hypotheses given similar input datasets.

References

Showing 1–10 of 43 references
Privacy for Free: Posterior Sampling and Stochastic Gradient Monte Carlo
TLDR
It is shown that, under standard assumptions, getting one sample from a posterior distribution is differentially private "for free"; that this sample, as a statistical estimator, is often consistent, near optimal, and computationally tractable; and that these observations lead to an "anytime" algorithm for Bayesian learning under a privacy constraint.
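A minimal sketch of the one-posterior-sample idea, assuming a Bernoulli model with the parameter truncated to [0.1, 0.9] so that each record's log-likelihood magnitude is bounded (B = log 10 here); under the paper's assumptions, releasing a single exact posterior draw is then differentially private with a privacy parameter on the order of 4B. The prior and truncation interval are illustrative choices.

# One-posterior-sample (OPS) release for a truncated Bernoulli model.
import numpy as np

rng = np.random.default_rng(0)

def ops_sample(x, lo=0.1, hi=0.9):
    # Beta(1, 1) prior; truncation keeps |log p(x_i | theta)| <= log 10.
    a, b = 1 + x.sum(), 1 + len(x) - x.sum()
    while True:  # rejection sampling from the truncated Beta posterior
        theta = rng.beta(a, b)
        if lo <= theta <= hi:
            return theta

x = rng.binomial(1, 0.3, size=50)
print(ops_sample(x))  # a single private posterior draw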
Exact MCMC with differentially private moves
TLDR
The penalty algorithm of Ceperley and Dewing (J Chem Phys 110(20):9812–9820, 1999), a Markov chain Monte Carlo algorithm for Bayesian inference, is viewed in the context of data privacy, and its use for private Bayesian inference is advocated.
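For concreteness, a minimal sketch of the penalty correction itself, outside any privacy mechanism: the exact log acceptance ratio for a standard-normal target is deliberately corrupted with Gaussian noise of known variance, and the test subtracts sigma^2/2 so that the chain still leaves the target invariant. The target, proposal scale, and noise level are illustrative, not taken from the paper.

# Metropolis-Hastings with a Gaussian-noisy log acceptance ratio,
# corrected by the Ceperley-Dewing penalty term sigma^2 / 2.
import numpy as np

rng = np.random.default_rng(0)

def log_target(theta):
    return -0.5 * theta**2  # standard-normal stand-in for a log posterior

def penalty_mh(n_iters=50_000, prop_std=1.0, noise_std=0.5):
    theta = 0.0
    samples = np.empty(n_iters)
    for i in range(n_iters):
        prop = theta + prop_std * rng.normal()
        # Exact log ratio, corrupted with N(0, noise_std^2) noise.
        noisy_delta = (log_target(prop) - log_target(theta)
                       + noise_std * rng.normal())
        # Penalty: subtracting noise_std^2 / 2 restores the correct
        # stationary distribution despite the noisy test.
        if np.log(rng.uniform()) < noisy_delta - 0.5 * noise_std**2:
            theta = prop
        samples[i] = theta
    return samples

samples = penalty_mh()
print(samples.mean(), samples.std())  # close to 0 and 1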
On Connecting Stochastic Gradient MCMC and Differential Privacy
TLDR
It is shown that stochastic gradient Markov chain Monte Carlo (SG-MCMC), a class of scalable Bayesian posterior sampling algorithms proposed recently, satisfies strong differential privacy with carefully chosen step sizes.
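Below is a bare-bones sketch of the SGLD update the result concerns, here targeting the posterior mean of a Gaussian with known variance; the privacy argument, which rests on the step size being small enough that the injected Langevin noise also masks any single example, and all privacy accounting are omitted. Data, prior, and hyperparameters are illustrative.

# Stochastic gradient Langevin dynamics for a conjugate Gaussian model:
# N(0, 1) prior on the mean, unit-variance Gaussian likelihood.
import numpy as np

rng = np.random.default_rng(0)

def sgld(x, n_iters=5000, batch_size=10, step=1e-3):
    n = len(x)
    theta = 0.0
    samples = np.empty(n_iters)
    for t in range(n_iters):
        batch = rng.choice(x, size=batch_size, replace=False)
        # Unbiased minibatch estimate of the log-posterior gradient.
        grad = -theta + (n / batch_size) * np.sum(batch - theta)
        # Langevin step: drift plus N(0, step) injected noise.
        theta += 0.5 * step * grad + np.sqrt(step) * rng.normal()
        samples[t] = theta
    return samples

x = rng.normal(loc=2.0, size=100)
print(sgld(x)[2500:].mean())  # near the posterior mean, about 1.98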
Robust and Private Bayesian Inference
TLDR
Bounds on the robustness of the posterior are proved, a posterior sampling mechanism is introduced and shown to be differentially private, and finite-sample bounds for distinguishability-based privacy under a strong adversarial model are provided.
Variational Bayes In Private Settings (VIPS)
TLDR
A general privacy-preserving framework for Variational Bayes (VB), a widely used optimization-based Bayesian inference method, is presented; it respects differential privacy, the gold-standard privacy criterion, and encompasses a large class of probabilistic models called the Conjugate Exponential (CE) family.
Differentially Private Bayesian Inference for Exponential Families
TLDR
This work presents the first method for private Bayesian inference in exponential families that properly accounts for noise introduced by the privacy mechanism, and gives properly calibrated posterior beliefs in the non-asymptotic data regime.
Differentially Private Variational Inference for Non-conjugate Models
TLDR
This work adds differential privacy to doubly stochastic variational inference by clipping and perturbing the gradients, and demonstrates that the method can reach accuracy close to the non-private level under reasonably strong privacy guarantees, clearly improving over previous sampling-based alternatives, especially in the strong-privacy regime.
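A compact sketch of this recipe under simplifying assumptions: reparameterization-gradient (doubly stochastic) variational inference for a one-dimensional conjugate Gaussian model, with per-example likelihood gradients clipped individually and their sum perturbed with Gaussian noise. Hyperparameters are illustrative and no privacy accounting is performed.

# DP-style doubly stochastic VI: q(theta) = N(mu, s^2) fit to the
# posterior of a unit-variance Gaussian mean with an N(0, 1) prior.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.5, size=100)  # synthetic data

mu, log_s = 0.0, 0.0
lr, clip, noise_std = 0.005, 1.0, 0.1
for t in range(4000):
    eps = rng.normal()
    s = np.exp(log_s)
    theta = mu + s * eps  # reparameterized draw from q
    # Per-example likelihood gradients, clipped one by one, then the
    # sum is perturbed (clipping slightly biases the gradient).
    per_ex = x - theta
    per_ex = per_ex / np.maximum(1.0, np.abs(per_ex) / clip)
    dlogp = -theta + per_ex.sum() + noise_std * clip * rng.normal()
    # Pathwise ELBO gradients; the entropy term contributes the +1.
    mu += lr * dlogp
    log_s += lr * (dlogp * s * eps + 1.0)

print(mu, np.exp(log_s))  # roughly the posterior mean and std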
Differential Privacy for Bayesian Inference through Posterior Sampling
TLDR
This work defines differential privacy over arbitrary data set metrics, outcome spaces and distribution families, and proves bounds on the sensitivity of the posterior to the data, which delivers a measure of robustness.
The Differential Privacy of Bayesian Inference
Differential privacy is one recent framework for analyzing and quantifying the amount of privacy lost when data is released. Meanwhile, multiple imputation is an existing Bayesian-inference-based…
Stochastic gradient descent with differentially private updates
TLDR
This paper derives differentially private versions of stochastic gradient descent, and tests them empirically to show that standard SGD experiences high variability due to differential privacy, but a moderate increase in the batch size can improve performance significantly.
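As a toy sketch of that recipe (illustrative hyperparameters, no privacy accounting): clip each per-example gradient to a fixed L2 norm, then add Gaussian noise calibrated to that norm before the averaged update, here for logistic regression on synthetic data.

# DP-SGD for logistic regression: per-example clipping plus Gaussian
# noise on the summed minibatch gradient.
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd(X, y, epochs=20, batch_size=50, lr=0.1,
           clip_norm=1.0, noise_mult=1.0):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), n // batch_size):
            # Per-example logistic-loss gradients.
            p = 1.0 / (1.0 + np.exp(-X[idx] @ w))
            grads = (p - y[idx])[:, None] * X[idx]
            # Clip each example's gradient to L2 norm clip_norm.
            norms = np.linalg.norm(grads, axis=1, keepdims=True)
            grads = grads / np.maximum(1.0, norms / clip_norm)
            # Sum, add noise scaled to the clip norm, then average.
            noise = noise_mult * clip_norm * rng.normal(size=d)
            w -= lr * (grads.sum(axis=0) + noise) / len(idx)
    return w

X = rng.normal(size=(1000, 5))
y = (X @ np.array([1.0, -1.0, 0.5, 0.0, 2.0]) > 0).astype(float)
print(dp_sgd(X, y))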