Robust and Private Bayesian Inference

Christos Dimitrakakis, Blaine Nelson, Aikaterini Mitrokotsa and Benjamin I. P. Rubinstein
We examine the robustness and privacy of Bayesian inference, under assumptions on the prior, and with no modifications to the Bayesian framework. First, we generalise the concept of differential privacy to arbitrary dataset distances, outcome spaces and distribution families. We then prove bounds on the robustness of the posterior, introduce a posterior sampling mechanism, show that it is differentially private, and provide finite-sample bounds for distinguishability-based privacy under a strong adversarial model.

Differential Privacy in a Bayesian Setting through Posterior Sampling

Bounds on the robustness of the posterior are proved, a posterior sampling mechanism is introduced and shown to be differentially private, and finite-sample bounds for distinguishability-based privacy under a strong adversarial model are provided.

On the Differential Privacy of Bayesian Inference

This work studies how to communicate findings of Bayesian inference to third parties, while preserving the strong guarantee of differential privacy, with a novel focus on the influence of graph structure on privacy.

A New Bound for Privacy Loss from Bayesian Posterior Sampling

The new bound on privacy loss is applied to release differentially private synthetic data from Bayesian models in several experiments, and the synthetic data are shown to have improved utility compared to data generated by explicitly designed randomization mechanisms that privatize posterior distributions.

Differentially Private Bayesian Inference for Generalized Linear Models

This work proposes a generic noise-aware Bayesian framework to quantify the parameter uncertainty for a GLM at hand, given noisy sufficient statistics, and experimentally demonstrates that the posteriors obtained from the model, while adhering to strong privacy guarantees, are similar to the non-private posteriors.

Bayesian Differential Privacy through Posterior Sampling

The answer is affirmative: under certain conditions on the prior, sampling from the posterior distribution can be used to achieve a desired level of privacy and utility, and bounds on the sensitivity of the posterior to the data are proved, which gives a measure of robustness.
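
The posterior sampling idea can be sketched for a Beta-Bernoulli model: releasing a single posterior draw is itself the privacy mechanism, and the prior's concentration governs the privacy level. A minimal sketch (the function and parameter names are illustrative, not from the paper):

```python
import random

def one_posterior_sample(data, alpha=1.0, beta=1.0):
    """Release a single draw from the Beta posterior over a Bernoulli
    parameter. Under the paper's conditions on the prior (bounded
    likelihood ratios), one such draw is differentially private; a more
    concentrated prior yields stronger privacy but lower utility."""
    successes = sum(data)
    failures = len(data) - successes
    # Conjugacy: Beta(alpha, beta) prior -> Beta(alpha + s, beta + f) posterior.
    return random.betavariate(alpha + successes, beta + failures)

random.seed(0)
data = [1, 0, 1, 1, 0, 1, 1, 1]  # sensitive binary records
theta = one_posterior_sample(data, alpha=2.0, beta=2.0)
```

The released value is a plausible parameter rather than a noisy statistic, so downstream consumers can treat it as an ordinary Bayesian estimate.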

Differential Privacy and Private Bayesian Inference.

The question of whether B can select a prior distribution so that a computationally unbounded A cannot obtain private information from queries is formalised and answered.

Privacy for Free: Posterior Sampling and Stochastic Gradient Monte Carlo

It is shown that under standard assumptions, getting one sample from a posterior distribution is differentially private "for free"; that this sample, as a statistical estimator, is often consistent, near-optimal, and computationally tractable; and that these observations lead to an "anytime" algorithm for Bayesian learning under privacy constraints.
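
The stochastic gradient Monte Carlo side of the paper can be illustrated with one SGLD update for the mean of a Gaussian with known variance: the Gaussian noise injected at each step both drives posterior sampling and, per the paper's argument, supplies the privacy noise. A sketch under illustrative model and step-size choices:

```python
import math
import random

def sgld_step(theta, minibatch, n_total, step=1e-2, prior_var=10.0, lik_var=1.0):
    # Gradient of the log-prior N(0, prior_var) at theta.
    grad_log_prior = -theta / prior_var
    # Minibatch gradient of the log-likelihood, scaled to the full dataset.
    grad_log_lik = sum((x - theta) / lik_var for x in minibatch)
    grad = grad_log_prior + (n_total / len(minibatch)) * grad_log_lik
    # Injected Gaussian noise: drives the sampling and supplies privacy noise.
    noise = random.gauss(0.0, math.sqrt(step))
    return theta + 0.5 * step * grad + noise

random.seed(1)
data = [0.9, 1.1, 1.3, 0.7, 1.0, 1.2]  # sensitive observations
theta = 0.0
for _ in range(200):
    batch = random.sample(data, 3)
    theta = sgld_step(theta, batch, n_total=len(data))
```

Running the loop longer tightens the sample around the posterior, which is what makes the "anytime" framing natural: the algorithm can be stopped at any step and still emits a (noisy) posterior sample.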

Differentially Private Bayesian Inference for Exponential Families

This work presents the first method for private Bayesian inference in exponential families that properly accounts for noise introduced by the privacy mechanism, and gives properly calibrated posterior beliefs in the non-asymptotic data regime.

Improved Accounting for Differentially Private Learning

It is demonstrated that the proposed generic privacy accountant is able to achieve state-of-the-art estimates of DP guarantees and can be applied to new areas like variational inference.

Privacy-Preserving Parametric Inference: A Case for Robust Statistics

It is demonstrated that differential privacy is a weaker stability requirement than infinitesimal robustness, and it is shown that robust M-estimators can be easily randomized to guarantee both differential privacy and robustness to contaminated data.

Probabilistic Inference and Differential Privacy

It is found that probabilistic inference can improve accuracy, integrate multiple observations, measure uncertainty, and even provide posterior distributions over quantities that were not directly measured.

Differentially-private learning and information theory

This work examines differential privacy in the PAC-Bayesian framework and establishes a connection between the exponential mechanism, the most general differentially private mechanism, and the Gibbs estimator encountered in PAC-Bayesian bounds.

Local privacy and statistical minimax rates

Bounds on information-theoretic quantities that influence estimation rates, expressed as a function of the amount of privacy preserved, can be viewed as quantitative data-processing inequalities that allow precise characterization of statistical rates under local privacy constraints.
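
The canonical mechanism in this local model is randomized response: each user perturbs their own bit before it leaves their device, and the aggregator debiases the noisy reports. A minimal sketch (the epsilon value and helper names are illustrative):

```python
import math
import random

def randomized_response(bit, epsilon):
    """Report the true bit with probability e^eps / (e^eps + 1),
    the flipped bit otherwise; randomization happens on the user's side."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def debias_mean(reports, epsilon):
    """Unbiased estimate of the true proportion from the noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    noisy_mean = sum(reports) / len(reports)
    return (noisy_mean - (1.0 - p)) / (2.0 * p - 1.0)

random.seed(0)
true_bits = [1] * 700 + [0] * 300  # true proportion: 0.7
reports = [randomized_response(b, epsilon=1.0) for b in true_bits]
estimate = debias_mean(reports, epsilon=1.0)
```

The extra variance introduced by the flipping is exactly the kind of cost these minimax results quantify: the estimation rate degrades by a factor depending on epsilon.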

Differential privacy and robust statistics

We show by means of several examples that robust statistical estimators present an excellent starting point for differentially private estimators. Our algorithms use a new paradigm for differentially private mechanisms.

Local Privacy, Data Processing Inequalities, and Statistical Minimax Rates

This work proves bounds on information-theoretic quantities, including mutual information and Kullback-Leibler divergence, that depend on the privacy guarantees, and provides a treatment of several canonical families of problems: mean estimation, parameter estimation in fixed-design regression, multinomial probability estimation, and nonparametric density estimation.

Convergence Rates for Differentially Private Statistical Estimation

This work studies differentially-private statistical estimation, and shows upper and lower bounds on the convergence rates of differentially private approximations to statistical estimators, and reveals a formal connection between differential privacy and the notion of Gross Error Sensitivity in robust statistics.

Differentially Private Empirical Risk Minimization

This work proposes a new method, objective perturbation, for privacy-preserving machine learning algorithm design, and shows both theoretically and empirically that this method is superior to the previous state of the art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.
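
A minimal sketch of objective perturbation for L2-regularized logistic regression, assuming feature rows with norm at most 1: draw a noise vector once, add its inner product with the parameters to the objective, then optimize as usual. The paper's exact noise calibration involves additional constants and a regularization correction, which are simplified here:

```python
import numpy as np

def objective_perturbation_logreg(X, y, epsilon, lam=0.1, steps=500, lr=0.5):
    """Objective perturbation sketch: draw a noise vector b once, add
    (b . theta)/n to the regularized objective, then minimize as usual.
    Assumes rows of X have norm <= 1 and labels are +/-1."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    # Noise with density proportional to exp(-epsilon/2 * ||b||):
    # norm ~ Gamma(d, 2/epsilon), direction uniform on the sphere.
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    b = rng.gamma(shape=d, scale=2.0 / epsilon) * direction
    theta = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ theta)
        # Gradient of the average logistic loss.
        grad_loss = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
        theta -= lr * (grad_loss + lam * theta + b / n)
    return theta

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1))[:, None]  # enforce row norm <= 1
y = np.sign(X @ np.array([1.0, -1.0]) + 1e-9)             # labels in {-1, +1}
theta = objective_perturbation_logreg(X, y, epsilon=1.0)
```

Because the noise enters the objective rather than the released output, the optimizer can partially compensate for it, which is the intuition behind the method's improved privacy-utility tradeoff.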

Calibrating Noise to Sensitivity in Private Data Analysis

The study is extended to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f, which is the amount that any single argument to f can change its output.
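The resulting Laplace mechanism is simple to state in code: add noise with scale sensitivity/epsilon to the true answer. A minimal sketch, sampling the Laplace noise by inverse CDF since the standard library has no Laplace sampler:

```python
import math
import random

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """Release f(D) + Lap(sensitivity / epsilon). The scale is calibrated
    to the global sensitivity of f: the largest change in f's output
    caused by adding or removing any single record."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_answer + noise

random.seed(0)
# A counting query has sensitivity 1: one record changes the count by at most 1.
private_count = laplace_mechanism(true_answer=42, sensitivity=1.0, epsilon=0.5)
```

Lowering epsilon widens the noise and strengthens privacy; functions with higher sensitivity need proportionally more noise.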

Broadening the Scope of Differential Privacy Using Metrics

Differential privacy is one of the most prominent frameworks for disclosure prevention in statistical databases, ensuring that sensitive information about individuals cannot easily be inferred from answers to aggregate queries.

Mechanism Design via Differential Privacy

It is shown that the recent notion of differential privacy, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie.
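
The mechanism behind this result is the exponential mechanism, which selects an outcome with probability proportional to exp(epsilon * quality / (2 * sensitivity)). A minimal sketch on a toy voting example (helper names are illustrative):

```python
import math
import random

def exponential_mechanism(candidates, quality, data, epsilon, sensitivity=1.0):
    """Pick a candidate with probability proportional to
    exp(epsilon * quality / (2 * sensitivity)): better outcomes are
    exponentially preferred, yet no single record sways the choice much."""
    scores = [quality(data, c) for c in candidates]
    m = max(scores)  # subtract the max score for numerical stability
    weights = [math.exp(epsilon * (s - m) / (2.0 * sensitivity)) for s in scores]
    return random.choices(candidates, weights=weights, k=1)[0]

random.seed(0)
votes = ["a", "a", "a", "b", "c", "a", "b", "a"]
winner = exponential_mechanism(
    candidates=["a", "b", "c"],
    quality=lambda data, c: data.count(c),  # vote count; sensitivity 1
    data=votes,
    epsilon=2.0,
)
```

Because changing one vote shifts each score by at most the sensitivity, no single participant can move the selection probabilities by more than a factor of e^epsilon, which is what limits the incentive to misreport.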