Corpus ID: 232240118

A Central Limit Theorem for Differentially Private Query Answering

@inproceedings{Dong2021ACL,
  title={A Central Limit Theorem for Differentially Private Query Answering},
  author={Jinshuo Dong and Weijie J. Su and Linjun Zhang},
  booktitle={Neural Information Processing Systems},
  year={2021}
}
Perhaps the single most important use case for differential privacy is to privately answer numerical queries, which is usually achieved by adding noise to the answer vector. The central question, therefore, is to understand which noise distribution optimizes the privacy-accuracy trade-off, especially when the dimension of the answer vector is high. Accordingly, an extensive literature has been dedicated to this question, and the upper and lower bounds have been successfully matched up to constant…
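As a concrete point of reference (not code from the paper), here is a minimal sketch of the standard Gaussian noise-adding mechanism for a d-dimensional numerical query, using the classical calibration σ = Δ₂·sqrt(2 ln(1.25/δ))/ε, which suffices for (ε, δ)-DP when ε ≤ 1; the function and parameter names are illustrative.

```python
import numpy as np

def gaussian_mechanism(answer, l2_sensitivity, epsilon, delta, rng=None):
    """Privatize a numerical query answer by adding i.i.d. Gaussian noise.

    Classical calibration: sigma = l2_sensitivity * sqrt(2 ln(1.25/delta)) / epsilon,
    which gives (epsilon, delta)-DP for epsilon <= 1 (Dwork and Roth, 2014).
    """
    rng = rng or np.random.default_rng()
    sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return np.asarray(answer, dtype=float) + rng.normal(0.0, sigma, size=np.shape(answer))

# Example: 100 counting queries answered together; if one individual can change
# each count by at most 1, the l2-sensitivity of the answer vector is sqrt(100) = 10.
true_answers = np.arange(100, dtype=float)
noisy = gaussian_mechanism(true_answers, l2_sensitivity=10.0, epsilon=1.0, delta=1e-5)
```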

Citations

Differential privacy for symmetric log-concave mechanisms

This work provides a necessary and sufficient condition for (ε, δ)-differential privacy for all symmetric and log-concave noise densities and demonstrates that this can yield significantly lower mean squared errors than those incurred by the currently used Laplace and Gaussian mechanisms.

Log-Concave and Multivariate Canonical Noise Distributions for Differential Privacy

This paper shows that pure ε-DP cannot be decomposed in either way: there is neither a log-concave CND nor any multivariate CND for ε-DP, whereas Gaussian-DP, (0, δ)-DP, and Laplace-DP each have both log-concave and multivariate CNDs.

Seconder of the vote of thanks to Dong et al. and contribution to the Discussion of ‘Gaussian Differential Privacy’

  • Marco Avella-Medina
  • Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2022
A canonical single-parameter family of privacy notions within the f-DP class, referred to as ‘Gaussian differential privacy’ (GDP), is defined via hypothesis testing of two shifted Gaussian distributions and serves as the focal privacy definition among the family of f-DP guarantees.
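For reference, the definition summarized above fits in one line: a mechanism is μ-GDP if, for every pair of neighbouring datasets, distinguishing its outputs is at least as hard (in the hypothesis-testing sense) as testing N(0, 1) against N(μ, 1), whose trade-off function is

  G_μ(α) = Φ(Φ⁻¹(1 − α) − μ),

where Φ is the standard normal CDF. (This restates the definition from Dong, Roth and Su; it is not taken verbatim from the discussion piece.)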

Proposer of the vote of thanks to Dong et al. and contribution to the Discussion of ‘Gaussian Differential Privacy’

Gaussian differential privacy

The privacy guarantees of any hypothesis-testing-based definition of privacy (including the original definition of differential privacy) converge to GDP in the limit under composition, and a Berry–Esseen-style version of the central limit theorem is proved, giving a computationally inexpensive tool for tractably analysing the exact composition of private algorithms.
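Since composition is the headline feature of GDP, a small sketch may help: by the composition theorem in that paper, the n-fold (even adaptive) composition of μ_i-GDP mechanisms is μ-GDP with μ = sqrt(μ_1² + … + μ_n²). The function name below is illustrative.

```python
import math

def compose_gdp(mus):
    """Exact composition for GDP: running n mechanisms that are mu_i-GDP
    (even adaptively) yields a mu-GDP mechanism with mu = sqrt(sum mu_i^2)."""
    return math.sqrt(sum(mu * mu for mu in mus))

# 100 adaptive steps, each 0.1-GDP, compose to 1.0-GDP overall.
print(compose_gdp([0.1] * 100))  # ≈ 1.0
```

Because the rule is a simple closed form, a per-step privacy budget can be planned in advance, which is the "computationally inexpensive tool" the summary refers to.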

Rejoinder: Gaussian Differential Privacy

This rejoinder addresses two broad issues that cover most comments made in the discussion, discusses some theoretical aspects of the authors' work, and comments on how this work might impact the theoretical foundation of privacy-preserving data analysis.

References

Showing 1–10 of 37 references

Fingerprinting codes and the price of approximate differential privacy

The results rely on the existence of short fingerprinting codes (Boneh and Shaw, CRYPTO'95; Tardos, STOC'03), which are closely connected to the sample complexity of differentially private data release.

Unconditional differentially private mechanisms for linear queries

This work gives a mechanism that works unconditionally, and also gives an improved O(log^2 d) approximation to the expected ℓ_2^2 error, via a symmetrization argument showing that there always exists a near-optimal differentially private mechanism that adds noise independent of the input database.

The power of factorization mechanisms in local and central differential privacy

New characterizations of the sample complexity of answering linear queries in the local and central models of differential privacy show that a particular factorization mechanism is approximately optimal, and the optimal sample complexity is bounded from above and below by well-studied factorization norms of a matrix associated with the queries.

The Optimal Noise-Adding Mechanism in Differential Privacy

The fundamental tradeoff between privacy and utility in differential privacy is characterized, and the optimal ε-differentially private mechanism for a single real-valued query function is derived under a very general utility-maximization (or cost-minimization) framework.

The Geometry of Differential Privacy: The Small Database and Approximate Cases

This work studies trade-offs between accuracy and privacy in the context of linear queries over histograms and gives an O(log^2 d) approximation guarantee for the case of (ε, δ)-approximate differential privacy.

Calibrating Noise to Sensitivity in Private Data Analysis

The study is extended to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f, which is the amount by which any single argument to f can change its output.
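The calibration described above is simple enough to sketch. Below is a minimal version of the Laplace mechanism introduced in that paper (noise scale Δ₁/ε, yielding pure ε-DP); the names are illustrative.

```python
import numpy as np

def laplace_mechanism(answer, l1_sensitivity, epsilon, rng=None):
    """Add Laplace noise of scale l1_sensitivity / epsilon to each coordinate;
    this yields pure epsilon-differential privacy."""
    rng = rng or np.random.default_rng()
    scale = l1_sensitivity / epsilon
    return np.asarray(answer, dtype=float) + rng.laplace(0.0, scale, size=np.shape(answer))

# Example: one counting query (sensitivity 1) released at epsilon = 0.5.
noisy_count = laplace_mechanism(42.0, l1_sensitivity=1.0, epsilon=0.5)
```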

The optimal mechanism in differential privacy

This work derives the optimal ε-differentially private mechanism for a single real-valued query function under a very general utility-maximization (or cost-minimization) framework and concludes that the gains of the staircase mechanism are more pronounced in the moderate to low privacy regimes.

Optimal Differential Privacy Composition for Exponential Mechanisms and the Cost of Adaptivity

It is shown that the privacy loss differs when the exponential mechanism is chosen adaptively versus non-adaptively, and the best previously known upper bounds for adaptive composition of exponential mechanisms are improved, with efficiently computable formulations.

Tight Analysis of Privacy and Utility Tradeoff in Approximate Differential Privacy

It is shown that the multiplicative gap between the lower and upper bounds goes to zero in various high-privacy regimes, proving the tightness of the bounds and thus establishing the optimality of the truncated Laplacian mechanism.
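As a concrete illustration of the mechanism being analysed, here is a hedged sketch of truncated-Laplacian noise generated by rejection sampling. The scale λ = Δ/ε and truncation point A = λ ln(1 + (e^ε − 1)/(2δ)) are stated as an assumption about the paper's parameterization and should be checked against it; the names are illustrative.

```python
import numpy as np

def truncated_laplace_noise(sensitivity, epsilon, delta, rng=None):
    """One draw of truncated-Laplacian noise, via rejection sampling.

    Assumed parameterization (check against Geng et al.):
        lam = sensitivity / epsilon
        A   = lam * ln(1 + (e^epsilon - 1) / (2 * delta))
    A Laplace(0, lam) draw conditioned on |x| <= A has exactly the
    truncated-Laplacian density on [-A, A], so rejection sampling is exact.
    """
    rng = rng or np.random.default_rng()
    lam = sensitivity / epsilon
    A = lam * np.log1p((np.exp(epsilon) - 1.0) / (2.0 * delta))
    while True:
        x = rng.laplace(0.0, lam)
        if abs(x) <= A:
            return x
```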

Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds

This work provides new algorithms and matching lower bounds for differentially private convex empirical risk minimization assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded.