Corpus ID: 247084157

Debugging Differential Privacy: A Case Study for Privacy Auditing

@article{Tramer2022DebuggingDP,
  title={Debugging Differential Privacy: A Case Study for Privacy Auditing},
  author={Florian Tram{\`e}r and A. Terzis and Thomas Steinke and Shuang Song and Matthew Jagielski and Nicholas Carlini},
  journal={ArXiv},
  year={2022},
  volume={abs/2202.12219}
}
Differential Privacy can provide provable privacy guarantees for training data in machine learning. However, the presence of proofs does not preclude the presence of errors. Inspired by recent advances in auditing which have been used for estimating lower bounds on differentially private algorithms, here we show that auditing can also be used to find flaws in (purportedly) differentially private schemes. In this case study, we audit a recent open source implementation of a differentially private… 
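The style of audit described here can be made concrete: run the training pipeline many times with and without a worst-case "canary" example, run a distinguishing attack on the resulting models, bound the attack's true- and false-positive rates with confidence intervals, and invert the (ε, δ)-DP hypothesis-testing inequality TPR ≤ e^ε·FPR + δ. The sketch below is a hypothetical illustration of that recipe; the function names and the scipy-based interval computation are assumptions, not the authors' code.

```python
# Minimal sketch (illustrative, not the paper's code): turn membership-inference
# attack outcomes into a statistical lower bound on the privacy parameter epsilon.
import numpy as np
from scipy.stats import beta


def clopper_pearson(successes, trials, alpha=0.05):
    """Two-sided (1 - alpha) Clopper-Pearson confidence interval for a rate."""
    lo = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, trials - successes + 1)
    hi = 1.0 if successes == trials else beta.ppf(1 - alpha / 2, successes + 1, trials - successes)
    return lo, hi


def empirical_epsilon_lower_bound(tp, n_in, fp, n_out, delta=1e-5, alpha=0.05):
    """Lower-bound epsilon from attack outcomes.

    tp / n_in : correct "member" guesses over trials where the canary WAS included.
    fp / n_out: incorrect "member" guesses over trials where it was NOT included.
    Any (eps, delta)-DP mechanism forces TPR <= exp(eps) * FPR + delta, so a
    confidently high TPR together with a confidently low FPR refutes small eps.
    """
    tpr_lo, _ = clopper_pearson(tp, n_in, alpha)
    _, fpr_hi = clopper_pearson(fp, n_out, alpha)
    if tpr_lo <= delta or fpr_hi <= 0:
        return 0.0  # attack not strong enough to certify any positive epsilon
    return float(np.log((tpr_lo - delta) / fpr_hi))


# Example: an attack that flags the canary in 950/1000 "in" runs while raising
# false alarms in only 50/1000 "out" runs refutes epsilon values below ~2.7.
print(empirical_epsilon_lower_bound(tp=950, n_in=1000, fp=50, n_out=1000))
```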

Citations

Unlocking High-Accuracy Differentially Private Image Classification through Scale
TLDR
It is demonstrated that DP-SGD on over-parameterized models can perform significantly better than previously thought, which the authors believe is a step towards closing the accuracy gap between private and non-private image classification benchmarks.

References

Auditing Differentially Private Machine Learning: How Private is Private SGD?
TLDR
This work takes a quantitative, empirical approach to understanding the privacy afforded by specific implementations of differentially private algorithms, an approach the authors believe can complement and influence analytical work on differential privacy.
Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning
TLDR
In the practical setting common to many real-world deployments, there is a gap between the lower bounds and the upper bounds provided by the analysis: differential privacy is conservative and adversaries may not be able to leak as much information as suggested by the theoretical bound.
The Composition Theorem for Differential Privacy
TLDR
This paper proves an upper bound on the overall privacy level and constructs a sequence of privatization mechanisms that achieves this bound, by introducing an operational interpretation of differential privacy and using a data processing inequality.
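For orientation, the referenced paper gives the exact (optimal) composition curve; the weaker advanced composition bound it improves upon can be stated compactly. The block below is a standard statement of that earlier bound, not the optimal theorem itself.

```latex
% Advanced composition (the bound the referenced paper tightens):
% the k-fold adaptive composition of (\varepsilon, \delta)-DP mechanisms is
% (\varepsilon', k\delta + \delta')-DP for any \delta' > 0, where
\[
  \varepsilon' \;=\; \sqrt{2k \ln(1/\delta')}\,\varepsilon \;+\; k\,\varepsilon\,(e^{\varepsilon} - 1).
\]
```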
Deep Learning with Differential Privacy
TLDR
This work develops new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy, and demonstrates that deep neural networks can be trained with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
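The core DP-SGD step summarized above (clip each per-example gradient, add calibrated Gaussian noise, then update) can be illustrated with a short sketch. The toy squared-error model, function name, and default constants below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of one DP-SGD step (illustrative): clip each per-example
# gradient to L2 norm C, sum, add Gaussian noise of scale sigma * C, average.
import numpy as np


def dp_sgd_step(w, X, y, clip_norm=1.0, noise_multiplier=1.1, lr=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Per-example gradients of a squared-error loss for a linear model.
    residuals = X @ w - y                       # shape (n,)
    per_example_grads = residuals[:, None] * X  # shape (n, d)

    # Clip each example's gradient to norm at most clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale

    # Sum, add calibrated Gaussian noise, then average over the batch.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=w.shape
    )
    return w - lr * noisy_sum / len(X)


# Toy usage on random data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(64, 5)), rng.normal(size=64)
w = dp_sgd_step(np.zeros(5), X, y, rng=rng)
```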
The Algorithmic Foundations of Differential Privacy
TLDR
The preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example.
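As one concrete instance of the "fundamental techniques" the monograph covers, the Laplace mechanism answers a sensitivity-1 counting query with noise of scale 1/ε. A minimal sketch, with illustrative names:

```python
# Minimal sketch of the Laplace mechanism for a counting query (sensitivity 1):
# releasing count + Lap(1/eps) satisfies eps-differential privacy.
import numpy as np


def laplace_count(values, predicate, eps, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / eps)


# How many records exceed a threshold, released with eps = 0.5.
print(laplace_count(range(100), lambda v: v > 42, eps=0.5))
```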
Backpropagation Clipping for Deep Learning with Differential Privacy
TLDR
Backpropagation clipping, a novel variant of differentially private stochastic gradient descent (DP-SGD) for privacy-preserving deep learning, is presented; it clips each trainable layer's inputs and upstream gradients to ensure bounded global sensitivity for the layer's gradient.
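The per-layer clipping described in this summary rests on a simple identity: for a linear layer, the per-example weight gradient is the outer product of the upstream gradient and the layer input, so its Frobenius norm is the product of the two vector norms. The sketch below is illustrative (not the paper's code) and only demonstrates that single-example bound.

```python
# Minimal sketch (illustrative): clipping a linear layer's input x and its
# upstream gradient g bounds the per-example weight gradient g x^T, since
# ||g x^T||_F = ||g||_2 * ||x||_2.
import numpy as np


def clip_to_norm(v, max_norm):
    norm = np.linalg.norm(v)
    return v if norm <= max_norm else v * (max_norm / norm)


rng = np.random.default_rng(0)
x = clip_to_norm(rng.normal(size=128), max_norm=1.0)   # layer input
g = clip_to_norm(rng.normal(size=64), max_norm=0.5)    # upstream gradient
per_example_grad = np.outer(g, x)                      # layer weight gradient

# The Frobenius norm is bounded by 1.0 * 0.5, so per-layer noise can be
# calibrated to this sensitivity.
print(np.linalg.norm(per_example_grad), "<=", 1.0 * 0.5)
```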
Variational Bayes In Private Settings (VIPS)
TLDR
This work introduces a general privacy-preserving framework for Variational Bayes (VB), a widely used optimization-based Bayesian inference method. The framework respects differential privacy, the gold-standard privacy criterion, and encompasses a large class of probabilistic models, the Conjugate Exponential (CE) family.
TensorFlow Privacy issue #153: Incorrect comparison between privacy amplification by iteration and DPSGD
  https://github.com/tensorflow/privacy/issues/153, 2020