Automatic Discovery of Privacy–Utility Pareto Fronts

@article{Avent2020AutomaticDO,
  title={Automatic Discovery of Privacy--Utility Pareto Fronts},
  author={Brendan Avent and Javier I. Gonz{\'a}lez and Tom Diethe and Andrei Paleyes and Borja Balle},
  journal={Proceedings on Privacy Enhancing Technologies},
  year={2020},
  volume={2020},
  pages={5--23}
}
Abstract

Differential privacy is a mathematical framework for privacy-preserving data analysis. Changing the hyperparameters of a differentially private algorithm allows one to trade off privacy and utility in a principled way. Quantifying this trade-off in advance is essential to decision-makers tasked with deciding how much privacy can be provided in a particular application while maintaining acceptable utility. Analytical utility guarantees offer a rigorous tool to reason about this trade-off…
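The abstract's central point, that a differentially private algorithm's hyperparameters trade privacy against utility, can be illustrated with the Laplace mechanism, the canonical differentially private primitive. The sketch below is not from the paper; the function name and toy data are illustrative. It sweeps the privacy parameter epsilon and reports the resulting error of a private mean estimate, tracing out one slice of a privacy-utility curve:

```python
import numpy as np

def private_mean(data, epsilon, rng, lo=0.0, hi=1.0):
    """Differentially private mean via the Laplace mechanism.

    The mean of n values clipped to [lo, hi] has sensitivity (hi - lo) / n,
    so Laplace noise with scale sensitivity / epsilon gives epsilon-DP.
    """
    clipped = np.clip(data, lo, hi)
    sensitivity = (hi - lo) / len(clipped)
    return clipped.mean() + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
data = rng.uniform(size=1000)

# Sweep the privacy hyperparameter: smaller epsilon means stronger privacy
# but a noisier (less useful) estimate -- the trade-off the paper maps out
# automatically with Bayesian optimization.
for eps in (0.01, 0.1, 1.0):
    errs = [abs(private_mean(data, eps, rng) - data.mean()) for _ in range(200)]
    print(f"epsilon={eps}: mean absolute error {np.mean(errs):.4f}")
```

In a real algorithm such as DP-SGD there are several interacting hyperparameters (noise multiplier, clipping norm, batch size), which is what makes the Pareto front non-trivial to map by hand.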

Citations

Partial sensitivity analysis in differential privacy
TLDR
This work extends the view of individual RDP by introducing a new concept, which the authors call partial sensitivity, leveraging symbolic automatic differentiation to determine the influence of each input feature on the gradient norm of a function.
Tempered Sigmoid Activations for Deep Learning with Differential Privacy
TLDR
This paper is the first to observe that the choice of activation function is central to bounding the sensitivity of privacy-preserving deep learning, and demonstrates analytically and experimentally how a general family of bounded activation functions, the tempered sigmoids, consistently outperform unbounded activation functions like ReLU.
Balancing Utility and Fairness against Privacy in Medical Data
TLDR
The effects of two de-identification techniques, k-anonymity and differential privacy, on both utility and fairness are investigated and two measures to calculate the trade-off between privacy-utility and privacy-fairness are proposed.
On the Importance of Architecture and Feature Selection in Differentially Private Machine Learning
TLDR
This work systematically studies a pitfall in typical differentially private machine learning pipelines by providing an explanatory framework and proving that the phenomenon arises naturally from the noise added to satisfy differential privacy.
Efficient Hyperparameter Optimization for Differentially Private Deep Learning
TLDR
This work formulates the problem as a general optimization framework for establishing a desirable privacy-utility trade-off, and systematically studies three cost-effective algorithms for use within it: evolutionary, Bayesian, and reinforcement learning.
The Role of Adaptive Optimizers for Honest Private Hyperparameter Selection
TLDR
This work shows that standard composition tools outperform more advanced techniques in many settings, and empirically and theoretically demonstrates an intrinsic connection between the learning rate and clipping norm hyperparameters.
Optimized Deep Learning for Enhanced Trade-off in Differentially Private Learning
TLDR
An optimized differentially private deep learning mechanism is proposed that improves the balance between the conflicting objectives of privacy, accuracy, and performance, and quantifies the trade-offs between them.
Medical imaging deep learning with differential privacy
The successful training of deep learning models for diagnostic deployment in medical imaging applications requires large volumes of data. Such data cannot be procured without consideration for…
Is It Possible to Preserve Privacy in the Age of AI?
TLDR
It is found that privacy-preserving research, specifically for AI, is in its early stage and requires more effort to address the current challenges and research gaps.
Multi-Objective Learning to Predict Pareto Fronts Using Hypervolume Maximization
TLDR
A novel learning approach to estimate the Pareto front by maximizing the dominated hypervolume (HV) of the average loss vectors corresponding to a set of learners, leveraging established multi-objective optimization methods.
...

References

Showing 1-10 of 53 references
Accuracy First: Selecting a Differential Privacy Level for Accuracy Constrained ERM
TLDR
A general "noise reduction" framework that can apply to a variety of private empirical risk minimization (ERM) algorithms, using them to "search" the space of privacy levels to find the empirically strongest one that meets the accuracy constraint, incurring only logarithmic overhead in the number of privacy levels searched.
The Large Margin Mechanism for Differentially Private Maximization
TLDR
This work provides the first general-purpose, range-independent algorithm for private maximization that guarantees approximate differential privacy, and demonstrates its applicability on two fundamental tasks in data mining and machine learning.
Differentially Private Bayesian Optimization
TLDR
Methods for releasing the best hyper-parameters and classifier accuracy privately are introduced, and it is proved that under a GP assumption these private quantities are often near-optimal.
Differential Privacy: A Primer for a Non-Technical Audience
TLDR
This primer aims to provide a foundation that can guide future decisions when analyzing and sharing statistical data about individuals, informing individuals about the privacy protection they will be afforded, and designing policies and regulations for robust privacy protection.
Privacy Amplification by Subsampling: Tight Analyses via Couplings and Divergences
TLDR
This paper presents a general method that recovers and improves prior analyses, yields lower bounds and derives new instances of privacy amplification by subsampling, which leverages a characterization of differential privacy as a divergence which emerged in the program verification community.
The Algorithmic Foundations of Differential Privacy
TLDR
The preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example.
Private selection from private candidates
TLDR
This work considers the selection problem under a much weaker stability assumption on the candidates, namely that the score functions are differentially private, and presents algorithms that are near-optimal along the three relevant dimensions: privacy, utility and computational efficiency.
An Economic Analysis of Privacy Protection and Statistical Accuracy as Social Choices
TLDR
An economic solution is proposed: operate where the marginal cost of increasing privacy equals the marginal benefit, and the model of production, from computer science, assumes data are published using an efficient differentially private algorithm.
Differentially Private Regression with Gaussian Processes
TLDR
This cloaking method achieves the greatest accuracy, while still providing privacy guarantees, and offers practical DP for regression over multi-dimensional inputs and provides a starter toolkit for combining differential privacy and GPs.
Privacy Amplification by Iteration
TLDR
This work demonstrates that for contractive iterations, not releasing the intermediate results strongly amplifies the privacy guarantees, and can achieve guarantees similar to those obtainable using the privacy-amplification-by-sampling technique in several natural settings where that technique cannot be applied.
...