# Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds

```bibtex
@article{Bassily2014PrivateER,
  title   = {Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds},
  author  = {Raef Bassily and Adam D. Smith and Abhradeep Thakurta},
  journal = {2014 IEEE 55th Annual Symposium on Foundations of Computer Science},
  year    = {2014},
  pages   = {464-473}
}
```

Convex empirical risk minimization is a basic tool in machine learning and statistics. We provide new algorithms and matching lower bounds for differentially private convex empirical risk minimization assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded. We provide a separate set of algorithms and matching lower bounds for the setting in which the loss functions are known to also be strongly convex. Our algorithms run…
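The paper's main upper bound in the Lipschitz/bounded-domain setting is achieved by a noisy projected stochastic gradient method. The following is a minimal sketch of that style of algorithm, assuming an L-Lipschitz per-example loss and an L2 ball of radius R as the feasible set; the noise calibration and step-size schedule here are simplified illustrations, not the paper's exact constants, and `dp_sgd` and its parameters are names chosen for this sketch:

```python
import numpy as np

def dp_sgd(X, y, grad_fn, epsilon, delta, L=1.0, R=1.0, T=None, rng=None):
    """Noisy projected SGD for convex ERM (simplified sketch).

    Assumes each per-example loss is L-Lipschitz (so ||grad_fn(...)|| <= L)
    and the feasible set is the L2 ball of radius R.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    T = n * n if T is None else T          # roughly n^2 gradient steps
    # simplified Gaussian noise scale for (epsilon, delta)-DP over T steps
    sigma = (L * np.sqrt(T * np.log(1.0 / delta))) / (n * epsilon)
    w = np.zeros(d)
    w_sum = np.zeros(d)
    for t in range(1, T + 1):
        i = rng.integers(n)                # sample one example uniformly
        g = grad_fn(w, X[i], y[i])         # per-example gradient, norm <= L
        eta = R / (L * np.sqrt(t))         # decaying step size
        w = w - eta * (g + rng.normal(0.0, sigma, size=d))
        norm = np.linalg.norm(w)
        if norm > R:                       # project back onto the radius-R ball
            w = w * (R / norm)
        w_sum += w
    return w_sum / T                       # return the average iterate
```

A caller supplies `grad_fn` (e.g. a clipped logistic-loss gradient) so that the Lipschitz assumption holds by construction.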

#### Supplemental Code

GitHub repo (via Papers with Code)

This repository contains the code for the first large-scale investigation of differentially private convex optimization algorithms.

#### 460 Citations

Private stochastic convex optimization: optimal rates in linear time

- Computer Science, Mathematics
- STOC
- 2020

Describes two new techniques for deriving DP convex optimization algorithms, both achieving the optimal bound on excess loss while using O(min{n, n²/d}) gradient computations.

Differentially Private Objective Perturbation: Beyond Smoothness and Convexity

- Computer Science, Mathematics
- ArXiv
- 2019

Finds that, for the problem of learning linear classifiers, directly optimizing the 0/1 loss with this approach can outperform the more standard approach of privately optimizing a convex surrogate loss function on the Adult dataset.

Private Stochastic Convex Optimization with Optimal Rates

- Computer Science, Mathematics
- NeurIPS
- 2019

The approach builds on existing differentially private algorithms and relies on the analysis of algorithmic stability to ensure generalization; it implies that, contrary to intuition based on private ERM, private SCO achieves asymptotically the same $1/\sqrt{n}$ rate as non-private SCO in the parameter regime most common in practice.

Private Non-smooth Empirical Risk Minimization and Stochastic Convex Optimization in Subquadratic Steps

- Computer Science, Mathematics
- ArXiv
- 2021

Obtains a (nearly) optimal bound on the excess empirical risk and excess population loss with subquadratic gradient complexity for differentially private empirical risk minimization and stochastic convex optimization over non-smooth convex functions.

Adapting to Function Difficulty and Growth Conditions in Private Optimization

- Computer Science, Mathematics
- ArXiv
- 2021

Develops algorithms for private stochastic convex optimization that adapt to the hardness of the specific function being optimized, and demonstrates that the adaptive algorithm is simultaneously (minimax) optimal over all κ ≥ 1 + c whenever c = Θ(1).

Noninteractive Locally Private Learning of Linear Models via Polynomial Approximations

- Computer Science, Mathematics
- ALT
- 2019

Considers differentially private algorithms that operate in the local model, where each data record is stored on a separate user device and randomization is performed locally by those devices.

Algorithms for Stochastic Convex Optimization

- 2015

Stochastic convex optimization, where the objective is the expectation of a random convex function, is an important and widely used method with numerous applications in machine learning, statistics,…

Statistical Query Algorithms for Stochastic Convex Optimization

- Mathematics, Computer Science
- ArXiv
- 2015

Shows that well-known and popular methods, including first-order iterative methods and polynomial-time methods, can be implemented using only statistical queries, and gives nearly matching upper and lower bounds on the estimation (sample) complexity, including for linear optimization in the most general setting.

Differentially Private Empirical Risk Minimization with Sparsity-Inducing Norms

- Computer Science, Mathematics
- ArXiv
- 2019

This is the first work to analyze the dual optimization problems of risk minimization in the context of differential privacy, for a class of convex but non-smooth regularizers that induce structured sparsity, together with loss functions for generalized linear models.

Efficient Empirical Risk Minimization with Smooth Loss Functions in Non-interactive Local Differential Privacy

- Computer Science
- ArXiv
- 2018

Shows that if the ERM loss function is $(\infty, T)$-smooth, then the sample complexity needed to achieve error $\alpha$ can avoid a dependence that is exponential in the dimensionality $p$ with base $1/\alpha$, answering a question in \cite{smith2017interaction}.

#### References

Showing 1-10 of 56 references

(Near) Dimension Independent Risk Bounds for Differentially Private Learning

- Mathematics, Computer Science
- ICML
- 2014

Shows that under certain assumptions, variants of both output and objective perturbation algorithms have no explicit dependence on p; the excess risk depends only on the L2-norm of the true risk minimizer and that of the training points.

Private Convex Empirical Risk Minimization and High-dimensional Regression

- Mathematics
- COLT 2012
- 2012

We consider differentially private algorithms for convex empirical risk minimization (ERM). Differential privacy (Dwork et al., 2006b) is a recently introduced notion of privacy which guarantees that…

The geometry of differential privacy: the sparse and approximate cases

- Mathematics, Computer Science
- STOC '13
- 2013

The connection between hereditary discrepancy and the privacy mechanism enables the first polylogarithmic approximation to the hereditary discrepancy of a matrix A.

(Nearly) Optimal Algorithms for Private Online Learning in Full-information and Bandit Settings

- Computer Science
- NIPS
- 2013

The technique leads to the first private algorithms for online learning in the bandit setting, and in many cases the algorithms match the optimal non-private regret bounds up to logarithmic factors in the input length T.

On the geometry of differential privacy

- Mathematics, Computer Science
- STOC '10
- 2010

The lower bound is strong enough to separate the concept of differential privacy from the notion of approximate differential privacy, where an upper bound of O(√d/ε) can be achieved.

Stochastic Convex Optimization

- Mathematics, Computer Science
- COLT
- 2009

Studies stochastic convex optimization and shows that the key ingredient is strong convexity and regularization, which is only a sufficient, not a necessary, condition for meaningful non-trivial learnability.

Differentially Private Feature Selection via Stability Arguments, and the Robustness of the Lasso

- Mathematics, Computer Science
- COLT
- 2013

Designs differentially private algorithms for statistical model selection, gives sufficient conditions for the LASSO estimator to be robust to small changes in the data set, and shows that these conditions hold with high probability under essentially the same stochastic assumptions used in the literature to analyze convergence of the LASSO.

Differentially Private Empirical Risk Minimization

- Medicine, Computer Science
- J. Mach. Learn. Res.
- 2011

Proposes a new method, objective perturbation, for privacy-preserving machine learning algorithm design, and shows both theoretically and empirically that this method is superior to the previous state of the art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.
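The objective-perturbation idea can be sketched as follows: add a random linear term to the regularized objective before minimizing, rather than perturbing the output. This is a simplified illustration, not the paper's exact noise calibration or privacy analysis; `objective_perturbation_logreg` and its constants are names chosen for this sketch:

```python
import numpy as np

def objective_perturbation_logreg(X, y, epsilon, lam=0.1, rng=None):
    """Simplified sketch of objective perturbation for L2-regularized
    logistic regression: minimize the objective plus a random linear
    term b^T w / n, with b drawn with Laplace-like L2 norm."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    # draw b with density proportional to exp(-epsilon * ||b|| / 2):
    # uniform direction, Gamma-distributed norm (illustrative calibration)
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    b = rng.gamma(shape=d, scale=2.0 / epsilon) * direction

    def obj_grad(w):
        z = y * (X @ w)
        g = -(X * (y / (1.0 + np.exp(z)))[:, None]).mean(axis=0)
        return g + lam * w + b / n         # loss + regularizer + noise term

    w = np.zeros(d)
    for _ in range(500):                   # plain gradient descent
        w -= 0.1 * obj_grad(w)
    return w
```

Because the noise enters the objective rather than the output, the minimizer itself adapts to the perturbation, which is the intuition behind its advantage over output perturbation.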

Differentially Private Online Learning

- Computer Science, Mathematics
- COLT
- 2012

Provides a general framework to convert a given online convex programming (OCP) algorithm into a privacy-preserving OCP algorithm with good (sub-linear) regret, and shows that this framework can also yield differentially private algorithms for offline learning.

Sample Complexity Bounds for Differentially Private Learning

- Computer Science, Medicine
- COLT
- 2011

An upper bound on the sample requirement of learning with label privacy is provided that depends on a measure of closeness between and the unlabeled data distribution and applies to the non-realizable as well as the realizable case.