Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds

@article{Bassily2014PrivateER,
  title={Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds},
  author={Raef Bassily and Adam D. Smith and Abhradeep Thakurta},
  journal={2014 IEEE 55th Annual Symposium on Foundations of Computer Science},
  year={2014},
  pages={464-473}
}
Convex empirical risk minimization is a basic tool in machine learning and statistics. We provide new algorithms and matching lower bounds for differentially private convex empirical risk minimization assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded. We provide a separate set of algorithms and matching lower bounds for the setting in which the loss functions are known to also be strongly convex. Our algorithms run… 
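
The paper's efficient (ε, δ)-differentially private algorithm for this Lipschitz, bounded-domain setting is a noisy stochastic gradient method. As a minimal illustration of that gradient-perturbation template, the sketch below runs projected SGD with Gaussian noise added to each sampled gradient; the step size, iteration count, and noise scale are illustrative placeholders rather than the paper's exact calibration, and `grad` is a user-supplied per-example gradient.

```python
# Minimal sketch of gradient perturbation (noisy projected SGD) for DP convex ERM.
# Illustrative only: the step size, iteration count, and noise scale below are
# placeholders; the paper's analysis derives specific settings for its bounds.
import numpy as np

def noisy_sgd(grad, data, dim, radius, lipschitz, epsilon, delta,
              steps=None, rng=None):
    """grad(theta, x): per-example gradient, assumed bounded by `lipschitz`;
    the optimization domain is the L2 ball of the given `radius`."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(data)
    steps = n * n if steps is None else steps            # illustrative choice
    # Gaussian noise scale for (epsilon, delta)-DP over `steps` sampled-gradient
    # updates (constants omitted; see the paper for the exact calibration).
    sigma = lipschitz * np.sqrt(steps * np.log(1.0 / delta)) / (n * epsilon)
    theta = np.zeros(dim)
    for t in range(1, steps + 1):
        x = data[rng.integers(n)]                        # sample one data point
        eta = radius / (lipschitz * np.sqrt(t))          # illustrative step size
        g = grad(theta, x) + rng.normal(0.0, sigma, size=dim)
        theta = theta - eta * g
        norm = np.linalg.norm(theta)                     # project back onto the domain
        if norm > radius:
            theta = theta * (radius / norm)
    return theta
```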

Private Stochastic Convex Optimization: Optimal Rates in 𝓁1 Geometry

The upper bound is based on a new algorithm that combines the iterative localization approach of Feldman et al. (2020a) with a new analysis of private regularized mirror descent; a further bound for smooth losses is achieved by a new variance-reduced version of the Frank-Wolfe algorithm that requires just a single pass over the data.

Oracle Efficient Private Non-Convex Optimization

It is found that, for the problem of learning linear classifiers, directly optimizing for 0/1 loss using this approach can outperform the more standard approach of privately optimizing a convex surrogate loss function on the Adult dataset.

Private stochastic convex optimization: optimal rates in linear time

Two new techniques for deriving DP convex optimization algorithms are described, both achieving the optimal bound on excess loss while using O(min{n, n²/d}) gradient computations.

Differentially Private Objective Perturbation: Beyond Smoothness and Convexity

Private Stochastic Convex Optimization with Optimal Rates

The approach builds on existing differentially private algorithms and relies on an analysis of algorithmic stability to ensure generalization; it implies that, contrary to intuition based on private ERM, private SCO has asymptotically the same rate of $1/\sqrt{n}$ as non-private SCO in the parameter regime most common in practice.

Private Non-smooth Empirical Risk Minimization and Stochastic Convex Optimization in Subquadratic Steps

This work obtains (nearly) optimal bounds on the excess empirical risk and excess population loss with subquadratic gradient complexity for differentially private Empirical Risk Minimization and Stochastic Convex Optimization over non-smooth convex functions.

Adapting to Function Difficulty and Growth Conditions in Private Optimization

Algorithms for private stochastic convex optimization that adapt to the hardness of the specific function the authors wish to optimize are developed, and the adaptive algorithm is shown to be simultaneously (minimax) optimal over all growth exponents κ ≥ 1 + c whenever c = Θ(1).

Optimal Algorithms for Differentially Private Stochastic Monotone Variational Inequalities and Saddle-Point Problems

This work shows that a stochastic approximation variant of these algorithms attains risk bounds vanishing as a function of the dataset size with respect to the strong gap function, and that a sampling-with-replacement variant achieves optimal risk bounds with respect to a weak gap function.

Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms

We study and provide instance-optimal algorithms in differential privacy by extending and approximating the inverse sensitivity mechanism. We provide two approximation frameworks, one which only…

Noninteractive Locally Private Learning of Linear Models via Polynomial Approximations

This work considers differentially private algorithms that operate in the local model, where each data record is stored on a separate user device and randomization is performed locally by those devices.
...

References

Showing 1-10 of 52 references

Private Convex Empirical Risk Minimization and High-dimensional Regression

This work significantly extends the analysis of the “objective perturbation” algorithm of Chaudhuri et al. (2011) for convex ERM problems, and gives the best known algorithms for differentially private linear regression.

The geometry of differential privacy: the sparse and approximate cases

The connection between hereditary discrepancy and the privacy mechanism enables the first polylogarithmic approximation to the hereditary discrepancy of a matrix A to be derived.

(Nearly) Optimal Algorithms for Private Online Learning in Full-information and Bandit Settings

The technique leads to the first private algorithms for online learning in the bandit setting, and in many cases the algorithms match the dependence on the input length T of the optimal nonprivate regret bounds, up to logarithmic factors.

On the geometry of differential privacy

The lower bound is strong enough to separate pure differential privacy from approximate differential privacy, where an upper bound of O(√d/ε) can be achieved.

Stochastic Convex Optimization

Stochastic convex optimization is studied, and it is shown that the key ingredient for learnability is strong convexity and regularization, while uniform convergence is only a sufficient, but not necessary, condition for meaningful non-trivial learnability.

Fingerprinting codes and the price of approximate differential privacy

The results rely on the existence of short fingerprinting codes (Boneh and Shaw, CRYPTO'95; Tardos, STOC'03), which are closely connected to the sample complexity of differentially private data release.

Differentially Private Feature Selection via Stability Arguments, and the Robustness of the Lasso

This work designs differentially private algorithms for statistical model selection, gives sufficient conditions for the LASSO estimator to be robust to small changes in the data set, and shows that these conditions hold with high probability under essentially the same stochastic assumptions used in the literature to analyze convergence of the LASSO.

Differentially Private Empirical Risk Minimization

This work proposes a new method, objective perturbation, for privacy-preserving machine learning algorithm design, and shows both theoretically and empirically that this method is superior to the previous state of the art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.
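
For reference, the core idea behind objective perturbation in the L2-regularized case can be written compactly; the form below is a simplified sketch that omits the smoothness conditions on the loss and the adjustment to ε required by the full analysis:

$$\hat{\theta}_{\mathrm{priv}} = \arg\min_{\theta}\ \frac{1}{n}\sum_{i=1}^{n} \ell(\theta; d_i) + \frac{\lambda}{2}\lVert\theta\rVert_2^2 + \frac{1}{n}\, b^{\top}\theta, \qquad p(b) \propto \exp\!\Big(-\frac{\varepsilon}{2}\lVert b\rVert_2\Big),$$

so a single random linear term, drawn once before optimization, perturbs the objective itself rather than its output.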

Differentially Private Online Learning

This paper provides a general framework to convert a given online convex programming (OCP) algorithm into a privacy-preserving OCP algorithm with good (sub-linear) regret, and shows that this framework can also be used to obtain differentially private algorithms for offline learning.

Sample Complexity Bounds for Differentially Private Learning

An upper bound on the sample requirement of learning with label privacy is provided that depends on a measure of closeness between the hypothesis class and the unlabeled data distribution, and applies to the non-realizable as well as the realizable case.
...