Corpus ID: 30133

Private Empirical Risk Minimization Beyond the Worst Case: The Effect of the Constraint Set Geometry

@article{Talwar2014PrivateER,
  title={Private Empirical Risk Minimization Beyond the Worst Case: The Effect of the Constraint Set Geometry},
  author={Kunal Talwar and Abhradeep Thakurta and Li Zhang},
  journal={ArXiv},
  year={2014},
  volume={abs/1411.5417}
}
Empirical Risk Minimization (ERM) is a standard technique in machine learning, where a model is selected by minimizing a loss function over a constraint set. When the training dataset consists of private information, it is natural to use a differentially private ERM algorithm, and this problem has been the subject of a long line of work starting with Chaudhuri and Monteleoni (2008). A private ERM algorithm outputs an approximate minimizer of the loss function, and its error can be measured as the…
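For readers skimming the truncated abstract, here is a minimal sketch of the standard setup it refers to (the notation below is illustrative, not quoted from the paper): the empirical risk over a constraint set $\mathcal{C}$ and the excess empirical risk commonly used to measure a private algorithm's error.

% Empirical risk over a dataset d_1, ..., d_n with per-example loss \ell
\hat{L}(\theta) = \frac{1}{n} \sum_{i=1}^{n} \ell(\theta; d_i),
\qquad
\hat{\theta} \in \arg\min_{\theta \in \mathcal{C}} \hat{L}(\theta)

% A differentially private ERM algorithm outputs some \theta_{priv} in \mathcal{C};
% its error is typically measured by the excess empirical risk
\mathrm{err}(\theta_{\mathrm{priv}}) = \hat{L}(\theta_{\mathrm{priv}}) - \min_{\theta \in \mathcal{C}} \hat{L}(\theta)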


Differentially Private Empirical Risk Minimization with Smooth Non-Convex Loss Functions: A Non-Stationary View
TLDR
This paper investigates the DP-ERM problem in high-dimensional space, and shows that by measuring the utility with the Frank-Wolfe gap, it is possible to bound the utility by the Gaussian width of the constraint set (recalled below) instead of the dimensionality p of the underlying space.
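For reference, the Gaussian width invoked in this line of work is the standard quantity below (common background, not quoted from the cited paper):

% Gaussian width of a set \mathcal{C} \subseteq \mathbb{R}^p, with g a standard Gaussian vector
w(\mathcal{C}) = \mathbb{E}_{g \sim \mathcal{N}(0, I_p)} \Big[ \sup_{\theta \in \mathcal{C}} \langle g, \theta \rangle \Big]

It is at most $O(\sqrt{p})$ for any subset of the unit Euclidean ball, but only $O(\sqrt{\log p})$ for the $\ell_1$ ball, which is why bounds stated in terms of $w(\mathcal{C})$ can be much stronger than bounds stated in terms of $p$.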
Private Non-smooth Empirical Risk Minimization and Stochastic Convex Optimization in Subquadratic Steps
TLDR
This work obtains a (nearly) optimal bound on the excess empirical risk and excess population loss, with subquadratic gradient complexity, for differentially private Empirical Risk Minimization and Stochastic Convex Optimization with non-smooth convex functions.
Non-Euclidean Differentially Private Stochastic Convex Optimization
Differentially private (DP) stochastic convex optimization (SCO) is a fundamental problem, where the goal is to approximately minimize the population risk with respect to a convex loss function…
Evading the Curse of Dimensionality in Unconstrained Private GLMs
TLDR
It is shown that for unconstrained convex generalized linear models (GLMs), one can obtain an excess empirical risk of $\tilde{O}\!\left(\sqrt{\mathrm{rank}}/(\epsilon n)\right)$, where rank is the rank of the feature matrix in the GLM problem, n is the number of data samples, and ε is the privacy parameter.
Differentially Private Empirical Risk Minimization with Sparsity-Inducing Norms
TLDR
This is the first work to analyze, under differential privacy, the dual optimization problems of risk minimization with a particular class of convex but non-smooth regularizers that induce structured sparsity, together with loss functions for generalized linear models.
Efficient Private Empirical Risk Minimization for High-dimensional Learning
TLDR
This paper theoretically studies differentially private empirical risk minimization in a projected subspace (compressed domain), and shows that for the class of generalized linear functions, excess risk bounds can be obtained given only the projected data and the projection matrix.
Differentially Private Empirical Risk Minimization Revisited: Faster and More General
TLDR
The analysis of expected excess empirical risk is generalized from convex loss functions to non-convex ones satisfying the Polyak-Lojasiewicz condition, and a tighter upper bound on the utility is given.
SGD with low-dimensional gradients with applications to private and distributed learning
TLDR
This paper designs an optimization algorithm that operates with lower-dimensional (compressed) stochastic gradients, and establishes that with the right set of parameters it has the same dimension-free convergence guarantees as regular SGD for popular convex and nonconvex optimization settings.
Privately Learning Markov Random Fields
TLDR
It is shown that only structure learning under approximate differential privacy maintains the non-private logarithmic dependence on the dimensionality of the data, while a change in either the learning goal or the privacy notion would necessitate a polynomial dependence.
Private Non-smooth ERM and SCO in Subquadratic Steps
TLDR
A (nearly) optimal bound on the excess empirical risk is obtained with $O\!\left(\frac{N^{3/2}}{d^{1/8}} + \frac{N^2}{d}\right)$ gradient queries, achieved with the help of subsampling and smoothing the function via convolution.

References

Showing 1–10 of 65 references
(Near) Dimension Independent Risk Bounds for Differentially Private Learning
TLDR
This paper shows that under certain assumptions, variants of both output and objective perturbation algorithms have no explicit dependence on p; the excess risk depends only on the $\ell_2$-norm of the true risk minimizer and that of the training points.
Private Empirical Risk Minimization, Revisited
TLDR
This paper provides new algorithms and matching lower bounds for private ERM, assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded; the results also imply that algorithms from previous work can be used to obtain optimal error rates.
Stochastic Convex Optimization
TLDR
Stochastic convex optimization is studied, and it is shown that the key ingredient is strong convexity and regularization, which is only a sufficient, but not necessary, condition for meaningful non-trivial learnability.
Private Multiplicative Weights Beyond Linear Queries
TLDR
This work shows how to give accurate and differentially private solutions to exponentially many convex minimization problems on a sensitive dataset.
(Nearly) Optimal Algorithms for Private Online Learning in Full-information and Bandit Settings
TLDR
The technique leads to the first algorithms for private online learning in the bandit setting, and in many cases the algorithms match the dependence on the input length T of the optimal nonprivate regret bounds, up to logarithmic factors.
Differentially Private Online Learning
TLDR
This paper provides a general framework to convert a given online convex programming (OCP) algorithm into a privacy-preserving OCP algorithm with good (sub-linear) regret, and shows that this framework can be used to provide differentially private algorithms for offline learning as well.
Analyze gauss: optimal bounds for privacy-preserving principal component analysis
TLDR
It is shown that the well-known, but misnamed, randomized response algorithm provides a nearly optimal additive quality gap compared to the best possible singular subspace of $A$, and that when $A^TA$ has a large eigenvalue gap -- a reason often cited for PCA -- the quality improves significantly.
The LASSO Risk for Gaussian Matrices
TLDR
This result is the first rigorous derivation of an explicit formula for the asymptotic mean square error of the LASSO for random instances, and is based on the analysis of approximate message passing (AMP), a recently developed efficient algorithm inspired by graphical model ideas.
Nearly optimal minimax estimator for high-dimensional sparse linear regression
  L. Zhang · Computer Science, Mathematics · 2013
TLDR
An approximation algorithm for computing the minimax risk for any such estimation task is obtained, along with a polynomial-time, nearly optimal estimator for the important case of an $\ell_1$ sparsity constraint.
Logarithmic Regret Algorithms for Strongly Convex Repeated Games
TLDR
This paper describes a family of prediction algorithms for strongly convex repeated games that attain logarithmic regret, and applies them to solving regularized loss minimization problems.