Corpus ID: 30133

Private Empirical Risk Minimization Beyond the Worst Case: The Effect of the Constraint Set Geometry

@article{Talwar2014PrivateER,
  title={Private Empirical Risk Minimization Beyond the Worst Case: The Effect of the Constraint Set Geometry},
  author={Kunal Talwar and Abhradeep Thakurta and Li Zhang},
  journal={ArXiv},
  year={2014},
  volume={abs/1411.5417}
}
Empirical Risk Minimization (ERM) is a standard technique in machine learning, where a model is selected by minimizing a loss function over a constraint set. When the training dataset consists of private information, it is natural to use a differentially private ERM algorithm, and this problem has been the subject of a long line of work starting with Chaudhuri and Monteleoni (2008). A private ERM algorithm outputs an approximate minimizer of the loss function, and its error can be measured as the…
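For concreteness, here is a minimal LaTeX sketch of the ERM objective and the excess empirical risk the abstract alludes to (the notation is assumed here, not taken verbatim from the paper):

\[
\hat{\theta} \in \arg\min_{\theta \in \mathcal{C}} \hat{L}(\theta; D),
\qquad
\hat{L}(\theta; D) = \frac{1}{n}\sum_{i=1}^{n} \ell(\theta; d_i),
\]
\[
\operatorname{ExcessRisk}(\theta_{\mathrm{priv}}) = \hat{L}(\theta_{\mathrm{priv}}; D) - \min_{\theta \in \mathcal{C}} \hat{L}(\theta; D),
\]

where \(\mathcal{C}\) is the constraint set, \(D = \{d_1, \ldots, d_n\}\) is the private dataset, and \(\theta_{\mathrm{priv}}\) is the output of the differentially private algorithm.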

Citations

Differentially Private Empirical Risk Minimization with Smooth Non-Convex Loss Functions: A Non-Stationary View

TLDR
This paper investigates the DP-ERM problem in high-dimensional space and shows that, by measuring the utility with the Frank-Wolfe gap, it is possible to bound the utility by the Gaussian width of the constraint set instead of the dimensionality p of the underlying space.
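Both quantities mentioned above have standard definitions; a sketch in LaTeX (notation assumed here, not taken from the cited paper):

\[
w(\mathcal{C}) = \mathbb{E}_{g \sim \mathcal{N}(0, I_p)}\Bigl[\sup_{x \in \mathcal{C}} \langle g, x \rangle\Bigr],
\qquad
\mathrm{Gap}(\theta) = \max_{x \in \mathcal{C}} \langle x - \theta,\, -\nabla \hat{L}(\theta) \rangle,
\]

where \(w(\mathcal{C})\) is the Gaussian width of the constraint set \(\mathcal{C}\) and \(\mathrm{Gap}(\theta)\) is the Frank-Wolfe gap at the iterate \(\theta\), which vanishes exactly at stationary points of \(\hat{L}\) over \(\mathcal{C}\).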

The Cost of a Reductions Approach to Private Fair Optimization

TLDR
This paper examines a reductions approach to fair optimization and learning, in which a black-box optimizer is used to learn a fair model for classification or regression, and shows an algorithm-agnostic, information-theoretic lower bound on the excess risk of any solution to the problem of private constrained group-objective optimization.

Non-Euclidean Differentially Private Stochastic Convex Optimization

Differentially private (DP) stochastic convex optimization (SCO) is a fundamental problem, where the goal is to approximately minimize the population risk with respect to a convex loss function…
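The population risk in DP-SCO is the expected loss over the data distribution; schematically (notation assumed):

\[
L(\theta) = \mathbb{E}_{d \sim \mathcal{D}}\bigl[\ell(\theta; d)\bigr],
\qquad
\theta_{\mathrm{priv}} \approx \arg\min_{\theta \in \mathcal{C}} L(\theta),
\]

computed from n i.i.d. samples while satisfying \((\epsilon, \delta)\)-differential privacy.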

Evading the Curse of Dimensionality in Unconstrained Private GLMs

TLDR
It is shown that for unconstrained convex generalized linear models (GLMs), one can obtain an excess empirical risk of Õ(√rank/(εn)), where rank is the rank of the feature matrix in the GLM problem, n is the number of data samples, and ε is the privacy parameter.

Non-Euclidean Differentially Private Stochastic Convex Optimization: Optimal Rates in Linear Time

TLDR
This paper gives a systematic study of DP-SCO for ℓ_p setups under a standard smoothness assumption on the loss, and shows that existing linear-time constructions for the Euclidean setup attain a nearly optimal excess risk in the low-dimensional regime.

Differentially Private Empirical Risk Minimization with Sparsity-Inducing Norms

TLDR
This is the first work that analyzes the dual optimization problems of risk minimization in the context of differential privacy, for a particular class of convex but non-smooth regularizers that induce structured sparsity, together with loss functions for generalized linear models.

Efficient Private Empirical Risk Minimization for High-dimensional Learning

TLDR
This paper theoretically studies differentially private empirical risk minimization in a projected subspace (the compressed domain) of ERM problems, and shows that for the class of generalized linear functions, excess risk bounds can be obtained given only the projected data and the projection matrix.

Characterizing Private Clipped Gradient Descent on Convex Generalized Linear Problems

TLDR
This paper provides the first formal convergence analysis of clipped DP-GD, showing that the value chosen for the clipping norm really matters: if the clipping norm is set within at most a constant factor above the optimal value, one can obtain an excess empirical risk guarantee that is independent of the dimensionality of the model space.
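As a rough illustration of the mechanism being analyzed, here is a minimal Python sketch of one clipped DP-GD update; the clipping norm, noise multiplier, and learning rate are assumed hyperparameters, and the privacy accounting from the cited paper is omitted:

import numpy as np

def clipped_dp_gd_step(theta, per_example_grads, clip_norm, noise_multiplier, lr):
    # Clip each per-example gradient to L2 norm at most clip_norm.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    n = len(per_example_grads)
    # Gaussian noise scaled to the sensitivity of the averaged clipped gradient.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm / n, size=theta.shape)
    noisy_grad = np.mean(clipped, axis=0) + noise
    # Standard gradient descent step on the noisy averaged gradient.
    return theta - lr * noisy_grad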

Differentially Private Empirical Risk Minimization Revisited: Faster and More General

TLDR
The analysis of expected excess empirical risk is generalized from convex loss functions to non-convex ones satisfying the Polyak-Lojasiewicz condition, and a tighter upper bound on the utility is given.
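For reference, the Polyak-Lojasiewicz (PL) condition mentioned above is the standard inequality (stated here in assumed notation):

\[
\tfrac{1}{2}\,\|\nabla \hat{L}(\theta)\|_2^2 \;\ge\; \mu \bigl(\hat{L}(\theta) - \hat{L}^{*}\bigr) \quad \text{for all } \theta,
\]

where \(\hat{L}^{*}\) is the minimum value of the loss and \(\mu > 0\); it lets gradient methods retain convergence guarantees of the strongly convex type without requiring convexity.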

SGD with low-dimensional gradients with applications to private and distributed learning

TLDR
This paper designs an optimization algorithm that operates with lower-dimensional (compressed) stochastic gradients, and establishes that with the right set of parameters it has the same dimension-free convergence guarantees as regular SGD for popular convex and nonconvex optimization settings.
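One generic way to obtain low-dimensional stochastic gradients is a random-projection compressor; the Python sketch below illustrates that idea only, under assumptions of mine, and may differ from the construction used in the cited paper:

import numpy as np

def sgd_with_compressed_gradients(grad_fn, theta0, k, lr, steps, seed=0):
    # grad_fn(theta) should return a stochastic gradient at theta (a 1-D array).
    rng = np.random.default_rng(seed)
    d = theta0.shape[0]
    theta = theta0.astype(float).copy()
    for _ in range(steps):
        # Fresh k x d random projection (k << d); entries are scaled so that
        # E[P.T @ P] is the identity, making the lifted gradient unbiased.
        P = rng.normal(scale=1.0 / np.sqrt(k), size=(k, d))
        g_low = P @ grad_fn(theta)     # the low-dimensional gradient that is stored/communicated
        theta -= lr * (P.T @ g_low)    # lift back to d dimensions and take an SGD step
    return theta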

References

Showing 1-10 of 59 references

(Near) Dimension Independent Risk Bounds for Differentially Private Learning

TLDR
This paper shows that under certain assumptions, variants of both output and objective perturbation algorithms have no explicit dependence on p; the excess risk depends only on the L2-norm of the true risk minimizer and that of the training points.

Private Convex Empirical Risk Minimization and High-dimensional Regression

TLDR
This work significantly extends the analysis of the “objective perturbation” algorithm of Chaudhuri et al. (2011) for convex ERM problems, and gives the best known algorithms for differentially private linear regression.

Private Empirical Risk Minimization, Revisited

TLDR
This paper provides new algorithms and matching lower bounds for private ERM, assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded; the results also imply that algorithms from previous work can be used to obtain optimal error rates.

Differentially Private Feature Selection via Stability Arguments, and the Robustness of the Lasso

TLDR
This work designs differentially private algorithms for statistical model selection, gives sufficient conditions for the LASSO estimator to be robust to small changes in the data set, and shows that these conditions hold with high probability under essentially the same stochastic assumptions used in the literature to analyze convergence of the LASSO.

The geometry of differential privacy: the sparse and approximate cases

TLDR
The connection between hereditary discrepancy and the privacy mechanism enables the first polylogarithmic approximation to the hereditary discrepancy of a matrix A.

Stochastic Convex Optimization

TLDR
Stochastic convex optimization is studied, and it is shown that the key ingredient is strong convexity and regularization, which is a sufficient, but not necessary, condition for meaningful non-trivial learnability.
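For reference, λ-strong convexity of a function f (stated in assumed notation) means:

\[
f(y) \;\ge\; f(x) + \langle \nabla f(x),\, y - x \rangle + \tfrac{\lambda}{2}\,\|y - x\|_2^2 \quad \text{for all } x, y,
\]

which is exactly what adding the regularizer \(\tfrac{\lambda}{2}\|\theta\|_2^2\) to the empirical objective enforces.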

Differentially Private Empirical Risk Minimization

TLDR
This work proposes a new method, objective perturbation, for privacy-preserving machine learning algorithm design, and shows both theoretically and empirically that this method is superior to the previous state of the art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.
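Schematically, the two mechanisms contrasted here can be written as follows (a simplified sketch in assumed notation, omitting the noise calibration from the cited paper):

\[
\text{output perturbation:}\quad
\theta_{\mathrm{priv}} = \Bigl(\arg\min_{\theta} \hat{L}(\theta; D) + \tfrac{\lambda}{2}\|\theta\|_2^2\Bigr) + b,
\]
\[
\text{objective perturbation:}\quad
\theta_{\mathrm{priv}} = \arg\min_{\theta} \hat{L}(\theta; D) + \tfrac{\lambda}{2}\|\theta\|_2^2 + \tfrac{1}{n}\langle b, \theta \rangle,
\]

where \(b\) is a random noise vector whose distribution is calibrated to the sensitivity of the perturbed quantity.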

Fingerprinting codes and the price of approximate differential privacy

TLDR
The results rely on the existence of short fingerprinting codes (Boneh and Shaw, CRYPTO'95; Tardos, STOC'03), which are closely connected to the sample complexity of differentially private data release.

Private Multiplicative Weights Beyond Linear Queries

TLDR
This work shows how to give accurate and differentially private solutions to exponentially many convex minimization problems on a sensitive dataset.

Differentially Private Online Learning

TLDR
This paper provides a general framework to convert a given online convex programming (OCP) algorithm into a privacy-preserving OCP algorithm with good (sub-linear) regret, and shows that this framework can also be used to provide differentially private algorithms for offline learning.
...