• Corpus ID: 14305556

# Private Convex Empirical Risk Minimization and High-dimensional Regression

@inproceedings{Kifer2012PrivateCE,
  title={Private Convex Empirical Risk Minimization and High-dimensional Regression},
  author={Daniel Kifer and Adam Smith and Abhradeep Thakurta},
  booktitle={Annual Conference on Computational Learning Theory},
  year={2012}
}
• Published in Annual Conference on Computational Learning Theory, 2012
• Computer Science, Mathematics
We consider differentially private algorithms for convex empirical risk minimization (ERM). […] Key Method: To this end, (a) we significantly extend the analysis of the “objective perturbation” algorithm of Chaudhuri et al. (2011) for convex ERM problems. We show that their method can be modified to use less noise (be more accurate) and to apply to problems with hard constraints and non-differentiable regularizers. We also give a tighter, data-dependent analysis of the additional error introduced by their…
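The objective-perturbation approach summarized above can be illustrated with a short sketch. This is a minimal, non-authoritative example assuming L2-regularized logistic regression with row-normalized features and Gaussian noise added to the objective; the noise scale and all names (`objective_perturbation`, `lam`, etc.) are illustrative, not the tight constants from the paper.

```python
import numpy as np

def objective_perturbation(X, y, eps, delta, lam, steps=500, lr=0.5, seed=0):
    """Sketch of objective perturbation for L2-regularized logistic regression.

    Minimizes (1/n) * sum_i log(1 + exp(-y_i * x_i @ theta))
              + lam * ||theta||^2 + (b @ theta) / n
    where b is Gaussian noise whose scale (illustrative, not the paper's
    tight constant) grows as the privacy parameters eps, delta shrink.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / eps  # illustrative noise scale
    b = rng.normal(scale=sigma, size=d)

    theta = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ theta)
        # gradient of the average logistic loss
        g_loss = -(X.T @ (y / (1.0 + np.exp(margins)))) / n
        # add the regularizer and the (fixed) noise term's gradients
        grad = g_loss + 2.0 * lam * theta + b / n
        theta -= lr * grad
    return theta

# Usage on synthetic data with unit-norm rows (bounded per-example influence):
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.sign(X @ np.ones(5) + 0.1 * rng.normal(size=200))
theta_priv = objective_perturbation(X, y, eps=1.0, delta=1e-5, lam=0.1)
```

Because the noise is added once to the objective rather than to the output, the minimizer itself adapts to the perturbed problem, which is what the paper's tighter, data-dependent error analysis exploits.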
## Tables from this paper

## 267 Citations

• ICML 2014 (Computer Science): This paper shows that, under certain assumptions, variants of both the output and objective perturbation algorithms have no explicit dependence on p; the excess risk depends only on the L2-norm of the true risk minimizer and that of the training points.
• ICML 2016 (Computer Science): This paper theoretically studies differentially private empirical risk minimization in the projected subspace (compressed domain) of ERM problems, and shows that for the class of generalized linear functions, excess risk bounds can be obtained given only the projected data and the projection matrix.
• ArXiv 2019 (Computer Science): This is the first work to analyze the dual optimization problems of risk minimization in the context of differential privacy, for a particular class of convex but non-smooth regularizers that induce structured sparsity, with loss functions for generalized linear models.
• COLT 2021 (Computer Science, Mathematics): Differentially private (DP) stochastic convex optimization (SCO) is a fundamental problem whose goal is to approximately minimize the population risk with respect to a convex loss function.
• ArXiv 2021 (Computer Science): This work obtains a (nearly) optimal bound on the excess empirical risk and excess population loss with subquadratic gradient complexity for differentially private empirical risk minimization and stochastic convex optimization with non-smooth convex functions.
• 2014 IEEE 55th Annual Symposium on Foundations of Computer Science (Computer Science): This work provides new algorithms and matching lower bounds for differentially private convex empirical risk minimization, assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded.
• ICML 2021 (Computer Science): The upper bound is based on a new algorithm that combines the iterative localization approach of Feldman et al. (2020a) with a new analysis of private regularized mirror descent, and is achieved by a new variance-reduced version of the Frank-Wolfe algorithm that requires just a single pass over the data.
• ICML 2020 (Computer Science): This paper proposes a method based on the sample-and-aggregate framework with an excess population risk of $\tilde{O}(\frac{d^3}{n\epsilon^4})$ (omitting other factors), and provides a gradient smoothing and trimming based scheme to improve the excess population risk.
• AISTATS 2021 (Computer Science): It is shown that for unconstrained convex generalized linear models (GLMs), one can obtain an excess empirical risk of Õ(√rank/(εn)), where rank is the rank of the feature matrix in the GLM problem, n is the number of data samples, and ε is the privacy parameter.
• ALT 2019 (Computer Science, Mathematics): This work considers differentially private algorithms that operate in the local model, where each data record is stored on a separate user device and randomization is performed locally by those devices.

## References

Showing 1-10 of 26 references.

• J. Mach. Learn. Res. 2011 (Computer Science): This work proposes objective perturbation, a new method for privacy-preserving machine learning algorithm design, and shows both theoretically and empirically that it is superior to the previous state of the art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.
• COLT 2009 (Computer Science): Stochastic convex optimization is studied, and it is shown that the key ingredient is strong convexity and regularization, which is only a sufficient, but not necessary, condition for meaningful non-trivial learnability.
• It is shown that for a large class of statistical estimators T and input distributions P, there is a differentially private estimator A_T with the same asymptotic distribution as T, which implies that A_T(X) is essentially as good as the original statistic T(X) for statistical inference, for sufficiently large samples.
• TCC 2006 (Computer Science): The study is extended to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f, which is the amount that any single argument to f can change its output.
• NIPS 2009 (Computer Science, Mathematics): A unified framework for establishing consistency and convergence rates for regularized M-estimators under high-dimensional scaling is provided; one main theorem is stated, and it is shown how it can be used to re-derive several existing results and to obtain several new ones.
• STOC 2007 (Computer Science): This is the first formal analysis of the effect of instance-based noise in the context of data privacy; it shows how to do this efficiently for several different functions, including the median and the cost of the minimum spanning tree.
• 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2007) (Economics): It is shown that the recent notion of differential privacy, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie.
• NIPS 2004 (Computer Science): A sparse Bayesian learning-based method for minimizing the ℓ0-norm while reducing the number of troublesome local minima is presented, and it is shown that such minima are typically far fewer for general problems of interest.
• KDD 2008 (Computer Science, Mathematics): This paper investigates composition attacks, in which an adversary uses independent anonymized releases to breach privacy; it provides a precise formulation of this property and proves that an important class of relaxations of differential privacy also satisfies it.
• SIGMOD 2011 (Computer Science): This paper argues that the privacy of an individual is preserved when it is possible to limit the inference of an attacker about the participation of the individual in the data-generating process, as distinct from limiting inference about the presence of a tuple.
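The sensitivity-calibrated noise idea summarized in the TCC 2006 reference above (commonly known as the Laplace mechanism) can be sketched in a few lines. This is a minimal illustration; the function name `laplace_mechanism` and the count-query example are ours, not from the paper.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, eps, rng):
    """Release value + Laplace(scale = sensitivity/eps) noise.

    If the query's output changes by at most `sensitivity` between any two
    neighboring datasets, the released value satisfies eps-differential privacy.
    """
    return value + rng.laplace(scale=sensitivity / eps)

# Example: a private count query. Adding or removing one record changes
# a count by at most 1, so its sensitivity is 1.
rng = np.random.default_rng(0)
data = np.array([1, 0, 1, 1, 0, 1])
private_count = laplace_mechanism(float(data.sum()), sensitivity=1.0, eps=0.5, rng=rng)
```

Smaller eps (stronger privacy) means a larger noise scale, which is exactly the calibration-to-sensitivity tradeoff the reference describes.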