Corpus ID: 3620489

Differentially Private Empirical Risk Minimization Revisited: Faster and More General

@article{Wang2017DifferentiallyPE,
  title={Differentially Private Empirical Risk Minimization Revisited: Faster and More General},
  author={Di Wang and Minwei Ye and Jinhui Xu},
  journal={ArXiv},
  year={2017},
  volume={abs/1802.05251}
}
In this paper we study the differentially private Empirical Risk Minimization (ERM) problem in different settings. For smooth (strongly) convex loss functions with or without (non)-smooth regularization, we give algorithms that achieve either optimal or near-optimal utility bounds with lower gradient complexity than previous work. For ERM with smooth convex loss functions in the high-dimensional ($p\gg n$) setting, we give an algorithm which achieves the upper bound with less gradient…
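The gradient-perturbation template that this line of work refines can be summarized in a few lines. Below is a minimal NumPy sketch of generic noisy SGD, not the paper's exact algorithm: `grad_fn`, the clipping bound `L`, and the noise calibration are illustrative assumptions, and a real implementation would use a proper privacy accountant.

```python
import numpy as np

def noisy_sgd(grad_fn, w0, X, y, T, lr, L, eps, delta, seed=0):
    """Gradient-perturbation sketch: clip each per-example gradient to norm L,
    then add Gaussian noise calibrated to that bound.  The noise scale below
    is a rough advanced-composition-style choice, for illustration only."""
    rng = np.random.default_rng(seed)
    n = len(X)
    w = w0.copy()
    sigma = L * np.sqrt(2 * T * np.log(1.25 / delta)) / (n * eps)
    for _ in range(T):
        i = int(rng.integers(n))                           # sample one example
        g = grad_fn(w, X[i], y[i])
        g *= min(1.0, L / (np.linalg.norm(g) + 1e-12))     # clip to norm L
        w -= lr * (g + rng.normal(0.0, sigma, size=w.shape))
    return w
```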
Citations

Differentially Private Empirical Risk Minimization with Smooth Non-Convex Loss Functions: A Non-Stationary View
TLDR
This paper investigates the DP-ERM problem in high-dimensional space, and shows that by measuring utility with the Frank-Wolfe gap, the utility can be bounded by the Gaussian width of the constraint set instead of the dimensionality p of the underlying space.
Private Non-smooth Empirical Risk Minimization and Stochastic Convex Optimization in Subquadratic Steps
TLDR
This work obtains a (nearly) optimal bound on the excess empirical risk and excess population loss, with subquadratic gradient complexity, for differentially private Empirical Risk Minimization and Stochastic Convex Optimization with non-smooth convex functions.
Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings
TLDR
This work studies differentially private stochastic optimization in convex and non-convex settings, focusing on the family of non-smooth generalized linear losses.
Private Stochastic Convex Optimization with Optimal Rates
TLDR
The approach builds on existing differentially private algorithms, relying on an analysis of algorithmic stability to ensure generalization, and implies that, contrary to intuition based on private ERM, private SCO has asymptotically the same $1/\sqrt{n}$ rate as non-private SCO in the parameter regime most common in practice.
Improved Rates for Differentially Private Stochastic Convex Optimization with Heavy-Tailed Data
TLDR
Improved upper bounds on the excess population risk under approximate differential privacy are provided for convex and strongly convex loss functions, together with nearly matching lower bounds under pure differential privacy, giving strong evidence that the bounds are tight.
Faster Rates of Differentially Private Stochastic Convex Optimization
  • Jinyan Su, Di Wang
  • Computer Science, Mathematics
  • ArXiv
  • 2021
In this paper, we revisit the problem of Differentially Private Stochastic Convex Optimization (DP-SCO) and provide excess population risk bounds for some special classes of functions that are faster than…
Renyi Differentially Private ERM for Smooth Objectives
TLDR
The proposed Rényi differentially private stochastic gradient descent algorithm uses output perturbation and leverages the randomness inside SGD, which creates a "randomized sensitivity", to reduce the amount of noise that must be added.
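The output-perturbation recipe that this algorithm refines is easy to state. Here is a minimal sketch that adds Gaussian noise scaled to the classical worst-case sensitivity bound for strongly convex ERM; the cited paper's contribution is precisely to replace that worst-case bound with a smaller randomized one, and `train_fn` and the calibration here are illustrative assumptions.

```python
import numpy as np

def output_perturbed_erm(train_fn, X, y, L, lam, eps, delta, seed=0):
    """Output-perturbation sketch: train non-privately, then add Gaussian
    noise scaled to the L2 sensitivity of the minimizer.  2L/(n*lam) is the
    textbook sensitivity bound for L-Lipschitz losses with lam-strongly
    convex regularization, not this paper's randomized sensitivity."""
    w = train_fn(X, y)                      # any non-private training routine
    n = len(X)
    sens = 2.0 * L / (n * lam)              # worst-case sensitivity of argmin
    sigma = sens * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return w + np.random.default_rng(seed).normal(0.0, sigma, size=w.shape)
```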
Curse of Dimensionality in Unconstrained Private Convex ERM
TLDR
Lower bounds for differentially private empirical risk minimization are considered for general convex functions and convex generalized linear models, including an $\Omega(p/(n\epsilon))$ lower bound for unconstrained pure-DP ERM which recovers the known result in the constrained case.
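For context, the constrained pure-DP lower bound the summary refers to is usually stated as follows (a hedged restatement, assuming 1-Lipschitz convex losses over the unit $\ell_2$ ball, with constants omitted):

$$\mathbb{E}\Big[\hat{L}(w^{\mathrm{priv}}) - \min_{\|w\|_2 \le 1} \hat{L}(w)\Big] \;=\; \Omega\Big(\min\Big\{1,\ \frac{p}{n\epsilon}\Big\}\Big).$$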
Towards Sharper Utility Bounds for Differentially Private Pairwise Learning
TLDR
This paper proposes a new differential privacy paradigm for pairwise learning based on gradient perturbation, and uses on-average stability and pairwise locally elastic stability to analyze the expectation bound and the high-probability bound.
Private Stochastic Non-convex Optimization with Improved Utility Rates
  • Qiuchen Zhang, Jing Ma, Jian Lou, Li Xiong
  • Computer Science
  • IJCAI
  • 2021
We study differentially private (DP) stochastic non-convex optimization with a focus on its understudied utility measures in terms of the expected excess empirical and population risks. While the…

References

Showing 1–10 of 34 references
Efficient Private ERM for Smooth Objectives
TLDR
An RRPSGD (Random Round Private Stochastic Gradient Descent) algorithm is proposed, which provably converges to a stationary point with a privacy guarantee and consistently outperforms existing methods in both utility and running time.
Private Convex Empirical Risk Minimization and High-dimensional Regression
We consider differentially private algorithms for convex empirical risk minimization (ERM). Differential privacy (Dwork et al., 2006b) is a recently introduced notion of privacy which guarantees that…
Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds
TLDR
This work provides new algorithms and matching lower bounds for differentially private convex empirical risk minimization, assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded.
Private Empirical Risk Minimization Beyond the Worst Case: The Effect of the Constraint Set Geometry
TLDR
It is shown that the geometric properties of the constraint set can be used to derive significantly better results for ERM; in particular, when the loss function is Lipschitz with respect to the $\ell_1$ norm, a differentially private version of the Frank-Wolfe algorithm gives error bounds of the form $\tilde{O}(n^{-2/3})$.
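A private Frank-Wolfe step over the $\ell_1$ ball is simple to sketch: each iteration solves the linear subproblem over the $2p$ vertices with report-noisy-max. This is a minimal illustration, not the cited paper's exact algorithm; the Laplace scale is schematic and would be calibrated by composition over the $T$ rounds.

```python
import numpy as np

def private_frank_wolfe(grad_fn, p, T, eps, C=1.0, seed=0):
    """Sketch of DP Frank-Wolfe over the l1 ball of radius C.  Each round
    picks a vertex +/- C*e_j by report-noisy-max on the gradient's
    coordinates; the Laplace scale below is schematic."""
    rng = np.random.default_rng(seed)
    w = np.zeros(p)
    for t in range(T):
        g = grad_fn(w)                           # gradient of empirical risk
        scores = np.concatenate([g, -g])         # <g, s> over the 2p vertices
        noisy = scores + rng.laplace(scale=T / eps, size=2 * p)
        k = int(np.argmin(noisy))                # noisy linear minimization
        s = np.zeros(p)
        s[k % p] = C if k < p else -C            # chosen vertex
        gamma = 2.0 / (t + 2.0)                  # standard FW step size
        w = (1.0 - gamma) * w + gamma * s
    return w
```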
Efficient Private Empirical Risk Minimization for High-dimensional Learning
TLDR
This paper theoretically studies differentially private empirical risk minimization in a projected subspace (the compressed domain), and shows that for the class of generalized linear functions, excess risk bounds can be obtained given only the projected data and the projection matrix.
Beyond worst-case analysis in private singular vector computation
TLDR
This work achieves its bounds by giving a robust analysis of the well-known power iteration algorithm, and proves a matching lower bound showing that the guarantee is nearly optimal for every setting of the coherence parameter.
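The robust analysis suggests why a simple private variant works: noise injected into each matrix-vector product perturbs, but does not derail, the iteration. A minimal sketch of such a noisy power method follows; `sigma` is left as a free parameter and would be calibrated to the matrix's sensitivity and the privacy budget by composition.

```python
import numpy as np

def noisy_power_iteration(A, T, sigma, seed=0):
    """Sketch of private top-singular-vector estimation: ordinary power
    iteration on A^T A with Gaussian noise added to each product.  sigma is
    schematic; a real implementation calibrates it to A's sensitivity and
    the (eps, delta) budget."""
    rng = np.random.default_rng(seed)
    p = A.shape[1]
    v = rng.normal(size=p)
    v /= np.linalg.norm(v)
    for _ in range(T):
        u = A.T @ (A @ v) + rng.normal(0.0, sigma, size=p)  # noisy step
        v = u / np.linalg.norm(u)
    return v
```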
(Nearly) Optimal Algorithms for Private Online Learning in Full-information and Bandit Settings
TLDR
The technique leads to the first private algorithms for online learning in the bandit setting, and in many cases the algorithms match the dependence on the input length of the optimal non-private regret bounds up to logarithmic factors in T.
Differentially Private Online Learning
TLDR
This paper provides a general framework to convert a given online convex programming (OCP) algorithm into a privacy-preserving OCP algorithm with good (sub-linear) regret, and shows that this framework can also be used to provide differentially private algorithms for offline learning.
A Proximal Stochastic Gradient Method with Progressive Variance Reduction
TLDR
This work proposes and analyzes a new proximal stochastic gradient method, which uses a multistage scheme to progressively reduce the variance of the stochastic gradient.
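The multistage scheme is the familiar SVRG recipe with a proximal step for the regularizer. A minimal non-private sketch follows (this is the building block that several of the DP algorithms above accelerate); `grad_i` and `prox` are assumed user-supplied.

```python
import numpy as np

def prox_svrg(grad_i, prox, w0, n, stages, m, lr, seed=0):
    """Prox-SVRG sketch: each stage computes the full gradient at a snapshot,
    then takes m inner steps with the variance-reduced estimate
    grad_i(w) - grad_i(w_snap) + full_grad, followed by a proximal step."""
    rng = np.random.default_rng(seed)
    w = w0.copy()
    for _ in range(stages):
        w_snap = w.copy()
        full_grad = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(m):
            i = int(rng.integers(n))
            v = grad_i(w, i) - grad_i(w_snap, i) + full_grad  # reduced variance
            w = prox(w - lr * v, lr)       # proximal step on the regularizer
        # Xiao & Zhang average the inner iterates; the last one is kept here
        # for brevity.
    return w
```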
Nearly Optimal Private LASSO
TLDR
This work presents a nearly optimal differentially private version of the well-known LASSO estimator, achieving its risk bound without polynomial dependence on p and under no additional assumptions on the design matrix.