Corpus ID: 235294227

Improved Rates for Differentially Private Stochastic Convex Optimization with Heavy-Tailed Data

@article{Kamath2022ImprovedRF,
  title={Improved Rates for Differentially Private Stochastic Convex Optimization with Heavy-Tailed Data},
  author={Gautam Kamath and Xingtu Liu and Huanyu Zhang},
  journal={ArXiv},
  year={2022},
  volume={abs/2106.01336}
}
We study stochastic convex optimization with heavy-tailed data under the constraint of differential privacy (DP). Most prior work on this problem is restricted to the case where the loss function is Lipschitz. Instead, as introduced by Wang, Xiao, Devadas, and Xu [WXDX20], we study general convex loss functions with the assumption that the distribution of gradients has bounded $k$-th moments. We provide improved upper bounds on the excess population risk under concentrated DP for convex and…
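
For orientation, the sketch below shows the generic clip-and-noise template that most of the works collected on this page build on: each per-sample gradient is clipped to a threshold so that heavy-tailed gradients have bounded influence, Gaussian noise is added, and a projected SGD step is taken. It is a minimal baseline under assumed placeholder hyperparameters (clip_C, noise_std, radius), not the algorithm analyzed in this paper.

import numpy as np

def clipped_dp_sgd(grad_fn, data, dim, steps, lr, clip_C, noise_std, radius):
    """Generic clipped-and-noised projected SGD sketch (illustrative only).

    grad_fn(theta, x) returns a per-sample gradient; clip_C, noise_std, and
    radius are placeholder hyperparameters that would be calibrated to the
    target (concentrated) DP guarantee via standard Gaussian-mechanism accounting.
    """
    rng = np.random.default_rng(0)
    theta = np.zeros(dim)
    n = len(data)
    for _ in range(steps):
        x = data[rng.integers(n)]                      # sample one record
        g = grad_fn(theta, x)
        norm = np.linalg.norm(g)
        if norm > clip_C:                              # clip the heavy-tailed gradient
            g = g * (clip_C / norm)
        g = g + rng.normal(0.0, noise_std, size=dim)   # Gaussian noise for privacy
        theta = theta - lr * g
        t_norm = np.linalg.norm(theta)
        if t_norm > radius:                            # project back onto the domain
            theta = theta * (radius / t_norm)
    return theta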

Private Stochastic Convex Optimization and Sparse Learning with Heavy-tailed Data Revisited

TLDR
This paper revisits the problem of Differentially Private Stochastic Convex Optimization (DP-SCO) with heavy-tailed data and proposes a novel robust and private mean estimator which is optimal.

DP-PCA: Statistically Optimal and Differentially Private PCA

We study the canonical statistical task of computing the principal component from n i.i.d. data in d dimensions under (ε, δ)-differential privacy. Although extensively studied in the literature, existing…

Differentially Private Regression with Unbounded Covariates

TLDR
Through the case of binary regression, this work captures the fundamental and widely studied models of logistic regression and linearly separable SVMs, learning an unbiased estimate of the true regression vector up to a scaling factor.

Differentially Private ℓ1-norm Linear Regression with Heavy-tailed Data

  • Di Wang, Jinhui Xu
  • Computer Science, Mathematics
    2022 IEEE International Symposium on Information Theory (ISIT)
  • 2022
TLDR
An algorithm based on the exponential mechanism is proposed, and it is shown that it is possible to achieve an upper bound of $\tilde{O}\left(\sqrt{\frac{d}{n\varepsilon}}\right)$ (with high probability).

High Dimensional Differentially Private Stochastic Optimization with Heavy-tailed Data

TLDR
This paper provides the first study on the problem of DP-SCO with heavy-tailed data in the high-dimensional space and proposes a truncated DP-IHT method whose output could achieve an error of $\tilde{O}\left(\frac{s^{*2}\log^2 d}{n\epsilon}\right)$, where $s^*$ is the sparsity of the underlying parameter.
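
For intuition, here is a minimal sketch of the iterative-hard-thresholding pattern that "truncated DP-IHT" refers to: a truncated, noised gradient step followed by keeping only the s largest coordinates. The truncation rule, step size, and noise level are assumptions for illustration, not the paper's exact choices.

import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude coordinates of v and zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def truncated_dp_iht(grad_fn, data, dim, s, steps, lr, clip_C, noise_std):
    """Illustrative private iterative hard thresholding (hyperparameters are placeholders)."""
    rng = np.random.default_rng(0)
    theta = np.zeros(dim)
    for _ in range(steps):
        # average per-sample gradients, each truncated coordinate-wise to [-clip_C, clip_C]
        g = np.mean([np.clip(grad_fn(theta, x), -clip_C, clip_C) for x in data], axis=0)
        g = g + rng.normal(0.0, noise_std, size=dim)   # Gaussian noise for privacy
        theta = hard_threshold(theta - lr * g, s)      # enforce s-sparsity
    return theta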

Private Stochastic Optimization in the Presence of Outliers: Optimal Rates for (Non-Smooth) Convex Losses and Extension to Non-Convex Losses

TLDR
The weaker assumption that stochastic gradients have bounded $k$-th moments for some $k \geq 2$ is made, allowing for significantly faster rates in the presence of outliers; an accelerated algorithm is also given that runs in linear time and yields improved and nearly optimal excess risk for smooth losses.

Efficient Private SCO for Heavy-Tailed Data via Clipping

TLDR
This paper derives the first high-probability bounds for a private stochastic method with clipping on heavy-tailed data under the guarantee of differential privacy (DP), and establishes new excess risk bounds without a bounded-domain assumption.

Beyond Uniform Lipschitz Condition in Differentially Private Optimization

TLDR
This work derives new convergence results for DP-SGD on both convex and nonconvex functions when the per-sample Lipschitz constants have bounded moments, and provides principled guidance on choosing the clip norm in DP-SGD for convex settings satisfying the relaxed version of Lipschitzness.

On Private Online Convex Optimization: Optimal Algorithms in 𝓁p-Geometry and High Dimensional Contextual Bandits

TLDR
This paper proposes a private variant of the online Frank-Wolfe algorithm with recursive gradients for variance reduction, which updates and reveals the parameters upon each datum, and designs the first DP algorithm for high-dimensional generalized linear bandits with logarithmic regret.
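
As a rough illustration of a private Frank-Wolfe step (not the recursive-gradient variant described above), the sketch below runs Frank-Wolfe over the ℓ1 ball, where the linear minimization oracle selects a single signed vertex and the coordinate selection is privatized with report-noisy-max-style Laplace noise; noise_scale and radius are assumed placeholders tied to the privacy budget.

import numpy as np

def dp_frank_wolfe_l1(grad_fn, data, dim, steps, radius, noise_scale):
    """Illustrative private Frank-Wolfe over the l1 ball of the given radius."""
    rng = np.random.default_rng(0)
    theta = np.zeros(dim)
    for t in range(steps):
        g = np.mean([grad_fn(theta, x) for x in data], axis=0)
        scores = np.abs(g) + rng.laplace(0.0, noise_scale, size=dim)  # noisy coordinate scores
        j = int(np.argmax(scores))                     # report-noisy-max coordinate choice
        vertex = np.zeros(dim)
        vertex[j] = -radius * np.sign(g[j])            # l1-ball vertex minimizing <g, v>
        eta = 2.0 / (t + 2.0)                          # standard Frank-Wolfe step size
        theta = (1.0 - eta) * theta + eta * vertex
    return theta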

New Lower Bounds for Private Estimation and a Generalized Fingerprinting Lemma

TLDR
New lower bounds for statistical estimation tasks under the constraint of $(\varepsilon, \delta)$-differential privacy are proved, and a tight $\Omega\left(\frac{d}{\alpha^2 \varepsilon}\right)$ lower bound for estimating the mean of a distribution with bounded covariance to $\alpha$-error in $\ell_2$-distance is shown.

References

Showing 1–10 of 49 references

Private Non-smooth ERM and SCO in Subquadratic Steps

TLDR
A (nearly) optimal bound on the excess empirical risk is obtained with $O\left(\frac{N^{3/2}}{d^{1/8}} + \frac{N^2}{d}\right)$ gradient queries, which is achieved with the help of subsampling and smoothing the function via convolution.

High Dimensional Differentially Private Stochastic Optimization with Heavy-tailed Data

TLDR
This paper provides the first study on the problem of DP-SCO with heavy-tailed data in the high-dimensional space and proposes a truncated DP-IHT method whose output could achieve an error of $\tilde{O}\left(\frac{s^{*2}\log^2 d}{n\epsilon}\right)$, where $s^*$ is the sparsity of the underlying parameter.

Private Non-smooth Empirical Risk Minimization and Stochastic Convex Optimization in Subquadratic Steps

TLDR
This work gets a (nearly) optimal bound on the excess empirical risk and excess population loss with subquadratic gradient complexity on the differentially private Empirical Risk Minimization and Stochastic Convex Optimization problems for non-smooth convex functions.

Non-Euclidean Differentially Private Stochastic Convex Optimization

Differentially private (DP) stochastic convex optimization (SCO) is a fundamental problem, where the goal is to approximately minimize the population risk with respect to a convex loss function…

Private Stochastic Convex Optimization: Optimal Rates in 𝓁1 Geometry

TLDR
The upper bound is based on a new algorithm that combines the iterative localization approach of Feldman et al. (2020a) with a new analysis of private regularized mirror descent and is achieved by a new variance-reduced version of the Frank-Wolfe algorithm that requires just a single pass over the data.

Wide Network Learning with Differential Privacy

TLDR
This work addresses the problem of differentially private Empirical Risk Minimization (ERM) for models that admit sparse gradients, and proposes a novel algorithm for privately training neural networks.

Efficient Privacy-Preserving Stochastic Nonconvex Optimization.

TLDR
A new differentially private stochastic gradient descent algorithm is proposed for nonconvex ERM that achieves strong privacy guarantees efficiently; it is extended to the distributed setting using secure multi-party computation, showing that a distributed algorithm can match the privacy and utility guarantees of a centralized algorithm in this setting.

On Differentially Private Stochastic Convex Optimization with Heavy-tailed Data

TLDR
This paper proposes a method based on the sample-and-aggregate framework, which has an excess population risk of $\tilde{O}(\frac{d^3}{n\epsilon^4})$ (after omitting other factors), and provides a gradient-smoothing-and-trimming-based scheme that achieves an improved excess population risk.
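
For readers unfamiliar with the sample-and-aggregate framework mentioned above, a minimal sketch follows: the data are split into disjoint blocks, a non-private estimator is run on each block, and the block estimates are clipped and averaged with Gaussian noise. This is a generic instantiation under assumed placeholder parameters (clip_R, noise_std), not the exact scheme of the paper.

import numpy as np

def sample_and_aggregate_mean(estimate_fn, data, num_blocks, clip_R, noise_std):
    """Illustrative sample-and-aggregate: aggregate per-block estimates privately."""
    rng = np.random.default_rng(0)
    blocks = np.array_split(np.asarray(data), num_blocks)
    estimates = []
    for block in blocks:
        est = estimate_fn(block)                 # non-private estimate on one block
        norm = np.linalg.norm(est)
        if norm > clip_R:                        # clip so a single block has bounded influence
            est = est * (clip_R / norm)
        estimates.append(est)
    agg = np.mean(estimates, axis=0)
    return agg + rng.normal(0.0, noise_std, size=agg.shape)  # Gaussian noise for privacy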

Differentially Private Assouad, Fano, and Le Cam

TLDR
This work establishes the optimal sample complexity of discrete distribution estimation under total variation distance and $\ell_2$ distance and provides lower bounds for several other distribution classes, including product distributions and Gaussian mixtures that are tight up to logarithmic factors.

Private Mean Estimation of Heavy-Tailed Distributions

TLDR
Algorithms are given for the multivariate setting whose sample complexity is a factor of $O(d)$ larger than in the univariate case, and for which the sample complexity is identical for all $k \geq 2$.
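
As a point of comparison for the heavy-tailed mean estimators discussed above, the following is a textbook clip-and-noise baseline rather than the paper's algorithm: each sample is projected onto a ball whose radius clip_R would be chosen from the assumed moment bound, and Gaussian noise calibrated to (ε, δ)-DP is added to the empirical mean.

import numpy as np

def private_heavy_tailed_mean(samples, clip_R, epsilon, delta):
    """Clip-and-noise mean estimator (baseline sketch, not the paper's method)."""
    rng = np.random.default_rng(0)
    X = np.asarray(samples, dtype=float)         # shape (n, d)
    n, d = X.shape
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X = X * np.minimum(1.0, clip_R / np.maximum(norms, 1e-12))  # project onto the clip_R ball
    sensitivity = 2.0 * clip_R / n               # l2 sensitivity of the clipped mean
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon  # Gaussian mechanism
    return X.mean(axis=0) + rng.normal(0.0, sigma, size=d)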