Bring Your Own Algorithm for Optimal Differentially Private Stochastic Minimax Optimization

@article{Zhang2022BringYO,
  title={Bring Your Own Algorithm for Optimal Differentially Private Stochastic Minimax Optimization},
  author={L. Zhang and Kiran Koshy Thekumparampil and Sewoong Oh and Niao He},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.00363}
}
We study differentially private (DP) algorithms for smooth stochastic minimax optimization, with stochastic minimization as a byproduct. The holy grail in these settings is to guarantee the optimal trade-off between privacy and excess population loss using an algorithm whose time complexity is linear in the number of training samples. We provide a general framework for solving differentially private stochastic minimax optimization (DP-SMO) problems, which enables practitioners to bring…
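For readers unfamiliar with the setting, a minimal noisy stochastic gradient descent-ascent loop for min_x max_y f(x, y; data) is sketched below. This is only an illustration of the generic DP-SMO template (per-sample gradient clipping plus Gaussian noise), not the framework proposed in the paper; the functions grad_x, grad_y and the constants eta, clip_norm, sigma are hypothetical placeholders.

# Hedged sketch: noisy stochastic gradient descent-ascent for a smooth
# minimax problem. All names and constants are illustrative, not the
# algorithm proposed in the paper.
import numpy as np

def clip(g, c):
    """Rescale gradient g to have l2 norm at most c (standard DP clipping)."""
    norm = np.linalg.norm(g)
    return g if norm <= c else g * (c / norm)

def dp_sgda(grad_x, grad_y, data, x, y, steps=1000, eta=0.1,
            clip_norm=1.0, sigma=1.0, rng=np.random.default_rng(0)):
    n = len(data)
    for _ in range(steps):
        z = data[rng.integers(n)]                 # sample one record
        gx = clip(grad_x(x, y, z), clip_norm)     # clipped stochastic gradients
        gy = clip(grad_y(x, y, z), clip_norm)
        # Gaussian noise scaled by the clipping norm; in practice sigma is set
        # from the target (eps, delta) via a privacy accountant.
        x = x - eta * (gx + sigma * clip_norm * rng.standard_normal(x.shape))
        y = y + eta * (gy + sigma * clip_norm * rng.standard_normal(y.shape))
    return x, y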


References


Private stochastic convex optimization: optimal rates in linear time

Two new techniques for deriving DP convex optimization algorithms are described, both achieving the optimal bound on excess loss and using O(min{n, n^2/d}) gradient computations.

Efficient Private ERM for Smooth Objectives

An RRPSGD (Random Round Private Stochastic Gradient Descent) algorithm is proposed, which provably converges to a stationary point with a privacy guarantee and consistently outperforms existing methods in both utility and running time.
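As a rough sketch of the random-round idea, one can run a noisy SGD loop for a randomly drawn number of rounds and return the final iterate. This is only an illustration of the scheme's shape under assumed placeholders (grad, eta, clip_norm, sigma), not the paper's exact algorithm or noise calibration.

# Hedged sketch: noisy SGD run for a random number of rounds.
import numpy as np

def random_round_private_sgd(grad, data, theta, max_rounds=1000, eta=0.05,
                             clip_norm=1.0, sigma=1.0,
                             rng=np.random.default_rng(0)):
    T = rng.integers(1, max_rounds + 1)           # random number of rounds
    n = len(data)
    for _ in range(T):
        z = data[rng.integers(n)]                 # sample one record
        g = grad(theta, z)
        norm = np.linalg.norm(g)
        if norm > clip_norm:                      # standard DP clipping
            g = g * (clip_norm / norm)
        theta = theta - eta * (g + sigma * clip_norm
                               * rng.standard_normal(theta.shape))
    return theta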

Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds

This work provides new algorithms and matching lower bounds for differentially private convex empirical risk minimization assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded.

Output Perturbation for Differentially Private Convex Optimization with Improved Population Loss Bounds, Runtimes and Applications to Private Adversarial Training

A completely general family of convex, Lipschitz loss functions is studied, and the first known DP excess risk and runtime bounds for optimizing this broad class are established; the theory quantifies trade-offs between adversarial robustness, privacy, and runtime.
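Output perturbation itself has a simple shape: solve the (regularized) ERM problem non-privately, then add noise calibrated to the sensitivity of its solution. The sketch below assumes an L-Lipschitz loss with strong convexity mu from the regularizer (giving the standard 2L/(n*mu) l2-sensitivity bound) and uses the Gaussian mechanism; erm_solver and all symbols are illustrative placeholders, not this paper's specific construction.

# Hedged sketch of generic output perturbation.
import numpy as np

def output_perturbation(erm_solver, data, L, mu, eps, delta,
                        rng=np.random.default_rng(0)):
    theta = erm_solver(data)                      # non-private ERM solution
    n = len(data)
    sensitivity = 2.0 * L / (n * mu)              # l2 sensitivity of the argmin
    # Gaussian-mechanism noise scale for (eps, delta)-DP.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return theta + sigma * rng.standard_normal(theta.shape)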

Optimal Algorithms for Differentially Private Stochastic Monotone Variational Inequalities and Saddle-Point Problems

This work shows that a stochastic approximation variant of these algorithms attains risk bounds vanishing as a function of the dataset size, with respect to the strong gap function, and that a sampling-with-replacement variant achieves optimal risk bounds with respect to a weak gap function.

Differentially Private SGDA for Minimax Problems

This paper proves that DP-SGDA can achieve an optimal utility rate in terms of the weak primal-dual population risk in both smooth and non-smooth cases, and provides its utility analysis in the nonconvex-strongly-concave setting.

Private Stochastic Convex Optimization: Optimal Rates in 𝓁1 Geometry

The upper bound is based on a new algorithm that combines the iterative localization approach of Feldman et al. (2020a) with a new analysis of private regularized mirror descent; a complementary bound is achieved by a new variance-reduced version of the Frank-Wolfe algorithm that requires just a single pass over the data.

Stability and Generalization of Differentially Private Minimax Problems

This paper focuses on the privacy of the general minimax setting, combining differential privacy with the minimax optimization paradigm, and theoretically analyzes the high-probability generalization performance of the differentially private minimax algorithm under the strongly-convex-strongly-concave condition.

Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings

This work provides the first method for non-smooth weakly convex stochastic optimization with rate Õ(1/n^{1/4} + d^{1/6}/(nε)^{1/3}), which matches the best existing non-private algorithm when d = O(√n).

Private Non-smooth ERM and SCO in Subquadratic Steps

A (nearly) optimal bound on the excess empirical risk is obtained with O(N^{3/2}/d^{1/8} + N^2/d) gradient queries, achieved with the help of subsampling and smoothing the function via convolution.
...