• Corpus ID: 247446907

# Distributed Saddle-Point Problems: Lower Bounds, Near-Optimal and Robust Algorithms

@inproceedings{Beznosikov2020DistributedSP,
  title={Distributed Saddle-Point Problems: Lower Bounds, Near-Optimal and Robust Algorithms},
  author={Aleksandr Beznosikov and Valentin Samokhin and Alexander V. Gasnikov},
  year={2020}
}
• Published 25 October 2020
• Computer Science
This paper focuses on the distributed optimization of stochastic saddle-point problems. The first part of the paper is devoted to lower bounds for centralized and decentralized distributed methods for smooth (strongly) convex-(strongly) concave saddle-point problems, as well as the near-optimal algorithms by which these bounds are achieved. Next, we present a new federated algorithm for centralized distributed saddle-point problems, Extra Step Local SGD. Theoretical analysis of the new method…
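The snippet cuts off before describing the method, but the name Extra Step Local SGD points at a well-known pattern: an extragradient ("extra step") update run locally on each worker, with periodic averaging as the only communication. The following is a minimal sketch of that pattern, not the authors' exact algorithm; grad_x(x, y, m) and grad_y(x, y, m) are assumed per-worker stochastic gradient oracles, and n_workers, local_steps, and the synchronization schedule are illustrative.

```python
# Hedged sketch of an "extra step" (extragradient) plus local-steps pattern
# for min_x max_y f(x, y). NOT the paper's exact Extra Step Local SGD:
# grad_x(x, y, m) and grad_y(x, y, m) are assumed stochastic gradient
# oracles of worker m returning NumPy arrays; hyperparameters are illustrative.
def extra_step_local_sgd(grad_x, grad_y, x0, y0, n_workers=4,
                         n_rounds=100, local_steps=10, lr=0.01):
    xs = [x0.copy() for _ in range(n_workers)]
    ys = [y0.copy() for _ in range(n_workers)]
    for _ in range(n_rounds):
        for m in range(n_workers):          # local work, no communication
            for _ in range(local_steps):
                # Extra (leading) step: move to an intermediate point.
                x_half = xs[m] - lr * grad_x(xs[m], ys[m], m)
                y_half = ys[m] + lr * grad_y(xs[m], ys[m], m)
                # Main step: update the original point with gradients
                # taken at the intermediate point.
                xs[m] = xs[m] - lr * grad_x(x_half, y_half, m)
                ys[m] = ys[m] + lr * grad_y(x_half, y_half, m)
        # Communication round: average all local iterates (star topology).
        x_avg = sum(xs) / n_workers
        y_avg = sum(ys) / n_workers
        xs = [x_avg.copy() for _ in range(n_workers)]
        ys = [y_avg.copy() for _ in range(n_workers)]
    return xs[0], ys[0]
```

The extra step is what makes this work on saddle points: on a bilinear objective such as $f(x, y) = x^\top A y$, plain simultaneous gradient descent ascent diverges for any constant stepsize, while the extragradient update converges for suitable stepsizes.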

## Citations

• Computer Science, 2022. This work generalizes the recently proposed sliding technique for the centralized problem; via a specific penalization method combined with this sliding, the authors obtain an algorithm for non-smooth decentralized saddle-point problems.
• Computer Science, Mathematics, 2022. This paper studies distributed saddle-point problems (SPP) with strongly-convex-strongly-concave smooth objectives whose composite terms, corresponding to the min and max variables, have different strong convexity and strong concavity parameters, plus a bilinear saddle-point part.
• Computer Science, ArXiv, 2022. To the best of the authors' knowledge, DREAM is the first algorithm whose SFO and communication complexities simultaneously achieve the optimal dependency on $\epsilon$ and $\lambda_2(W)$ for this problem.
• Computer Science, ICML, 2022. A generic approach is proposed that, building on optimal first-order methods, yields new zeroth-order algorithms for non-smooth convex optimization problems in a black-box fashion, with extensions to stochastic optimization, saddle-point problems, and distributed optimization (a generic two-point gradient estimator of the kind such reductions rely on is sketched after this list).
• Computer Science, ArXiv, 2022. A novel decentralized optimization algorithm, multi-consensus stochastic variance reduced extragradient, is proposed, which achieves the best known stochastic first-order oracle (SFO) complexity for this problem.
• Mathematics, Computer Science, ArXiv, 2022. It is shown that, for strongly monotone problems, linear convergence to a solution is achievable by a stochastic variational inequality method based on the SARAH variance reduction technique (the generic SARAH estimator is sketched after this list).
• Mathematics, ArXiv, 2022. This paper surveys methods for solving smooth (strongly) monotone stochastic variational inequalities, starting from the deterministic foundation on which the stochastic methods build…
• Computer Science, ArXiv, 2022. FedGDA-GT is proposed, an improved Federated (Fed) Gradient Descent Ascent (GDA) method based on Gradient Tracking (GT) that, with a constant stepsize, converges linearly to a global $\epsilon$-approximate solution within $O(\log(1/\epsilon))$ rounds of communication, matching the complexity of the centralized GDA method (a gradient-tracking sketch follows after this list).
• The ProxSkip-VIP algorithm is proposed, which generalizes the original ProxSkip framework to variational inequality problems (VIPs), and it is explained how the approach achieves acceleration in communication complexity over existing state-of-the-art FL algorithms.
• Computer Science, ArXiv, 2022. This work proves the first high-probability complexity results with logarithmic dependence on the confidence level for stochastic methods solving monotone and structured non-monotone VIPs with non-sub-Gaussian (heavy-tailed) noise and unbounded domains.
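The zeroth-order entry above builds new algorithms by replacing first-order oracles with gradient estimates obtained from function values alone. As a generic illustration, here is a standard two-point randomized estimator in a Euclidean setup; the cited paper's exact smoothing scheme may differ, and f (a function-value oracle) and mu (a smoothing radius) are names assumed for this sketch.

```python
import numpy as np

# Standard two-point randomized gradient estimator often used to turn
# first-order methods into zeroth-order ones; the cited paper's exact
# smoothing scheme may differ. f is an assumed function-value oracle and
# mu a smoothing radius; both names are illustrative.
def two_point_grad_estimate(f, x, mu=1e-4, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)          # uniform random direction on the sphere
    # Directional finite difference along e, rescaled by the dimension.
    return x.size * (f(x + mu * e) - f(x - mu * e)) / (2 * mu) * e
```

Plugging such an estimator into a first-order scheme typically costs an extra dimension factor in the complexity, which is the usual price of the zeroth-order reduction.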
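The SARAH-based entry above relies on a recursive variance-reduced gradient estimator. The following sketch shows the generic SARAH recursion for finite-sum minimization; the cited work applies the same estimator inside a variational inequality method, which is not reproduced here, and all names (grads, sarah_epoch) are illustrative.

```python
import numpy as np

# Generic SARAH recursive gradient estimator, sketched for finite-sum
# minimization; the cited work uses the same estimator inside a variational
# inequality method, which is not reproduced here. grads is an assumed list
# of per-sample gradient oracles; all names are illustrative.
def sarah_epoch(grads, w0, lr=0.01, inner_steps=50, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n = len(grads)
    w_prev = w0.copy()
    v = sum(g(w_prev) for g in grads) / n   # full gradient at the anchor
    w = w_prev - lr * v
    for _ in range(inner_steps):
        i = rng.integers(n)
        # Recursive update: slightly biased, but with vanishing variance
        # as consecutive iterates get close.
        v = grads[i](w) - grads[i](w_prev) + v
        w_prev, w = w, w - lr * v
    return w
```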

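The FedGDA-GT entry above combines gradient descent ascent with gradient tracking. The sketch below shows the tracking recursion inside a simultaneous GDA loop with full averaging every round; it is a minimal illustration under assumed names (grads_x, grads_y), not the paper's exact update. Tracking pays off chiefly under heterogeneous clients or partial mixing, where plain averaging of local gradients drifts.

```python
# Hedged sketch of gradient tracking inside a simultaneous gradient
# descent ascent loop; the max variable is handled symmetrically to the
# min variable. grads_x[m](x, y) / grads_y[m](x, y) are assumed local
# gradient oracles of client m returning NumPy arrays; names are illustrative.
def gda_gradient_tracking(grads_x, grads_y, x0, y0, n_rounds=200, lr=0.05):
    M = len(grads_x)
    x, y = x0.copy(), y0.copy()
    # Trackers are initialized to the current local gradients.
    tx = [g(x, y) for g in grads_x]
    ty = [g(x, y) for g in grads_y]
    gx_old, gy_old = [t.copy() for t in tx], [t.copy() for t in ty]
    for _ in range(n_rounds):
        # Descent/ascent step driven by the averaged trackers, which by the
        # tracking invariant equal the average of the latest local gradients.
        x_new = x - lr * sum(tx) / M
        y_new = y + lr * sum(ty) / M
        for m in range(M):
            gx_new = grads_x[m](x_new, y_new)
            gy_new = grads_y[m](x_new, y_new)
            tx[m] += gx_new - gx_old[m]   # tracking correction
            ty[m] += gy_new - gy_old[m]
            gx_old[m], gy_old[m] = gx_new, gy_new
        x, y = x_new, y_new
    return x, y
```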