Distributed Saddle-Point Problems: Lower Bounds, Near-Optimal and Robust Algorithms
@inproceedings{Beznosikov2020DistributedSP,
  title  = {Distributed Saddle-Point Problems: Lower Bounds, Near-Optimal and Robust Algorithms},
  author = {Aleksandr Beznosikov and Valentin Samokhin and Alexander V. Gasnikov},
  year   = {2020}
}
This paper focuses on the distributed optimization of stochastic saddle-point problems. The first part of the paper is devoted to lower bounds for centralized and decentralized distributed methods for smooth (strongly) convex-(strongly) concave saddle-point problems, as well as the near-optimal algorithms by which these bounds are achieved. Next, we present a new federated algorithm for centralized distributed saddle-point problems, Extra Step Local SGD. Theoretical analysis of the new method…
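For intuition, here is a minimal single-process simulation of the extragradient-with-local-steps pattern described above; the oracle names grad_x/grad_y, the scheduling, and the single stepsize lr are illustrative assumptions, not the paper's actual pseudocode.

def extra_step_local_sgd(grad_x, grad_y, x0, y0, workers, local_steps, rounds, lr):
    # Hypothetical sketch: each worker runs extragradient ("extra step")
    # updates on its local copy, and a communication round periodically
    # replaces all local iterates with their average. x0, y0: numpy arrays.
    X = [x0.copy() for _ in range(workers)]
    Y = [y0.copy() for _ in range(workers)]
    for _ in range(rounds):
        for m in range(workers):
            for _ in range(local_steps):
                # extra (leading) step: probe the gradient field
                x_half = X[m] - lr * grad_x(m, X[m], Y[m])
                y_half = Y[m] + lr * grad_y(m, X[m], Y[m])
                # main step: re-evaluate the gradients at the probe point
                X[m] = X[m] - lr * grad_x(m, x_half, y_half)
                Y[m] = Y[m] + lr * grad_y(m, x_half, y_half)
        # communication: average the local iterates across workers
        x_avg, y_avg = sum(X) / workers, sum(Y) / workers
        X = [x_avg.copy() for _ in range(workers)]
        Y = [y_avg.copy() for _ in range(workers)]
    return X[0], Y[0]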
11 Citations
The Mirror-Prox Sliding Method for Non-smooth decentralized saddle-point problems
- Computer Science
- 2022
This work generalizes the recently proposed sliding technique for the centralized problem; through a specific penalization method combined with this sliding, the authors obtain an algorithm for non-smooth decentralized saddle-point problems.
Decentralized Saddle-Point Problems with Different Constants of Strong Convexity and Strong Concavity
- Computer Science, Mathematics
- 2022
This paper studies distributed saddle-point problems (SPPs) with strongly-convex-strongly-concave smooth objectives whose composite terms, corresponding to the min and max variables, have different strong convexity and strong concavity parameters, plus a bilinear saddle-point part.
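A common template for this problem class, with notation assumed from the summary rather than copied from the paper:

\min_{x} \max_{y} \; \frac{1}{M} \sum_{m=1}^{M} f_m(x, y),
\qquad
f_m(x, y) = g_m(x) + \langle A_m x, y \rangle - h_m(y),

where the composite terms g_m are \mu_x-strongly convex and h_m are \mu_y-strongly convex (so each f_m is \mu_y-strongly concave in y), with \mu_x \neq \mu_y in general, and \langle A_m x, y \rangle is the bilinear saddle-point part.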
A Simple and Efficient Stochastic Algorithm for Decentralized Nonconvex-Strongly-Concave Minimax Optimization
- Computer Science, ArXiv
- 2022
To the best of the authors' knowledge, DREAM is the first algorithm whose SFO and communication complexities simultaneously achieve the optimal dependency on ε and λ₂(W) for this problem.
The power of first-order smooth optimization for black-box non-smooth problems
- Computer Science, ICML
- 2022
A generic approach is proposed that, based on optimal first-order methods, allows one to obtain new zeroth-order algorithms for non-smooth convex optimization problems in a black-box fashion, with extensions to stochastic optimization, saddle-point problems, and distributed optimization.
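The standard building block behind such reductions is a two-point randomized-smoothing gradient estimator; a minimal sketch, illustrative of the generic construction rather than the paper's exact estimator:

import numpy as np

def two_point_grad(f, x, tau, rng):
    # Sample a direction e uniformly on the unit sphere; the scaled finite
    # difference of f along e is an unbiased estimate of the gradient of a
    # smoothed surrogate of f, so first-order methods can be run on it.
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)
    return (x.size / (2.0 * tau)) * (f(x + tau * e) - f(x - tau * e)) * e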
Decentralized Stochastic Variance Reduced Extragradient Method
- Computer Science, ArXiv
- 2022
A novel decentralized optimization algorithm, called multi-consensus stochastic variance reduced extragradient, is proposed; it achieves the best known stochastic first-order oracle (SFO) complexity for this problem.
SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum Cocoercive Variational Inequalities
- Mathematics, Computer Science, ArXiv
- 2022
It is shown that for strongly monotone problems, linear convergence to a solution can be achieved by a stochastic method for finite-sum cocoercive variational inequalities based on the SARAH variance reduction technique.
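The heart of the SARAH technique is a recursive estimator of the operator F = (1/n) \sum_i F_i; in standard (assumed) notation:

v^{0} = F(z^{0}), \qquad
v^{t} = F_{i_t}(z^{t}) - F_{i_t}(z^{t-1}) + v^{t-1},
\quad i_t \sim \mathrm{Uniform}\{1, \dots, n\},

with a full recomputation of v at the start of each epoch; the method then steps along v^t instead of a plain stochastic sample, which is what drives the variance down and enables linear convergence.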
Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems - Survey
- Mathematics, ArXiv
- 2022
This paper is a survey of methods for solving smooth (strongly) monotone stochastic variational inequalities. To begin with, we give the deterministic foundation from which the stochastic methods…
A Communication-efficient Algorithm with Linear Convergence for Federated Minimax Learning
- Computer Science, ArXiv
- 2022
FedGDA-GT is proposed, an improved Federated (Fed) Gradient Descent Ascent (GDA) method based on Gradient Tracking (GT); it converges linearly with a constant stepsize to a global ε-approximate solution within O(log(1/ε)) rounds of communication, matching the time complexity of the centralized GDA method.
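For reference, the generic gradient-tracking recursion that the GT component builds on, shown in its standard decentralized form (FedGDA-GT's exact server/client updates may differ):

x_i^{k+1} = \sum_{j} w_{ij}\, x_j^{k} - \eta\, y_i^{k}, \qquad
y_i^{k+1} = \sum_{j} w_{ij}\, y_j^{k} + \nabla f_i(x_i^{k+1}) - \nabla f_i(x_i^{k}),

so each tracker y_i^k follows the network-average gradient, which is what permits a constant stepsize and linear convergence despite heterogeneity.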
ProxSkip for Stochastic Variational Inequalities: A Federated Learning Algorithm for Provable Communication Acceleration
- Computer Science
- 2022
The ProxSkip-VIP algorithm is proposed, which generalizes the original ProxSkip framework to VIPs, and it is explained how the approach achieves acceleration in communication complexity over existing state-of-the-art FL algorithms.
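A sketch of the underlying ProxSkip mechanism with the gradient replaced by an operator call F, which is the general shape of a VIP extension (the actual ProxSkip-VIP updates may differ in details):

def proxskip_step(x, h, F, prox, gamma, p, rng):
    # One ProxSkip-style step. F: (stochastic) operator; prox(z, s): proximal
    # map of the regularizer with stepsize s; p: probability of actually
    # performing the expensive prox (communication) step; h: control variate.
    x_hat = x - gamma * (F(x) - h)          # operator step, drift-corrected by h
    if rng.random() < p:                    # prox/communication happens rarely
        x_new = prox(x_hat - (gamma / p) * h, gamma / p)
    else:                                   # otherwise the prox is skipped
        x_new = x_hat
    h_new = h + (p / gamma) * (x_new - x_hat)   # update the control variate
    return x_new, h_new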
Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise
- Computer Science, ArXiv
- 2022
This work proves the first high-probability complexity results with logarithmic dependence on the confidence level for stochastic methods for solving monotone and structured non-monotone VIPs with non-sub-Gaussian (heavy-tailed) noise and unbounded domains.
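The central primitive in these methods is the clipping operator, which keeps each update bounded even when the noise has unbounded variance; a minimal version (the paper's clipping-level schedule is not reproduced here):

import numpy as np

def clip(g, lam):
    # clip(g, lam) = min(1, lam / ||g||) * g: rescale g onto the ball of
    # radius lam if it is too long, otherwise leave it unchanged.
    norm = np.linalg.norm(g)
    return g if norm <= lam else (lam / norm) * g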
References
Showing 1-10 of 57 references
Decentralized Distributed Optimization for Saddle Point Problems
- Computer Science, Mathematics, ArXiv
- 2021
The work of the proposed algorithm is illustrated on the prominent problem of computing Wasserstein barycenters (WB), where a non-Euclidean proximal setup arises naturally in a bilinear saddle-point reformulation of the WB problem.
A Decentralized Proximal Point-type Method for Saddle Point Problems
- Computer Science, Mathematics, ArXiv
- 2019
This paper proposes a decentralized variant of the proximal point method, the first decentralized algorithm with theoretical guarantees for solving non-convex-non-concave decentralized saddle-point problems; numerical results for training a generative adversarial network in a decentralized manner match the theoretical guarantees.
A decentralized algorithm for large scale min-max problems
- Computer Science, 2020 59th IEEE Conference on Decision and Control (CDC)
- 2020
This work proposes a decentralized algorithm based on the Extragradient method, whose centralized implementation has been shown to achieve good performance on a wide range of min-max problems, and shows that the proposed method achieves linear convergence under suitable assumptions.
Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks
- Computer Science, ICML
- 2017
The efficiency of MSDA is verified against state-of-the-art methods on two problems: least-squares regression and classification by logistic regression.
Lectures on modern convex optimization - analysis, algorithms, and engineering applications
- Computer Science, MPS-SIAM Series on Optimization
- 2001
The authors present the basic theory of state-of-the-art polynomial time interior point methods for linear, conic quadratic, and semidefinite programming as well as their numerous applications in engineering.
Efficient Algorithms for Federated Saddle Point Optimization
- Computer Science, ArXiv
- 2021
This work designs an algorithm that can harness the benefit of similarity among the clients while recovering Minibatch Mirror-Prox performance under arbitrary heterogeneity (up to log factors), giving the first federated minimax optimization algorithm that achieves this goal.
Distributed Subgradient Methods for Multi-Agent Optimization
- Mathematics, Computer Science, IEEE Transactions on Automatic Control
- 2009
The authors' convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.
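The classic update analyzed in this line of work combines a consensus (mixing) step with a local subgradient step; in standard (assumed) notation:

x_i^{k+1} = \sum_{j=1}^{n} w_{ij}\, x_j^{k} - \alpha_k\, g_i^{k},
\qquad g_i^{k} \in \partial f_i(x_i^{k}),

where W = (w_{ij}) is a doubly stochastic mixing matrix and the stepsize sequence \alpha_k controls the accuracy/iteration-count tradeoff.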
Solving variational inequalities with Stochastic Mirror-Prox algorithm
- Mathematics, Computer Science
- 2008
A novel Stochastic Mirror-Prox algorithm is developed for solving stochastic variational inequalities (s.v.i.) with monotone operators, and it is shown that with a suitable stepsize strategy it attains the optimal rates of convergence with respect to the problem parameters.
Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Mathematics, Computer Science, SIAM J. Optim.
- 2004
We propose a prox-type method with efficiency estimate $O(\epsilon^{-1})$ for approximating saddle points of convex-concave $C^{1,1}$ functions and solutions of variational inequalities with monotone…
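In the Euclidean case the prox-method reduces to the extragradient update (the general method replaces the projection \Pi_{\mathcal{Z}} with a Bregman prox-mapping; notation assumed):

w^{t+1/2} = \Pi_{\mathcal{Z}}\big(w^{t} - \gamma\, F(w^{t})\big), \qquad
w^{t+1} = \Pi_{\mathcal{Z}}\big(w^{t} - \gamma\, F(w^{t+1/2})\big),

where for a saddle-point problem F(x, y) = (\nabla_x f(x, y), -\nabla_y f(x, y)) and the ergodic average of the w^{t+1/2} iterates enjoys the O(1/t) rate.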
Dual extrapolation and its applications to solving variational inequalities and related problems
- Mathematics, Computer Science, Math. Program.
- 2007
This paper shows that with an appropriate step-size strategy, the proposed method is optimal both for Lipschitz-continuous operators and for operators with bounded variation.