Corpus ID: 247446907

Distributed Saddle-Point Problems: Lower Bounds, Near-Optimal and Robust Algorithms

@inproceedings{Beznosikov2020DistributedSP,
  title={Distributed Saddle-Point Problems: Lower Bounds, Near-Optimal and Robust Algorithms},
  author={Aleksandr Beznosikov and Valentin Samokhin and Alexander V. Gasnikov},
  year={2020}
}
This paper focuses on the distributed optimization of stochastic saddle point problems. The first part of the paper is devoted to lower bounds for centralized and decentralized distributed methods for smooth (strongly) convex-(strongly) concave saddle-point problems, as well as the near-optimal algorithms by which these bounds are achieved. Next, we present a new federated algorithm for centralized distributed saddle point problems – Extra Step Local SGD. Theoretical analysis of the new method… 
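The name of the proposed method refers to the extra-step (extragradient) update. The fragment below is a minimal single-node sketch of that update on a toy strongly-convex-strongly-concave objective; the local iterations and periodic averaging across workers of the actual Extra Step Local SGD algorithm are omitted, and the objective, step size and noise level are assumptions made only to keep the example runnable.

import numpy as np

rng = np.random.default_rng(0)
d, mu, step = 5, 0.2, 0.1
A = rng.standard_normal((d, d))
A /= np.linalg.norm(A, 2)            # f(x, y) = x^T A y + mu/2 ||x||^2 - mu/2 ||y||^2
x, y = rng.standard_normal(d), rng.standard_normal(d)

def stochastic_grads(x, y, noise=0.01):
    gx = A @ y + mu * x + noise * rng.standard_normal(d)     # gradient in x
    gy = A.T @ x - mu * y + noise * rng.standard_normal(d)   # gradient in y
    return gx, gy

for _ in range(1000):
    gx, gy = stochastic_grads(x, y)
    xh, yh = x - step * gx, y + step * gy    # the "extra" (extrapolation) step
    gx, gy = stochastic_grads(xh, yh)
    x, y = x - step * gx, y + step * gy      # update with gradients at the extrapolated point

print("distance to the saddle point (0, 0):", np.linalg.norm(x), np.linalg.norm(y))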


The Mirror-Prox Sliding Method for Non-smooth decentralized saddle-point problems

This work generalizes a recently proposed sliding method from the centralized setting; combining this sliding with a specific penalization technique, the authors obtain an algorithm for non-smooth decentralized saddle-point problems.

Decentralized Saddle-Point Problems with Different Constants of Strong Convexity and Strong Concavity

This paper studies distributed saddle-point problems (SPP) with strongly-convex-strongly-concave smooth objectives in which the composite terms corresponding to the min and max variables have different strong convexity and strong concavity parameters, together with a bilinear saddle-point part.

The power of first-order smooth optimization for black-box non-smooth problems

A generic approach is proposed that, building on optimal first-order methods, yields new zeroth-order algorithms for non-smooth convex optimization problems in a black-box fashion, with extensions to stochastic optimization problems, saddle-point problems, and distributed optimization.
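The core primitive behind such black-box constructions is a gradient estimator built from function values only. Below is a hedged sketch of the standard two-point randomized estimator used inside a plain gradient descent loop; the smooth quadratic test function, smoothing radius tau and step size are assumptions chosen for a runnable toy, not the exact construction of the cited paper.

import numpy as np

rng = np.random.default_rng(1)

def zo_grad(f, x, tau=1e-4):
    # d * (f(x + tau*e) - f(x - tau*e)) / (2*tau) * e, with e uniform on the unit sphere.
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)
    return x.size * (f(x + tau * e) - f(x - tau * e)) / (2 * tau) * e

f = lambda z: 0.5 * np.dot(z, z)      # gradient is z, so the estimate is easy to check
x = rng.standard_normal(10)
for _ in range(500):
    x = x - 0.05 * zo_grad(f, x)      # gradient descent driven by function values only
print("f(x) after zeroth-order descent:", f(x))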

Decentralized Stochastic Variance Reduced Extragradient Method

A novel decentralized optimization algorithm, called multi-consensus stochastic variance reduced extragradient, is proposed, which achieves the best known stochastic first-order oracle (SFO) complexity for this problem.

SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum Cocoercive Variational Inequalities

It is shown that for strongly monotone problems, linear convergence to a solution can be achieved by a stochastic method for variational inequalities based on the SARAH variance reduction technique.
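As a rough illustration of the SARAH idea in the variational-inequality setting, the sketch below maintains a recursive estimate of a finite-sum operator and plugs it into a plain forward step; the quadratic component operators, step size and restart period are assumptions for a runnable toy, not the cited method itself.

import numpy as np

rng = np.random.default_rng(2)
n, d = 20, 5
As = []
for _ in range(n):
    M = rng.standard_normal((d, d))
    As.append(M @ M.T / d)            # symmetric PSD component: F_i(z) = A_i z is cocoercive

def F_i(i, z):
    return As[i] @ z

def F(z):
    return sum(F_i(i, z) for i in range(n)) / n

z = rng.standard_normal(d)
step, epoch = 0.05, n
for t in range(1000):
    if t % epoch == 0:
        v = F(z)                          # periodic full-operator evaluation (restart)
    z_prev, z = z, z - step * v           # forward step using the current estimate
    i = rng.integers(n)
    v = v + F_i(i, z) - F_i(i, z_prev)    # SARAH recursion: correct the previous estimate

print("||F(z)|| after SARAH-style iterations:", np.linalg.norm(F(z)))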

Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems - Survey

This paper is a survey of methods for solving smooth (strongly) monotone stochastic variational inequalities. To begin with, we give the deterministic foundation from which the stochastic methods…

A Communication-efficient Algorithm with Linear Convergence for Federated Minimax Learning

FedGDA-GT is proposed, an improved Federated (Fed) Gradient Descent Ascent (GDA) method based on Gradient Tracking (GT) that converges linearly with a constant stepsize to a global ε-approximate solution within O(log(1/ε)) rounds of communication, which matches the time complexity of the centralized GDA method.

Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise

This work proves the first high-probability complexity results with logarithmic dependence on the confidence level for stochastic methods for solving monotone and structured non-monotone VIPs with non-sub-Gaussian (heavy-tailed) noise and unbounded domains.
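A hedged toy illustration of the clipping mechanism: the operator values are rescaled to a fixed norm level before each step, which tames occasional huge heavy-tailed samples. The strongly monotone linear operator, the Student-t noise, the clipping level and the plain forward iteration are assumptions, not the exact algorithm of the paper.

import numpy as np

rng = np.random.default_rng(3)

def clip(g, lam):
    # Rescale g so that its norm is at most lam.
    norm = np.linalg.norm(g)
    return g if norm <= lam else g * (lam / norm)

M = rng.standard_normal((4, 4))
A = M - M.T + 0.5 * np.eye(4)        # strongly monotone operator F(z) = A z
z = rng.standard_normal(4)
for _ in range(3000):
    noise = rng.standard_t(df=2, size=4)      # heavy-tailed noise with unbounded variance
    g = A @ z + noise                         # stochastic operator value
    z = z - 0.01 * clip(g, lam=5.0)           # clipped forward step

print("||z|| after clipped iterations:", np.linalg.norm(z))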

A Unified Analysis of Variational Inequality Methods: Variance Reduction, Sampling, Quantization and Coordinate Descent

A unified analysis of methods for solving variational inequalities: variance reduction, sampling, quantization, and coordinate descent. © 2021 A. N. Beznosikov, A. V. Gasnikov, K. E. Zaynullina, …

References

Showing 1-10 of 57 references

Decentralized Distributed Optimization for Saddle Point Problems

The proposed algorithm is illustrated on the prominent problem of computing Wasserstein barycenters (WB), where a non-Euclidean proximal setup arises naturally in a bilinear saddle-point reformulation of the WB problem.

A Decentralized Proximal Point-type Method for Saddle Point Problems

This paper proposes a decentralized variant of the proximal point method, which is the first decentralized algorithm with theoretical guarantees for solving non-convex-non-concave decentralized saddle-point problems; numerical results for training a generative adversarial network in a decentralized manner match the theoretical guarantees.

A decentralized algorithm for large scale min-max problems

This work proposes a decentralized algorithm based on the Extragradient method, whose centralized implementation has been shown to achieve good performance on a wide range of min-max problems, and shows that the proposed method achieves linear convergence under suitable assumptions.
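For a rough picture of how such a method is structured, the sketch below combines a gossip (consensus) step over a ring of nodes with the two extragradient steps on each node's local objective; the topology, mixing matrix, regularized bilinear local losses and step size are illustrative assumptions rather than the cited algorithm.

import numpy as np

rng = np.random.default_rng(4)
m, d, mu, step = 6, 3, 0.5, 0.2
Bs = rng.standard_normal((m, d, d)) / d     # f_i(x, y) = x^T B_i y + mu/2 ||x||^2 - mu/2 ||y||^2
X, Y = rng.standard_normal((m, d)), rng.standard_normal((m, d))

# Doubly stochastic mixing matrix of a ring: each node averages with its two neighbours.
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = W[i, (i - 1) % m] = W[i, (i + 1) % m] = 1 / 3

def local_grads(X, Y):
    Gx = np.einsum("ijk,ik->ij", Bs, Y) + mu * X    # row i is B_i y_i + mu x_i
    Gy = np.einsum("ijk,ij->ik", Bs, X) - mu * Y    # row i is B_i^T x_i - mu y_i
    return Gx, Gy

for _ in range(500):
    X, Y = W @ X, W @ Y                              # gossip (consensus) step
    Gx, Gy = local_grads(X, Y)
    Xh, Yh = X - step * Gx, Y + step * Gy            # extrapolation
    Gx, Gy = local_grads(Xh, Yh)
    X, Y = X - step * Gx, Y + step * Gy              # extragradient update

print("consensus error:", np.linalg.norm(X - X.mean(0)),
      "distance of the average to the saddle point:", np.linalg.norm(X.mean(0)))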

Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems

It is proved that for strongly convex problems, O(1/t^2) is the best possible convergence rate, while it is known that gradient methods can have linear convergence on unconstrained problems.

Multi-consensus Decentralized Accelerated Gradient Descent

A novel algorithm is proposed that can achieve near optimal communication complexity, matching the known lower bound up to a logarithmic factor of the condition number of the problem.

Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks

The efficiency of MSDA is verified against state-of-the-art methods on two problems: least-squares regression and classification by logistic regression.

Lectures on modern convex optimization - analysis, algorithms, and engineering applications

The authors present the basic theory of state-of-the-art polynomial time interior point methods for linear, conic quadratic, and semidefinite programming as well as their numerous applications in engineering.

Efficient Algorithms for Federated Saddle Point Optimization

This work designs an algorithm that can harness the benefit of similarity between the clients while recovering Minibatch Mirror-Prox performance under arbitrary heterogeneity (up to log factors), giving the first federated minimax optimization algorithm that achieves this goal.

Distributed Subgradient Methods for Multi-Agent Optimization

The authors' convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.

Solving variational inequalities with Stochastic Mirror-Prox algorithm

A novel Stochastic Mirror-Prox algorithm is developed for solving stochastic variational inequalities (s.v.i.) with monotone operators, and it is shown that with a convenient stepsize strategy it attains the optimal rates of convergence with respect to the problem parameters.
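As a toy illustration of the mirror-prox update with the entropy mirror map, the sketch below solves a small matrix game on the probability simplex with noisy gradients; the payoff matrix, step size, noise level and the ergodic averaging details are assumptions kept only to make the example runnable.

import numpy as np

rng = np.random.default_rng(5)
n = 8
A = rng.standard_normal((n, n))       # matrix game: min_x max_y x^T A y over the simplex
x, y = np.ones(n) / n, np.ones(n) / n
x_avg, y_avg = np.zeros(n), np.zeros(n)
step, T = 0.1, 2000

def entropic_step(p, g, gamma):
    # Prox step for the entropy mirror map: multiplicative weights plus renormalisation.
    q = p * np.exp(-gamma * g)
    return q / q.sum()

for _ in range(T):
    noise = 0.1 * rng.standard_normal((n, n))
    gx, gy = (A + noise) @ y, (A + noise).T @ x                        # stochastic gradients
    u, v = entropic_step(x, gx, step), entropic_step(y, -gy, step)     # extrapolation point
    gx, gy = (A + noise) @ v, (A + noise).T @ u
    x, y = entropic_step(x, gx, step), entropic_step(y, -gy, step)     # update from the old prox centre
    x_avg, y_avg = x_avg + u / T, y_avg + v / T                        # ergodic (averaged) iterates

gap = (A.T @ x_avg).max() - (A @ y_avg).min()   # duality gap of the averaged strategies
print("duality gap of the matrix game:", gap)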
...