• Corpus ID: 246634379

Optimal Algorithms for Decentralized Stochastic Variational Inequalities

  • Dmitry Kovalev, Aleksandr Beznosikov, Abdurakhmon Sadiev, Michael Persiianov, Peter Richtárik, Alexander V. Gasnikov
Variational inequalities are a formalism that includes games, minimization, saddle point, and equilibrium problems as special cases. Methods for variational inequalities are therefore universal approaches for many applied tasks, including machine learning problems. This work concentrates on the decentralized setting, which is increasingly important but not well understood. In particular, we consider decentralized stochastic (sum-type) variational inequalities over fixed and time-varying… 
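For readers new to the formalism, here is a minimal sketch of the classical centralized, deterministic extragradient method on a regularized bilinear saddle-point problem. The operator, the regularization parameter `mu`, and the step-size choice are illustrative assumptions for this sketch, not the paper's decentralized algorithm:

```python
import numpy as np

# Extragradient (Korpelevich) sketch for a strongly monotone VI:
# find z* such that <F(z*), z - z*> >= 0 for all z.  Here F is the
# operator of the regularized bilinear saddle-point problem
#   min_x max_y  (mu/2)||x||^2 + x^T A y - (mu/2)||y||^2,
# i.e. F(x, y) = (mu*x + A y, mu*y - A^T x), monotone with modulus mu.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
mu = 0.5

def F(z):
    x, y = z[:3], z[3:]
    return np.concatenate([mu * x + A @ y, mu * y - A.T @ x])

L = np.linalg.norm(A, 2) + mu      # Lipschitz constant of F
eta = 0.1 / L                      # step size well below 1/L
z = rng.standard_normal(6)
for _ in range(5000):
    z_half = z - eta * F(z)        # extrapolation (look-ahead) step
    z = z - eta * F(z_half)        # update with the look-ahead operator

# The unique solution is z* = 0, so ||z|| measures the error.
print(np.linalg.norm(z))
```

The extrapolation step is what separates this from plain gradient descent-ascent, which diverges on bilinear games.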

Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems - Survey

This paper is a survey of methods for solving smooth (strongly) monotone stochastic variational inequalities. To begin with, the authors give the deterministic foundation from which the stochastic methods are derived.

A Simple and Efficient Stochastic Algorithm for Decentralized Nonconvex-Strongly-Concave Minimax Optimization

To the best of the authors' knowledge, DREAM is the first algorithm whose SFO and communication complexities simultaneously achieve the optimal dependency on ε and λ₂(W) for this problem.

Decentralized Saddle-Point Problems with Different Constants of Strong Convexity and Strong Concavity

This paper studies distributed saddle-point problems (SPPs) with strongly-convex-strongly-concave smooth objectives in which the composite terms corresponding to the min and max variables have different strong convexity and strong concavity parameters, together with a bilinear saddle-point part.

The Mirror-Prox Sliding Method for Non-smooth decentralized saddle-point problems

This work generalizes a recently proposed sliding method for the centralized problem; through a specific penalization technique combined with this sliding, the authors obtain an algorithm for non-smooth decentralized saddle-point problems.

Compression and Data Similarity: Combination of Two Techniques for Communication-Efficient Solving of Distributed Variational Inequalities

This paper considers a combination of two popular approaches, compression and data similarity, and shows that this synergy can be more effective than each of the approaches separately in solving distributed smooth strongly monotone variational inequalities.

Decentralized optimization over time-varying graphs: a survey

Decentralized optimization over time-varying networks has a wide range of applications in distributed learning, signal processing, and various distributed control problems.

On Scaled Methods for Saddle Point Problems

A theoretical analysis of the following scaling techniques for solving SPPs: the well-known Adam and RMSProp scalings, and the newer AdaHessian and OASIS scalings based on the Hutchinson approximation.
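The AdaHessian/OASIS scalings mentioned above rest on Hutchinson's diagonal estimator: for a Rademacher vector z, E[z ⊙ Hz] = diag(H), so Hessian-vector products alone suffice to approximate the diagonal. A minimal sketch, where the 5×5 symmetric matrix and the sample count are illustrative assumptions:

```python
import numpy as np

# Hutchinson-style diagonal estimator: average z * (H z) over random
# Rademacher probes z; the off-diagonal contributions cancel in
# expectation, leaving diag(H).
rng = np.random.default_rng(3)
H = rng.standard_normal((5, 5))
H = H + H.T                        # a symmetric stand-in for a Hessian

m = 20000                          # number of probe vectors
Z = rng.choice([-1.0, 1.0], size=(m, 5))   # Rademacher probes
# Row i of Z * (Z @ H) is z ⊙ (H z) for probe z (H is symmetric).
est = (Z * (Z @ H)).mean(axis=0)

print(np.max(np.abs(est - np.diag(H))))    # small estimation error
```

In the optimizers themselves the exact products Hz are replaced by stochastic Hessian-vector products, and the running average is maintained with momentum rather than a flat mean.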

A General Framework for Distributed Partitioned Optimization



Lower Complexity Bounds of Finite-Sum Optimization Problems: The Results and Construction

The lower complexity bound for the minimax optimization problem whose objective function is the average of n individual smooth component functions is studied; Proximal Incremental First-order algorithms, which have access to the gradient and proximal oracle of each individual component, are considered.

Finite-Dimensional Variational Inequalities and Complementarity Problems

Chapter topics include: Newton Methods for Nonsmooth Equations; Global Methods for Nonsmooth Equations; Equation-Based Algorithms for Complementarity Problems; Algorithms for Variational Inequalities; Interior and …

Randomized gossip algorithms

This work analyzes the averaging problem under the gossip constraint for an arbitrary network graph, and finds that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm.
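As a toy illustration of that dependence, here is a synchronous-gossip sketch on a ring, a simplification of the randomized protocol; the ring weights, network size, and round count are assumptions for this sketch. The deviation from the mean contracts by roughly λ₂(W) per round:

```python
import numpy as np

# Synchronous gossip averaging: each agent repeatedly replaces its
# value with a weighted average of its neighbors' values, x <- W x,
# where W is doubly stochastic.  The mean is preserved exactly, and
# the consensus error shrinks by a factor of lambda_2(W) per round.
n = 8
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25       # symmetric ring weights

rng = np.random.default_rng(1)
x = rng.standard_normal(n)
mean = x.mean()                    # invariant of the dynamics

lam2 = np.sort(np.abs(np.linalg.eigvalsh(W)))[-2]
for _ in range(200):
    x = W @ x                      # one communication round

print(abs(x.mean() - mean))        # mean drift: essentially zero
print(np.linalg.norm(x - mean))    # consensus error ~ lam2**200
```

For this ring, λ₂(W) = 0.5 + 0.5·cos(2π/8) ≈ 0.854, so 200 rounds shrink the consensus error by a factor of about 10⁻¹⁴.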

Optimal decentralized protocol for electric vehicle charging

This paper formulates the scheduling of electric vehicle (EV) charging as an optimal control problem whose objective is to impose a generalized notion of valley-filling, proposes a decentralized algorithm to solve it optimally, and studies properties of optimal charging profiles.

A General Framework for a Class of First Order Primal-Dual Algorithms for Convex Optimization in Imaging Science

This work generalizes the primal-dual hybrid gradient (PDHG) algorithm to a broader class of convex optimization problems, and surveys several closely related methods and explains the connections to PDHG.

A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging

  • A. Chambolle, T. Pock · Journal of Mathematical Imaging and Vision · 2010
A first-order primal-dual algorithm for non-smooth convex optimization problems with known saddle-point structure can achieve O(1/N²) convergence on problems where the primal or the dual objective is uniformly convex, and it can show linear convergence, i.e. O(ω^N) for some ω ∈ (0, 1), on smooth problems.
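Here is a minimal sketch of that algorithm (PDHG with over-relaxation θ = 1) on a tiny ridge-regression instance; the problem, parameters, and iteration count are illustrative assumptions for this sketch, not the paper's experiments:

```python
import numpy as np

# Chambolle-Pock / PDHG sketch on min_x (1/2)||K x - b||^2 + (mu/2)||x||^2,
# written in saddle-point form min_x max_y <K x, y> + g(x) - f*(y) with
# f*(y) = (1/2)||y||^2 + <b, y> and g(x) = (mu/2)||x||^2; both proxes
# are closed-form.
rng = np.random.default_rng(2)
K = rng.standard_normal((10, 4))
b = rng.standard_normal(10)
mu = 1.0

Lip = np.linalg.norm(K, 2)         # ||K||_2
tau = sigma = 0.9 / Lip            # steps with tau * sigma * ||K||^2 < 1
theta = 1.0                        # over-relaxation parameter

x = np.zeros(4); x_bar = x.copy(); y = np.zeros(10)
for _ in range(20000):
    # dual ascent step: y <- prox_{sigma f*}(y + sigma K x_bar)
    y = (y + sigma * (K @ x_bar) - sigma * b) / (1.0 + sigma)
    x_old = x
    # primal descent step: x <- prox_{tau g}(x - tau K^T y)
    x = (x - tau * (K.T @ y)) / (1.0 + tau * mu)
    x_bar = x + theta * (x - x_old)

# Compare against the closed-form ridge solution.
x_star = np.linalg.solve(K.T @ K + mu * np.eye(4), K.T @ b)
print(np.linalg.norm(x - x_star))
```

Since both g and f* are strongly convex here, the iterates converge linearly, matching the O(ω^N) regime described above.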

Variance Reduction for Matrix Games

The algorithm combines Nemirovski's "conceptual prox-method" and a novel reduced-variance gradient estimator based on "sampling from the difference" between the current iterate and a reference point, achieving additive error ε in improved running time.

Reducing Noise in GAN Training with Variance Reduced Extragradient

A novel stochastic variance-reduced extragradient optimization algorithm, which for a large class of games improves upon the previous convergence rates proposed in the literature.

Generative Adversarial Nets

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.

Large Scale GAN Training for High Fidelity Natural Image Synthesis

It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the Generator's input.