Simple and optimal methods for stochastic variational inequalities, I: operator extrapolation

@article{Kotsalis2020SimpleAO,
  title={Simple and optimal methods for stochastic variational inequalities, I: operator extrapolation},
  author={Georgios Kotsalis and Guanghui Lan and Tianjiao Li},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.02987}
}
In this paper we first present a novel operator extrapolation (OE) method for solving deterministic variational inequality (VI) problems. Similar to the gradient (operator) projection method, OE updates one single search sequence by solving a single projection subproblem in each iteration. We show that OE can achieve the optimal rate of convergence for solving a variety of VI problems in a much simpler way than existing approaches. We then introduce the stochastic operator extrapolation (SOE… 
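The truncated abstract already pins down the per-iteration structure: a single extrapolated operator value followed by a single projection subproblem. Below is a minimal Euclidean sketch of such an operator-extrapolation update; the operator F, the projection proj_X, and the constant stepsizes/extrapolation weights are illustrative placeholders, not the paper's specific parameter choices.

```python
import numpy as np

def operator_extrapolation(F, proj_X, x0, gammas, lambdas):
    """One search sequence; each iteration reuses the previous operator value
    for extrapolation and solves a single projection subproblem."""
    x, F_prev = x0.copy(), F(x0)
    for gamma, lam in zip(gammas, lambdas):
        F_cur = F(x)
        g = F_cur + lam * (F_cur - F_prev)   # extrapolated operator value
        F_prev = F_cur
        x = proj_X(x - gamma * g)            # single projection per iteration
    return x

# Toy monotone VI: bilinear saddle point min_u max_v u^T A v over the unit ball,
# written as a VI with operator F(u, v) = (A v, -A^T u).
A = np.array([[1.0, 2.0], [-1.0, 0.5]])

def F(z):
    u, v = z[:2], z[2:]
    return np.concatenate([A @ v, -A.T @ u])

def proj_ball(z, r=1.0):
    n = np.linalg.norm(z)
    return z if n <= r else (r / n) * z

T = 1000
z_star = operator_extrapolation(F, proj_ball, np.ones(4),
                                gammas=[0.1] * T, lambdas=[1.0] * T)
```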

Citations of this paper

Training neural networks using monotone variational inequality

The use of monotone variational inequalities (MVI) for training multi-layer neural networks is studied, and a practical algorithm called stochastic variational inequality (SVI) is proposed; its applicability is demonstrated in training fully-connected neural networks and graph neural networks (GNNs), and it can be used to train other types of neural networks.

An alternative approach to train neural networks using monotone variational inequality

The solution to the MVI can be found by computationally efficient procedures, and importantly this leads to performance guarantees in the form of bounds on model recovery and prediction accuracy under the theoretical setting of training a single-layer linear neural network.

An Optimal Distributed Algorithm with Operator Extrapolation for Stochastic Aggregative Games

This work proposes a distributed algorithm with operator extrapolation in which each player maintains an estimate of the aggregate decision by exchanging information with its neighbors over a time-varying network, and updates its own decision through the mirror descent method.

Mirror frameworks for relatively Lipschitz and monotone-like variational inequalities

Nonconvex-nonconcave saddle-point optimization in machine learning has spurred much research on non-monotone variational inequalities (VIs). In this work, we introduce two mirror…

Stochastic first-order methods for average-reward Markov decision processes

An average-reward variant of stochastic policy mirror descent (SPMD) and an exploratory variance-reduced temporal difference method for insufficiently random policies are developed, with comparable convergence guarantees that establish a linear convergence rate for the bias of policy evaluation.

Simple and optimal methods for stochastic variational inequalities, II: Markovian noise and policy evaluation in reinforcement learning

An improved analysis of the standard TD algorithm that can benefit from parallel implementation is provided, and versions of a conditional TD algorithm (CTD) involving periodic updates of the stochastic iterates are presented, which reduce the bias and therefore exhibit improved iteration complexity.

First-Order Algorithms for Nonlinear Generalized Nash Equilibrium Problems

The global convergence rate of the algorithms for solving (strongly) monotone NGNEPs is established, and iteration complexity bounds expressed in terms of the number of gradient evaluations are provided.

Accelerated and instance-optimal policy evaluation with linear function approximation

An accelerated, variance-reduced fast temporal difference algorithm (VRFTD) that simultaneously matches both lower bounds and attains a strong notion of instance-optimality is developed.

Randomized gradient-free methods in convex optimization

This review presents modern gradient-free methods to solve convex optimization problems and mainly focuses on three criteria: oracle complexity, iteration complexity, and the maximum permissible noise level.

Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee

How second-order information can be used to speed up the dynamics of dual extrapolation methods despite inexactness is highlighted, and a simple and intuitive convergence analysis for second-order methods that does not require any compactness assumptions is provided.

References

Showing 1-10 of 31 references

Robust Stochastic Approximation Approach to Stochastic Programming

It is intended to demonstrate that a properly modified SA approach can be competitive with, and even significantly outperform, the SAA method for a certain class of convex stochastic problems.

Accelerated schemes for a class of variational inequalities

The main idea of the proposed algorithm is to incorporate a multi-step acceleration scheme into the stochastic mirror-prox method, which computes weak solutions with the optimal iteration complexity for SVIs.

Solving variational inequalities with Stochastic Mirror-Prox algorithm

A novel Stochastic Mirror-Prox algorithm is developed for solving stochastic variational inequalities (s.v.i.) with monotone operators, and it is shown that with a suitable stepsize strategy it attains the optimal rates of convergence with respect to the problem parameters.
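For comparison with the single-projection OE update sketched earlier, a Euclidean (extragradient-type) instance of a mirror-prox step with a stochastic oracle uses two oracle samples and two projections per iteration and returns the average of the extrapolation points; the oracle, feasible set, and stepsize below are illustrative assumptions, not the paper's exact mirror-prox setup.

```python
import numpy as np

def stochastic_mirror_prox(F_hat, proj_X, x0, gamma, T, rng):
    """Euclidean mirror-prox sketch: extrapolation step, then update step, each
    with a fresh stochastic operator sample; the averaged extrapolation points
    serve as the estimate of a weak solution."""
    x = x0.copy()
    y_avg = np.zeros_like(x0)
    for _ in range(T):
        y = proj_X(x - gamma * F_hat(x, rng))   # extrapolation step
        x = proj_X(x - gamma * F_hat(y, rng))   # update step
        y_avg += y
    return y_avg / T

# Usage on a noisy bilinear operator over the unit ball.
A = np.array([[1.0, 2.0], [-1.0, 0.5]])

def F_hat(z, rng):
    u, v = z[:2], z[2:]
    return np.concatenate([A @ v, -A.T @ u]) + 0.1 * rng.standard_normal(4)

def proj_ball(z, r=1.0):
    n = np.linalg.norm(z)
    return z if n <= r else (r / n) * z

z_bar = stochastic_mirror_prox(F_hat, proj_ball, np.ones(4), gamma=0.05,
                               T=2000, rng=np.random.default_rng(0))
```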

Projected Reflected Gradient Methods for Monotone Variational Inequalities

The projected reflected gradient algorithm with a constant stepsize is proposed, which requires only one projection onto the feasible set and only one value of the mapping per iteration, and has an R-linear rate of convergence under the strong monotonicity assumption.
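The per-iteration structure described above (one projection, one mapping value) can be sketched in a few lines; the constant stepsize and the operator and feasible set a user supplies are illustrative assumptions, not the paper's specific setting. A constant stepsize on the order of (sqrt(2)-1)/L, with L the Lipschitz constant of F, is the kind of choice such methods use.

```python
import numpy as np

def projected_reflected_gradient(F, proj_C, x0, gamma, T):
    """Sketch of a projected reflected gradient iteration: per iteration, one
    evaluation of F at the reflected point 2*x_k - x_{k-1} and one projection."""
    x_prev = np.array(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(T):
        reflected = 2.0 * x - x_prev                      # reflected point
        x_prev, x = x, proj_C(x - gamma * F(reflected))   # single projection, single F value
    return x
```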

On the Analysis of Variance-reduced and Randomized Projection Variants of Single Projection Schemes for Monotone Stochastic Variational Inequality Problems

It is shown that the random projection analogs of both schemes display almost sure convergence under a weak-sharpness requirement, and both schemes are characterized by the optimal rate in terms of the gap function of the projection of the averaged sequence onto the set as well as the infeasibility of this sequence.

Interior projection-like methods for monotone variational inequalities

It is demonstrated that within an appropriate primal-dual variational inequality framework, the proposed algorithms can be applied to general convex constraints resulting in methods which at each iteration entail only explicit formulas and do not require the solution of any convex optimization problem.

Dual extrapolation and its applications to solving variational inequalities and related problems

  • Y. Nesterov, Math. Program., 2007
This paper shows that with an appropriate step-size strategy, their method is optimal both for Lipschitz continuous operators and for operators with bounded variation.

Solving Weakly-Convex-Weakly-Concave Saddle-Point Problems as Weakly-Monotone Variational Inequality

This paper proposes an algorithmic framework motivated by the proximal point method, which solves a sequence of strongly monotone variational inequalities constructed by adding a strongly monotone mapping to the original mapping, with a periodically updated proximal center; it is the first work to establish non-asymptotic convergence to a stationary point of a non-convex non-concave min-max problem.
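The framework described above can be illustrated with a rough sketch: add a strongly monotone term rho*(x - center) to the original operator, solve the resulting strongly monotone VI, and periodically move the proximal center to the latest solution. The inner solver, the toy operator, and all constants below are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

def solve_strongly_monotone_vi(F, proj, x0, steps=500, eta=0.05):
    # plain projection method, which converges for strongly monotone, Lipschitz F
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = proj(x - eta * F(x))
    return x

def inexact_proximal_point(F, proj, x0, rho, outer=30):
    # outer loop: solve the strongly monotone surrogate VI with operator
    # F(x) + rho*(x - center), then move the proximal center to the solution
    center = np.array(x0, dtype=float)
    for _ in range(outer):
        surrogate = lambda x, c=center: F(x) + rho * (x - c)
        center = solve_strongly_monotone_vi(surrogate, proj, center)
    return center

# Toy weakly monotone operator on [-2, 2]: F(x) = x^3 - x has F' >= -1 there,
# so any rho > 1 makes each surrogate strongly monotone.
F = lambda x: x**3 - x
proj = lambda x: np.clip(x, -2.0, 2.0)
x_stationary = inexact_proximal_point(F, proj, x0=np.array([1.7]), rho=2.0)
```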

On the convergence properties of non-Euclidean extragradient methods for variational inequalities with generalized monotone operators

This paper presents non-Euclidean extragradient (N-EG) methods for computing approximate strong solutions of GMVI problems, and demonstrates how their iteration complexities depend on the global Lipschitz or Hölder continuity properties of the operators and the smoothness properties of the distance-generating function used in the N-EG algorithms.

On the Complexity of the Hybrid Proximal Extragradient Method for the Iterates and the Ergodic Mean

This paper analyzes the iteration complexity of the hybrid proximal extragradient (HPE) method of Solodov and Svaiter for finding a zero of a maximal monotone operator, and obtains new complexity bounds for Korpelevich's extragradient method which do not require the feasible set to be bounded.