Corpus ID: 240354778

Mirror-prox sliding methods for solving a class of monotone variational inequalities

@inproceedings{Lan2021MirrorproxSM,
  title={Mirror-prox sliding methods for solving a class of monotone variational inequalities},
  author={Guanghui Lan and Yuyuan Ouyang},
  year={2021}
}
In this paper we propose new algorithms for solving a class of structured monotone variational inequality (VI) problems over compact feasible sets. By identifying the gradient component present in the VI operator, we show that it is possible to skip gradient computations from time to time while still maintaining the optimal iteration complexity for solving these VI problems. Specifically, for deterministic VI problems involving the sum of the gradient of a smooth convex function…
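To make the gradient-skipping idea concrete, below is a minimal Euclidean sketch of a sliding loop for a VI whose operator splits as $F(x) = \nabla f(x) + H(x)$, where $\nabla f$ is the expensive smooth-gradient component and $H$ is a cheaper monotone operator. The names (grad_f, H, proj_X), the loop lengths, and the constant step size are illustrative assumptions; this is a sketch of the general idea, not the paper's actual mirror-prox sliding scheme or step-size policy.

    import numpy as np

    def mirror_prox_sliding_sketch(grad_f, H, proj_X, x0,
                                   outer_iters=50, inner_iters=5, gamma=0.1):
        """Illustrative sliding loop for VI(F, X) with F(x) = grad_f(x) + H(x).

        Each outer iteration evaluates the expensive gradient grad_f once and
        then reuses (freezes) it across several cheaper extragradient steps on H,
        which is the gradient-skipping effect described in the abstract.
        All parameters here are placeholders, not the analyzed step sizes.
        """
        x = x0.copy()
        x_sum = np.zeros_like(x0, dtype=float)
        n_pts = 0
        for _ in range(outer_iters):
            g = grad_f(x)                                  # expensive gradient, once per outer loop
            u = x.copy()
            for _ in range(inner_iters):                   # inner steps skip gradient evaluations
                u_half = proj_X(u - gamma * (g + H(u)))    # extrapolation (predictor) step
                u = proj_X(u - gamma * (g + H(u_half)))    # correction step
                x_sum += u_half
                n_pts += 1
            x = u
        return x_sum / n_pts                               # averaged iterate as approximate weak solution

For example, with proj_X = lambda z: np.clip(z, -1.0, 1.0) (a box feasible set), grad_f the gradient of a smooth convex function, and H a skew-symmetric linear map, the loop runs as written; the point of the construction is that grad_f is called only outer_iters times while H is called roughly outer_iters * inner_iters times.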

Citations

Optimal Gradient Sliding and its Application to Distributed Optimization Under Similarity
TLDR
An inexact accelerated gradient sliding method is proposed that can skip the gradient computation for one of the two components $p$ and $q$ while still achieving the optimal complexity of gradient calls for each, namely $O(\sqrt{L_p/\mu})$ and $O(\sqrt{L_q/\mu})$, respectively.
Oracle Complexity Separation in Convex Optimization
TLDR
This work considers the problem of minimizing the sum of two functions and proposes a generic algorithmic framework that separates the oracle complexities of the two functions, obtaining accelerated random coordinate descent and accelerated variance-reduced methods with oracle complexity separation.
One-Point Feedback for Composite Optimization with Applications to Distributed and Federated Learning
TLDR
This work presents a new method that allows one to separate the oracle complexities and to compute the gradient of one of the functions as rarely as possible, and demonstrates the applicability of the method to distributed optimization and federated learning.
Recent theoretical advances in decentralized distributed convex optimization.
TLDR
This paper focuses on how the results of decentralized distributed convex optimization can be explained in terms of optimal algorithms for the non-distributed setting, and surveys recent results that have not yet been published.

References

Showing 1-10 of 28 references
Accelerated schemes for a class of variational inequalities
TLDR
The main idea of the proposed algorithm is to incorporate a multi-step acceleration scheme into the stochastic mirror-prox method; the resulting method computes weak solutions with the optimal iteration complexity for stochastic variational inequalities (SVIs).
Solving variational inequalities with Stochastic Mirror-Prox algorithm
TLDR
A novel stochastic mirror-prox algorithm is developed for solving stochastic variational inequalities with monotone operators, and it is shown that with a suitable step-size strategy it attains the optimal rates of convergence with respect to the problem parameters.
Simple and optimal methods for stochastic variational inequalities, I: operator extrapolation
TLDR
The stochastic operator extrapolation (SOE) method achieves, for the first time in the literature, the optimal complexity for solving a fundamental problem class: stochastic smooth and strongly monotone VIs.
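For reference, one standard way to write a (deterministic) operator-extrapolation update of this kind is the following, where the step size $\gamma_t$, the extrapolation weight $\lambda_t$, and the Bregman divergence $V(\cdot,\cdot)$ are notation assumed here for illustration rather than taken from the paper:

$$ x_{t+1} = \arg\min_{x \in X} \; \gamma_t \big\langle F(x_t) + \lambda_t\,[F(x_t) - F(x_{t-1})],\; x \big\rangle + V(x_t, x). $$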
On Homotopy-Smoothing Methods for Box-Constrained Variational Inequalities
A variational inequality problem with a mapping $g:\Re^n \to \Re^n$ and lower and upper bounds on variables can be reformulated as a system of nonsmooth equations $F(x)=0$ in $\Re^n$. …
Solving Weakly-Convex-Weakly-Concave Saddle-Point Problems as Weakly-Monotone Variational Inequality
TLDR
This paper proposes an algorithmic framework motivated by the proximal point method: it solves a sequence of strongly monotone variational inequalities constructed by adding a strongly monotone mapping, with a periodically updated proximal center, to the original mapping. This is the first work to establish non-asymptotic convergence to a stationary point of a non-convex, non-concave min-max problem.
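In symbols, the proximally regularized subproblems described above can be sketched as follows, with $X$ the feasible set, $F$ the original (weakly monotone) mapping, $x_k$ the current proximal center, and $\rho > 0$ an assumed regularization parameter chosen larger than the weak-monotonicity modulus so that each subproblem is strongly monotone:

$$ \text{find } x_{k+1} \in X \ \text{ such that } \ \big\langle F(x_{k+1}) + \rho\,(x_{k+1} - x_k),\; y - x_{k+1} \big\rangle \;\ge\; 0 \quad \text{for all } y \in X, $$

with the proximal center $x_k$ updated periodically rather than at every inner iteration.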
Dual extrapolation and its applications to solving variational inequalities and related problems
  Y. Nesterov. Math. Program., 2007.
TLDR
This paper shows that, with an appropriate step-size strategy, the method is optimal both for Lipschitz continuous operators and for operators with bounded variation.
Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
We propose a prox-type method with efficiency estimate $O(\epsilon^{-1})$ for approximating saddle points of convex-concave $C^{1,1}$ functions and solutions of variational inequalities with monotone Lipschitz continuous operators.
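In the Euclidean setting, the prox-type (mirror-prox / extragradient) step referred to above reduces to the familiar two-projection update; the constant step size $\gamma$ of order $1/L$ for an $L$-Lipschitz monotone operator $F$ is an assumption made here for illustration:

$$ y_t = P_X\big(x_t - \gamma F(x_t)\big), \qquad x_{t+1} = P_X\big(x_t - \gamma F(y_t)\big), $$

and the $O(1/t)$ guarantee applies to the averaged points $\bar{y}_t = \tfrac{1}{t}\sum_{s=1}^{t} y_s$.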
Monotone Operators and the Proximal Point Algorithm
For the problem of minimizing a lower semicontinuous proper convex function $f$ on a Hilbert space, the proximal point algorithm in exact form generates a sequence $\{ z^k \}$ by taking $z^{k+1}$ to be the minimizer of $f(z) + \tfrac{1}{2c_k}\| z - z^k \|^2$, where $c_k > 0$.
On the Complexity of the Hybrid Proximal Extragradient Method for the Iterates and the Ergodic Mean
TLDR
This paper analyzes the iteration complexity of the hybrid proximal extragradient (HPE) method of Solodov and Svaiter for finding a zero of a maximal monotone operator, and obtains new complexity bounds for Korpelevich's extragradient method that do not require the feasible set to be bounded.
An optimal method for stochastic composite optimization
TLDR
The accelerated stochastic approximation (AC-SA) algorithm, based on Nesterov's optimal method for smooth convex programming, is introduced, and it is shown that AC-SA achieves the lower bound on the rate of convergence for stochastic composite optimization (SCO).
...