Solving Smooth Min-Min and Min-Max Problems by Mixed Oracle Algorithms

@article{Gladin2021SolvingSM,
  title={Solving Smooth Min-Min and Min-Max Problems by Mixed Oracle Algorithms},
  author={Egor Gladin and Abdurakhmon Sadiev and Alexander V. Gasnikov and Pavel E. Dvurechensky and Aleksandr Beznosikov and Mohammad S. Alkousa},
  journal={Communications in Computer and Information Science},
  year={2021}
}
  • Published 28 February 2021
  • Computer Science, Mathematics
  • Communications in Computer and Information Science
In this paper we consider two types of problems which have some similarity in their structure, namely, min-min problems and min-max saddle-point problems. Our approach is based on considering the outer minimization problem as a minimization problem with inexact oracle. This inexact oracle is calculated via inexact solution of the inner problem, which is either a minimization or a maximization problem. Our main assumptions are that the problem is smooth and the available oracle is mixed: it is…
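As a rough illustration of this scheme (a minimal sketch under my own assumptions, not the paper's actual algorithm: the function names, step sizes, and plain gradient steps below are placeholders for the accelerated methods the paper analyzes), the outer method can treat the partial gradient at an approximate inner solution as an inexact gradient of the reduced function:

```python
import numpy as np

def inner_solve(grad_y, y0, steps=200, lr=1e-2):
    """Inexactly minimize F(x, .) over y by plain gradient descent."""
    y = y0.copy()
    for _ in range(steps):
        y = y - lr * grad_y(y)
    return y

def mixed_oracle_min_min(grad_x, grad_y, x0, y0, outer_steps=100, outer_lr=1e-1):
    """Outer gradient method for g(x) = min_y F(x, y): the inexact gradient of g
    at x is taken as the partial gradient grad_x F(x, y_hat), where y_hat
    approximately solves the inner problem (the inner inexactness shows up as
    oracle error in the outer method)."""
    x, y = x0.copy(), y0.copy()
    for _ in range(outer_steps):
        y = inner_solve(lambda z: grad_y(x, z), y)   # inexact inner solution
        x = x - outer_lr * grad_x(x, y)              # outer step with inexact oracle
    return x, y

# Toy example: F(x, y) = ||x - a||^2 + ||x - y||^2, so g(x) = ||x - a||^2
a = np.array([1.0, -2.0])
grad_x = lambda x, y: 2 * (x - a) + 2 * (x - y)
grad_y = lambda x, y: 2 * (y - x)
x_star, _ = mixed_oracle_min_min(grad_x, grad_y, np.zeros(2), np.zeros(2))
```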

Tensor methods inside mixed oracle for min-min problems

  • P. Ostroukhov
  • Mathematics, Computer Science
    Computer Research and Modeling
  • 2022
This work considers min-min problems, i.e., minimization over two groups of variables, using high-order tensor methods to solve the inner problem and the fast gradient method to solve the outer problem; strong convexity of the outer problem is assumed so that the fast gradient method for strongly convex functions can be used.

Oracle Complexity Separation in Convex Optimization

This work considers the problem of minimizing the sum of two functions and proposes a generic algorithmic framework that separates the oracle complexities for each function, obtaining accelerated random coordinate descent and accelerated variance-reduced methods with oracle complexity separation.
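A hedged sketch of the oracle-separation idea follows (the alternation schedule, step size, and toy functions are my own simplifications; the cited framework uses accelerated methods):

```python
import numpy as np

def separated_oracle_gd(grad_f, grad_g, x0, outer=100, inner=10, lr=1e-2):
    """Illustrative oracle-separation loop (not the paper's accelerated scheme):
    the expensive gradient grad_f is queried once per outer iteration and reused,
    while the cheap gradient grad_g is refreshed at every inner step."""
    x = x0.copy()
    for _ in range(outer):
        gf = grad_f(x)                     # expensive oracle, called rarely
        for _ in range(inner):
            x = x - lr * (gf + grad_g(x))  # cheap oracle, called often
    return x

# Toy example: f(x) = 0.5 * x^T A x (treated as the expensive part), g(x) = ||x - b||^2
A = np.diag([10.0, 1.0])
b = np.array([1.0, 2.0])
x_sep = separated_oracle_gd(lambda x: A @ x, lambda x: 2 * (x - b), np.zeros(2))
```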

The power of first-order smooth optimization for black-box non-smooth problems

A generic approach is proposed that, based on optimal first-order methods, allows one to obtain in a black-box fashion new zeroth-order algorithms for non-smooth convex optimization problems, with extensions elaborated for stochastic optimization problems, saddle-point problems, and distributed optimization.
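A minimal sketch of the smoothing-based reduction that such gradient-free methods build on (the two-point estimator and step sizes below are standard choices assumed for illustration, not taken from the paper):

```python
import numpy as np

def two_point_grad_estimate(f, x, tau=1e-4, rng=None):
    """Two-point randomized estimate of the gradient of the smoothed function
    f_tau(x) = E_e[ f(x + tau * e) ] with e uniform on the unit sphere; only
    zeroth-order (function-value) access to f is required."""
    rng = rng or np.random.default_rng()
    d = x.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)                                    # random unit direction
    return d * (f(x + tau * e) - f(x - tau * e)) / (2 * tau) * e

def zo_gradient_descent(f, x0, steps=500, lr=1e-2, tau=1e-4, seed=0):
    """Plain gradient descent driven by the zeroth-order estimate."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(steps):
        x = x - lr * two_point_grad_estimate(f, x, tau, rng)
    return x

# Toy non-smooth example: f(x) = ||x - b||_1
b = np.array([0.5, -1.5, 2.0])
x_zo = zo_gradient_descent(lambda x: np.abs(x - b).sum(), np.zeros(3))
```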

Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems - Survey

This paper is a survey of methods for solving smooth (strongly) monotone stochastic variational inequalities. To begin with, we give the deterministic foundation from which the stochastic methods…

Algorithm for Constrained Markov Decision Process with Linear Convergence

A new dual approach is proposed that integrates two ingredients: an entropy-regularized policy optimizer and Vaidya's dual optimizer, both of which are critical to achieving faster convergence in constrained Markov decision processes.

Vaidya's method for convex stochastic optimization in small dimension

This paper considers a general convex stochastic optimization problem in a space of small dimension (e.g., 100 variables). It is known that for deterministic convex…

Solving Strongly Convex-Concave Composite Saddle-Point Problems with a Small Dimension of One of the Groups of Variables

The article is devoted to the development of algorithmic methods that guarantee efficient complexity bounds for strongly convex-concave saddle-point problems in the case when one of the groups of variables has a large…

Randomized gradient-free methods in convex optimization

This review presents modern gradient-free methods to solve convex optimization problems and mainly focuses on three criteria: oracle complexity, iteration complexity, and the maximum permissible noise level.

References

SHOWING 1-10 OF 35 REFERENCES

Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem

The proposed approach works at least as well as the best existing approaches, but for a special set-up (simplex-type constraints and closeness of the Lipschitz constants in the 1- and 2-norms) it reduces the required number of oracle calls (function evaluations) by a factor of n/log n.

Gradient-Free Methods for Saddle-Point Problem

In the paper, we generalize the approach of Gasnikov et al., 2017, which allows one to solve (stochastic) convex optimization problems with an inexact gradient-free oracle, to the convex-concave…

Inexact Relative Smoothness and Strong Convexity for Optimization and Variational Inequalities by Inexact Model

An extension of relative strong convexity for optimization and variational inequalities is introduced, which works for smooth and non-smooth problems with optimal complexity without a priori knowledge of the problem's smoothness.

Inexact model: a framework for optimization and variational inequalities

A general algorithmic framework is proposed for first-order methods in optimization in a broad sense, including minimization problems, saddle-point problems and variational inequalities (VIs); relative smoothness for operators and an algorithm for VIs with such operators are also introduced.

First-order methods with inexact oracle: the strongly convex case

It is proved that the notion of a (δ, L, μ)-oracle can be used to model exact first-order information for functions with a weaker level of smoothness and a different level of convexity, which allows methods originally designed for smooth strongly convex functions to be applied to weakly smooth uniformly convex functions and to derive corresponding performance guarantees.
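For reference, the inexact strongly convex oracle mentioned here is commonly written as follows (my transcription of the standard definition; constants and notation may differ slightly in the cited paper):

```latex
% (\delta, L, \mu)-oracle: at a point x it returns a pair (f_\delta(x), g_\delta(x))
% such that for every feasible y
\[
  \frac{\mu}{2}\,\|y - x\|^2
  \;\le\;
  f(y) - f_\delta(x) - \langle g_\delta(x),\, y - x \rangle
  \;\le\;
  \frac{L}{2}\,\|y - x\|^2 + \delta .
\]
```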

Efficiency of the Accelerated Coordinate Descent Method on Structured Optimization Problems

It is shown that this method often outperforms the standard Fast Gradient Methods on optimization problems with dense data, and the provable acceleration factor with respect to FGM can reach the square root of the number of variables.
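A plain, non-accelerated random coordinate descent step of the kind this method builds on can be sketched as follows (an illustrative simplification; the cited accelerated variant adds extrapolation on top of such coordinate updates):

```python
import numpy as np

def random_coordinate_descent(grad_i, L, x0, steps=2000, seed=0):
    """Non-accelerated random coordinate descent: at each step a single coordinate
    i is sampled uniformly and updated with step 1 / L[i], where L[i] is the
    coordinate-wise Lipschitz constant of the gradient."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    n = x.size
    for _ in range(steps):
        i = rng.integers(n)
        x[i] -= grad_i(x, i) / L[i]
    return x

# Toy quadratic: f(x) = 0.5 * x^T A x - b^T x with A positive definite
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x_cd = random_coordinate_descent(lambda x, i: A[i] @ x - b[i], np.diag(A), np.zeros(2))
```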

A new algorithm for minimizing convex functions over convex sets

This work presents a new algorithm for the feasibility problem that has a significantly better global convergence rate and time complexity than the ellipsoid algorithm and easily adapts to the convex optimization problem.

Subgradient Methods for Saddle-Point Problems

This work presents a subgradient algorithm for generating approximate saddle points and provides per-iteration convergence rate estimates on the constructed solutions, and focuses on Lagrangian duality, where it is shown that this algorithm is particularly well-suited for problems where the subgradient of the dual function cannot be evaluated easily.
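A bare-bones version of such a primal-dual subgradient scheme with iterate averaging might look like this (the toy Lagrangian, step size, and simultaneous update are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

def subgradient_saddle(gx, gl, x0, lam0, steps=5000, alpha=1e-2):
    """Primal-dual subgradient method with iterate averaging for a convex-concave
    Lagrangian L(x, lam): simultaneous descent in x and projected ascent in lam >= 0."""
    x, lam = x0.copy(), lam0.copy()
    x_avg, lam_avg = np.zeros_like(x), np.zeros_like(lam)
    for _ in range(steps):
        x, lam = x - alpha * gx(x, lam), np.maximum(lam + alpha * gl(x, lam), 0.0)
        x_avg += x / steps
        lam_avg += lam / steps
    return x_avg, lam_avg

# Toy problem: min ||x||^2 s.t. x[0] >= 1, with L(x, lam) = ||x||^2 + lam * (1 - x[0])
gx = lambda x, lam: 2 * x + np.array([-lam[0], 0.0])
gl = lambda x, lam: np.array([1.0 - x[0]])
x_bar, lam_bar = subgradient_saddle(gx, gl, np.zeros(2), np.zeros(1))
# the saddle point of the toy problem is approximately x = (1, 0), lam = 2
```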

Adaptive Catalyst for Smooth Convex Optimization

This paper proposes an adaptive variant of Catalyst that doesn't require prior knowledge of the smoothness constant of the target function and, in combination with an adaptive inner non-accelerated algorithm, obtains accelerated variants of well-known methods: steepest descent, adaptive coordinate descent, and alternating minimization.
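As a hedged sketch of the Catalyst mechanism referred to here (a non-adaptive simplification: the fixed momentum and regularization constants below stand in for the paper's adaptive rules):

```python
import numpy as np

def catalyst_sketch(grad_f, x0, kappa=1.0, momentum=0.9,
                    outer=50, inner=200, inner_lr=1e-2):
    """Bare-bones Catalyst-style loop (non-adaptive simplification): each outer
    iteration approximately minimizes the proximal model f(x) + kappa/2 * ||x - y||^2
    with plain gradient descent, then extrapolates the outer iterate; a fixed
    momentum coefficient stands in for the adaptive parameter rules of the paper."""
    x_prev, x, y = x0.copy(), x0.copy(), x0.copy()
    for _ in range(outer):
        z = x.copy()
        for _ in range(inner):                           # inner non-accelerated solver
            z -= inner_lr * (grad_f(z) + kappa * (z - y))
        x_prev, x = x, z
        y = x + momentum * (x - x_prev)                  # extrapolation step
    return x

# Toy ill-conditioned quadratic: f(x) = 0.5 * x^T A x
A = np.diag([100.0, 1.0])
x_cat = catalyst_sketch(lambda x: A @ x, np.array([1.0, 1.0]))
```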