Adaptation to Inexactness for some Gradient-type Methods

@inproceedings{Stonyakin2021AdaptationTI,
  title={Adaptation to Inexactness for some Gradient-type Methods},
  author={Fedor S. Stonyakin},
  year={2021}
}
It is well known that gradient-type methods are distinguished by their relative simplicity and low memory requirements, which explains their popularity in works on multidimensional optimization (see, e.g., [1–9]). Recall that convergence-rate estimates for the gradient method can be derived using the idea of approximating the function at the starting point (the current position of the method) by a majorizing paraboloid of revolution. Thus, for the problem of minimizing a convex functional f : Q → R…
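To make the majorizing-paraboloid idea concrete (a standard sketch, not a derivation taken from the paper itself): if f is convex and has an L-Lipschitz gradient on Q, then at the current iterate x_k

\[
f(y) \;\le\; f(x_k) + \langle \nabla f(x_k),\, y - x_k \rangle + \frac{L}{2}\,\|y - x_k\|^2 \qquad \text{for all } y \in Q,
\]

and the next iterate is taken as the minimizer of this majorant,

\[
x_{k+1} = \arg\min_{y \in Q} \Big\{ \langle \nabla f(x_k),\, y - x_k \rangle + \frac{L}{2}\,\|y - x_k\|^2 \Big\},
\]

which for \(Q = \mathbb{R}^n\) is the usual gradient step \(x_{k+1} = x_k - \frac{1}{L}\nabla f(x_k)\) and leads to the classical estimate \(f(x_N) - f(x_\ast) \le \frac{L\,\|x_0 - x_\ast\|^2}{2N}\).

The adaptation referred to in the title concerns methods that do not require L (or the inexactness level) to be known in advance. Below is a minimal sketch of the standard backtracking rule on L built around the same majorant; it assumes exact function and gradient values, and the names here are illustrative, not the paper's algorithm.

```python
import numpy as np


def adaptive_gradient_method(f, grad_f, x0, L0=1.0, n_iters=100):
    """Gradient descent with backtracking on the smoothness constant L.

    At every step L is doubled until the majorizing paraboloid
        f(x) + <grad_f(x), y - x> + (L/2) * ||y - x||^2
    dominates f at the trial point y = x - grad_f(x) / L; L is halved at
    the start of each iteration so the estimate can also decrease.
    """
    x, L = np.asarray(x0, dtype=float), float(L0)
    for _ in range(n_iters):
        g, fx = grad_f(x), f(x)
        L = max(L / 2.0, 1e-12)            # let the estimate shrink
        while True:
            y = x - g / L
            d = y - x
            if f(y) <= fx + g @ d + 0.5 * L * (d @ d):
                break                      # majorant verified at the trial point
            L *= 2.0                       # majorant violated: increase L
        x = y
    return x


# Usage on a simple least-squares objective f(x) = 0.5 * ||A x - b||^2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
grad_f = lambda x: A.T @ (A @ x - b)
print(adaptive_gradient_method(f, grad_f, np.zeros(2)))
```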

References

Relatively Smooth Convex Optimization by First-Order Methods, and Applications
Introduces notions of "relative smoothness" and relative strong convexity determined relative to a user-specified "reference function" $h(\cdot)$ (which should be computationally tractable for algorithms), and shows that many differentiable convex functions are relatively smooth with respect to a correspondingly fairly simple reference function.
Applications of Variational Analysis to a Generalized Fermat-Torricelli Problem
In this paper we develop new applications of variational analysis and generalized differentiation to the following optimization problem and its specifications: given n closed subsets of a Banach space, find a point for which the sum of its distances to these sets is minimal.
First-order methods of smooth convex optimization with inexact oracle
Demonstrates that the superiority of fast gradient methods over the classical ones is no longer absolute when an inexact oracle is used, and proves that, contrary to simple gradient schemes, fast gradient methods must necessarily suffer from error accumulation.
Gradient Methods for Problems with Inexact Model of the Objective
Considers optimization methods for convex minimization problems under inexact information on the objective function, which as particular cases includes the $(\delta,L)$ inexact oracle and the relative smoothness condition, and analyzes a gradient method that uses this inexact model, obtaining convergence rates for convex and strongly convex problems.
Gradient methods for minimizing composite functions
  • Y. Nesterov
  • Math. Program.
  • 2013
In this paper we analyze several new methods for solving optimization problems with the objective function formed as a sum of two terms: one is smooth and given by a black-box oracle, and another is a general but simple convex function whose structure is known.
Linear convergence of first order methods for non-strongly convex optimization
This paper derives linear convergence rates of several first-order methods for solving smooth non-strongly convex constrained optimization problems, i.e., involving an objective function with a Lipschitz continuous gradient that satisfies a relaxed strong convexity condition.
Universal gradient methods for convex optimization problems
  • Y. Nesterov
  • Math. Program.
  • 2015
New methods for black-box convex minimization are presented, which demonstrate that the fast rate of convergence, typical for smooth optimization problems, can sometimes be achieved even on nonsmooth problem instances.
Smooth Optimization with Approximate Gradient
We show that the optimal complexity of Nesterov's smooth first-order optimization algorithm is preserved when the gradient is computed only up to a small, uniformly bounded error.
Relatively Smooth Convex Optimization by First-Order Methods
  • SIAM J. Optim.
  • 2018
Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak–Łojasiewicz Condition // Machine Learning and Knowledge Discovery in Databases
  • Proc., eds. B. Berendt et al. Cham: Springer
  • 2016