@inproceedings{Stonyakin2021AdaptationTI,
author={Fedor S. Stonyakin},
year={2021}
}
It is well known that gradient-type methods are distinguished by their relative simplicity and low memory requirements, which explains their popularity in work on high-dimensional optimization (see, e.g., [1–9]). Recall that convergence-rate estimates for the gradient method can be derived from the idea of approximating the function at the initial point (the method's current iterate) by a majorizing paraboloid of revolution. Thus, for the problem of minimizing a convex functional f : Q → R…
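To make the majorizing-paraboloid idea concrete, here is the standard bound it refers to, stated in our own notation for an $L$-smooth convex $f$ (a sketch; the paper's exact setting is cut off above). If $\nabla f$ is Lipschitz with constant $L$, then for all $x, y \in Q$

$$
f(y) \;\le\; f(x) + \langle \nabla f(x),\, y - x \rangle + \frac{L}{2}\, \| y - x \|^{2},
$$

i.e., $f$ is majorized at the current iterate $x$ by a paraboloid of revolution. Minimizing the right-hand side in $y$ (for $Q = \mathbb{R}^n$) gives the gradient step and its per-step progress guarantee,

$$
x^{+} = x - \frac{1}{L}\, \nabla f(x),
\qquad
f(x^{+}) \;\le\; f(x) - \frac{1}{2L}\, \big\| \nabla f(x) \big\|^{2},
$$

from which the usual $O(1/k)$ convergence-rate estimate for convex $f$ is derived.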

## References

- H. Lu, R. M. Freund, Y. Nesterov. *Relatively Smooth Convex Optimization by First-Order Methods, and Applications.* SIAM J. Optim., 2018. Introduces notions of "relative smoothness" and relative strong convexity, each determined relative to a user-specified "reference function" $h(\cdot)$ (which should be computationally tractable for algorithms), and shows that many differentiable convex functions are relatively smooth with respect to a correspondingly simple reference function (the defining inequality is recalled after this list).
- B. S. Mordukhovich, N. M. Nam. *Applications of Variational Analysis to a Generalized Fermat-Torricelli Problem.* J. Optim. Theory Appl., 2011. Develops new applications of variational analysis and generalized differentiation to a generalized Fermat-Torricelli problem and its specifications: given $n$ closed subsets of a Banach space…
- O. Devolder, F. Glineur, Y. Nesterov. *First-order methods of smooth convex optimization with inexact oracle.* Math. Program., 2014. Demonstrates that the superiority of fast gradient methods over classical ones is no longer absolute when an inexact oracle is used, and proves that, contrary to simple gradient schemes, fast gradient methods must necessarily suffer from error accumulation (the oracle definition is recalled after this list).
- *Gradient Methods for Problems with Inexact Model of the Objective.* Considers optimization methods for convex minimization problems under inexact information on the objective function, which as particular cases covers the $(\delta, L)$-inexact oracle and the relative smoothness condition, and analyzes a gradient method that uses this inexact model, obtaining convergence rates for convex and strongly convex problems.
- Y. Nesterov. *Gradient methods for minimizing composite functions.* Math. Program., 2013. Analyzes several new methods for solving optimization problems with the objective function formed as a sum of two terms: one is smooth and given by a black-box oracle, and another is general but simple, with known structure.
- I. Necoara, Y. Nesterov, F. Glineur. *Linear convergence of first order methods for non-strongly convex optimization.* Math. Program., 2019. Derives linear convergence rates of several first-order methods for smooth non-strongly convex constrained optimization problems, i.e., problems whose objective function has a Lipschitz continuous gradient and satisfies a relaxed strong convexity condition.
- Y. Nesterov. *Universal gradient methods for convex optimization problems.* Math. Program., 2015. Presents new methods for black-box convex minimization which demonstrate that the fast rate of convergence typical for smooth optimization problems can sometimes be achieved even on nonsmooth problem instances.
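For convenience, the two conditions named in the entries above can be written out explicitly (standard formulations in our own notation, not quoted from the respective papers). Relative smoothness of $f$ with respect to a reference function $h$ replaces the squared norm in the usual smoothness bound by the Bregman divergence of $h$:

$$
f(y) \;\le\; f(x) + \langle \nabla f(x),\, y - x \rangle + L\, V_h(y, x),
\qquad
V_h(y, x) := h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle .
$$

A $(\delta, L)$-inexact oracle returns at every $x \in Q$ a pair $(f_{\delta}(x), g_{\delta}(x))$ such that

$$
0 \;\le\; f(y) - f_{\delta}(x) - \langle g_{\delta}(x),\, y - x \rangle \;\le\; \frac{L}{2}\, \| y - x \|^{2} + \delta
\qquad \text{for all } y \in Q,
$$

so that $\delta = 0$ recovers the exact first-order oracle of an $L$-smooth convex function.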