Fast adaptive by constants of strong-convexity and Lipschitz for gradient first order methods
@article{Pletnev2020FastAB, title={Fast adaptive by constants of strong-convexity and Lipschitz for gradient first order methods}, author={Nikita Pletnev}, journal={Computer Research and Modeling}, year={2020} }
The work is devoted to the construction of efficient first-order methods of convex optimization that are applicable to real problems, i.e., methods using only the values of the objective function and its derivatives. The construction builds on OGM-G, a fast gradient method that is optimal in complexity but requires knowledge of the Lipschitz constant of the gradient and the strong convexity constant in order to set the number of steps and the step length. This requirement makes practical usage impossible. An adaptive on the constant for…
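The need to know the Lipschitz constant L of the gradient in advance is exactly what adaptive schemes work around. As a minimal sketch of that general idea (not the paper's OGM-G-based algorithm), the Python snippet below runs gradient descent with a backtracking estimate of L: a step of length 1/L is accepted only when the sufficient-decrease inequality f(y) <= f(x) - ||∇f(x)||²/(2L) implied by L-smoothness holds, and L is doubled otherwise. The function name `adaptive_gradient_descent` and all parameter defaults are hypothetical.

```python
import numpy as np

def adaptive_gradient_descent(f, grad, x0, L0=1.0, tol=1e-8, max_iter=1000):
    """Gradient descent that adapts a running estimate L of the
    Lipschitz constant of the gradient via backtracking.
    Illustrative sketch only; not the OGM-G variant from the paper."""
    x, L = x0, L0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        # Halve the estimate first so L can also decrease over time.
        L = max(L / 2.0, 1e-12)
        # Double L until the smoothness-based sufficient decrease holds:
        # f(x - g/L) <= f(x) - ||g||^2 / (2 L).
        while f(x - g / L) > f(x) - np.dot(g, g) / (2.0 * L):
            L *= 2.0
        x = x - g / L
    return x, L

# Usage on a quadratic whose true Lipschitz constant is 10:
A = np.diag([1.0, 10.0])
x_opt, L_est = adaptive_gradient_descent(
    lambda x: 0.5 * x @ A @ x, lambda x: A @ x, np.array([3.0, -4.0]))
```

Halving L before each step lets the method exploit regions of lower curvature, while the doublings that undo an over-optimistic estimate cost only a few extra function evaluations per iteration.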
References
On the Adaptivity of Stochastic Gradient-Based Optimization
- Computer Science, SIAM J. Optim.
- 2020
The stochastically controlled stochastic gradient (SCSG) method for composite convex finite-sum optimization problems is presented, and it is shown that SCSG is adaptive to both strong convexity and the target accuracy.
Universal gradient methods for convex optimization problems
- Computer Science, Math. Program.
- 2015
New methods for black-box convex minimization are presented, demonstrating that the fast rate of convergence typical for smooth optimization problems can sometimes be achieved even on nonsmooth problem instances.
On efficient numerical methods for solving entropy-linear programming problems
- Mathematics
- 2016
Entropy-linear programming problems often arise in various applications (transportation problems, studies of chemical reactions, etc.). Such problems are usually formulated as problems…
D. Kim, J. A. Fessler. Optimizing the Efficiency of First-order Methods for Decreasing the Gradient of Smooth Convex Functions // e-print
- 2018
Moscow: Radio and Communication
- 1989
A. V. Gasnikov. Universal gradient descent method // e-print
- 2019
Yu. Nesterov. Universal gradient methods for convex optimization problems // Math. Program.
O. Fercoq, Zheng Qu. Restarting accelerated gradient methods with a rough strong convexity estimate // e-print
- 2016
M. Barré, A. Taylor, Alexandre d'Aspremont. Complexity Guarantees for Polyak Steps with Momentum // e-print
- 2020