A Simple yet Universal Strategy for Online Convex Optimization
@article{Zhang2021ASY,
  title   = {A Simple yet Universal Strategy for Online Convex Optimization},
  author  = {Lijun Zhang and Guanghui Wang and Jinfeng Yi and Tianbao Yang},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2105.03681}
}
Recently, several universal methods have been proposed for online convex optimization, and attain minimax rates for multiple types of convex functions simultaneously. However, they need to design and optimize one surrogate loss for each type of function, making it difficult to exploit the structure of the problem and utilize existing algorithms. In this paper, we propose a simple strategy for universal online convex optimization, which avoids these limitations. The key idea is to construct a…
2 Citations
Dual Adaptivity: A Universal Algorithm for Minimizing the Adaptive Regret of Convex Functions
- Computer Science, NeurIPS 2021
This paper presents the first universal algorithm for minimizing the adaptive regret of convex functions, which borrows the idea of maintaining multiple learning rates in MetaGrad to handle the uncertainty of functions, and utilizes the technique of sleeping experts to capture changing environments.
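The sleeping-experts technique mentioned here can be summarized in a few lines. The sketch below is a generic specialists-style update, not the paper's actual algorithm; the step size `eta` and the convention of charging asleep experts the awake mixture's loss are assumptions for illustration.

```python
import numpy as np

def sleeping_experts_update(weights, losses, awake, eta=0.5):
    """Generic specialists update: awake experts are charged their own loss;
    asleep experts are charged the awake mixture's loss, so their relative
    weight is preserved while they sleep."""
    awake = np.asarray(awake, dtype=bool)
    p = weights * awake
    p = p / p.sum()                       # prediction distribution over awake experts
    mix_loss = float(p @ losses)          # loss of the awake mixture
    charged = np.where(awake, losses, mix_loss)
    w = weights * np.exp(-eta * charged)  # exponential-weights update
    return w / w.sum()
```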
Parameter-free Online Linear Optimization with Side Information via Universal Coin Betting
- Computer Science, AISTATS 2022
A class of parameter-free online linear optimization algorithms is proposed that adapts to the structure of an adversarial sequence by exploiting side information, modifying the context-tree weighting technique of Willems, Shtarkov, and Tjalkens (1995).
References
Showing 1–10 of 56 references
Adaptivity and Optimality: A Universal Algorithm for Online Convex Optimization
- Computer Science, UAI 2019
The essential idea is to run multiple types of learning algorithms with different learning rates in parallel and use a meta-algorithm to track the best one on the fly; the resulting method, Maler, enjoys optimal regret bounds for general convex, exponentially concave, and strongly convex functions simultaneously.
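As a rough illustration of this experts-plus-meta pattern (not Maler's actual surrogate losses or learning-rate schedule), the meta-algorithm can maintain exponential weights over the experts' losses; the step size `eta` below is illustrative.

```python
import numpy as np

def meta_predict(expert_points, weights):
    """The meta decision is a weighted average of the experts' points."""
    return np.average(expert_points, axis=0, weights=weights)

def meta_update(weights, expert_losses, eta=0.5):
    """Exponential-weights tracking: experts with smaller loss gain weight,
    so the meta-algorithm follows the best expert on the fly."""
    w = weights * np.exp(-eta * np.asarray(expert_losses))
    return w / w.sum()
```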
Adapting to Smoothness: A More Universal Algorithm for Online Convex Optimization
- Computer Science, AAAI 2020
The proposed method, UFO, is the first to achieve an $O(\log L^*)$ regret bound for strongly convex and smooth functions, which is tighter than the existing small-loss bound by an $O(d)$ factor.
Logarithmic regret algorithms for online convex optimization
- Computer Science, Machine Learning, 2007
Several algorithms achieving logarithmic regret are proposed, which, besides being more general, are also much more efficient to implement; one of them, based on the Newton method for optimization, introduces a new tool to the field.
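The Newton-method-based algorithm referenced here is the Online Newton Step. A minimal sketch follows, with a Euclidean ball projection standing in for the projection in the norm induced by A; the constants `gamma` and `radius` are illustrative, and A is typically initialized to a small multiple of the identity.

```python
import numpy as np

def online_newton_step(x, grad, A, gamma=0.5, radius=1.0):
    """One Online-Newton-Step-style update: accumulate a rank-one curvature
    estimate, take a Newton-like step, then project back onto the domain
    (here, an l2 ball for simplicity)."""
    A = A + np.outer(grad, grad)
    x = x - np.linalg.solve(A, grad) / gamma
    norm = np.linalg.norm(x)
    if norm > radius:
        x = x * (radius / norm)
    return x, A
```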
Online Optimization with Gradual Variations
- Computer Science, Mathematics, COLT 2012
It is shown that for linear and general smooth convex loss functions, an online algorithm modified from gradient descent can achieve a regret that scales only with the square root of the gradual variation; as an application, this also yields logarithmic regret for the portfolio management problem.
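The modified gradient descent alluded to here is an optimistic, two-step update that uses the previous round's gradient as a hint; a schematic version (step size and l2-ball domain chosen purely for illustration) is sketched below.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto an l2 ball (illustrative domain)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def optimistic_round(z, prev_grad, grad_fn, eta=0.1):
    """Play a point shifted by last round's gradient (the 'hint'), observe the
    true gradient, then update the secondary sequence z; the regret then
    scales with how much consecutive gradients differ."""
    x = project_ball(z - eta * prev_grad)   # optimistic prediction
    grad = grad_fn(x)                       # gradient actually revealed
    z = project_ball(z - eta * grad)        # secondary update
    return x, z, grad
```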
Regret bounded by gradual variation for online convex optimization
- Computer Science, Machine Learning, 2013
This paper presents two novel algorithms that bound the regret of the Follow the Regularized Leader algorithm by the gradual variation of cost functions, and develops a deterministic algorithm for online bandit optimization in the multi-point bandit setting.
MetaGrad: Multiple Learning Rates in Online Learning
- Computer Science, NIPS 2016
This work presents a new method, MetaGrad, that adapts to a much broader class of functions, including not only exp-concave and strongly convex functions, but also various types of stochastic and non-stochastic functions without any curvature.
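MetaGrad's multiple learning rates form a small geometric grid, one expert per rate, weighted by a master algorithm. The sketch below builds such a grid; the 1/(5GD) scaling follows the usual convention but the exact constants are illustrative, and G, D are assumed bounds on gradient norms and the domain diameter.

```python
import numpy as np

def learning_rate_grid(T, G=1.0, D=1.0):
    """A geometric grid of candidate learning rates; one expert is kept per
    rate and a master algorithm weights the experts."""
    num = int(np.ceil(0.5 * np.log2(max(T, 2)))) + 1
    eta_max = 1.0 / (5.0 * G * D)
    return eta_max * 2.0 ** (-np.arange(num))
```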
Dynamic Regret of Convex and Smooth Functions
- Computer Science, NeurIPS 2020
Novel online algorithms are proposed that are capable of leveraging smoothness and replace the dependence on $T$ in the dynamic regret by problem-dependent quantities: the variation in gradients of loss functions, and the cumulative loss of the comparator sequence.
Proximal regularization for online and batch learning
- Computer Science, ICML 2009
Proximal regularization is employed, in which the original learning problem is solved via a sequence of modified optimization tasks whose objectives are chosen to have greater curvature than the original problem.
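The modified tasks described above add a quadratic proximal term around the previous iterate; a minimal sketch of one such subproblem, using an off-the-shelf solver and an illustrative weight `lam`, is shown below.

```python
import numpy as np
from scipy.optimize import minimize

def proximal_step(loss, x_prev, lam=1.0):
    """Solve min_x loss(x) + (lam/2) * ||x - x_prev||^2: the added quadratic
    gives the subproblem more curvature than the original objective."""
    objective = lambda x: loss(x) + 0.5 * lam * np.sum((x - x_prev) ** 2)
    return minimize(objective, x_prev).x
```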
Online Convex Programming and Generalized Infinitesimal Gradient Ascent
- Computer Science, ICML 2003
An algorithm for convex programming is introduced and shown to be a generalization of infinitesimal gradient ascent; the results imply that generalized infinitesimal gradient ascent (GIGA) is universally consistent.
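In minimization form, this algorithm is projected online gradient descent with a decaying step size; a minimal sketch of one round (l2-ball domain and the usual 1/sqrt(t) step size, with illustrative constants D and G) is given below.

```python
import numpy as np

def ogd_step(x, grad, t, radius=1.0, D=2.0, G=1.0):
    """One round of projected online gradient descent with eta_t = D/(G*sqrt(t)),
    projecting back onto an l2 ball of the given radius."""
    eta = D / (G * np.sqrt(t))
    x = x - eta * grad
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)
```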