## 16 Citations

Adaptive scale-invariant online algorithms for learning linear models

- Computer Science, ICML
- 2019

This paper proposes online algorithms whose predictions are invariant under arbitrary rescaling of the features; they achieve regret bounds matching those of OGD with optimally tuned separate learning rates per dimension, while retaining comparable runtime performance.
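For intuition, the tuned baseline this entry refers to can be sketched as follows (my own illustration, not the paper's algorithm; the hand-picked `lrs` vector of one rate per feature is exactly what the adaptive method avoids tuning):

```python
import numpy as np

def ogd_per_dim(grads, lrs):
    """Online gradient descent with a separate fixed learning rate per
    dimension -- the optimally-tuned baseline whose regret the paper's
    scale-invariant, tuning-free method matches."""
    lrs = np.asarray(lrs, dtype=float)
    w = np.zeros_like(lrs)
    iterates = []
    for g in grads:
        w = w - lrs * np.asarray(g, dtype=float)  # per-coordinate step
        iterates.append(w.copy())
    return iterates

# Usage: two rounds of gradients with different per-dimension rates.
iterates = ogd_per_dim([[1.0, 0.0], [0.0, 2.0]], [0.1, 0.5])
```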

Scale-free Unconstrained Online Learning for Curved Losses

- Computer Science, COLT
- 2022

This work shows that there is in fact never a price to pay for adaptivity when specialising to any of the other common supervised online learning losses, and provides an adaptive method for linear logistic regression that is as efficient as the recent non-adaptive algorithm by Agarwal et al. (2021).

Lipschitz and Comparator-Norm Adaptivity in Online Learning

- Computer Science, Mathematics, COLT
- 2020

Two prior reductions to the unbounded setting are generalized; one to not need hints, and a second to deal with the range ratio problem (which already arises in prior work).

Parameter-free Online Convex Optimization with Sub-Exponential Noise

- Computer Science, COLT
- 2019

It is shown that the lower bound can be circumvented by allowing the observed subgradients to be unbounded via stochastic noise, and a novel parameter-free OCO algorithm for Banach spaces, called BANCO, achieves the optimal regret rate.

On the Initialization for Convex-Concave Min-max Problems

- Computer Science, ALT
- 2022

This work shows that strict-convexity-strict-concavity is sufficient to make the convergence rate depend on the initialization, and shows that so-called "parameter-free" algorithms achieve improved initialization-dependent asymptotic rates without any learning rate to tune.

Implicit Parameter-free Online Learning with Truncated Linear Models

- Computer Science, ALT
- 2022

New parameter-free algorithms are proposed that take advantage of truncated linear models through a new update with an "implicit" flavor; they are efficient, require only one gradient per step, never overshoot the minimum of the truncated model, and retain the favorable parameter-free properties.

A Parameter-free Algorithm for Convex-concave Min-max Problems

- Computer Science, arXiv
- 2021

This paper provides the first parameter-free algorithm for several classes of convex-concave problems and establishes corresponding state-of-the-art convergence rates, including for strictly-convex-strictly-concave min-max problems and min-max problems with non-Euclidean geometry.

User-Specified Local Differential Privacy in Unconstrained Adaptive Online Learning

- Computer Science, Mathematics, NeurIPS
- 2019

This paper derives the first algorithms that have adaptive regret bounds in this setting, i.e. their algorithms adapt to the unknown competitor norm, unknown noise, and unknown sum of the norms of the subgradients, matching state of the art bounds in all cases.

Better Full-Matrix Regret via Parameter-Free Online Learning

- Computer Science, NeurIPS
- 2020

This work provides online convex optimization algorithms that guarantee improved full-matrix regret bounds, and improves the regret analysis of the full-matrix AdaGrad algorithm by suggesting a better learning-rate value and showing how to tune the learning rate to this value on the fly.

Black-Box Reductions for Parameter-free Online Learning in Banach Spaces

- Computer Science, COLT
- 2018

We introduce several new black-box reductions that significantly improve the design of adaptive and parameter-free online learning algorithms by simplifying analysis, improving regret guarantees, and…

## References

Showing 1-10 of 28 references

A generalized online mirror descent with applications to classification and regression

- Computer Science, Machine Learning
- 2014

This work generalizes online mirror descent to time-varying regularizers with generic updates, and derives a new second order algorithm with a regret bound invariant with respect to arbitrary rescalings of individual features.

Unconstrained Online Linear Learning in Hilbert Spaces: Minimax Algorithms and Normal Approximations

- Computer Science, Mathematics, COLT
- 2014

A novel characterization of a large class of minimax algorithms that recovers, and even improves, several previous results as immediate corollaries, and an algorithm with a regret bound of $O\big(U\sqrt{T \log(U\sqrt{T}\log^2 T + 1)}\big)$, where $U$ is the $L_2$ norm of an arbitrary comparator and both $T$ and $U$ are unknown to the player.

Adaptive Subgradient Methods for Online Learning and Stochastic Optimization

- Computer Science, J. Mach. Learn. Res.
- 2010

This work describes and analyzes an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and yields regret guarantees that are provably as good as those of the best proximal function that could be chosen in hindsight.
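The per-coordinate idea behind this work can be sketched with a minimal diagonal-AdaGrad step (a simplified illustration, not the paper's full proximal-function machinery):

```python
import numpy as np

def adagrad_step(w, g, accum, lr=1.0, eps=1e-8):
    """One diagonal-AdaGrad update: each coordinate's effective step size
    shrinks with the square root of its accumulated squared gradients,
    which is what removes the need to hand-tune a single learning rate."""
    accum = accum + g * g
    w = w - lr * g / (np.sqrt(accum) + eps)
    return w, accum

# Usage: minimize f(w) = 0.5 * ||w||^2, whose gradient at w is w itself.
w = np.array([3.0, -2.0])
accum = np.zeros_like(w)
for _ in range(100):
    w, accum = adagrad_step(w, w, accum)
```

Coordinates with large accumulated gradients get small steps and vice versa, so no single global rate has to fit every dimension.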

Adaptive Bound Optimization for Online Convex Optimization

- Computer Science, COLT
- 2010

This work introduces a new online convex optimization algorithm that adaptively chooses its regularization function based on the loss functions observed so far, and proves competitive guarantees showing that the algorithm's bound is within a constant factor of the best possible bound in hindsight.

Online Convex Optimization with Unconstrained Domains and Losses

- Computer Science, NIPS
- 2016

An online convex optimization algorithm (RescaledExp) that achieves optimal regret in the unconstrained setting without prior knowledge of any bounds on the loss functions is proposed and it is shown that it matches prior optimization algorithms that require hyperparameter optimization.

Dimension-Free Exponentiated Gradient

- Computer Science, Mathematics, NIPS
- 2013

I present a new online learning algorithm that extends the exponentiated gradient framework to infinite dimensional spaces. My analysis shows that the algorithm is implicitly able to estimate the L2…

Simultaneous Model Selection and Optimization through Parameter-free Stochastic Learning

- Computer Science, NIPS
- 2014

This paper proposes a new kernel-based stochastic gradient descent algorithm that performs model selection while training, with no parameters to tune, nor any form of cross-validation, to estimate over time the right regularization in a data-dependent way.

Scale-Free Algorithms for Online Linear Optimization

- Computer Science, ALT
- 2015

This work designs algorithms for online linear optimization that achieve optimal regret without needing to know any upper or lower bounds on the norms of the loss vectors, and that work for any decision set, bounded or unbounded.

Uniform regret bounds over Rd for the sequential linear regression problem with the square loss

- Computer Science, Mathematics, ALT
- 2019

This work considers the setting of online linear regression for arbitrary deterministic sequences with the square loss, and derives bounds with an optimal constant of $1$ in front of the $d B^2 \ln T$ term for any individual sequence of features and bounded observations.

Training Deep Networks without Learning Rates Through Coin Betting

- Computer Science, NIPS
- 2017

This paper proposes a new stochastic gradient descent procedure for deep networks that requires no learning-rate setting: it reduces the optimization process to a game of betting on a coin, yielding a learning-rate-free algorithm with optimal guarantees.
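The coin-betting reduction can be sketched with Krichevsky-Trofimov betting on a one-dimensional problem (my illustration of the general idea, not the paper's exact procedure for deep networks; it assumes gradients bounded by 1):

```python
def kt_iterates(grads):
    """Parameter-free 1-D online learning via coin betting: each round
    bets a Krichevsky-Trofimov fraction of the current wealth on the
    average of past (negated) gradients; the bet amount *is* the iterate,
    so no learning rate appears anywhere. Assumes |g| <= 1."""
    wealth = 1.0            # initial endowment
    s = 0.0                 # running sum of coin outcomes c_i = -g_i
    ws = []
    for t, g in enumerate(grads, start=1):
        w = (s / t) * wealth    # KT betting fraction times wealth
        ws.append(w)
        c = -g                  # the "coin": bet against the gradient
        wealth += c * w         # win or lose the bet
        s += c
    return ws

# Usage: a stream of identical gradients makes wealth compound, so the
# iterates grow on their own, mimicking a well-tuned learning rate.
ws = kt_iterates([-1.0] * 5)
```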