Corpus ID: 170078573

Beyond Online Balanced Descent: An Optimal Algorithm for Smoothed Online Optimization

@inproceedings{Goel2019BeyondOB,
  title={Beyond Online Balanced Descent: An Optimal Algorithm for Smoothed Online Optimization},
  author={Gautam Goel and Yiheng Lin and Haoyuan Sun and Adam Wierman},
  booktitle={NeurIPS},
  year={2019}
}
We study online convex optimization in a setting where the learner seeks to minimize the sum of a per-round hitting cost and a movement cost which is incurred when changing decisions between rounds. We prove a new lower bound on the competitive ratio of any online algorithm in the setting where the costs are $m$-strongly convex and the movement costs are the squared $\ell_2$ norm. This lower bound shows that no algorithm can achieve a competitive ratio that is $o(m^{-1/2})$ as $m$ tends to zero…
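As a concrete illustration of the setting described in the abstract, the sketch below instantiates the per-round cost with $m$-strongly convex quadratic hitting costs and squared $\ell_2$ movement costs. The quadratic hitting costs, their minimizer points, the greedy one-step update, and all parameter values are assumptions made for the example; this is not the algorithm proposed in the paper.

```python
import numpy as np

def hitting_cost(x, v, m):
    """m-strongly convex quadratic hitting cost centered at v (toy instance)."""
    return 0.5 * m * np.sum((x - v) ** 2)

def greedy_step(x_prev, v, m):
    """Closed-form minimizer of f_t(x) + (1/2)||x - x_prev||^2 for the quadratic f_t above.
    Setting the gradient m(x - v) + (x - x_prev) to zero gives x = (m v + x_prev) / (m + 1)."""
    return (m * v + x_prev) / (m + 1.0)

def run(vs, m, x0):
    """Play the greedy point each round; return total hitting + squared-l2 movement cost."""
    x, total = x0, 0.0
    for v in vs:
        x_next = greedy_step(x, v, m)
        total += hitting_cost(x_next, v, m) + 0.5 * np.sum((x_next - x) ** 2)
        x = x_next
    return total

# Toy usage: small m is the regime where the abstract's Omega(m^{-1/2})
# lower bound on the competitive ratio bites.
rng = np.random.default_rng(0)
vs = rng.normal(size=(50, 3))
print(run(vs, m=0.01, x0=np.zeros(3)))
```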
Revisiting Smoothed Online Learning
TLDR
The proposed algorithm, named Smoothed Ader, attains an optimal $O(\sqrt{T(1 + P_T)})$ bound on dynamic regret with switching cost, where $P_T$ is the path length of the comparator sequence.
Online Optimization with Predictions and Non-convex Losses
TLDR
This work gives two general sufficient conditions specifying a relationship between the hitting and movement costs that guarantees a new algorithm, Synchronized Fixed Horizon Control (SFHC), achieves a $1+O(1/w)$ competitive ratio, where $w$ is the number of predictions available to the learner.
Online Convex Optimization with Continuous Switching Constraint
TLDR
The essential idea is to carefully design an adaptive adversary, based on an orthogonality technique, that adjusts the loss function according to the cumulative switching cost the player has incurred so far, and to develop a simple gradient-based algorithm that enjoys the minimax-optimal regret bound.
Scale-Free Allocation, Amortized Convexity, and Myopic Weighted Paging
TLDR
This work considers a natural myopic model for weighted paging in which an algorithm has access to the relative ordering of all pages with respect to the time of their next arrival, and provides an $\ell$-competitive deterministic and an $O(\log \ell)$-competitive randomized algorithm, where $\ell$ is the number of distinct weight classes.
Dimension-Free Bounds on Chasing Convex Functions
TLDR
The problem of chasing convex functions, where functions arrive over time, is considered, and an algorithm is given that achieves $O(\sqrt{\kappa})$-competitiveness when the functions are supported on $k$-dimensional affine subspaces.
Power of Hints for Online Learning with Movement Costs
TLDR
This work studies the stability of simple algorithms that obtain the optimal $O(\sqrt{T})$ regret, and provides matching upper and lower bounds showing that incorporating movement costs results in intricate tradeoffs between $\log T$ and $\sqrt{T}$ regret, depending on the movement-cost parameter.
Leveraging Predictions in Smoothed Online Convex Optimization via Gradient-based Algorithms
TLDR
A gradient-based online algorithm, Receding Horizon Inexact Gradient (RHIG), is introduced, and its performance is analyzed via dynamic regret bounds stated in terms of the temporal variation of the environment and the prediction errors.
Chasing Convex Bodies Optimally
TLDR
The functional Steiner point of a convex function is defined and applied to the work function to obtain an algorithm achieving competitive ratio $d$ for arbitrary normed spaces, which is exactly tight for $\ell^{\infty}$.
Chasing Convex Bodies with Linear Competitive Ratio
TLDR
An algorithm is given that is $O(\min(d, \sqrt{d \log T}))$-competitive for any sequence of length $T$.
Beyond No-Regret: Competitive Control via Online Optimization with Memory
TLDR
A novel reduction from online control of a class of controllable systems to online convex optimization with memory is provided and a new algorithm is designed that has a constant, dimension-free competitive ratio, leading to a new constant-competitive approach for online control.

References

Showing 1-10 of 33 references
Smoothed Online Convex Optimization in High Dimensions via Online Balanced Descent
TLDR
Online Balanced Descent (OBD) is the first algorithm to achieve a dimension-free competitive ratio, $3 + O(1/\alpha)$, for locally polyhedral costs, where $\alpha$ measures the "steepness" of the costs.
A Tight Lower Bound for Online Convex Optimization with Switching Costs
TLDR
This work investigates online convex optimization (OCO) with switching costs, a natural online problem arising when rightsizing data centers, and designs competitive algorithms whose performance is measured by the sum of costs over all steps.
Competitive ratio vs regret minimization: achieving the best of both worlds
TLDR
An expert algorithm is obtained that can combine a “base” online algorithm, having a guaranteed competitive ratio, with a range of online algorithms that guarantee a small regret over any interval of time.
A 2-Competitive Algorithm For Online Convex Optimization With Switching Costs
TLDR
This work considers a natural online optimization problem on the real line, in which a convex function arrives at each integer time and the online algorithm picks a new location, and gives a $2$-competitive algorithm for this problem.
An Online Algorithm for Smoothed Regression and LQR Control
TLDR
The generality of the OBD framework can be used to construct competitive algorithms for a variety of online problems across learning and control, including online variants of ridge regression, logistic regression, maximum likelihood estimation, and LQR control.
Better Bounds for Online Line Chasing
TLDR
This work significantly improves the lower bound on the competitive ratio, from $1.412$ to $1.5358$, and provides a $3$-competitive algorithm for any dimension $d$.
Using Predictions in Online Optimization with Switching Costs: A Fast Algorithm and A Fundamental Limit
TLDR
A computationally efficient algorithm, Receding Horizon Gradient Descent (RHGD), which requires only a finite number of gradient evaluations at each time step, is proposed, and it is shown that both the dynamic regret and the competitive ratio of the algorithm decay exponentially fast with the length of the prediction window.
Online convex optimization with ramp constraints
TLDR
It is proved that AFHC achieves the asymptotically optimal achievable competitive difference within a general class of “forward looking” online algorithms.
Online Convex Optimization Using Predictions
TLDR
It is proved that achieving sublinear regret and constant competitive ratio for online algorithms requires the use of an unbounded prediction window in adversarial settings, but that under more realistic stochastic prediction error models it is possible to use Averaging Fixed Horizon Control (AFHC) to simultaneously achieve sublinear regret and constant competitive ratio in expectation using only a constant-sized prediction window.
A polylog(n)-competitive algorithm for metrical task systems
We present a randomized on-line algorithm for the Metrical Task System problem that achieves a competitive ratio of $O(\log^6 n)$ for arbitrary metric spaces, against an oblivious adversary. …