Corpus ID: 170078573

# Beyond Online Balanced Descent: An Optimal Algorithm for Smoothed Online Optimization

@inproceedings{Goel2019BeyondOB,
title={Beyond Online Balanced Descent: An Optimal Algorithm for Smoothed Online Optimization},
author={Gautam Goel and Yiheng Lin and Haoyuan Sun and Adam Wierman},
booktitle={NeurIPS},
year={2019}
}
We study online convex optimization in a setting where the learner seeks to minimize the sum of a per-round hitting cost and a movement cost which is incurred when changing decisions between rounds. We prove a new lower bound on the competitive ratio of any online algorithm in the setting where the costs are $m$-strongly convex and the movement costs are the squared $\ell_2$ norm. This lower bound shows that no algorithm can achieve a competitive ratio that is $o(m^{-1/2})$ as $m$ tends to zero…
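The cost structure described above can be made concrete with a small sketch that scores a decision sequence against per-round hitting costs and squared-$\ell_2$ movement costs. The quadratic hitting cost, the start at the origin, and the function name are illustrative assumptions, not details from the paper:

```python
import numpy as np

def soco_total_cost(decisions, minimizers, m):
    """Score a decision sequence in smoothed online convex optimization.

    Hitting cost at round t is the m-strongly convex quadratic
    (m/2) * ||x_t - v_t||^2 centered at a given minimizer v_t (a
    stand-in for a general m-strongly convex cost); movement cost is
    the squared l2 distance between consecutive decisions, with the
    sequence assumed to start at the origin.
    """
    x = np.asarray(decisions, dtype=float)
    v = np.asarray(minimizers, dtype=float)
    hitting = 0.5 * m * np.sum((x - v) ** 2)
    prev = np.vstack([np.zeros((1, x.shape[1])), x[:-1]])  # x_0 = 0
    movement = np.sum((x - prev) ** 2)
    return hitting + movement

# Staying at the origin pays only hitting costs; tracking the
# minimizers exactly pays only movement costs.
vs = [[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
stay = soco_total_cost([[0.0, 0.0]] * 3, vs, m=2.0)   # 4.0
track = soco_total_cost(vs, vs, m=2.0)                # 3.0
```

The tension the paper studies is exactly this trade-off: any online algorithm must interpolate between chasing each round's minimizer and standing still.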
Revisiting Smoothed Online Learning
• Computer Science, Mathematics
• ArXiv
• 2021
The proposed algorithm, named Smoothed Ader, attains an optimal $O(\sqrt{T(1+P_T)})$ bound for dynamic regret with switching cost, where $P_T$ is the path-length of the comparator sequence.
Online Optimization with Predictions and Non-convex Losses
• Computer Science, Mathematics
• Proc. ACM Meas. Anal. Comput. Syst.
• 2020
This work gives two general sufficient conditions that specify a relationship between the hitting and movement costs which guarantees that a new algorithm, Synchronized Fixed Horizon Control (SFHC), achieves a $1+O(1/w)$ competitive ratio, where $w$ is the number of predictions available to the learner.
Online Convex Optimization with Continuous Switching Constraint
• Guanghui Wang
• Computer Science, Mathematics
• ArXiv
• 2021
The essential idea is to carefully design an adaptive adversary who can adjust the loss function according to the cumulative switching cost the player has incurred so far, based on an orthogonality technique, and to develop a simple gradient-based algorithm which enjoys the minimax optimal regret bound.
Scale-Free Allocation, Amortized Convexity, and Myopic Weighted Paging
• Computer Science, Mathematics
• ArXiv
• 2020
A natural myopic model for weighted paging in which an algorithm has access to the relative ordering of all pages with respect to the time of their next arrival is considered, which yields an $\ell$-competitive deterministic and an $O(\log \ell)$-competitive randomized algorithm, where $\ell$ is the number of distinct weight classes.
Dimension-Free Bounds on Chasing Convex Functions
• Computer Science, Mathematics
• COLT
• 2020
The problem of chasing convex functions, where functions arrive over time, is considered, and an algorithm is given that achieves $O(\sqrt{k})$-competitiveness when the functions are supported on $k$-dimensional affine subspaces.
Power of Hints for Online Learning with Movement Costs
• Computer Science
• AISTATS
• 2021
This work studies the stability of simple algorithms that obtain the optimal $\sqrt{T}$ regret, and provides matching upper and lower bounds showing that incorporating movement costs results in intricate tradeoffs between $\log T$ and $\sqrt{T}$ regret depending on the movement-cost parameter.
Leveraging Predictions in Smoothed Online Convex Optimization via Gradient-based Algorithms
• Computer Science, Engineering
• NeurIPS
• 2020
A gradient-based online algorithm, Receding Horizon Inexact Gradient (RHIG), is introduced, and its performance is analyzed via dynamic regret bounds in terms of the temporal variation of the environment and the prediction errors.
Chasing Convex Bodies Optimally
The functional Steiner point of a convex function is defined and applied to the work function to obtain an algorithm achieving competitive ratio $d$ for arbitrary normed spaces, which is exactly tight for $\ell^{\infty}$.
Chasing Convex Bodies with Linear Competitive Ratio
• Mathematics, Computer Science
• SODA
• 2020
An algorithm is given that is $O(\min(d, \sqrt{d \log T}))$-competitive for any sequence of length $T$.
Beyond No-Regret: Competitive Control via Online Optimization with Memory
• Computer Science, Engineering
• ArXiv
• 2020
A novel reduction from online control of a class of controllable systems to online convex optimization with memory is provided, and a new algorithm is designed that has a constant, dimension-free competitive ratio, leading to a new constant-competitive approach for online control.

#### References

Smoothed Online Convex Optimization in High Dimensions via Online Balanced Descent
• Computer Science, Mathematics
• COLT
• 2018
OBD is the first algorithm to achieve a dimension-free competitive ratio, $3 + O(1/\alpha)$, for locally polyhedral costs, where $\alpha$ measures the "steepness" of the costs.
A Tight Lower Bound for Online Convex Optimization with Switching Costs
• Mathematics, Computer Science
• WAOA
• 2017
This work investigates online convex optimization with switching costs (OCO), a natural online problem arising when rightsizing data centers, and designs competitive algorithms based on the sum of costs of all steps.
Competitive ratio vs regret minimization: achieving the best of both worlds
• Computer Science
• ALT
• 2019
An expert algorithm is obtained that can combine a “base” online algorithm, having a guaranteed competitive ratio, with a range of online algorithms that guarantee a small regret over any interval of time.
A 2-Competitive Algorithm For Online Convex Optimization With Switching Costs
• Computer Science, Mathematics
• APPROX-RANDOM
• 2015
This work considers a natural online optimization problem set on the real line, where at each integer time, a convex function arrives online and the online algorithm picks a new location, and gives a 2-competitive algorithm for this problem.
An Online Algorithm for Smoothed Regression and LQR Control
• Computer Science
• AISTATS
• 2019
The generality of the OBD framework can be used to construct competitive algorithms for a variety of online problems across learning and control, including online variants of ridge regression, logistic regression, maximum likelihood estimation, and LQR control.
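As a rough illustration of the balanced-descent idea behind OBD, here is a simplified one-dimensional sketch: move from the previous point toward the current cost's minimizer and stop where the movement cost balances (up to a factor $\beta$) the remaining hitting cost. The quadratic cost, the bisection search, and the parameter names are assumptions for illustration; the actual OBD update is a projection onto level sets in higher dimensions.

```python
def obd_step(x_prev, v, m, beta, iters=80):
    """One balanced-descent-style step in one dimension (an
    illustrative simplification, not the exact OBD projection).

    Hitting cost: f(x) = (m/2) * (x - v)**2, minimized at v.  We move
    from x_prev toward v and stop at the point where the movement cost
    |x - x_prev| equals beta times the remaining hitting cost f(x),
    located by bisection on the segment [x_prev, v].
    """
    if x_prev == v:
        return v

    def balance(t):
        # Negative while moving further is still "cheap" relative to
        # the hitting cost; strictly increasing in t on [0, 1].
        x = x_prev + t * (v - x_prev)
        return abs(x - x_prev) - beta * 0.5 * m * (x - v) ** 2

    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if balance(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return x_prev + hi * (v - x_prev)
```

For example, `obd_step(0.0, 1.0, m=2.0, beta=1.0)` stops at $t = (3-\sqrt{5})/2 \approx 0.382$ of the way to the minimizer: far enough to shrink the hitting cost, but not so far that movement dominates.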
Better Bounds for Online Line Chasing
• Mathematics, Computer Science
• MFCS
• 2019
This work significantly improves the lower bound on the competitive ratio from $1.412$ to $1.5358$, and provides a $3$-competitive algorithm for any dimension $d$.
Using Predictions in Online Optimization with Switching Costs: A Fast Algorithm and A Fundamental Limit
• Computer Science
• 2018 Annual American Control Conference (ACC)
• 2018
A computationally efficient algorithm, Receding Horizon Gradient Descent (RHGD), which only requires a finite number of gradient evaluations at each time is proposed, and it is shown that both the dynamic regret and the competitive ratio of the algorithm decay exponentially fast with the length of the prediction window.
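To give a flavor of gradient-based receding-horizon methods like the RHGD scheme summarized above, the sketch below runs plain gradient descent on a lookahead objective (assumed quadratic hitting costs plus squared-$\ell_2$ movement) over a window of predicted minimizers; an online scheme would commit only to the first decision. This is a loose illustration under assumed costs, not the paper's exact algorithm, which carefully limits gradient evaluations per round.

```python
import numpy as np

def window_cost(x_prev, xs, vs, m):
    """Hitting plus squared-l2 movement cost over a prediction window."""
    xs = np.asarray(xs, dtype=float)
    vs = np.asarray(vs, dtype=float)
    prev = np.vstack([np.asarray(x_prev, dtype=float)[None, :], xs[:-1]])
    return 0.5 * m * np.sum((xs - vs) ** 2) + np.sum((xs - prev) ** 2)

def lookahead_gd(x_prev, vs, m, steps=500, lr=0.05):
    """Plain gradient descent on the lookahead objective
    sum_t (m/2)||x_t - v_t||^2 + ||x_t - x_{t-1}||^2, with x_0 fixed
    at x_prev and v_t the predicted per-round minimizers.
    """
    x_prev = np.asarray(x_prev, dtype=float)
    vs = np.asarray(vs, dtype=float)
    xs = np.tile(x_prev, (len(vs), 1))
    for _ in range(steps):
        prev = np.vstack([x_prev[None, :], xs[:-1]])
        grad = m * (xs - vs) + 2.0 * (xs - prev)   # hitting + backward movement
        grad[:-1] -= 2.0 * (xs[1:] - xs[:-1])      # forward movement term
        xs = xs - lr * grad
    return xs
```

Because the window objective is strongly convex, this converges to the finite-horizon optimum, which is strictly cheaper than either staying put or tracking the predicted minimizers exactly.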
Online convex optimization with ramp constraints
• Mathematics, Computer Science
• 2015 54th IEEE Conference on Decision and Control (CDC)
• 2015
It is proved that AFHC achieves the asymptotically optimal achievable competitive difference within a general class of “forward looking” online algorithms.
Online Convex Optimization Using Predictions
• Computer Science
• SIGMETRICS 2015
• 2015
It is proved that achieving sublinear regret and constant competitive ratio for online algorithms requires the use of an unbounded prediction window in adversarial settings, but that under more realistic stochastic prediction error models it is possible to use Averaging Fixed Horizon Control (AFHC) to simultaneously achieve sublinear regret and constant competitive ratio in expectation using only a constant-sized prediction window.
A polylog(n)-competitive algorithm for metrical task systems
• Mathematics, Computer Science
• STOC '97
• 1997
We present a randomized on-line algorithm for the Metrical Task System problem that achieves a competitive ratio of $O(\log^6 n)$ for arbitrary metric spaces, against an oblivious adversary…