Online optimization in dynamic environments: Improved regret rates for strongly convex problems

@article{Mokhtari2016OnlineOI,
  title={Online optimization in dynamic environments: Improved regret rates for strongly convex problems},
  author={Aryan Mokhtari and Shahin Shahrampour and Ali Jadbabaie and Alejandro Ribeiro},
  journal={2016 IEEE 55th Conference on Decision and Control (CDC)},
  year={2016},
  pages={7195-7201}
}
In this paper, we address tracking of a time-varying parameter with unknown dynamics. We formalize the problem as an instance of online optimization in a dynamic setting. Using online gradient descent, we propose a method that sequentially predicts the value of the parameter and in turn suffers a loss. The objective is to minimize the accumulation of losses over the time horizon, a notion that is termed dynamic regret. While existing methods focus on convex loss functions, we consider strongly… 
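To make the setup concrete, here is a minimal sketch of the scheme the abstract describes: at each round the learner plays its current estimate, a strongly convex loss centred at the (unknown, drifting) parameter is revealed, the learner takes a gradient step, and dynamic regret is accumulated against the per-round minimizer. The quadratic loss, random-walk drift, and constant step size below are illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

def online_gradient_tracking(T=1000, dim=2, eta=0.5, drift=0.01, seed=0):
    """Toy online gradient descent tracking a slowly drifting parameter.

    Assumptions (not from the paper): quadratic loss f_t(x) = 0.5*||x - theta_t||^2,
    random-walk drift for theta_t, and a constant step size eta.
    """
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(dim)   # unknown time-varying parameter
    x = np.zeros(dim)                  # learner's current estimate
    dynamic_regret = 0.0

    for t in range(T):
        # Learner predicts x, then the loss f_t (centred at theta_t) is revealed.
        loss_x = 0.5 * np.sum((x - theta) ** 2)
        loss_opt = 0.0                 # the per-round minimizer is theta_t itself
        dynamic_regret += loss_x - loss_opt

        # Online gradient step on the revealed loss.
        grad = x - theta
        x = x - eta * grad

        # Environment: the parameter drifts with unknown dynamics.
        theta = theta + drift * rng.standard_normal(dim)

    return dynamic_regret

print(online_gradient_tracking())
```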

Citations of this paper

Distributed Online Convex Optimization with Improved Dynamic Regret

This paper proposes a distributed online gradient descent algorithm that relies on an online adaptation of the gradient tracking technique used in static optimization. It shows that the resulting dynamic regret bound has no explicit dependence on the time horizon and can be tighter than existing bounds, especially for problems with long horizons.
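A rough sketch of the gradient-tracking idea referred to above, on a toy network problem: each agent keeps a decision variable and an auxiliary variable that tracks the network-average gradient, mixed through a doubly stochastic matrix. The quadratic local losses, complete-graph mixing matrix, and static optima below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def local_grad(i, x, targets):
    # Hypothetical local loss f_i(x) = 0.5*||x - targets[i]||^2.
    return x - targets[i]

n, dim, eta, T = 4, 2, 0.3, 50
rng = np.random.default_rng(0)
W = np.full((n, n), 1.0 / n)                 # complete-graph mixing (doubly stochastic)
targets = rng.standard_normal((n, dim))      # per-agent optima (these would drift online)

X = np.zeros((n, dim))                                              # decisions
Y = np.array([local_grad(i, X[i], targets) for i in range(n)])      # gradient trackers

for t in range(T):
    grads_old = np.array([local_grad(i, X[i], targets) for i in range(n)])
    X = W @ X - eta * Y                      # consensus step + descent along tracked gradient
    grads_new = np.array([local_grad(i, X[i], targets) for i in range(n)])
    Y = W @ Y + grads_new - grads_old        # update the tracker of the average gradient

print(X.mean(axis=0), targets.mean(axis=0))  # agents approach the network-wide minimizer
```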

Dynamic Regret of Online Mirror Descent for Relatively Smooth Convex Cost Functions

This letter shows that the dynamic regret can be bounded even when neither Lipschitz continuity nor uniform smoothness is present, by adopting the notion of relative smoothness with respect to a user-defined regularization function, a much milder requirement on the cost functions.
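For context, the standard online mirror descent update with a user-defined regularization function $h$, together with the relative-smoothness condition invoked above, can be written as follows (the notation here is assumed, not quoted from the letter):

```latex
% Online mirror descent step with regularizer h, step size \eta, and Bregman divergence D_h:
x_{t+1} = \operatorname*{arg\,min}_{x \in \mathcal{X}} \; \langle \nabla f_t(x_t), x \rangle
          + \tfrac{1}{\eta}\, D_h(x, x_t),
\qquad
D_h(x, y) = h(x) - h(y) - \langle \nabla h(y), x - y \rangle .

% f_t is L-smooth relative to h if, for all x, y \in \mathcal{X},
f_t(x) \le f_t(y) + \langle \nabla f_t(y), x - y \rangle + L \, D_h(x, y).
```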

Dynamic Regret Bounds for Online Nonconvex Optimization

This paper introduces two algorithms and proves that the dynamic regret of each is bounded by a function of the temporal variation in the optimal decision. It also defines time-varying target sets, which contain the global solution and exhibit desirable properties under the projected gradient descent algorithm.

Dynamic Regret of Convex and Smooth Functions

Novel online algorithms are proposed that leverage smoothness and replace the dependence on $T$ in the dynamic regret with problem-dependent quantities: the variation in the gradients of the loss functions and the cumulative loss of the comparator sequence.
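The "variation in gradients" that replaces the dependence on $T$ is typically measured by a quantity of the following form (a standard definition, assumed here rather than quoted from the paper):

```latex
V_T = \sum_{t=2}^{T} \sup_{x \in \mathcal{X}} \lVert \nabla f_t(x) - \nabla f_{t-1}(x) \rVert^2 .
```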

A Distributed Online Convex Optimization Algorithm with Improved Dynamic Regret

This work proposes a gradient tracking algorithm in which agents jointly communicate and descend based on corrected gradient steps, and shows that the regret bound's dependence on the number of time steps can be removed when the local objective functions are strongly convex.

Optimal Dynamic Regret in Proper Online Learning with Strongly Convex Losses and Beyond

This work answers an open problem in (Baby and Wang, 2021) by showing that, in a proper learning setup, Strongly Adaptive algorithms can achieve the near-optimal dynamic regret of $\tilde{O}\big(d^{1/3} n^{1/3}\, \mathrm{TV}[u_{1:n}]^{2/3} \vee d\big)$ against any comparator sequence $u_1, \ldots, u_n$ simultaneously, where $n$ is the time horizon and $\mathrm{TV}[u_{1:n}]$ is the total variation of the comparator sequence.
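The path-length quantity appearing in this bound is the total variation of the comparator sequence; a standard definition (the choice of norm is an assumption) is:

```latex
\mathrm{TV}[u_{1:n}] = \sum_{t=2}^{n} \lVert u_t - u_{t-1} \rVert .
```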

Adaptive Online Optimization with Predictions: Static and Dynamic Environments

New step-size rules and OCO algorithms are proposed that simultaneously exploit gradient predictions, function predictions, and dynamics, features that are particularly pertinent to control applications.

Online Convex Optimization With Time-Varying Constraints and Bandit Feedback

It is shown that the algorithm achieves sublinear regret with respect to the dynamic benchmark sequence and sublinear constraint violations, as long as the drift of the benchmark sequence is sublinear, i.e., as long as the underlying dynamic optimization problems do not vary too drastically.
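One common way to formalize the two performance measures mentioned above, dynamic regret against a benchmark sequence $x_1^*, \ldots, x_T^*$ and cumulative violation of time-varying constraints $g_t(x) \le 0$, is the following (the paper's exact definitions may differ):

```latex
\mathrm{Reg}^{d}_T = \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(x_t^{*}),
\qquad
\mathrm{Vio}_T = \sum_{t=1}^{T} \big[\, g_t(x_t) \,\big]_{+} .
```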

Projection Free Dynamic Online Learning

A projection-free scheme based on Frank-Wolfe is proposed, where, instead of online gradient steps, the algorithm's required information is relaxed to noisy gradient estimates only, i.e., partial feedback, and the corresponding dynamic regret bounds are derived.
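A minimal sketch of a projection-free (Frank-Wolfe style) online step on the probability simplex, using a noisy gradient estimate in place of the exact gradient; the simplex feasible set, noise model, and step-size schedule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def frank_wolfe_online_step(x, noisy_grad, gamma):
    """One projection-free update: solve a linear problem over the feasible set
    (here the probability simplex) and move toward its solution.

    x: current iterate in the simplex, noisy_grad: noisy gradient estimate,
    gamma: step size in (0, 1]. Illustrative sketch only.
    """
    # Linear minimization oracle over the simplex: pick the vertex with the
    # smallest gradient coordinate (no projection is ever required).
    s = np.zeros_like(x)
    s[np.argmin(noisy_grad)] = 1.0
    # The convex combination keeps the iterate feasible automatically.
    return (1.0 - gamma) * x + gamma * s

# Hypothetical usage: track the minimizer of f_t(x) = 0.5*||x - c||^2 over the simplex.
rng = np.random.default_rng(1)
dim, T = 5, 200
x = np.full(dim, 1.0 / dim)
c = np.array([0.6, 0.1, 0.1, 0.1, 0.1])
for t in range(1, T + 1):
    noisy_grad = (x - c) + 0.05 * rng.standard_normal(dim)  # partial / noisy feedback
    x = frank_wolfe_online_step(x, noisy_grad, gamma=2.0 / (t + 2))
print(np.round(x, 3))
```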

Predictive Online Convex Optimization

...
