
Stochastic Online Convex Optimization; Application to probabilistic time series forecasting

@article{Wintenberger2021StochasticOC,
  title={Stochastic Online Convex Optimization; Application to probabilistic time series forecasting},
  author={Olivier Wintenberger},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.00729}
}
Stochastic regret bounds for online algorithms are usually derived from an "online to batch" conversion. Inverting the reasoning, we start our analysis with a "batch to online" conversion that applies to any Stochastic Online Convex Optimization problem under a stochastic exp-concavity condition. We obtain fast-rate stochastic regret bounds with high probability for non-convex loss functions. Based on this approach, we provide prediction and probabilistic forecasting methods for non-stationary unbounded…
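
For context, the standard "online to batch" conversion that the paper inverts can be stated as follows; this is a textbook summary assuming i.i.d. data and convex losses, not a result quoted from the paper:

$$\bar\theta_T = \frac{1}{T}\sum_{t=1}^{T}\theta_t, \qquad \mathbb{E}\big[L(\bar\theta_T)\big] - \min_{\theta} L(\theta) \;\le\; \frac{\mathbb{E}[R_T]}{T}, \qquad R_T = \sum_{t=1}^{T}\ell_t(\theta_t) - \min_{\theta}\sum_{t=1}^{T}\ell_t(\theta),$$

where $\theta_1,\dots,\theta_T$ are the online iterates and $L(\theta)=\mathbb{E}[\ell(\theta)]$ is the risk; the inequality follows from Jensen's inequality applied to the convex risk. The "batch to online" direction studied here reverses this reduction.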

Online PAC-Bayes Learning

New PAC-Bayesian bounds are proved in this online learning framework, leveraging an updated definition of regret, and classical PAC-Bayesian results are revisited with a batch-to-online conversion, extending their remit to the case of dependent data.

Learning from time-dependent streaming data with online stochastic algorithms

This heuristic provides new insights into choosing the optimal learning rate, which can help increase the stability of SG-based methods; these investigations suggest large streaming batches with slowly decaying learning rates for highly dependent data sources.
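
As a rough illustration of that recommendation, the sketch below runs mini-batch stochastic gradient descent over a stream with a slowly decaying step size; the batch size, the step-size form c / t**alpha, and the gradient interface are hypothetical choices, not the tuning rule derived in the reference.

import numpy as np

def streaming_sgd(stream, theta0, grad, batch_size=256, c=1.0, alpha=0.6):
    # Mini-batch SGD over a data stream with a slowly decaying step size
    # lr_t = c / t**alpha (alpha < 1 keeps the decay slow, as suggested for
    # highly dependent data). Illustrative sketch only.
    theta = np.asarray(theta0, dtype=float)
    batch, t = [], 0
    for x in stream:
        batch.append(x)
        if len(batch) == batch_size:
            t += 1
            lr = c / t ** alpha
            g = np.mean([grad(theta, xi) for xi in batch], axis=0)
            theta = theta - lr * g
            batch = []
    return theta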

References


Learning Theory and Algorithms for Forecasting Non-stationary Time Series

Data-dependent learning bounds for the general scenario of non-stationary non-mixing stochastic processes are presented in terms of a data-dependent measure of sequential complexity and a discrepancy measure that can be estimated from data under some mild assumptions.

Online Learning for Time Series Prediction

This work develops effective online learning algorithms for predicting a time series with the ARMA (autoregressive moving average) model, without assuming that the noise terms are Gaussian, identically distributed, or even independent.
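
A minimal sketch of this style of improper online time-series learning, fitting an AR(m) predictor by online gradient descent on the squared loss; the order m, the step size, and the loss are illustrative assumptions rather than the exact algorithm of the reference.

import numpy as np

def online_ar_predict(series, m=10, lr=0.01):
    # Improper online prediction of an ARMA-like series through an AR(m)
    # approximation, updated by online gradient descent on the squared loss.
    # Sketch only: m, the step size and the loss are illustrative choices.
    series = np.asarray(series, dtype=float)
    theta = np.zeros(m)
    preds = []
    for t in range(m, len(series)):
        x_past = series[t - m:t][::-1]        # most recent m observations
        y_hat = theta @ x_past                # one-step-ahead prediction
        preds.append(y_hat)
        grad = 2.0 * (y_hat - series[t]) * x_past
        theta = theta - lr * grad             # online gradient step
    return np.array(preds), theta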

Lower and Upper Bounds on the Generalization of Stochastic Exponentially Concave Optimization

High probability lower and upper bounds on the excess risk of stochastic optimization of exponentially concave loss functions are derived, indicating that the obtained upper bound is optimal up to a logarithmic factor.
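
For reference, the exp-concavity condition these bounds rely on is the standard one (a textbook definition, not a statement specific to this reference):

$$f \text{ is } \alpha\text{-exp-concave} \quad\Longleftrightarrow\quad \theta \mapsto \exp\big(-\alpha f(\theta)\big) \text{ is concave.}$$

For example, the squared loss is exp-concave on a bounded prediction domain even though it is not strongly convex in general.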

The Generalization Ability of Online Algorithms for Dependent Data

It is shown that the generalization error of any stable online algorithm concentrates around its regret, an easily computable statistic of the algorithm's online performance, when the underlying ergodic process is β- or φ-mixing.

Optimal learning with Bernstein online aggregation

This work introduces a new recursive aggregation procedure called Bernstein Online Aggregation (BOA), which is optimal for the model selection aggregation problem in the bounded i.i.d. setting with the square loss and is the first online algorithm to achieve the fast rate of convergence.
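
A minimal sketch of a second-order exponentially weighted aggregation in the spirit of BOA; it assumes a single fixed learning rate eta and convex losses (so the weighted average of expert losses upper-bounds the mixture's loss), whereas the actual procedure uses adaptive, expert-dependent rates.

import numpy as np

def boa_style_aggregate(expert_losses, eta=0.1):
    # Second-order exponential weights over K experts across T rounds.
    # expert_losses: array of shape (T, K). Sketch only, not the exact
    # BOA update of the cited paper.
    expert_losses = np.asarray(expert_losses, dtype=float)
    T, K = expert_losses.shape
    w = np.full(K, 1.0 / K)
    weights_history = []
    for t in range(T):
        weights_history.append(w.copy())
        agg_loss = w @ expert_losses[t]            # loss of the weighted mixture (convex case)
        r = expert_losses[t] - agg_loss            # instantaneous excess loss of each expert
        w = w * np.exp(-eta * r - (eta * r) ** 2)  # first- plus second-order correction
        w = w / w.sum()
    return np.array(weights_history)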

Combining Adversarial Guarantees and Stochastic Fast Rates in Online Learning

This work considers online learning algorithms that guarantee worst-case regret rates in adversarial environments, yet adapt optimally to favorable stochastic environments (so they perform well in a variety of settings of practical importance), and quantifies the friendliness of stochastic environments by means of the well-known Bernstein condition.
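
The Bernstein condition mentioned here is usually stated as follows (a standard textbook form; the reference's exact formulation may differ in constants): there exist $B>0$ and $\beta\in(0,1]$ such that

$$\mathbb{E}\big[(\ell_f - \ell_{f^*})^2\big] \;\le\; B\,\big(\mathbb{E}[\ell_f - \ell_{f^*}]\big)^{\beta}$$

for every predictor $f$, where $f^*$ is the risk minimizer; $\beta = 1$ is the most favorable case and yields the fast rates referred to above.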

Stochastic Online Optimization using Kalman Recursion

This work proves that the Extended Kalman Filter enters a local phase in which the iterates remain in a small region around the optimum, and provides explicit high-probability bounds on this convergence time and on the cumulative excess risk in an unconstrained setting.
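
A minimal sketch of the Extended Kalman Filter recursion for a static parameter, viewed as a stochastic online optimizer; the observation model h(theta, x), its gradient, and the noise variance R are illustrative assumptions, not the setting analyzed in the reference.

import numpy as np

def ekf_static(observations, features, h, grad_h, theta0, P0, R=1.0):
    # EKF recursion with no state dynamics (the parameter is static), applied
    # to scalar observations y_t with model y_t ~ h(theta, x_t) + noise.
    # Sketch only: h, grad_h and R are user-supplied assumptions.
    theta = np.asarray(theta0, dtype=float)
    P = np.asarray(P0, dtype=float)
    for y, x in zip(observations, features):
        H = grad_h(theta, x)                     # Jacobian row (length-d gradient)
        S = float(H @ P @ H) + R                 # innovation variance
        K = (P @ H) / S                          # Kalman gain
        theta = theta + K * (y - h(theta, x))    # parameter update
        P = P - np.outer(K, H @ P)               # covariance update
    return theta, P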

Efficient online algorithms for fast-rate regret bounds under sparsity

New risk bounds are established that are adaptive to the sparsity of the problem and to the regularity of the risk (ranging from a rate $1/\sqrt{T}$ for general convex risk to $1/T$ for strongly convex risk), generalizing previous work on sparse online learning.

Online learning with the Continuous Ranked Probability Score for ensemble forecasting

This study generalizes results about the bias of the CRPS computed with ensemble forecasts and proposes a new scheme to achieve fair CRPS minimization, without any assumption about the distribution.
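
For concreteness, the standard and bias-corrected ("fair") ensemble estimators of the CRPS for a scalar observation can be sketched as below; this is the usual formula in which the fair variant divides the pairwise term by M(M-1) rather than M², not code from the reference.

import numpy as np

def ensemble_crps(ensemble, obs, fair=True):
    # CRPS estimate from an M-member ensemble for one scalar observation.
    # Requires M >= 2 for the fair variant. Sketch of the standard formulas.
    x = np.asarray(ensemble, dtype=float)
    M = x.size
    term1 = np.mean(np.abs(x - obs))
    pairwise = np.abs(x[:, None] - x[None, :]).sum()
    denom = M * (M - 1) if fair else M * M
    return term1 - 0.5 * pairwise / denom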

Sparse Accelerated Exponential Weights

Under strong convexity of the risk, the stochastic optimization problem is considered and the optimal rate of convergence for approximating sparse parameters in $\mathbb{R}^d$ is achieved.