Corpus ID: 202763377

Dynamic Local Regret for Non-convex Online Forecasting

@inproceedings{Aydore2019DynamicLR,
  title={Dynamic Local Regret for Non-convex Online Forecasting},
  author={Serg{\"u}l Ayd{\"o}re and Tianhao Zhu and Dean P. Foster},
  booktitle={NeurIPS},
  year={2019}
}
We consider online forecasting problems for non-convex machine learning models. Forecasting introduces several challenges: (i) frequent updates are necessary to cope with concept drift, since the dynamics of the environment change over time, and (ii) state-of-the-art forecasting models are non-convex. We address these challenges with a novel regret framework; standard regret measures commonly do not account for both a dynamic environment and non-convex models. We introduce a local regret… 
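The abstract is truncated here, but the framework it describes measures regret through recent gradients rather than against a single fixed comparator. As a purely illustrative sketch of that time-smoothing idea (not the authors' exact algorithm; the window size, decay factor alpha, step size eta, and the grad_fns interface are all assumptions introduced for illustration), an online gradient step that descends an exponentially time-weighted average of recent gradients could look like this:

```python
import numpy as np

def smoothed_online_gd(grad_fns, w0, eta=0.1, alpha=0.9, window=20):
    """Online update that descends an exponentially time-weighted
    average of the most recent gradients (illustrative sketch only).

    grad_fns : iterable of callables; grad_fns[t](w) returns the
               gradient of the loss revealed at round t, evaluated at w.
    """
    w = np.asarray(w0, dtype=float)
    recent = []          # recent gradients, oldest first
    iterates = []
    for grad_f in grad_fns:
        recent.append(np.asarray(grad_f(w), dtype=float))
        recent = recent[-window:]
        # exponential weights: the newest gradient gets the largest weight
        weights = np.array([alpha ** k for k in range(len(recent) - 1, -1, -1)])
        weights /= weights.sum()
        g = np.sum([wt * gr for wt, gr in zip(weights, recent)], axis=0)
        w = w - eta * g
        iterates.append(w.copy())
    return iterates
```

Descending this smoothed gradient, rather than the single most recent one, is one way to make the regret notion "local" to a window of recent losses while still allowing frequent updates under concept drift.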

Citations

Dynamic Regret Analysis for Online Meta-Learning

TLDR
This work builds on a generalized version of adaptive gradient methods that covers both ADAM and ADAGRAD to learn meta-learners at the outer level, and proves a logarithmic local dynamic regret that depends explicitly on the total number of iterations T and the parameters of the learner.

Online Bilevel Optimization: Regret Analysis of Online Alternating Gradient Methods

TLDR
New notions of bilevel regret are introduced, an online alternating time-averaged gradient method is developed that is capable of leveraging smoothness, and regret bounds are extended in terms of the path-length of the inner and outer minimizer sequences.

Non-stationary neural network for stock return prediction

TLDR
An online early stopping algorithm is proposed, and it is shown that a neural network trained with this algorithm can track a function changing with unknown dynamics; prominent factors, such as the size effect and momentum, exhibit time-varying stock return predictiveness.

Learning Fast and Slow for Online Time Series Forecasting

TLDR
Fast and Slow learning Networks (FSNet) is proposed, a holistic framework for online time-series forecasting that simultaneously deals with abrupt changes and repeating patterns, improving the slowly-learned backbone by dynamically balancing fast adaptation to recent changes with retrieval of similar old knowledge.

Time-varying neural network for stock return prediction

TLDR
The proposed online early stopping algorithm is applied to the stock return prediction problem studied in Gu et al. (2019) and achieves a mean rank correlation of 4.69%, almost twice that of the expanding-window approach.

Single Loop Gaussian Homotopy Method for Non-convex Optimization

TLDR
In experiments including artificial, highly non-convex examples and black-box adversarial attacks, the proposed algorithms are shown to converge much faster than an existing double-loop GH method while finding better solutions than gradient-descent-based methods.

References

SHOWING 1-10 OF 23 REFERENCES

Dynamic Regret of Strongly Adaptive Methods

TLDR
This paper shows that the dynamic regret can be expressed in terms of the adaptive regret and the functional variation, which implies that strongly adaptive algorithms can be directly leveraged to minimize the dynamic regret.

Minimizing Adaptive Regret with One Gradient per Iteration

TLDR
A series of computationally efficient algorithms is proposed for minimizing the adaptive regret of general convex, strongly convex, and exponentially concave functions, respectively; each replaces the loss function with a carefully designed surrogate loss that bounds the original loss from below.

Efficient learning algorithms for changing environments

TLDR
A different performance metric is proposed that strengthens the standard notion of regret by measuring performance with respect to a changing comparator; it can be applied to various learning scenarios, e.g. online portfolio selection, for which experimental results show the advantage of adaptivity.

Time series prediction and online learning

TLDR
The first generalization bounds for a hypothesis derived by online-to-batch conversion of the sequence of hypotheses output by an online algorithm are proved, in the general setting of a non-stationary non-mixing stochastic process.

Online ARIMA Algorithms for Time Series Prediction

TLDR
This paper proposes online learning algorithms for estimating ARIMA models under relaxed assumptions on the noise terms, which makes them suitable for a wider range of applications while retaining high computational efficiency.

Online Forecasting Matrix Factorization

TLDR
A recursive minimum mean square error estimator is derived from an autoregressive model built on low-rank matrix factorization techniques that can learn low-dimensional embeddings effectively in an online manner.

Minimax Time Series Prediction

TLDR
The minimax strategy is derived from an adversarial formulation of the problem of predicting a time series with square loss, and it is shown that the regret grows as T/√λ_T, where T is the length of the game and λ_T is an increasing limit on comparator smoothness.

On the importance of initialization and momentum in deep learning

TLDR
It is shown that when stochastic gradient descent with momentum uses a well-designed random initialization and a particular type of slowly increasing schedule for the momentum parameter, it can train both DNNs and RNNs to levels of performance that were previously achievable only with Hessian-Free optimization.
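As a hedged illustration of the kind of schedule this summary refers to (the initialization and the exact momentum schedule in the cited paper differ; the ramp below is a placeholder introduced for illustration), classical heavy-ball momentum with a slowly increasing momentum coefficient might look like:

```python
import numpy as np

def sgd_with_momentum(grad_fn, w0, eta=0.01, mu_max=0.99, steps=1000):
    """Heavy-ball momentum with a momentum coefficient that is ramped
    up slowly during training.  The specific ramp used here is a
    placeholder, not the schedule from the cited paper."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for t in range(steps):
        # momentum grows from 0.5 toward mu_max as training proceeds
        mu = min(mu_max, 1.0 - 0.5 / (1.0 + t / 250.0))
        v = mu * v - eta * grad_fn(w)
        w = w + v
    return w
```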

Online Learning for Time Series Prediction

TLDR
This work develops effective online learning algorithms for the problem of predicting a time series using the ARMA (autoregressive moving average) model, without assuming that the noise terms are Gaussian, identically distributed, or even independent.
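The last two references both concern learning ARMA/ARIMA-type models in a streaming setting. As a generic sketch of the underlying recipe (online gradient updates on the per-round squared prediction error of an autoregressive predictor; the lag order p, step size, and use of a plain AR model are illustrative assumptions, not the cited algorithms themselves), the following could serve as a starting point:

```python
import numpy as np

def online_ar_forecast(series, p=5, eta=0.01):
    """Predict x_t from the previous p values, with coefficients
    updated online by gradient descent on the squared error.
    Generic sketch of online autoregressive forecasting, not the
    specific ARMA/ARIMA algorithms of the cited papers."""
    series = np.asarray(series, dtype=float)
    coeffs = np.zeros(p)
    preds = []
    for t in range(p, len(series)):
        window = series[t - p:t][::-1]      # most recent value first
        pred = coeffs @ window
        preds.append(pred)
        # gradient of (pred - x_t)^2 with respect to coeffs
        err = pred - series[t]
        coeffs -= eta * 2.0 * err * window
    return np.array(preds), coeffs
```

For series that are not roughly unit-scale, the step size eta would need rescaling (or the series normalizing) for the updates to remain stable.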