Corpus ID: 231855386

Meta-Learning for Koopman Spectral Analysis with Short Time-series

@article{Iwata2021MetaLearningFK,
  title={Meta-Learning for Koopman Spectral Analysis with Short Time-series},
  author={Tomoharu Iwata and Yoshinobu Kawahara},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.04683}
}
Koopman spectral analysis has attracted attention for nonlinear dynamical systems because it lets us analyze nonlinear dynamics in a linear regime by embedding data into a Koopman space with a nonlinear function. For this analysis, we need to find appropriate embedding functions. Although several neural network-based methods have been proposed for learning embedding functions, existing methods require long time-series for training the neural networks. This limitation prohibits performing Koopman spectral… 
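As a rough illustration of the linear-regime idea in the abstract (and not the authors' meta-learning method), the sketch below embeds a toy one-dimensional trajectory with a hand-picked nonlinear function, fits a linear Koopman matrix to the embedded snapshot pairs by least squares, and reads off its eigenvalues as the Koopman spectrum; the dynamics `step`, the embedding `g`, and all dimensions are assumptions chosen for illustration.

```python
import numpy as np

# Toy nonlinear dynamics x_{t+1} = 0.9 x_t + 0.1 x_t^2 (assumed for illustration).
def step(x):
    return 0.9 * x + 0.1 * x ** 2

x = [0.5]
for _ in range(200):
    x.append(step(x[-1]))
x = np.array(x)

# Hand-picked nonlinear embedding g(x) = [x, x^2, x^3]; in the paper this map is learned.
def g(x):
    return np.stack([x, x ** 2, x ** 3], axis=0)

G0, G1 = g(x[:-1]), g(x[1:])     # embedded snapshots at times t and t+1
K = G1 @ np.linalg.pinv(G0)      # least-squares Koopman matrix: G1 ≈ K @ G0
print("Koopman eigenvalues:", np.linalg.eigvals(K))
```

In the paper the embedding is learned by a neural network rather than fixed by hand, and learning that map well is precisely the step that normally demands long time-series.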


References

Showing 1-10 of 64 references
Learning Koopman Invariant Subspaces for Dynamic Mode Decomposition
This paper proposes minimization of the residual sum of squares of linear least-squares regression to estimate a set of functions that transforms data into a form in which the linear regression fits well.
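As a sketch of the residual-sum-of-squares idea summarized above, the snippet below trains a small PyTorch MLP embedding so that a least-squares linear regression fits the embedded snapshot pairs well; the operator is recomputed inside the loss at every step. The toy sine-wave data, network sizes, and optimizer settings are assumptions, not the cited paper's configuration.

```python
import torch

torch.manual_seed(0)
# Placeholder 1-D time-series (assumed data; real experiments use dynamical-system snapshots).
x = torch.sin(torch.linspace(0, 20, 201)).unsqueeze(1)

embed = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 4))
opt = torch.optim.Adam(embed.parameters(), lr=1e-3)

for _ in range(200):
    G0, G1 = embed(x[:-1]), embed(x[1:])   # embedded snapshot pairs (t, t+1)
    K = torch.linalg.pinv(G0) @ G1         # least-squares linear regression G0 @ K ≈ G1
    loss = ((G0 @ K - G1) ** 2).mean()     # residual sum of squares of that linear fit
    opt.zero_grad()
    loss.backward()
    opt.step()
```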
Learning Deep Neural Network Representations for Koopman Operators of Nonlinear Dynamical Systems
This paper introduces a computational framework for learning Koopman operators of nonlinear dynamical systems using deep learning and shows that this novel method automatically selects efficient deep dictionaries, requiring much lower dimensional dictionaries while outperforming state-of-the-art methods.
Dynamic Mode Decomposition with Reproducing Kernels for Koopman Spectral Analysis
A modal decomposition algorithm to perform the analysis of the Koopman operator in a reproducing kernel Hilbert space using finite-length data sequences generated from a nonlinear system is proposed.
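The snippet below is a simplified, kernel-EDMD-style estimate of the Koopman operator from finite snapshot pairs using Gram matrices of an RBF kernel; it illustrates the reproducing-kernel idea rather than the cited paper's exact algorithm, and the toy data and `gamma` are assumptions.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    # Gaussian (RBF) kernel matrix between two snapshot sets.
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

# Snapshot pairs (x_t, x_{t+1}) from a toy nonlinear trajectory (assumed data).
x = np.array([[np.sin(0.3 * t), np.cos(0.3 * t) ** 2] for t in range(101)])
X, Y = x[:-1], x[1:]

G = rbf(X, X)                  # Gram matrix of inputs
A = rbf(Y, X)                  # cross Gram matrix of successors vs. inputs
K_hat = np.linalg.pinv(G) @ A  # finite-dimensional surrogate of the Koopman operator
print(np.sort(np.abs(np.linalg.eigvals(K_hat)))[-5:])  # leading approximate eigenvalues
```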
Forecasting Sequential Data using Consistent Koopman Autoencoders
This work proposes a novel Consistent Koopman Autoencoder model which leverages the forward and backward dynamics, and achieves accurate estimates for significant prediction horizons, while also being robust to noise.
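A schematic of that forward/backward idea, assuming linear encoder/decoder layers, separate forward and backward latent operators, and a consistency penalty that asks the backward operator to undo the forward one; the real model's architecture and loss weighting differ, and every name and size here is an assumption.

```python
import torch

torch.manual_seed(0)
x = torch.sin(torch.linspace(0, 20, 201)).unsqueeze(1)  # assumed toy sequence
x0, x1 = x[:-1], x[1:]

enc, dec = torch.nn.Linear(1, 8), torch.nn.Linear(8, 1)
C = torch.nn.Linear(8, 8, bias=False)  # forward latent dynamics
D = torch.nn.Linear(8, 8, bias=False)  # backward latent dynamics
params = [p for m in (enc, dec, C, D) for p in m.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

for _ in range(200):
    z0, z1 = enc(x0), enc(x1)
    fwd = ((dec(C(z0)) - x1) ** 2).mean()                        # forward prediction loss
    bwd = ((dec(D(z1)) - x0) ** 2).mean()                        # backward prediction loss
    cons = ((D.weight @ C.weight - torch.eye(8)) ** 2).mean()    # consistency: D undoes C
    loss = fwd + bwd + cons
    opt.zero_grad()
    loss.backward()
    opt.step()
```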
Few-shot Learning for Time-series Forecasting
A few-shot learning method that forecasts a future value of a time-series in a target task, given only a few time-series from that task, by minimizing the expected test error of forecasting next-timestep values.
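A bare-bones episodic loop in that spirit: each episode samples a fresh synthetic task and minimizes next-step forecasting error on windows from it, which estimates an expected test error over tasks. It omits the paper's mechanism for adapting to a target task from a few series, and the sine-wave tasks, window length, and model are assumptions.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(5, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def sample_task():
    # Each "task" is a short sine series with its own frequency and phase (assumed synthetic tasks).
    freq, phase = torch.rand(1) * 2 + 0.5, torch.rand(1) * 6.28
    return torch.sin(freq * torch.arange(60, dtype=torch.float32) + phase)

def windows(series, length=5):
    # Turn a series into (past-window, next-value) training pairs.
    xs = torch.stack([series[i:i + length] for i in range(len(series) - length)])
    return xs, series[length:].unsqueeze(1)

for _ in range(500):                        # episodic meta-training over sampled tasks
    xs, ys = windows(sample_task())
    loss = ((model(xs) - ys) ** 2).mean()   # next-step error on this episode's task
    opt.zero_grad()
    loss.backward()
    opt.step()
```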
N-BEATS: Neural basis expansion analysis for interpretable time series forecasting
The proposed deep neural architecture based on backward and forward residual links and a very deep stack of fully-connected layers has a number of desirable properties, being interpretable, applicable without modification to a wide array of target domains, and fast to train.
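A minimal N-BEATS-style stack, assuming generic fully-connected blocks: each block emits a backcast that is subtracted from its input (the backward residual link) and a forecast that is accumulated across blocks (the forward link). The interpretable basis-expansion block types are omitted, and all sizes are assumptions.

```python
import torch

class Block(torch.nn.Module):
    # One simplified block: an FC stack producing a backcast and a forecast.
    def __init__(self, lookback=10, horizon=5, width=64):
        super().__init__()
        self.fc = torch.nn.Sequential(
            torch.nn.Linear(lookback, width), torch.nn.ReLU(),
            torch.nn.Linear(width, width), torch.nn.ReLU(),
        )
        self.backcast = torch.nn.Linear(width, lookback)
        self.forecast = torch.nn.Linear(width, horizon)

    def forward(self, x):
        h = self.fc(x)
        return self.backcast(h), self.forecast(h)

class NBeatsLike(torch.nn.Module):
    # Stack of blocks with backward residual links and summed forecasts.
    def __init__(self, n_blocks=3, lookback=10, horizon=5):
        super().__init__()
        self.blocks = torch.nn.ModuleList([Block(lookback, horizon) for _ in range(n_blocks)])
        self.horizon = horizon

    def forward(self, x):
        forecast = torch.zeros(x.shape[0], self.horizon)
        for block in self.blocks:
            back, fore = block(x)
            x = x - back                  # backward residual link
            forecast = forecast + fore    # forward link: sum of partial forecasts
        return forecast

print(NBeatsLike()(torch.randn(4, 10)).shape)  # torch.Size([4, 5])
```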
Meta-learning framework with applications to zero-shot time-series forecasting
The empirical results emphasize the importance of meta-learning for successful zero-shot forecasting on new sources of time-series, supporting the claim that a neural network trained on a source time-series dataset can be deployed on a different target dataset without retraining, with performance at least as good as state-of-practice univariate forecasting models.
Long Short-Term Memory
A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
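A minimal LSTM step in the modern formulation (with a forget gate, which the original 1997 formulation lacked), showing the additive cell-state update that acts as the constant error carousel; the weights and dimensions below are random placeholders.

```python
import torch

def lstm_cell(x, h, c, W):
    # One LSTM step: gates control what enters and leaves the cell state c, whose additive
    # update lets gradients flow across many time steps (the constant error carousel).
    z = torch.cat([x, h], dim=-1)
    i = torch.sigmoid(z @ W["i"])   # input gate
    f = torch.sigmoid(z @ W["f"])   # forget gate
    o = torch.sigmoid(z @ W["o"])   # output gate
    g = torch.tanh(z @ W["g"])      # candidate cell update
    c = f * c + i * g               # additive cell-state update
    h = o * torch.tanh(c)
    return h, c

torch.manual_seed(0)
dim_x, dim_h = 3, 8
W = {k: torch.randn(dim_x + dim_h, dim_h) * 0.1 for k in ("i", "f", "o", "g")}
h = c = torch.zeros(1, dim_h)
for _ in range(1000):               # a long input sequence
    h, c = lstm_cell(torch.randn(1, dim_x), h, c, W)
print(h.shape, c.shape)
```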
Cross-domain Meta-learning for Time-series Forecasting
Deep learning for universal linear embeddings of nonlinear dynamics
It is often advantageous to transform a strongly nonlinear system into a linear one to simplify its analysis for prediction and control; the authors therefore combine dynamical systems theory with deep learning to identify these hard-to-find transformations.