Corpus ID: 246015685

Fractional SDE-Net: Generation of Time Series Data with Long-term Memory

@article{Hayashi2022FractionalSG,
  title={Fractional SDE-Net: Generation of Time Series Data with Long-term Memory},
  author={Kohei Hayashi and Kei Nakagawa},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.05974}
}
In this paper, we focus on the generation of time-series data using neural networks. It is often the case that input time-series data have only one realized (and usually irregularly sampled) path, which makes it difficult to extract time-series characteristics, and their noise structure is more complicated than the i.i.d. type. Time-series data, especially from hydrology, telecommunications, economics, and finance, exhibit long-term memory, also called long-range dependency (LRD). The main purpose of…
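To make the LRD notion concrete, here is a minimal sketch (ours, not the paper's code) that samples fractional Brownian motion, the canonical long-memory process, by the exact Cholesky method; the Hurst index H controls the memory, with H = 1/2 recovering standard Brownian motion.

import numpy as np

def fbm_sample(n_steps, hurst, t_max=1.0, rng=None):
    """Sample one fBm path B_H at n_steps equally spaced points on (0, t_max]."""
    rng = rng or np.random.default_rng(0)
    t = np.linspace(t_max / n_steps, t_max, n_steps)
    # Exact fBm covariance: Cov(B_H(s), B_H(u)) = (s^2H + u^2H - |s - u|^2H) / 2.
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s ** (2 * hurst) + u ** (2 * hurst) - np.abs(s - u) ** (2 * hurst))
    # A Cholesky factor turns i.i.d. Gaussians into the correlated path values.
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))
    return L @ rng.standard_normal(n_steps)

# H = 0.5 recovers standard Brownian motion (independent increments);
# H > 0.5 gives positively correlated increments, i.e. long-term memory.
path = fbm_sample(500, hurst=0.7)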


References

Showing 1-10 of 39 references

Neural SDEs as Infinite-Dimensional GANs

TLDR
This work shows that the classical approach to fitting SDEs can be viewed as a special case of (Wasserstein) GANs, and in doing so brings the neural and classical regimes together.
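As a rough sketch of the generator half of this correspondence (the toy drift and diffusion maps below are our stand-ins, not the paper's networks): an SDE dX = mu(t, X) dt + sigma(t, X) dW is sampled by Euler-Maruyama, and the resulting paths would be scored by a discriminator.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in "networks": tiny random-feature maps; a real model would use
# trained neural networks for the drift mu and the diffusion sigma.
W_mu = rng.standard_normal((2, 8))
W_sig = rng.standard_normal((2, 8))
mu = lambda t, x: 0.1 * np.tanh(np.array([t, x]) @ W_mu).sum()
sigma = lambda t, x: 0.1 + 0.05 * np.tanh(np.array([t, x]) @ W_sig).sum() ** 2

def generate_path(x0, n_steps=100, t_max=1.0):
    """Euler-Maruyama sample of dX = mu(t, X) dt + sigma(t, X) dW from x0."""
    dt = t_max / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt))   # Brownian increment
        x[k + 1] = x[k] + mu(k * dt, x[k]) * dt + sigma(k * dt, x[k]) * dW
    return x

# In the GAN view, such sample paths are fed to a discriminator and a
# Wasserstein loss is backpropagated into mu and sigma.
fake_path = generate_path(x0=0.0)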

Neural Jump Ordinary Differential Equations: Consistent Continuous-Time Prediction and Filtering

TLDR
The Neural Jump ODE (NJ-ODE) is introduced that provides a data-driven approach to learn, continuously in time, the conditional expectation of a stochastic process, and it is shown that the output of the model converges to the L^2-optimal prediction.

Latent ODEs for Irregularly-Sampled Time Series

TLDR
This work generalizes RNNs to have continuous-time hidden dynamics defined by ordinary differential equations (ODEs), a model they call ODE-RNNs, which outperform their RNN-based counterparts on irregularly sampled data.
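A minimal sketch of the ODE-RNN recursion, with random stand-in weights instead of learned ones: the hidden state is integrated through the gap before each observation and then updated by a cell.

import numpy as np

rng = np.random.default_rng(0)
H = 4                                    # hidden size
A = 0.3 * rng.standard_normal((H, H))    # stand-in ODE dynamics weights
U = 0.3 * rng.standard_normal((H, 1))    # stand-in input weights of the cell
V = 0.3 * rng.standard_normal((H, H))    # stand-in recurrent weights of the cell

f = lambda h: np.tanh(A @ h)             # hidden dynamics dh/dt = f(h)
cell = lambda h, x: np.tanh(V @ h + U @ np.atleast_1d(x))

def ode_rnn(times, values, n_euler=10):
    """Process an irregularly sampled series (t_i, x_i); return the final state."""
    h, t_prev = np.zeros(H), times[0]
    for t, x in zip(times, values):
        dt = (t - t_prev) / n_euler
        for _ in range(n_euler):         # evolve h continuously up to time t
            h = h + dt * f(h)
        h = cell(h, x)                   # discrete update at the observation
        t_prev = t
    return h

# Irregular gaps are handled naturally: the ODE integrates over whatever
# interval separates consecutive observations.
h_final = ode_rnn(np.array([0.0, 0.3, 1.1, 1.15]), np.array([0.5, -0.2, 0.1, 0.4]))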

Neural Jump Stochastic Differential Equations

TLDR
This work introduces Neural Jump Stochastic Differential Equations that provide a data-driven approach to learn continuous and discrete dynamic behavior, i.e., hybrid systems that both flow and jump.
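A minimal sketch of such flow-and-jump dynamics (our illustration, with hand-picked drift, jump, and intensity functions): the state follows an ODE and receives jumps at event times drawn by thinning from a state-dependent intensity.

import numpy as np

rng = np.random.default_rng(0)
drift = lambda h: -0.5 * h                       # continuous flow dh/dt
jump = lambda h: h + 1.0                         # effect of one event on the state
intensity = lambda h: 2.0 / (1.0 + np.exp(-h))   # event rate lambda(h), bounded by 2

def simulate(h0, t_max=5.0, dt=1e-3, lam_bar=2.0):
    """Euler flow plus thinning-based event generation; returns (times, states)."""
    t, h = 0.0, h0
    ts, hs = [t], [h]
    while t < t_max:
        t += dt
        h = h + drift(h) * dt                    # flow between events
        if rng.random() < lam_bar * dt:          # candidate event from the bound
            if rng.random() < intensity(h) / lam_bar:   # thinning acceptance
                h = jump(h)                      # discrete jump
        ts.append(t)
        hs.append(h)
    return np.array(ts), np.array(hs)

times, states = simulate(h0=0.0)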

SDE-Net: Equipping Deep Neural Networks with Uncertainty Estimates

TLDR
A new method for quantifying uncertainties of DNNs from a dynamical-system perspective is proposed; it introduces a Brownian motion term for capturing epistemic uncertainty and outperforms existing uncertainty estimation methods across a series of tasks where uncertainty plays a fundamental role.
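A hypothetical sketch of the idea, with stand-in drift and diffusion maps in place of trained networks: repeated stochastic forward passes through an Euler-Maruyama solve give a Monte-Carlo prediction whose spread serves as the uncertainty estimate.

import numpy as np

rng = np.random.default_rng(0)
drift = lambda x: np.tanh(1.5 * x) - 0.3 * x     # stand-in drift net
diffusion = lambda x: 0.2 * (1.0 + np.abs(x))    # stand-in diffusion net

def sde_net_forward(x0, n_steps=50, t_max=1.0):
    """One stochastic forward pass via Euler-Maruyama."""
    dt = t_max / n_steps
    x = x0
    for _ in range(n_steps):
        x = x + drift(x) * dt + diffusion(x) * rng.normal(scale=np.sqrt(dt))
    return x

# Monte-Carlo over forward passes: the mean acts as the prediction and the
# spread of the samples as the epistemic uncertainty estimate.
samples = np.array([sde_net_forward(0.5) for _ in range(200)])
prediction, uncertainty = samples.mean(), samples.std()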

Quant GANs: deep generation of financial time series

TLDR
Quant GANs is introduced, a data-driven model inspired by the recent success of generative adversarial networks (GANs). Results highlight that distributional properties for small and large lags are in excellent agreement, and that dependence properties such as volatility clusters, leverage effects, and serial autocorrelations can be generated by the generator function of Quant GANs in high fidelity.

Neural Stochastic Differential Equations: Deep Latent Gaussian Models in the Diffusion Limit

TLDR
This work develops a variational inference framework for deep latent Gaussian models via stochastic automatic differentiation in Wiener space, where the variational approximations to the posterior are obtained by a Girsanov (mean-shift) transformation of the standard Wiener process, and the computation of gradients is based on the theory of stochastic flows.
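The mean-shift construction can be illustrated numerically (our sketch, assuming a fixed variational drift u(t)): Girsanov's theorem gives the KL divergence between the shifted and standard Wiener measures as 0.5 ∫ u(t)^2 dt, accumulated here alongside the shifted path.

import numpy as np

rng = np.random.default_rng(0)
u = lambda t: np.sin(2 * np.pi * t)      # stand-in variational drift u(t)

def shifted_path_and_kl(n_steps=200, t_max=1.0):
    """Simulate the mean-shifted Wiener path and the Girsanov KL on a grid."""
    dt = t_max / n_steps
    w, kl = 0.0, 0.0
    for k in range(n_steps):
        t = k * dt
        w += u(t) * dt + rng.normal(scale=np.sqrt(dt))   # shifted Wiener step
        kl += 0.5 * u(t) ** 2 * dt                       # KL(Q || P) accumulates
    return w, kl

# The variational objective pairs the data likelihood under the shifted
# path with this KL penalty, as in standard variational inference.
w_T, kl_term = shifted_path_and_kl()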

Neural Controlled Differential Equations for Irregular Time Series

TLDR
The resulting neural controlled differential equation model is directly applicable to the general setting of partially observed, irregularly sampled multivariate time series, and (unlike previous work on this problem) it may utilise memory-efficient adjoint-based backpropagation even across observations.
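A minimal sketch of the discretised CDE update (random stand-in weights, not the paper's model): the hidden state z obeys dz = f(z) dX, so each step advances z by f(z) applied to the increment of the observed path.

import numpy as np

rng = np.random.default_rng(0)
HID, IN = 4, 2
W = 0.3 * rng.standard_normal((HID * IN, HID))   # stand-in weights of f

def f(z):
    """Vector field f: R^HID -> R^(HID x IN), here a one-layer map."""
    return np.tanh(W @ z).reshape(HID, IN)

def neural_cde(x_path):
    """Drive the hidden state with the increments dX of an observed path."""
    z = np.zeros(HID)
    for k in range(len(x_path) - 1):
        dx = x_path[k + 1] - x_path[k]   # increment of the control path
        z = z + f(z) @ dx                # Euler step of dz = f(z) dX
    return z

# Irregular sampling only changes the increments dX, not the model, which
# is what makes CDEs natural for partially observed series.
x_path = rng.standard_normal((20, IN)).cumsum(axis=0)
z_final = neural_cde(x_path)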

Long memory in continuous‐time stochastic volatility models

This paper studies a classical extension of the Black and Scholes model for option pricing, often known as the Hull and White model. Its specification is that the volatility process is assumed not only to be stochastic but also to exhibit long memory.

Time-series Generative Adversarial Networks

TLDR
A novel framework for generating realistic time-series data is proposed that combines the flexibility of the unsupervised paradigm with the control afforded by supervised training; it consistently and significantly outperforms state-of-the-art benchmarks with respect to measures of similarity and predictive ability.
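A schematic of how the two paradigms combine, using stand-in linear maps rather than the paper's networks: a reconstruction loss from an embedding/recovery pair, a supervised one-step-ahead loss in latent space, and an adversarial term are summed with weights.

import numpy as np

rng = np.random.default_rng(0)
T, D, H = 16, 3, 5
x = rng.standard_normal((T, D))          # one real sequence

E = 0.3 * rng.standard_normal((H, D))    # stand-in embedder
R = 0.3 * rng.standard_normal((D, H))    # stand-in recovery map
S = 0.3 * rng.standard_normal((H, H))    # stand-in latent one-step supervisor

h = np.tanh(x @ E.T)                     # latent codes h_t
x_rec = h @ R.T                          # reconstruction of x from the latents
loss_recon = np.mean((x - x_rec) ** 2)                      # unsupervised term
loss_super = np.mean((h[1:] - np.tanh(h[:-1] @ S.T)) ** 2)  # stepwise supervision
loss_adv = 0.0  # an adversarial term from a discriminator would be added here

loss = loss_recon + 10.0 * loss_super + loss_adv   # weighted combination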