• Corpus ID: 235623791

Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting

@article{Wu2021AutoformerDT,
  title={Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting},
  author={Haixu Wu and Jiehui Xu and Jianmin Wang and Mingsheng Long},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.13008}
}
Extending the forecasting time is a critical demand for real applications, such as extreme weather early warning and long-term energy consumption planning. This paper studies the long-term forecasting problem of time series. Prior Transformer-based models adopt various self-attention mechanisms to discover the long-range dependencies. However, intricate temporal patterns of the long-term future prohibit the model from finding reliable dependencies. Also, Transformers have to adopt the sparse…
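As a rough illustration of the decomposition idea in the paper's title (not its exact implementation), a series can be split into trend and seasonal components with a simple moving average; the function name and default window size below are illustrative assumptions:

```python
import numpy as np

def series_decomp(x, kernel_size=25):
    """Split a 1-D series into (seasonal, trend) via a moving average.

    x: 1-D array of observations.
    kernel_size: odd moving-average window (illustrative default).
    Both outputs have the same length as x.
    """
    # Pad both ends by repeating the edge values so the smoothed
    # trend keeps the original length.
    pad = kernel_size // 2
    padded = np.concatenate([np.repeat(x[0], pad), x, np.repeat(x[-1], pad)])
    kernel = np.ones(kernel_size) / kernel_size
    trend = np.convolve(padded, kernel, mode="valid")
    # The seasonal part is what remains after removing the trend.
    seasonal = x - trend
    return seasonal, trend
```

On a pure linear ramp the trend recovers the ramp in the interior and the seasonal remainder is near zero, which is the intended behavior of such a split.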
ChunkFormer: Learning Long Time Series with Multi-stage Chunked Transformer
TLDR
A novel architecture, ChunkFormer, is proposed that improves the existing Transformer framework to handle the challenges of long time series, gradually learning both local and global information without changing the total length of the input sequences.
From Known to Unknown: Knowledge-guided Transformer for Time-Series Sales Forecasting in Alibaba
TLDR
Aliformer, a knowledge-guided self-attention layer, is proposed; it uses the consistency of known knowledge to guide the transmission of timing information, and a future-emphasized training strategy is proposed to make the model focus more on the utilization of future knowledge.
Anomaly Transformer: Time Series Anomaly Detection with Association Discrepancy
TLDR
It is found that each time point in a time series can also be described by its associations with all time points, presenting a point-wise distribution that is more expressive for temporal modeling.
Lifelong Property Price Prediction: A Case Study for the Toronto Real Estate Market
TLDR
The first lifelong predictive model for automated property valuation, Luce, not only significantly outperforms prior property valuation methods but also often reaches, and sometimes exceeds, the valuation accuracy given by independent experts when using the actual realization price as the ground truth.
Translating Human Mobility Forecasting through Natural Language Generation
TLDR
The paper addresses the human mobility forecasting problem as a language translation task in a sequence-to-sequence manner and proposes a novel forecasting-through-language-generation pipeline.
Emergency Vehicles Audio Detection and Localization in Autonomous Driving
  • Hongyi Sun, Xinyi Liu, Kecheng Xu, Jinghao Miao, Qi Luo
  • Computer Science, Engineering
    ArXiv
  • 2021
TLDR
This work presents a novel system, from collecting real-world siren data to deploying models using only two cost-efficient microphones, and benchmarks various machine learning approaches that determine siren existence and sound-source localization (direction and distance) simultaneously within 50 ms of latency.
Lévy Induced Stochastic Differential Equation Equipped with Neural Network for Time Series Forecasting
  • Luxuan Yang, Ting Gao, Yubin Lu, Jinqiao Duan, Tao Liu
  • Computer Science, Economics
  • 2021
With the fast development of modern deep learning techniques, the study of dynamic systems and neural networks is increasingly benefiting each other in many different ways. Since uncertainties…
Time Series Visualization using Transformer for Prediction of Natural Catastrophe
  • 2021

References

SHOWING 1-10 OF 60 REFERENCES
Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting
TLDR
First, convolutional self-attention is proposed, producing queries and keys with causal convolution so that local context can be better incorporated into the attention mechanism; second, the LogSparse Transformer is proposed, improving forecasting accuracy for time series with fine granularity and strong long-term dependencies under a constrained memory budget.
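The causal-convolution idea in the summary above can be sketched as follows; the function name, single-channel setup, and filter shapes are simplifying assumptions for illustration, not the paper's API:

```python
import numpy as np

def causal_conv_qk(x, w_q, w_k):
    """Produce attention queries and keys with causal 1-D convolution.

    x: (T,) input series; w_q, w_k: (k,) convolution filters.
    Each output position t only sees x[t-k+1 .. t] (left padding),
    so local context enters Q/K without peeking at the future.
    """
    k = len(w_q)
    padded = np.concatenate([np.zeros(k - 1), x])  # left-pad enforces causality
    # np.convolve with a reversed filter computes cross-correlation,
    # i.e. a standard sliding-window (conv-layer style) filter.
    q = np.convolve(padded, w_q[::-1], mode="valid")
    keys = np.convolve(padded, w_k[::-1], mode="valid")
    return q, keys
```

With a filter that picks only the current position, the query equals the input; a filter that picks the oldest position in the window yields a lagged key, showing how each filter mixes a causal local window into the attention inputs.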
Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks
TLDR
A novel deep learning framework, the Long- and Short-term Time-series network (LSTNet), is proposed to address the open challenge of multivariate time series forecasting, using a Convolutional Neural Network and a Recurrent Neural Network to extract short-term local dependency patterns among variables and to discover long-term patterns in time series trends.
Temporal pattern attention for multivariate time series forecasting
TLDR
This paper proposes using a set of filters to extract time-invariant temporal patterns, similar to transforming time-series data into its “frequency domain”, and proposes a novel attention mechanism that selects relevant time series and uses their frequency-domain information for multivariate forecasting.
Long-term Forecasting using Tensor-Train RNNs
TLDR
Tensor-Train RNN (TT-RNN) is proposed, a novel family of neural sequence architectures for multivariate forecasting in environments with nonlinear dynamics, which decomposes the higher-order structure using the tensor-train (TT) decomposition to reduce the number of parameters while preserving model performance.
N-BEATS: Neural basis expansion analysis for interpretable time series forecasting
TLDR
The proposed deep neural architecture based on backward and forward residual links and a very deep stack of fully-connected layers has a number of desirable properties, being interpretable, applicable without modification to a wide array of target domains, and fast to train.
Conditional Time Series Forecasting with Convolutional Neural Networks
TLDR
This paper compares the performance of the WaveNet model to a state-of-the-art fully convolutional network (FCN) and an autoregressive model popular in econometrics, and shows that the model is much better able to learn important dependencies between financial time series, resulting in a more robust and accurate forecast.
Adversarial Sparse Transformer for Time Series Forecasting
TLDR
Adversarial Sparse Transformer (AST) is proposed, a new time series forecasting model based on Generative Adversarial Networks (GANs); it adopts a Sparse Transformer as the generator to learn a sparse attention map for time series forecasting and uses a discriminator to improve prediction performance at the sequence level.
A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction
TLDR
A dual-stage attention-based recurrent neural network (DA-RNN) is proposed to address the long-term temporal dependencies of the nonlinear autoregressive exogenous model, and it can outperform state-of-the-art methods for time series prediction.
DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks
TLDR
DeepAR is proposed, a methodology for producing accurate probabilistic forecasts, based on training an autoregressive recurrent network model on a large number of related time series, with accuracy improvements of around 15% compared to state-of-the-art methods.
A spatio-temporal decomposition based deep neural network for time series forecasting
TLDR
A deep learning framework for spatio-temporal forecasting problems is proposed, which explicitly designs the neural network architecture to capture various types of spatial and temporal patterns; the model is robust to missing data.