Corpus ID: 231880050

Self-supervised learning for fast and scalable time series hyper-parameter tuning

@article{Zhang2021SelfsupervisedLF,
  title={Self-supervised learning for fast and scalable time series hyper-parameter tuning},
  author={Peiyi Zhang and Xiaodong Jiang and Ginger M. Holt and Nikolay Pavlovich Laptev and Caner Komurlu and Peng Gao and Yang Yu},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.05740}
}
Hyper-parameters of time series models play an important role in time series analysis. Slight differences in hyper-parameters can lead to very different forecasts from the same model, so selecting good hyper-parameter values is indispensable. Most existing generic hyper-parameter tuning methods, such as Grid Search, Random Search, and Bayesian Optimization, are built around a single key component, search, and are therefore computationally expensive and cannot be applied to fast…
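The abstract only sketches the contrast between search-based tuning and the paper's learning-based approach. Below is a minimal Python sketch of that idea under stated assumptions: the feature set (`series_features`), the random-forest regressor, and the toy hyper-parameter labels are all illustrative inventions for this sketch, not the paper's actual model or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustration of the core idea: instead of running a search loop
# (Grid/Random/Bayesian) per series, learn a direct mapping from cheap
# time-series features to good hyper-parameters.

def series_features(y):
    """Cheap summary features of a univariate series (assumed feature set)."""
    return np.array([
        y.mean(),
        y.std(),
        np.corrcoef(y[:-1], y[1:])[0, 1],        # lag-1 autocorrelation
        np.polyfit(np.arange(len(y)), y, 1)[0],  # linear trend slope
    ])

rng = np.random.default_rng(0)

# Offline phase: for a corpus of series, pay the search cost once to label
# each series with its best hyper-parameters (fabricated toy labels here;
# in practice they would come from an offline search per series).
train_series = [rng.normal(size=200).cumsum() for _ in range(500)]
X = np.stack([series_features(y) for y in train_series])
best_hps = rng.uniform(0.01, 0.99, size=(500, 2))  # e.g. (alpha, beta)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, best_hps)

# Online phase: one model call per new series, no search loop.
new_series = rng.normal(size=200).cumsum()
alpha, beta = model.predict(series_features(new_series)[None, :])[0]
print(f"predicted hyper-parameters: alpha={alpha:.3f}, beta={beta:.3f}")
```

The appeal, as the abstract suggests, is amortization: the expensive search is paid once offline over a corpus, and each new series then receives hyper-parameters in a single prediction, which is what makes the approach fast and scalable.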
1 Citation
MOSPAT: AutoML based Model Selection and Parameter Tuning for Time Series Anomaly Detection
TLDR
MOSPAT, an end-to-end automated machine learning (AutoML) based approach to model and parameter selection, is explored; combined with a generative model that produces labeled data, it allows individual users in large organizations to tailor time-series monitoring to their specific use cases and data characteristics.

References

SHOWING 1-10 OF 21 REFERENCES
Meta-learning how to forecast time series
TLDR
A random forest is used to identify the best forecasting method from time series features alone, and is shown to yield forecasts comparable in accuracy to several benchmarks and other commonly used automated approaches to time series forecasting.
Time-series Extreme Event Forecasting with Neural Networks at Uber
TLDR
A novel end-to-end recurrent neural network architecture is proposed that outperforms state-of-the-art event forecasting methods on Uber data and generalizes well to the public M3 dataset used in time series forecasting competitions.
Forecasting at Scale
TLDR
Describes a practical approach to forecasting “at scale” that combines configurable models with analyst-in-the-loop performance analysis, along with a modular regression model whose interpretable parameters can be intuitively adjusted by analysts with domain knowledge about the time series.
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
TLDR
This work presents two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT, and uses a self-supervised loss that focuses on modeling inter-sentence coherence.
LSTM-based Encoder-Decoder for Multi-sensor Anomaly Detection
TLDR
This work proposes a Long Short-Term Memory (LSTM) based Encoder-Decoder scheme for Anomaly Detection (EncDec-AD) that learns to reconstruct 'normal' time-series behavior, and thereafter uses reconstruction error to detect anomalies.
An Overview of Multi-task Learning
TLDR
Many areas, including computer vision, bioinformatics, health informatics, speech, natural language processing, web applications, and ubiquitous computing, use multi-task learning (MTL) to improve the performance of the applications involved; some representative works are reviewed.
Demand forecasting and smoothing capacity planning for products with high random demand volatility
For products that face heavy competition in the marketplace, demand volatility and unpredictability have been growing. This has resulted in sizeable deviations in demand forecasts when using…
Self-Supervised Visual Feature Learning With Deep Neural Networks: A Survey
TLDR
An extensive review of deep-learning-based self-supervised methods for learning general visual features from images and videos, treated as a subset of unsupervised learning that learns image and video features from large-scale unlabeled data without any human-annotated labels.