Corpus ID: 235293778

Unsupervised Representation Learning for Time Series with Temporal Neighborhood Coding

@article{Tonekaboni2021UnsupervisedRL,
  title={Unsupervised Representation Learning for Time Series with Temporal Neighborhood Coding},
  author={Sana Tonekaboni and Danny Eytan and Anna Goldenberg},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.00750}
}
Time series are often complex and rich in information but sparsely labeled and therefore challenging to model. In this paper, we propose a self-supervised framework for learning generalizable representations for non-stationary time series. Our approach, called Temporal Neighborhood Coding (TNC), takes advantage of the local smoothness of a signal’s generative process to define neighborhoods in time with stationary properties. Using a debiased contrastive objective, our framework learns time… 
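The mechanism behind this objective is easy to sketch. Below is a minimal PyTorch illustration of a TNC-style debiased contrastive loss, not the authors' implementation: the discriminator architecture is an assumption, and the neighborhood sampling (which the paper defines via a stationarity criterion) is left to the caller. Here `z_t`, `z_neighbor`, and `z_distant` are embeddings of a reference window, a window from its temporal neighborhood, and a window from outside it; `w` is the prior probability that a distant window is actually positive, which is what debiases the objective.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Scores whether two window embeddings come from the same neighborhood."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 4 * dim), nn.ReLU(),
                                 nn.Linear(4 * dim, 1), nn.Sigmoid())

    def forward(self, z1, z2):
        return self.net(torch.cat([z1, z2], dim=-1)).squeeze(-1)

def tnc_loss(disc, z_t, z_neighbor, z_distant, w: float = 0.05):
    """Debiased objective: distant windows are treated as unlabeled samples
    that are positive with prior probability w, not as guaranteed negatives."""
    eps = 1e-7
    p_pos = disc(z_t, z_neighbor).clamp(eps, 1 - eps)   # neighbors -> 1
    p_unl = disc(z_t, z_distant).clamp(eps, 1 - eps)    # distant: unlabeled
    loss_pos = -torch.log(p_pos).mean()
    loss_unl = -(w * torch.log(p_unl) + (1 - w) * torch.log(1 - p_unl)).mean()
    return loss_pos + loss_unl
```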

Contrastive Learning for Time Series on Dynamic Graphs

This paper proposes a framework called GraphTNC for unsupervised learning of joint representations of the graph and the time series using a contrastive learning strategy, and shows that the learned representations benefit the classification task on real-world datasets.

Iterative Bilinear Temporal-Spectral Fusion for Unsupervised Time-Series Representation Learning

This paper proposes a unified framework, Bilinear Temporal-Spectral Fusion (BTSF), which first applies instance-level augmentation with a simple dropout on the entire time series to maximally capture long-term dependencies, and then devises a novel iterative bilinear temporal-spectral fusion to explicitly encode the affinities of abundant time-frequency pairs.

Unsupervised Time-Series Representation Learning with Iterative Bilinear Temporal-Spectral Fusion

A novel iterative bilinear temporal-spectral fusion that explicitly encodes the affinities of abundant time-frequency pairs and iteratively re-encodes representations in a fusion-and-squeeze manner with Spectrum-to-Time (S2T) and Time-to-Spectrum (T2S) aggregation modules.
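The bilinear fusion named in the two BTSF entries above can be illustrated with a hedged sketch. The attention-style S2T/T2S aggregations, the shapes, and the residual updates below are simplifying assumptions standing in for the paper's fusion-and-squeeze modules; `f_time` and `f_spec` are assumed to be per-timestep and per-frequency-bin feature sequences from temporal and spectral encoders.

```python
import torch
import torch.nn as nn

class BilinearFusion(nn.Module):
    """Illustrative iterative temporal-spectral fusion (not the paper's exact design)."""
    def __init__(self, dim: int):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dim, dim) * dim ** -0.5)
        self.s2t = nn.Linear(dim, dim)   # Spectrum-to-Time aggregation
        self.t2s = nn.Linear(dim, dim)   # Time-to-Spectrum aggregation

    def forward(self, f_time, f_spec, iterations: int = 2):
        # f_time: (B, T, D) per-timestep features; f_spec: (B, K, D) per-bin features
        for _ in range(iterations):
            # bilinear affinity of every (timestep, frequency-bin) pair: (B, T, K)
            aff = torch.einsum('btd,de,bke->btk', f_time, self.W, f_spec)
            attn_tk = aff.softmax(dim=2)   # how each timestep attends to bins
            attn_kt = aff.softmax(dim=1)   # how each bin attends to timesteps
            f_time = f_time + self.s2t(attn_tk @ f_spec)                  # S2T
            f_spec = f_spec + self.t2s(attn_kt.transpose(1, 2) @ f_time)  # T2S
        return f_time, f_spec
```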

Cross Reconstruction Transformer for Self-Supervised Time Series Representation Learning

This paper approaches representation learning for time series from a new perspective, proposing the Cross Reconstruction Transformer (CRT) as a unified solution, and shows that CRT consistently achieves the best performance over existing methods.

Contrastive Learning for Unsupervised Domain Adaptation of Time Series

CLUDA, a novel framework for unsupervised domain adaptation (UDA) of time series data, uses contrastive learning to learn contextual representations of multivariate time series such that they preserve label information for the prediction task.

Decoupling Local and Global Representations of Time Series

This paper proposes a novel generative approach for learning representations for the global and local factors of variation in time series, and introduces counterfactual regularization that minimizes the mutual information between the two variables.

Utilizing Expert Features for Contrastive Learning of Time-Series Representations

ExpCLR is a novel contrastive learning approach built on an objective that utilizes expert features to encourage desirable properties in the learned representation; it surpasses several state-of-the-art methods for both unsupervised and semi-supervised representation learning.

TS2Vec: Towards Universal Representation of Time Series

TS2Vec performs contrastive learning in a hierarchical way over augmented context views, enabling a robust contextual representation for each timestamp and achieving significant improvements over existing state-of-the-art methods for unsupervised time series representation.

Learning Timestamp-Level Representations for Time Series with Hierarchical Contrastive Loss

This paper presents TS2Vec, a universal framework for learning timestamp-level representations of time series; it performs timestamp-wise discrimination, learning a contextual representation vector directly for each timestamp.
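A minimal sketch of the hierarchical idea shared by the two TS2Vec entries above, assuming two augmented views already encoded to per-timestamp representations: a contrastive loss is applied at the finest scale, the sequences are max-pooled along time, and the loss is applied again at each coarser scale. The single InfoNCE-style term below is a stand-in for the paper's combined temporal and instance-wise losses.

```python
import torch
import torch.nn.functional as F

def timestamp_contrast(z1, z2, temperature: float = 0.1):
    """z1, z2: (B, T, D) representations of two augmented views.
    Positive pairs share (instance, timestamp) across the two views."""
    B, T, D = z1.shape
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)      # (2B, T, D)
    sim = torch.einsum('btd,ctd->tbc', z, z) / temperature   # (T, 2B, 2B)
    labels = torch.arange(2 * B, device=z.device).roll(B)    # i's positive is i+B mod 2B
    mask = torch.eye(2 * B, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))               # drop self-similarity
    return F.cross_entropy(sim.reshape(-1, 2 * B), labels.repeat(T))

def hierarchical_loss(z1, z2):
    """Apply the loss at every time scale, halving resolution by max-pooling."""
    loss, depth = 0.0, 0
    while z1.shape[1] > 1:
        loss = loss + timestamp_contrast(z1, z2)
        z1 = F.max_pool1d(z1.transpose(1, 2), 2).transpose(1, 2)
        z2 = F.max_pool1d(z2.transpose(1, 2), 2).transpose(1, 2)
        depth += 1
    return loss / max(depth, 1)
```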

Self-Supervised Contrastive Pre-Training For Time Series via Time-Frequency Consistency

Motivated by time-frequency consistency (TF-C), a decomposable pre-training model is proposed in which the self-supervised signal is provided by the distance between time and frequency components, each individually trained by contrastive estimation.
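The consistency signal can be sketched as a symmetric InfoNCE loss between a time-domain embedding and a frequency-domain embedding of the same sample. The encoders, the use of the FFT magnitude, and the temperature below are assumptions for illustration, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def tfc_loss(encoder_t, encoder_f, x, temperature: float = 0.2):
    """x: (B, C, L) batch of time series; both encoders return (B, D)."""
    z_t = F.normalize(encoder_t(x), dim=-1)            # time-domain embedding
    x_f = torch.fft.rfft(x, dim=-1).abs()              # spectrum: (B, C, L//2 + 1)
    z_f = F.normalize(encoder_f(x_f), dim=-1)          # frequency-domain embedding
    logits = z_t @ z_f.t() / temperature               # (B, B) cross-domain similarities
    targets = torch.arange(x.shape[0], device=x.device)
    # symmetric InfoNCE: matched (time, frequency) pairs are the positives
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```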

References

Showing 1-10 of 43 references

Learning Representations for Time Series Clustering

A novel unsupervised temporal representation learning model, Deep Temporal Clustering Representation (DTCR), integrates a temporal reconstruction objective and a K-means objective into a seq2seq model, leading to improved cluster structures and cluster-specific temporal representations.
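A hedged sketch of the combination, with DTCR's spectral-relaxation K-means term replaced by a simpler learnable-centroid loss; `ClusterAE` and its shapes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ClusterAE(nn.Module):
    """Seq2seq autoencoder whose latent codes are pulled toward cluster centroids."""
    def __init__(self, channels: int, dim: int, n_clusters: int, lam: float = 0.1):
        super().__init__()
        self.encoder = nn.GRU(channels, dim, batch_first=True)
        self.decoder = nn.GRU(dim, channels, batch_first=True)
        self.centroids = nn.Parameter(torch.randn(n_clusters, dim))
        self.lam = lam

    def forward(self, x):                  # x: (B, T, C)
        _, h = self.encoder(x)
        z = h.squeeze(0)                   # (B, dim) series representation
        dec_in = z.unsqueeze(1).expand(-1, x.shape[1], -1)
        recon, _ = self.decoder(dec_in)
        rec = nn.functional.mse_loss(recon, x)
        # K-means-style term: pull each representation toward its nearest centroid
        dist = torch.cdist(z, self.centroids).pow(2)
        km = dist.min(dim=1).values.mean()
        return rec + self.lam * km
```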

Unsupervised Scalable Representation Learning for Multivariate Time Series

This paper combines an encoder based on causal dilated convolutions with a novel triplet loss employing time-based negative sampling, obtaining general-purpose representations for variable-length and multivariate time series.
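The triplet loss itself is compact. The sketch below assumes an `encoder` that maps a window of shape (batch, channels, length) to an embedding, as a causal dilated-convolution network with global pooling would; the window lengths and the negative-sampling scheme are simplified illustrations.

```python
import torch
import torch.nn.functional as F

def sample_window(x, length: int):
    """Random contiguous window of the given length along the last axis."""
    start = torch.randint(0, x.shape[-1] - length + 1, (1,)).item()
    return x[..., start:start + length]

def triplet_loss(encoder, batch, anchor_len=64, pos_len=32, n_neg=4):
    """batch: (B, C, L) with L >= anchor_len. Positive = subseries of the
    anchor; negatives = subseries drawn from other series in the batch."""
    anchor = sample_window(batch, anchor_len)
    positive = sample_window(anchor, pos_len)          # contained in the anchor
    z_ref, z_pos = encoder(anchor), encoder(positive)  # (B, D) each
    loss = -F.logsigmoid((z_ref * z_pos).sum(-1)).mean()
    for _ in range(n_neg):
        shuffled = batch[torch.randperm(batch.shape[0])]
        z_neg = encoder(sample_window(shuffled, pos_len))
        loss = loss - F.logsigmoid(-(z_ref * z_neg).sum(-1)).mean()
    return loss
```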

Representation Learning with Contrastive Predictive Coding

This work proposes a universal unsupervised learning approach to extract useful representations from high-dimensional data, which it calls Contrastive Predictive Coding, and demonstrates that the approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.
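A minimal single-step sketch of the idea: an autoregressive context vector must identify the true next latent among in-batch negatives via an InfoNCE loss. The per-step convolutional encoder and GRU below are simplifying assumptions; the paper predicts several steps ahead with separate heads.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CPC(nn.Module):
    def __init__(self, channels: int, dim: int = 64):
        super().__init__()
        self.enc = nn.Conv1d(channels, dim, kernel_size=1)  # per-step encoder g_enc
        self.ar = nn.GRU(dim, dim, batch_first=True)        # autoregressive g_ar
        self.Wk = nn.Linear(dim, dim, bias=False)           # one-step-ahead head

    def forward(self, x):
        """x: (B, C, T). Returns the InfoNCE loss for predicting z_{t+1} from c_t."""
        z = self.enc(x).transpose(1, 2)         # (B, T, dim) latent sequence
        c, _ = self.ar(z)                       # (B, T, dim) context sequence
        pred = self.Wk(c[:, :-1])               # predictions for steps 1..T-1
        target = z[:, 1:]                       # true future latents
        B, S, _ = pred.shape
        # per-step similarity matrix; diagonal entries are the positives
        logits = torch.einsum('bsd,csd->sbc', pred, target)  # (S, B, B)
        labels = torch.arange(B, device=x.device).expand(S, B).reshape(-1)
        return F.cross_entropy(logits.reshape(-1, B), labels)
```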

TimeNet: Pre-trained deep recurrent neural network for time series classification

TimeNet is a deep recurrent neural network trained on diverse time series in an unsupervised manner using sequence-to-sequence (seq2seq) models to extract features from time series; it attempts to generalize time series representation across domains by ingesting time series from several domains simultaneously.

Similarity Preserving Representation Learning for Time Series Clustering

An efficient representation learning framework that converts a set of time series of various lengths into an instance-feature matrix while guaranteeing that the pairwise similarities between time series are well preserved after the transformation, so the learned feature representation is particularly suitable for the time series clustering task.

Improving Clinical Predictions through Unsupervised Time Series Representation Learning

This work experiments with using sequence-to-sequence (Seq2Seq) models in two different ways, as an autoencoder and as a forecaster, and shows that the best performance is achieved by a forecasting Seq2Seq model with an integrated attention mechanism.
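The forecasting variant is straightforward to sketch: the encoder's final hidden state is the representation, and a decoder is trained to predict future steps from it. The module below is an illustrative assumption (the paper's attention mechanism is omitted for brevity).

```python
import torch
import torch.nn as nn

class ForecastSeq2Seq(nn.Module):
    def __init__(self, channels: int, dim: int = 64):
        super().__init__()
        self.encoder = nn.GRU(channels, dim, batch_first=True)
        self.decoder = nn.GRU(channels, dim, batch_first=True)
        self.out = nn.Linear(dim, channels)

    def forward(self, past, future):
        """past: (B, T_in, C), future: (B, T_out, C). Returns (repr, MSE loss)."""
        _, h = self.encoder(past)          # h: (1, B, dim) is the representation
        # teacher forcing: decoder input is the future sequence shifted by one step
        dec_in = torch.cat([past[:, -1:], future[:, :-1]], dim=1)
        dec_out, _ = self.decoder(dec_in, h)
        pred = self.out(dec_out)
        loss = nn.functional.mse_loss(pred, future)
        return h.squeeze(0), loss
```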

Unsupervised Feature Extraction by Time-Contrastive Learning and Nonlinear ICA

This work proposes a new intuitive principle of unsupervised deep learning from time series, time-contrastive learning (TCL), which uses the nonstationary structure of the data, and shows how TCL can be related to a nonlinear ICA model when ICA is redefined to include temporal nonstationarities.
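The TCL principle reduces to a simple recipe: cut a nonstationary series into segments and train a classifier to predict each observation's segment index, so the feature extractor's outputs become the representation. The sketch below assumes pointwise `feature_net` and `classifier` modules (e.g., MLPs) and an equal-length segmentation.

```python
import torch
import torch.nn.functional as F

def tcl_step(feature_net, classifier, x, n_segments: int):
    """x: (T, C), one long nonstationary multivariate series.
    feature_net maps (T, C) -> (T, H); classifier maps (T, H) -> (T, n_segments)."""
    T = x.shape[0]
    seg_len = T // n_segments
    x = x[:seg_len * n_segments]           # drop the remainder for equal segments
    labels = torch.arange(n_segments, device=x.device).repeat_interleave(seg_len)
    h = feature_net(x)                     # pointwise features = the representation
    logits = classifier(h)                 # predict which segment each point is from
    return F.cross_entropy(logits, labels)
```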

Representation Learning: A Review and New Perspectives

Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.

A review of unsupervised feature learning and deep learning for time-series modeling

Deep learning for time series classification: a review

This article presents the most exhaustive study to date of deep neural networks (DNNs) for time series classification (TSC), training 8,730 deep learning models on 97 time series datasets, and provides an open-source deep learning framework to the TSC community.