Self-Supervised Contrastive Pre-Training For Time Series via Time-Frequency Consistency

@article{Zhang2022SelfSupervisedCP,
  title={Self-Supervised Contrastive Pre-Training For Time Series via Time-Frequency Consistency},
  author={Xiang Zhang and Ziyuan Zhao and Theodoros Tsiligkaridis and Marinka Zitnik},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.08496}
}
Pre-training on time series poses a unique challenge due to the potential mismatch between pre-training and target domains, such as shifts in temporal dynamics, fast-evolving trends, and long-range and short-cyclic effects, which can lead to poor downstream performance. While domain adaptation methods can mitigate these shifts, most methods need examples directly from the target domain, making them suboptimal for pre-training. To address this challenge, methods need to accommodate target…
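As a rough, illustrative sketch of the time-frequency consistency idea the title names: embed each series in the time domain and in the frequency domain, then pull the two views of the same series together with a contrastive loss. The NT-Xent loss, the magnitude-spectrum frequency view, and the encoder interfaces below are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss between two batches of paired embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)              # (2B, D)
    sim = z @ z.t() / temperature               # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))           # exclude self-similarity
    B = z1.size(0)
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)        # positive pair = the other view

def tfc_loss(x, time_encoder, freq_encoder):
    """Contrast time-domain and frequency-domain views of the same series."""
    z_t = time_encoder(x)                       # embed the raw series  (B, T) -> (B, D)
    x_f = torch.fft.rfft(x, dim=-1).abs()       # magnitude spectrum as the frequency view
    z_f = freq_encoder(x_f)                     # embed the spectrum    (B, F) -> (B, D)
    return nt_xent(z_t, z_f)
```

The appeal of such a consistency objective for pre-training is that the time-frequency relationship holds in any domain, so enforcing it does not require examples from the target domain.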

References

Showing 1–10 of 87 references
Self-Supervised Pre-training for Time Series Classification
TLDR
Empirical results show that the time series model augmented with the proposed self-supervised pretext tasks achieves state-of-the-art or highly competitive results.
Domain Adaptation for Time-Series Classification to Mitigate Covariate Shift
TLDR
A novel supervised domain adaptation method based on two steps: it searches for an optimal class-dependent transformation from the source to the target domain using a few samples, then uses embedding-similarity techniques to select the corresponding transformation at inference.
Self-Supervised Pretraining of Transformers for Satellite Image Time Series Classification
  • Yuan Yuan, Lei Lin
  • IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021
TLDR
A novel self-supervised pretraining scheme that initializes a transformer-based network with large-scale unlabeled data, leveraging the inherent temporal structure of satellite time series to learn general-purpose spectral-temporal representations related to land-cover semantics.
A Survey of Unsupervised Deep Domain Adaptation
TLDR
This survey compares single-source, typically homogeneous unsupervised deep domain adaptation approaches, which combine the powerful, hierarchical representations of deep learning with domain adaptation to reduce reliance on potentially costly target-data labels.
Set Functions for Time Series
TLDR
This paper proposes a novel approach for classifying irregularly sampled time series with unaligned measurements, focusing on high scalability and data efficiency; the method builds on recent advances in differentiable set function learning and is extremely parallelizable with a beneficial memory footprint.
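As a loose sketch of the set-function view of irregular time series: each observation becomes a (time, value, channel-indicator) tuple, encoded independently and pooled permutation-invariantly. The mean pooling and layer sizes below are illustrative simplifications, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DeepSetEncoder(nn.Module):
    """Permutation-invariant encoder over a set of (time, value, channel) tuples."""
    def __init__(self, in_dim=3, hidden=64, out_dim=32):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))   # per-element encoder
        self.rho = nn.Linear(hidden, out_dim)                 # post-pooling head

    def forward(self, s):                          # s: (B, N, in_dim) observation tuples
        return self.rho(self.phi(s).mean(dim=1))   # pool over the set axis
```

Because the encoder never assumes aligned or evenly spaced observations, it handles unaligned measurements directly and parallelizes over set elements.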
Adversarial Spectral Kernel Matching for Unsupervised Time Series Domain Adaptation
TLDR
An Adversarial Spectral Kernel Matching (AdvSKM) method, in which a hybrid spectral kernel network is specifically designed as the inner kernel to reform the Maximum Mean Discrepancy (MMD) metric for unsupervised time-series domain adaptation (UTSDA).
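For context, the metric AdvSKM reworks is the Maximum Mean Discrepancy. A minimal biased MMD² estimator with a fixed RBF kernel (the component AdvSKM replaces with a learned hybrid spectral kernel network) might look like the sketch below; the kernel choice and bandwidth are illustrative.

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Biased MMD^2 estimate between samples x (N, D) and y (M, D)."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)              # squared pairwise distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()
```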
SPIRAL: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training
TLDR
It is demonstrated that multi-condition pre-trained SPIRAL models are more robust to noisy speech (9.0%–13.3% relative word error rate reduction on real noisy test data), compared to applying multi-condition training solely in the fine-tuning stage.
A Transformer-based Framework for Multivariate Time Series Representation Learning
TLDR
A novel framework for multivariate time series representation learning based on the transformer encoder architecture, which can offer substantial performance benefits over fully supervised learning on downstream tasks, both with and, notably, even without leveraging additional unlabeled data, i.e., by reusing the existing data samples.
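One common shape for such transformer pre-training, and one way to "reuse the existing data samples", is a denoising objective: mask random input values and train the encoder to reconstruct them. The sketch below assumes an `encoder` that returns per-step features and a linear `head`; the mask ratio and masking scheme are illustrative, not necessarily the paper's exact objective.

```python
import torch
import torch.nn as nn

def masked_reconstruction_loss(x, encoder, head, mask_ratio=0.15):
    """Zero out random time steps and reconstruct them from encoder features."""
    mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio  # (B, T)
    x_masked = x.masked_fill(mask.unsqueeze(-1), 0.0)    # hide masked steps
    preds = head(encoder(x_masked))                      # (B, T, D) reconstructions
    return nn.functional.mse_loss(preds[mask], x[mask])  # loss on masked steps only
```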
CLDA: Contrastive Learning for Semi-Supervised Domain Adaptation
TLDR
A simple Contrastive Learning framework for semi-supervised Domain Adaptation (CLDA) that attempts to bridge the intra-domain gap between the labeled and unlabeled target distributions and the inter-domain gap between the source and unlabeled target distributions in SSDA.
…