Utilizing Expert Features for Contrastive Learning of Time-Series Representations

@inproceedings{Nonnenmacher2022UtilizingEF,
  title={Utilizing Expert Features for Contrastive Learning of Time-Series Representations},
  author={Manuel Nonnenmacher and Lukas Oldenburg and Ingo Steinwart and David Reeb},
  booktitle={ICML},
  year={2022}
}
We present an approach that incorporates expert knowledge for time-series representation learning. Our method employs expert features to replace the commonly used data transformations of previous contrastive learning approaches. We do this because time-series data frequently stem from industrial or medical domains, where expert features are often available, while suitable transformations are generally elusive for time-series data. We start by proposing two properties that useful time…
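The core mechanism, as described in the abstract, replaces augmentation-generated positives with positives chosen via expert features. Below is a minimal sketch of that idea, assuming an InfoNCE-style loss and nearest-neighbour positive selection in expert-feature space; the function name, loss form, and neighbour rule are illustrative and not the paper's exact objective.

```python
# Sketch: contrastive learning where positive pairs are defined by proximity
# in expert-feature space instead of by data augmentations.
# Illustrative only; not the paper's exact objective.
import torch
import torch.nn.functional as F

def expert_feature_contrastive_loss(z, expert_feats, temperature=0.1):
    """z: (N, d) learned embeddings; expert_feats: (N, k) hand-crafted expert features."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                      # embedding similarities
    # Pick each sample's positive as its nearest neighbour in expert-feature space.
    ef = F.normalize(expert_feats, dim=1)
    ef_sim = ef @ ef.t()
    ef_sim.fill_diagonal_(-float("inf"))               # exclude self from neighbour search
    pos_idx = ef_sim.argmax(dim=1)                     # (N,) index of each sample's positive
    # InfoNCE: the expert-feature neighbour is the positive, all other samples are negatives.
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -float("inf"))    # never contrast a sample with itself
    return F.cross_entropy(sim, pos_idx)
```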

Supervised Contrastive Learning with Hard Negative Samples

TLDR
This paper numerically shows that H-SCL outperforms other contrastive learning methods, and theoretically proves that, under certain conditions, the objective function of H-SCL can be bounded by the objective function of H-UCL but not by the objective function of UCL; thus minimizing the H-UCL loss can act as a proxy for minimizing the H-SCL loss, while minimizing the UCL loss cannot.
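As a rough illustration of hard-negative weighting in a supervised contrastive loss, the sketch below up-weights negatives that are already close to the anchor. The exact weighting used by H-SCL may differ; all names and the specific formula are illustrative assumptions.

```python
# Sketch of a hard-negative-weighted supervised contrastive loss (H-SCL-style).
# The exp(beta * similarity) weighting is one common hard-negative scheme,
# not necessarily the paper's exact formulation.
import torch
import torch.nn.functional as F

def hard_supervised_contrastive_loss(z, labels, temperature=0.1, beta=1.0):
    """z: (N, d) embeddings; labels: (N,) integer class labels."""
    z = F.normalize(z, dim=1)
    logits = z @ z.t() / temperature
    exp_sim = torch.exp(logits)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)              # (N, N) same-class mask
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask, neg_mask = same & ~eye, ~same
    # "Hard" negatives: weight each negative by exp(beta * similarity), so negatives
    # already close to the anchor dominate the denominator.
    neg_w = (torch.exp(beta * logits) * neg_mask).detach()
    neg_w = neg_w / neg_w.sum(dim=1, keepdim=True).clamp(min=1e-8)
    neg_term = (neg_w * exp_sim).sum(dim=1, keepdim=True) * neg_mask.sum(dim=1, keepdim=True)
    log_prob = logits - torch.log(exp_sim + neg_term)
    return -(log_prob * pos_mask).sum() / pos_mask.sum().clamp(min=1)
```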

References

Showing 1-10 of 37 references

Unsupervised Representation Learning for Time Series with Temporal Neighborhood Coding

TLDR
A self-supervised framework for learning generalizable representations for non-stationary time series, called Temporal Neighborhood Coding (TNC), takes advantage of the local smoothness of a signal’s generative process to define neighborhoods in time with stationary properties.
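A minimal sketch of neighborhood-based sampling in the spirit of TNC follows: windows near the anchor in time serve as positives and distant windows as negatives. Window and neighborhood sizes are illustrative assumptions, and the series is assumed to be much longer than the neighborhood.

```python
# Sketch of TNC-style sampling: temporally nearby windows are positives,
# distant windows are negatives. Sizes are illustrative.
import numpy as np

def sample_tnc_triplet(x, window=50, neighborhood=200, rng=np.random.default_rng()):
    """x: (T, channels) time series with T comfortably larger than 2*neighborhood + window.
    Returns (anchor, neighbor, distant) windows."""
    T = len(x)
    t = int(rng.integers(neighborhood, T - neighborhood - window))
    # Positive: a window drawn from the anchor's temporal neighborhood.
    t_pos = t + int(rng.integers(-neighborhood, neighborhood))
    # Negative: a window drawn from far outside the neighborhood.
    far = np.concatenate([np.arange(0, max(t - 2 * neighborhood, 1)),
                          np.arange(min(t + 2 * neighborhood, T - window - 1), T - window)])
    t_neg = int(rng.choice(far))
    return x[t:t + window], x[t_pos:t_pos + window], x[t_neg:t_neg + window]
```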

Time-Series Representation Learning via Temporal and Contextual Contrasting

TLDR
An unsupervised time-series representation learning framework via Temporal and Contextual Contrasting (TS-TCC) is proposed to learn representations from unlabeled data; it is highly effective in few-labeled-data and transfer-learning scenarios.

Contrastive Neural Processes for Self-Supervised Learning

TLDR
A novel self-supervised learning framework combines contrastive learning with neural processes: it relies on recent advances in neural processes to perform time-series forecasting and outperforms state-of-the-art techniques across industrial, medical, and audio datasets.

Representation Learning with Contrastive Predictive Coding

TLDR
This work proposes a universal unsupervised learning approach to extract useful representations from high-dimensional data, which it calls Contrastive Predictive Coding, and demonstrates that the approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.
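The essence of CPC is an InfoNCE objective in which a context vector predicts an embedding several steps ahead and the rest of the batch supplies negatives. A minimal sketch of that objective follows; the bilinear predictor W and batch-wise negatives are simplifying assumptions.

```python
# Minimal InfoNCE sketch in the spirit of Contrastive Predictive Coding:
# a context vector predicts the embedding k steps ahead; other sequences
# in the batch serve as negatives.
import torch
import torch.nn.functional as F

def info_nce(context, future, W):
    """context: (B, d_c) summaries c_t; future: (B, d_z) embeddings z_{t+k};
    W: (d_c, d_z) learned bilinear prediction matrix for this step offset."""
    pred = context @ W                       # predicted future embedding, (B, d_z)
    logits = pred @ future.t()               # (B, B): diagonal entries are the true pairs
    targets = torch.arange(len(context), device=context.device)
    return F.cross_entropy(logits, targets)
```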

Unsupervised Scalable Representation Learning for Multivariate Time Series

TLDR
This paper combines an encoder based on causal dilated convolutions with a novel triplet loss employing time-based negative sampling, obtaining general-purpose representations for variable length and multivariate time series.
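A sketch of the time-based triplet idea: the positive is a sub-window of the anchor window, and negatives are windows drawn from other series in the batch. The log-sigmoid form mirrors a standard triplet objective; window lengths, the number of negatives, and the assumption that `encoder` accepts variable-length windows (e.g., via global pooling) are illustrative.

```python
# Sketch of a triplet loss with time-based negative sampling.
# Assumes `encoder` maps (n, C, L) -> (n, d) for any window length L.
import torch
import torch.nn.functional as F

def time_based_triplet_loss(encoder, batch, anchor_len=200, pos_len=100, n_neg=5,
                            rng=torch.Generator().manual_seed(0)):
    """batch: (B, C, T) multivariate series, T > anchor_len > pos_len."""
    B, C, T = batch.shape
    t0 = int(torch.randint(0, T - anchor_len, (1,), generator=rng))
    anchor = batch[:, :, t0:t0 + anchor_len]
    p0 = t0 + int(torch.randint(0, anchor_len - pos_len, (1,), generator=rng))
    positive = batch[:, :, p0:p0 + pos_len]                  # sub-window of the anchor
    z_a, z_p = encoder(anchor), encoder(positive)
    loss = -F.logsigmoid((z_a * z_p).sum(dim=1)).mean()      # pull anchor and positive together
    for _ in range(n_neg):                                   # negatives: windows from shuffled series
        perm = torch.randperm(B, generator=rng)
        n0 = int(torch.randint(0, T - pos_len, (1,), generator=rng))
        z_n = encoder(batch[perm][:, :, n0:n0 + pos_len])
        loss = loss - F.logsigmoid(-(z_a * z_n).sum(dim=1)).mean()
    return loss
```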

Contrastive Representation Learning: A Framework and Review

TLDR
A general Contrastive Representation Learning framework is proposed that simplifies and unifies many different contrastive learning methods, together with a taxonomy for each of its components that summarises contrastive learning and distinguishes it from other forms of machine learning.

An empirical survey of data augmentation for time series classification with neural networks

TLDR
A taxonomy is proposed that outlines the four families of time-series data augmentation, namely transformation-based methods, pattern mixing, generative models, and decomposition methods, and surveys their application to time-series classification with neural networks.
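For concreteness, here are two examples from the transformation-based family (jittering and magnitude scaling). Parameter values are common defaults, not the survey's recommendations.

```python
# Illustrative transformation-based time-series augmentations:
# jittering (additive noise) and magnitude scaling.
import numpy as np

def jitter(x, sigma=0.03, rng=np.random.default_rng()):
    """x: (T, channels). Add small Gaussian noise to every sample."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1, rng=np.random.default_rng()):
    """Multiply each channel by a random factor drawn around 1."""
    return x * rng.normal(1.0, sigma, size=(1, x.shape[1]))
```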

Task Programming: Learning Data Efficient Behavior Representations

TLDR
TREBA is presented: a method to learn annotation-sample efficient trajectory embedding for behavior analysis, based on multi-task self-supervised learning and task programming, which uses programs to explicitly encode structured knowledge from domain experts.

KINN: Incorporating Expert Knowledge in Neural Networks

TLDR
KINN employs a novel residual knowledge-incorporation scheme, which can automatically assess the quality of the expert's predictions and rectify them accordingly by learning trends/patterns from the data.
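A minimal sketch of a residual knowledge-incorporation scheme in this spirit: the network predicts a correction on top of the expert's prediction rather than predicting the target directly. Layer sizes and the additive form are assumptions for illustration, not KINN's exact architecture.

```python
# Sketch of residual knowledge incorporation: final prediction = expert prediction
# + learned data-driven correction. Layer sizes are illustrative.
import torch
import torch.nn as nn

class ResidualExpertNet(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.residual = nn.Sequential(nn.Linear(in_dim + 1, hidden), nn.ReLU(),
                                      nn.Linear(hidden, 1))

    def forward(self, x, expert_pred):
        """x: (N, in_dim) inputs; expert_pred: (N, 1) expert model's prediction."""
        correction = self.residual(torch.cat([x, expert_pred], dim=1))
        return expert_pred + correction     # keep, dampen, or override the expert as the data dictate
```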

Weakly Supervised Contrastive Learning

  • Mingkai Zheng, Fei Wang, Chang Xu
  • 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021
TLDR
A weakly supervised contrastive learning framework (WCL) based on two projection heads is proposed: one head performs the regular instance discrimination task, while the other uses a graph-based method to explore similar samples and generate weak labels that pull similar images closer.
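As a simplified illustration of the second head's weak-label step, the sketch below builds weak positives from a k-nearest-neighbour graph over batch embeddings; the paper's actual graph construction and two-head training loop are more involved, and all names here are illustrative.

```python
# Sketch of graph-based weak labels: each sample's k nearest neighbours in
# embedding space are treated as weak positives (a simplification of WCL's graph step).
import torch
import torch.nn.functional as F

def knn_weak_labels(z, k=2):
    """z: (N, d) embeddings. Returns a boolean (N, N) weak-positive mask."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t()
    sim.fill_diagonal_(-float("inf"))                  # exclude self
    idx = sim.topk(k, dim=1).indices                   # (N, k) nearest neighbours
    mask = torch.zeros_like(sim, dtype=torch.bool)
    mask.scatter_(1, idx, torch.ones_like(idx, dtype=torch.bool))
    return mask | mask.t()                             # symmetrise the neighbour graph
```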