Corpus ID: 235376998

Neighborhood Contrastive Learning Applied to Online Patient Monitoring

Hugo Yeche, Gideon Dresdner, Francesco Locatello, Matthias Hüser, Gunnar Rätsch
Intensive care units (ICUs) are increasingly looking to machine learning for methods to provide online monitoring of critically ill patients. In machine learning, online monitoring is often formulated as a supervised learning problem. Recently, contrastive learning approaches have demonstrated promising improvements over competitive supervised benchmarks. These methods rely on well-understood data-augmentation techniques developed for image data, which do not apply to online monitoring. In… 
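As a rough illustration of the family of objectives the abstract refers to, the sketch below implements a generic InfoNCE-style contrastive loss in NumPy. This is a hedged, minimal example of contrastive learning in general, not the paper's neighborhood-based objective; the function name and toy embeddings are hypothetical.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE loss for a single anchor: pull the positive
    embedding close, push the negatives away. All vectors are assumed
    to be L2-normalized."""
    pos = np.dot(anchor, positive) / temperature
    negs = np.array([np.dot(anchor, n) for n in negatives]) / temperature
    logits = np.concatenate([[pos], negs])
    # Cross-entropy with the positive pair at index 0
    return -pos + np.log(np.sum(np.exp(logits)))

# Toy example with 4-d embeddings
rng = np.random.default_rng(0)
a = rng.normal(size=4); a /= np.linalg.norm(a)
p = a + 0.05 * rng.normal(size=4); p /= np.linalg.norm(p)
negs = [rng.normal(size=4) for _ in range(8)]
negs = [n / np.linalg.norm(n) for n in negs]
loss = info_nce(a, p, negs)
```

In patient-monitoring settings, the open question the paper addresses is how to choose positives and negatives without image-style augmentations; here the positive is simply a perturbed copy of the anchor.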
HiRID-ICU-Benchmark - A Comprehensive Machine Learning Benchmark on High-resolution ICU Data
This work provides a benchmark covering a large spectrum of ICU-related tasks using the HiRID dataset, and provides an in-depth analysis of current state-of-the-art sequence modeling methods, highlighting some limitations of deep learning approaches for this type of data.
Contrastive Learning for Unsupervised Domain Adaptation of Time Series
This paper proposes a contrastive learning framework to learn domain-invariant semantics in multivariate time series so that they preserve label information for the prediction task; it is the first framework to learn domain-invariant semantic information for UDA of time series data.
Deep Normed Embeddings for Patient Representation
This work introduces a novel contrastive representation learning objective and a training scheme for clinical time series, and shows how the learned embedding can be used for online patient monitoring, to supplement clinicians, and to improve performance on downstream machine learning tasks.
Domain-guided Self-supervision of EEG Data Improves Downstream Classification Performance and Generalizability
A domain-guided approach for learning representations of scalp electroencephalograms (EEGs) without relying on expert annotations is presented, along with evidence that an encoder pretrained with the proposed SSL tasks shows strong predictive performance on multiple downstream classification tasks.
CLOCS: Contrastive Learning of Cardiac Signals
CLOCS, a family of contrastive learning methods, is proposed that encourages representations across time, leads, and patients to be similar to one another; it consistently outperforms the state-of-the-art approach, SimCLR, on both linear evaluation and fine-tuning downstream tasks.
CLOCS: Contrastive Learning of Cardiac Signals Across Space, Time, and Patients
It is shown that CLOCS consistently outperforms the state-of-the-art methods, BYOL and SimCLR, when performing a linear evaluation of, and fine-tuning on, downstream tasks.
Improving Clinical Predictions through Unsupervised Time Series Representation Learning
This work experiments with using sequence-to-sequence (Seq2Seq) models in two different ways, as an autoencoder and as a forecaster, and shows that the best performance is achieved by a forecasting Seq2Seq model with an integrated attention mechanism.
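The autoencoder and forecaster framings described above differ only in the training target: the autoencoder reconstructs its input window, while the forecaster predicts the following steps. A minimal sketch of the (input, target) pair construction, under the assumption of a univariate series and a hypothetical `make_pairs` helper (not the paper's code):

```python
import numpy as np

def make_pairs(series, window, horizon, mode="forecast"):
    """Slice a 1-D series into (input, target) training pairs.
    mode="autoencode": the target is the input window itself.
    mode="forecast":   the target is the next `horizon` steps."""
    extra = horizon if mode == "forecast" else 0
    X, Y = [], []
    for t in range(len(series) - window - extra + 1):
        x = series[t:t + window]
        y = x if mode == "autoencode" else series[t + window:t + window + horizon]
        X.append(x)
        Y.append(y)
    return np.array(X), np.array(Y)

series = np.arange(10.0)
X, Y = make_pairs(series, window=4, horizon=2, mode="forecast")
print(X.shape, Y.shape)  # → (5, 4) (5, 2)
```

Either set of pairs can then be fed to a Seq2Seq encoder-decoder; the attention mechanism mentioned in the summary would sit between the two.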
Multitask learning and benchmarking with clinical time series data
This work proposes four clinical prediction benchmarks using data derived from the publicly available Medical Information Mart for Intensive Care (MIMIC-III) database, covering a range of clinical problems including modeling risk of mortality, forecasting length of stay, detecting physiologic decline, and phenotype classification.
Evaluating Progress on Machine Learning for Longitudinal Electronic Healthcare Data
A comprehensive review of benchmarks in medical machine learning for structured data is performed, identifying one based on the Medical Information Mart for Intensive Care (MIMIC-III) that allows the first direct comparison of predictive performance and thus the evaluation of progress on four clinical prediction tasks.
An Empirical Study of Representation Learning for Reinforcement Learning in Healthcare
It is found that sequentially formed state representations facilitate effective policy learning in batch settings, validating a more thoughtful approach to representation learning that remains faithful to the sequential and partial nature of healthcare data.
Uncovering the structure of clinical EEG signals with self-supervised learning
Linear classifiers trained on SSL-learned features consistently outperformed purely supervised deep neural networks in low-labeled-data regimes and reached competitive performance when all labels were available, suggesting that self-supervision may pave the way to wider use of deep learning models on EEG data.
Set Functions for Time Series
This paper proposes a novel approach for classifying irregularly-sampled time series with unaligned measurements, focusing on high scalability and data efficiency; it is based on recent advances in differentiable set-function learning and is highly parallelizable with a small memory footprint.
Subject-Aware Contrastive Learning for Biosignals
This work introduces subject-aware learning through a subject-specific contrastive loss and develops adversarial training to promote subject invariance during self-supervised learning, enabling biosignal modeling with reduced reliance on labeled data and fewer subjects.
Unsupervised Representation for EHR Signals and Codes as Patient Status Vector
This work presents a two-step unsupervised representation learning scheme to summarize the multi-modal clinical time series consisting of signals and medical codes into a patient status vector and evaluates the usefulness of the representation on two downstream tasks: mortality and readmission.