Self-Supervised Learning of Echocardiogram Videos Enables Data-Efficient Clinical Diagnosis

@article{Holste2022SelfSupervisedLO,
  title={Self-Supervised Learning of Echocardiogram Videos Enables Data-Efficient Clinical Diagnosis},
  author={Greg Holste and Evangelos K. Oikonomou and Bobak J. Mortazavi and Zhangyang Wang and Rohan Khera},
  journal={ArXiv},
  year={2022},
  volume={abs/2207.11581}
}
Given the difficulty of obtaining high-quality labels for medical image recognition tasks, there is a need for deep learning techniques that can be adequately fine-tuned on small labeled data sets. Recent advances in self-supervised learning techniques have shown that such an in-domain representation learning approach can provide a strong initialization for supervised fine-tuning, proving much more data-efficient than standard transfer learning from a supervised pretraining task. However, these…
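
As a concrete illustration of the recipe the abstract describes, here is a minimal sketch, assuming a generic video encoder (e.g., a 3D CNN) and a SimCLR-style contrastive objective; the module and function names are illustrative, not the paper's actual implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    # MLP head used only during contrastive pretraining, discarded afterward.
    def __init__(self, dim_in: int, dim_out: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, dim_in),
            nn.ReLU(inplace=True),
            nn.Linear(dim_in, dim_out),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def pretrain_step(encoder, head, view1, view2, optimizer, temperature=0.1):
    # One contrastive step on two augmented views of the same batch of clips:
    # matched views are positives, all other pairs in the batch are negatives.
    z1, z2 = head(encoder(view1)), head(encoder(view2))
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def finetune_step(encoder, classifier, clips, labels, optimizer):
    # Supervised fine-tuning on the small labeled set; the pretrained
    # encoder weights provide the initialization.
    loss = F.cross_entropy(classifier(encoder(clips)), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The data efficiency comes from the split: pretrain_step needs only unlabeled echocardiogram videos, while finetune_step is the only stage that touches the scarce labels.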
1 Citation

Automated detection of severe aortic stenosis using single-view echocardiography: A self-supervised ensemble learning approach

An automated approach is proposed for severe AS detection using single-view 2D echocardiography, with implications for point-of-care screening.

References

Showing 1-10 of 22 references

A New Semi-supervised Learning Benchmark for Classifying View and Diagnosing Aortic Stenosis from Echocardiograms

A benchmark dataset is developed to assess semi-supervised approaches to two tasks relevant to cardiac ultrasound (echocardiogram) interpretation: view classification and disease severity classification. A state-of-the-art method called MixMatch is found to achieve promising gains in held-out accuracy on both tasks.
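
For context, a hedged sketch of MixMatch's core label-guessing step, as described in the original MixMatch paper rather than taken from this benchmark's code: class predictions are averaged over K augmentations of each unlabeled example and then sharpened with a temperature T. The helper name below is illustrative.

import torch

def guess_labels(model, augmented_batches, T: float = 0.5):
    # augmented_batches: K tensors, each a different augmentation of the
    # same unlabeled batch. Returns sharpened pseudo-label distributions.
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for x in augmented_batches])
        mean = probs.mean(dim=0)                 # average over K augmentations
        sharpened = mean ** (1.0 / T)            # temperature sharpening
        return sharpened / sharpened.sum(dim=-1, keepdim=True)

MixMatch then mixes labeled and pseudo-labeled examples with MixUp before computing its combined supervised and consistency loss.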

Self-Supervised Representation Learning for Ultrasound Video

This paper proposes a self-supervised learning approach that learns meaningful and transferable representations from medical imaging video without any human annotation, forcing the model to address anatomy-aware tasks with free supervision from the data itself.

Video-based AI for beat-to-beat assessment of cardiac function

A video-based deep learning algorithm is presented that surpasses the performance of human experts in the critical tasks of segmenting the left ventricle, estimating ejection fraction, and assessing cardiomyopathy.

Big Self-Supervised Models Advance Medical Image Classification

A novel Multi-Instance Contrastive Learning (MICLe) method is introduced that uses multiple images of the underlying pathology per patient case, when available, to construct more informative positive pairs for self-supervised learning.
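
A hedged sketch of the positive-pair construction this implies: when a case has multiple images, the two views of a positive pair are drawn from two different images of the same case, falling back to a standard two-augmentation pair otherwise. The helper below is hypothetical, not the authors' code.

import random

def micle_pair(case_images, augment):
    # case_images: list of images for one patient case.
    # augment: a stochastic augmentation callable.
    if len(case_images) >= 2:
        a, b = random.sample(case_images, 2)  # distinct images, same pathology
    else:
        a = b = case_images[0]                # standard single-image pair
    return augment(a), augment(b)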

Deep Learning Interpretation of Echocardiograms

It is shown that deep learning applied to echocardiography can identify local cardiac structures, estimate cardiac function, and predict systemic phenotypes that modify cardiovascular risk but are not readily identifiable by human interpretation.

Fully Automated Echocardiogram Interpretation in Clinical Practice

Automated measurements are found to be comparable or superior to manual measurements across 11 internal consistency metrics (e.g., the correlation of left atrial and ventricular volumes), and the approach demonstrated applicability to serial monitoring of patients with breast cancer for trastuzumab cardiotoxicity.

Self-Supervised Vision Transformers Learn Visual Concepts in Histopathology

The key finding is that Vision Transformers using DINO-based knowledge distillation learn data-efficient and interpretable features in histology images, wherein the different attention heads learn distinct morphological phenotypes.
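
For reference, a minimal sketch of the DINO-style self-distillation objective this refers to, assumed from the DINO paper rather than taken from this work: the teacher is an exponential moving average (EMA) of the student, and the student is trained to match centered, sharpened teacher outputs on a different view.

import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum: float = 0.996):
    # Teacher weights track the student as an exponential moving average.
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

def dino_loss(student_out, teacher_out, center, t_s=0.1, t_t=0.04):
    # Cross-entropy between student log-probs and centered, sharpened
    # teacher targets; the stop-gradient is the .detach() on the targets.
    targets = F.softmax((teacher_out - center) / t_t, dim=-1).detach()
    return -(targets * F.log_softmax(student_out / t_s, dim=-1)).sum(-1).mean()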

Artificial Intelligence-Enabled, Fully Automated Detection of Cardiac Amyloidosis Using Electrocardiograms and Echocardiograms.

An automated multi-modality pipeline for cardiac amyloidosis detection is developed using two neural-network models that take electrocardiograms (ECGs) and echocardiographic videos as input; this approach should serve as a generalizable strategy for other rare and intermediate-frequency cardiac diseases with established or emerging therapies.

Video Contrastive Learning with Global Context

  • Haofei Kuang, Yi Zhu, Mu Li
  • Computer Science
  • 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2021
This paper proposes a new video-level contrastive learning method that formulates positive pairs from video segments, enabling it to capture the global context of a video; the method is thus robust to temporal content change and outperforms previous state-of-the-art efforts.
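
A hedged sketch of the segment-based sampling this describes (illustrative, not the authors' implementation): the video is split into equal segments and one frame index is drawn per segment, so each view of a positive pair spans the whole video rather than one local clip.

import random

def segment_sample(num_frames: int, num_segments: int):
    # One random frame index from each of num_segments equal chunks.
    bounds = [round(i * num_frames / num_segments) for i in range(num_segments + 1)]
    return [random.randrange(bounds[i], max(bounds[i] + 1, bounds[i + 1]))
            for i in range(num_segments)]

def positive_pair_indices(num_frames: int, num_segments: int = 4):
    # Two independent segment-wise samplings of the same video form a
    # positive pair, each covering the video's global context.
    return (segment_sample(num_frames, num_segments),
            segment_sample(num_frames, num_segments))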

A Simple Framework for Contrastive Learning of Visual Representations

It is shown that the composition of data augmentations plays a critical role in defining effective predictive tasks, that introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and that contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning.
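
SimCLR's objective is the NT-Xent (normalized temperature-scaled cross-entropy) loss; a compact, standard sketch follows, not the authors' reference code:

import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature: float = 0.5):
    # z1, z2: [N, D] projections of two augmented views of the same batch.
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)         # [2N, D]
    sim = z @ z.t() / temperature                              # cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                 # drop self-pairs
    # The positive for row i is its other view at index (i + n) mod 2n.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

Larger batches help precisely because every other example among the 2N rows serves as a negative.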