Corpus ID: 233324524

SSLM: Self-Supervised Learning for Medical Diagnosis from MR Video

@article{Manna2021SSLMSL,
  title={SSLM: Self-Supervised Learning for Medical Diagnosis from MR Video},
  author={Siladittya Manna and Saumik Bhattacharya and Umapada Pal},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.10481}
}
In medical image analysis, the cost of acquiring high-quality data and of expert annotation is a barrier in many applications. Most existing techniques are based on a supervised learning framework and need a large amount of annotated data to achieve satisfactory performance. As an alternative, this paper proposes a self-supervised learning approach that learns spatial anatomical representations from the frames of magnetic resonance (MR) video clips for the diagnosis of…
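The abstract stops short of the training details, but the references it draws on (SimCLR, MoCo) point to a contrastive setup. Below is a minimal, hypothetical sketch of contrastive pretraining on MR video frames, assuming a SimCLR-style pipeline; the augmentations, backbone, and all hyperparameters are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch: SimCLR-style contrastive pretraining on MR frames.
# Nothing here is taken from the SSLM paper; it illustrates the general
# recipe described in its references (SimCLR/MoCo).
import torch
import torch.nn as nn
import torchvision.transforms as T
from torchvision.models import resnet18

# Two random augmented "views" of the same frame form a positive pair
# (simplified: one random draw is shared across the batch per call).
augment = T.Compose([
    T.RandomResizedCrop(224, scale=(0.5, 1.0)),
    T.RandomHorizontalFlip(),
    T.GaussianBlur(kernel_size=23),
])

class Encoder(nn.Module):
    """Backbone plus projection head (the MLP head is the SimCLR ingredient)."""
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = resnet18(weights=None)
        feat = self.backbone.fc.in_features
        self.backbone.fc = nn.Identity()
        self.head = nn.Sequential(nn.Linear(feat, feat), nn.ReLU(),
                                  nn.Linear(feat, dim))
    def forward(self, x):
        return nn.functional.normalize(self.head(self.backbone(x)), dim=1)

def pretrain_step(model, frames, optimizer, temperature=0.1):
    """One contrastive step on a batch of frames of shape (B, 3, H, W)."""
    v1, v2 = augment(frames), augment(frames)     # two views per frame
    z = torch.cat([model(v1), model(v2)], dim=0)  # (2B, dim), unit norm
    sim = z @ z.t() / temperature                 # cosine similarities
    sim.fill_diagonal_(float('-inf'))             # exclude self-pairs
    B = frames.size(0)
    # The positive for row i is its other view at index (i + B) mod 2B.
    targets = torch.cat([torch.arange(B) + B, torch.arange(B)]).to(z.device)
    loss = nn.functional.cross_entropy(sim, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()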

References

Showing 1-10 of 41 references
Deep-learning-assisted diagnosis for knee magnetic resonance imaging: Development and retrospective validation of MRNet
A deep learning model for detecting general abnormalities and specific diagnoses (anterior cruciate ligament [ACL] tears and meniscal tears) on knee MRI exams is developed, supporting the assertion that deep learning models can improve the performance of clinical experts during medical imaging interpretation.
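MRNet's exam-level prediction comes from aggregating per-slice CNN features over the slice axis. A minimal sketch of that slice-wise max-pooling idea follows; the original MRNet used an ImageNet-pretrained AlexNet trunk, but the pooling and sizes here are simplified for illustration.

# Sketch of MRNet-style exam classification: per-slice CNN features are
# max-pooled across slices into one exam-level logit. Sizes are illustrative.
import torch
import torch.nn as nn
from torchvision.models import alexnet

class SliceMaxPoolNet(nn.Module):
    def __init__(self):
        super().__init__()
        net = alexnet(weights=None)
        self.features = net.features          # conv trunk, 256 channels out
        self.pool = nn.AdaptiveAvgPool2d(1)   # (256, 1, 1) per slice
        self.fc = nn.Linear(256, 1)           # abnormality logit

    def forward(self, exam):                  # exam: (S, 3, 224, 224) slices
        f = self.pool(self.features(exam)).flatten(1)  # (S, 256)
        f = f.max(dim=0).values               # max over slices -> (256,)
        return self.fc(f)                     # scalar logit for the exam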
Semi-automated detection of anterior cruciate ligament injury from MRI
A Comprehensive Survey on Transfer Learning
Connects and systematizes existing transfer learning research, and summarizes and interprets the mechanisms and strategies of transfer learning in a comprehensive way, helping readers understand the current research status and ideas.
MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models
Demonstrates that MoCo pretraining provides high-quality representations and transferable initializations for chest X-ray interpretation, suggesting that pretraining on unlabeled X-rays can provide transfer learning benefits for a target task.
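The transfer step these chest X-ray results rely on is straightforward: load the self-supervised backbone and fine-tune a small head on labeled data. A hedged sketch, where the checkpoint path, class count, and learning rates are placeholder assumptions:

# Sketch: fine-tuning a contrastively pretrained backbone on a labeled task.
# "pretrained_backbone.pt" and the 2-class head are placeholder assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18

backbone = resnet18(weights=None)
backbone.fc = nn.Identity()
state = torch.load("pretrained_backbone.pt", map_location="cpu")
backbone.load_state_dict(state, strict=False)  # ignore projection-head keys

model = nn.Sequential(backbone, nn.Linear(512, 2))  # fresh task head

# Common recipe: lower learning rate on the pretrained trunk than on the head.
optimizer = torch.optim.Adam([
    {"params": backbone.parameters(), "lr": 1e-4},
    {"params": model[1].parameters(), "lr": 1e-3},
])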
A Simple Framework for Contrastive Learning of Visual Representations
Shows that the composition of data augmentations plays a critical role in defining effective predictive tasks, that a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and that contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning.
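For reference, the normalized-temperature cross-entropy (NT-Xent) loss this framework popularized, for a positive pair (i, j) among 2N augmented views with temperature \tau:

\ell_{i,j} = -\log \frac{\exp(\mathrm{sim}(z_i, z_j)/\tau)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp(\mathrm{sim}(z_i, z_k)/\tau)},
\qquad \mathrm{sim}(u, v) = \frac{u^\top v}{\lVert u \rVert \, \lVert v \rVert}

The "learnable nonlinear transformation" mentioned above is the MLP projection head that maps the backbone representation to the z space on which this loss is computed.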
EfficientDet: Scalable and Efficient Object Detection
Systematically studies neural network architecture design choices for object detection, proposing a weighted bi-directional feature pyramid network (BiFPN) together with a compound scaling method that uniformly scales the resolution, depth, and width of the backbone, feature network, and box/class prediction networks at the same time.
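To make the compound scaling concrete, here is a small sketch of the scaling rules as stated in the EfficientDet paper, where a single coefficient phi drives all dimensions; treat the constants as the paper's reported formulas, reproduced from memory rather than verified against the released code.

# Sketch of EfficientDet compound scaling: one coefficient phi jointly
# scales BiFPN width/depth, prediction-head depth, and input resolution.
def efficientdet_scaling(phi: int) -> dict:
    return {
        "bifpn_width": int(64 * (1.35 ** phi)),  # W_bifpn = 64 * 1.35^phi
        "bifpn_depth": 3 + phi,                  # D_bifpn = 3 + phi
        "head_depth": 3 + phi // 3,              # D_box = D_class
        "input_resolution": 512 + 128 * phi,     # R_input
    }
    # Note: the released configurations round the widths to hand-picked
    # values (e.g., 88 rather than 86 for phi = 1).

print(efficientdet_scaling(0))  # D0: width 64, depth 3, head 3, res 512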
Improved Baselines with Momentum Contrastive Learning
With simple modifications to MoCo, this note establishes stronger baselines that outperform SimCLR and do not require large training batches, aiming to make state-of-the-art unsupervised learning research more accessible.
Momentum Contrast for Unsupervised Visual Representation Learning
We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder, which enables building a large and consistent dictionary on the fly that facilitates contrastive unsupervised learning.
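A minimal sketch of the two MoCo ingredients named above, the momentum-updated key encoder and the queue-backed dictionary of negatives; the encoder, queue size, and hyperparameters are illustrative assumptions rather than the paper's exact configuration.

# Sketch of MoCo's core mechanics: a key encoder updated as an exponential
# moving average of the query encoder, plus a FIFO queue of past keys that
# serves as the negative "dictionary". Sizes are illustrative.
import copy
import torch
import torch.nn as nn

class MoCoSketch(nn.Module):
    def __init__(self, encoder: nn.Module, dim=128, queue_size=4096, m=0.999):
        super().__init__()
        self.encoder_q = encoder
        self.encoder_k = copy.deepcopy(encoder)  # momentum (key) encoder
        for p in self.encoder_k.parameters():
            p.requires_grad = False
        self.m = m
        self.register_buffer("queue", torch.randn(queue_size, dim))
        self.queue = nn.functional.normalize(self.queue, dim=1)

    @torch.no_grad()
    def _momentum_update(self):
        for pq, pk in zip(self.encoder_q.parameters(),
                          self.encoder_k.parameters()):
            pk.mul_(self.m).add_(pq.detach(), alpha=1 - self.m)

    def forward(self, view_q, view_k, temperature=0.07):
        q = nn.functional.normalize(self.encoder_q(view_q), dim=1)  # (B, dim)
        with torch.no_grad():
            self._momentum_update()
            k = nn.functional.normalize(self.encoder_k(view_k), dim=1)
        l_pos = (q * k).sum(dim=1, keepdim=True)   # positive logits
        l_neg = q @ self.queue.t()                 # negatives from the queue
        logits = torch.cat([l_pos, l_neg], dim=1) / temperature
        labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
        # FIFO update: enqueue the new keys, drop the oldest entries.
        self.queue = torch.cat([k, self.queue])[: self.queue.size(0)]
        return nn.functional.cross_entropy(logits, labels)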
Self-Supervised Learning of Pretext-Invariant Representations
Develops Pretext-Invariant Representation Learning (PIRL), a new state of the art in self-supervised learning from images that learns representations invariant to the transformations applied in pretext tasks, substantially improving the semantic quality of the learned image representations.
Self-Supervised Representation Learning for Ultrasound Video
Proposes a self-supervised learning approach to learn meaningful and transferable representations from medical imaging video without any human annotation, forcing the model to address anatomy-aware tasks with free supervision from the data itself.
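"Free supervision from the data itself" is the defining trait of pretext tasks. One common example for video is predicting whether a sampled frame triple is in correct temporal order, where the labels come from the known frame indices. The sketch below is a generic illustration of that idea, not necessarily the exact task used in the ultrasound paper.

# Illustrative pretext task for video: predict whether a triple of frames
# is in correct temporal order. Labels come from the data itself (frame
# indices), not from annotators. Generic example, not this paper's method.
import random
import torch
import torch.nn as nn

def make_order_sample(clip: torch.Tensor):
    """clip: (T, C, H, W). Returns a (3, C, H, W) triple and a 0/1 label."""
    i, j, k = sorted(random.sample(range(clip.size(0)), 3))
    frames = clip[[i, j, k]]
    if random.random() < 0.5:
        return frames, torch.tensor(1)         # correct temporal order
    return frames[[1, 0, 2]], torch.tensor(0)  # shuffled order

class OrderHead(nn.Module):
    """Shared per-frame encoder; concatenated features -> binary logit."""
    def __init__(self, encoder: nn.Module, feat_dim: int):
        super().__init__()
        self.encoder = encoder
        self.fc = nn.Linear(3 * feat_dim, 1)

    def forward(self, triple):                 # triple: (B, 3, C, H, W)
        B = triple.size(0)
        f = self.encoder(triple.flatten(0, 1)) # (B*3, feat_dim)
        return self.fc(f.view(B, -1))          # (B, 1) order logit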