Corpus ID: 235254013

About Explicit Variance Minimization: Training Neural Networks for Medical Imaging With Limited Data Annotations

@article{Shubin2021AboutEV,
  title={About Explicit Variance Minimization: Training Neural Networks for Medical Imaging With Limited Data Annotations},
  author={Dmitrii Shubin and Danny Eytan and Sebastian D. Goodfellow},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.14117}
}
Self-supervised learning methods for computer vision have demonstrated the effectiveness of pre-training feature representations, resulting in well-generalizing Deep Neural Networks even when annotated data are limited. However, representation learning techniques require a significant amount of training time, most of it spent on precise hyper-parameter optimization and selection of augmentation techniques. We hypothesized that if the annotated dataset has enough…
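
The full text describes the proposed explicit variance minimization in detail; as a rough, hypothetical illustration only, the sketch below adds a variance penalty on intermediate features to an ordinary task loss. The function name, the choice of penalizing per-batch feature variance, and the weight lam are assumptions for illustration, not the paper's confirmed formulation.

import torch

# Hypothetical sketch of an explicit variance penalty; not the paper's
# confirmed formulation. 'features' are intermediate activations of
# shape (batch, channels, H, W).
def variance_regularized_loss(task_loss, features, lam=0.1):
    # Penalize the variance of each feature across the batch, averaged
    # over channels and spatial positions.
    var_penalty = features.var(dim=0).mean()
    return task_loss + lam * var_penalty
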


References

Showing 1-10 of 48 references
Self-Supervised Learning for Cardiac MR Image Segmentation by Anatomical Position Prediction
This paper proposes a novel way for training a cardiac MR image segmentation network, in which features are learnt in a self-supervised manner by predicting anatomical positions, and demonstrates that this seemingly simple task provides a strong signal for feature learning.
Semi-supervised Learning for Network-Based Cardiac MR Image Segmentation
A semi-supervised learning approach in which a segmentation network is trained from both labelled and unlabelled data; it outperforms a state-of-the-art multi-atlas segmentation method by a large margin and is substantially faster.
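
As a concrete illustration of training from both labelled and unlabelled data, here is a minimal pseudo-labelling step in PyTorch. This is one common semi-supervised recipe, not necessarily the scheme of the cited paper; all names and the confidence threshold tau are hypothetical.

import torch
import torch.nn.functional as F

# One training step: supervised loss on the labelled batch plus a
# pseudo-label loss on confident unlabelled pixels (a generic recipe,
# not the cited paper's exact method).
def semi_supervised_step(model, optimizer, images, masks, unlabelled, tau=0.9):
    loss = F.cross_entropy(model(images), masks)
    with torch.no_grad():
        probs = torch.softmax(model(unlabelled), dim=1)  # (B, C, H, W)
        conf, pseudo = probs.max(dim=1)                  # both (B, H, W)
    keep = conf > tau
    if keep.any():
        logits_u = model(unlabelled).permute(0, 2, 3, 1)  # (B, H, W, C)
        loss = loss + F.cross_entropy(logits_u[keep], pseudo[keep])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
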
Transferable Visual Words: Exploiting the Semantics of Anatomical Patterns for Self-Supervised Learning
This paper shows that visual words associated with rich semantics about human anatomy can be automatically harvested according to anatomical consistency via self-discovery, and that the self-discovered visual words can serve as strong yet free supervision signals for deep models to learn semantics-enriched generic image representation through self-supervision (self-classification and self-restoration).
Deep residual dense U-Net for resolution enhancement in accelerated MRI acquisition
This work proposes a deep-learning approach for reconstructing high-quality images from accelerated MRI acquisitions, using a Convolutional Neural Network (CNN) with a U-Net-like architecture to learn the differences between the aliased images and the original images.
FocalMix: Semi-Supervised Learning for 3D Medical Image Detection
A novel method called FocalMix, the first to leverage recent advances in semi-supervised learning (SSL) for 3D medical image detection, achieving a substantial improvement over state-of-the-art supervised learning approaches with 400 unlabeled CT scans.
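
MixUp-style semi-supervised training needs a classification loss that accepts soft targets. The sketch below pairs a soft-target focal loss with a MixUp helper; this is an assumed reading of how FocalMix combines the two ideas, not the paper's verbatim formulation.

import torch

# Focal loss generalized to soft targets in [0, 1] (assumed reading of
# the FocalMix idea, not the paper's exact loss).
def soft_focal_loss(logits, targets, gamma=2.0, eps=1e-8):
    p = torch.sigmoid(logits)
    loss = -(targets * (1 - p) ** gamma * torch.log(p + eps)
             + (1 - targets) * p ** gamma * torch.log(1 - p + eps))
    return loss.mean()

# Standard MixUp: convex combination of two samples and their targets.
def mixup(x1, t1, x2, t2, alpha=1.0):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * x1 + (1 - lam) * x2, lam * t1 + (1 - lam) * t2
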
Self-supervised Learning for Spinal MRIs
This work shows that longitudinal scans alone can be used as a form of "free" self-supervision for training a deep network, and that the performance of the pre-trained CNN on the supervised classification task is superior to that of a network trained from scratch.
3-D Consistent and Robust Segmentation of Cardiac Images by Deep Learning With Spatial Propagation
A deep-learning method for cardiac segmentation of short-axis magnetic resonance imaging stacks, processed iteratively from the top slice to the bottom slice using a novel variant of the U-net.
Unsupervised Representation Learning by Predicting Image Rotations
This work proposes to learn image features by training ConvNets to recognize the 2D rotation applied to the input image, and demonstrates both qualitatively and quantitatively that this apparently simple task provides a very powerful supervisory signal for semantic feature learning.
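
The rotation pretext task is simple enough to sketch directly: rotate each image by 0, 90, 180, or 270 degrees and train the network to classify which quarter-turn was applied. A minimal PyTorch version with hypothetical helper names (square images assumed so the four rotations share a shape):

import torch
import torch.nn.functional as F

# Build a batch of all four rotations with their rotation-class labels.
def rotation_batch(images):
    # images: (B, C, H, W) with H == W; k quarter-turns for k = 0..3.
    rotated = torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)])
    labels = torch.arange(4).repeat_interleave(images.size(0))
    return rotated, labels

# Self-supervised objective: classify the applied rotation.
def pretext_loss(model, images):
    x, y = rotation_batch(images)
    return F.cross_entropy(model(x), y)
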
Learning Not to Learn: Training Deep Neural Networks With Biased Data
A novel regularization algorithm for training deep neural networks when the training data is severely biased, together with an iterative algorithm to unlearn the bias information.
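
One standard mechanism for unlearning bias of this kind is gradient reversal: a bias-prediction head is trained on the shared features, while the reversed gradient pushes the feature extractor to discard the bias information. A minimal sketch, assuming this reading of the paper; the helper names are hypothetical.

import torch
import torch.nn.functional as F

# Identity in the forward pass, negated gradient in the backward pass.
class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

# Combined objective: the bias head learns to predict the bias, while
# the reversed gradient drives the feature extractor to remove it.
def debiasing_loss(task_loss, bias_head, features, bias_labels):
    bias_logits = bias_head(GradReverse.apply(features))
    return task_loss + F.cross_entropy(bias_logits, bias_labels)
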
Adam: A Method for Stochastic Optimization
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
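
The Adam update itself is compact: exponential moving averages of the gradient and its square, a bias correction for their zero initialization, then a scaled step. A NumPy sketch of one parameter update, with the hyper-parameter defaults recommended in the paper:

import numpy as np

# One Adam update for a single parameter tensor; m and v are the
# running first and second moment estimates, t the step count (>= 1).
def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    # Correct the bias from zero-initialized moments.
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v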