Deep semi-supervised segmentation with weight-averaged consistency targets

@article{Perone2018DeepSS,
  title={Deep semi-supervised segmentation with weight-averaged consistency targets},
  author={Christian Samuel Perone and Julien Cohen-Adad},
  journal={arXiv preprint arXiv:1807.04657},
  year={2018}
}
Recently proposed techniques for semi-supervised learning such as Temporal Ensembling and Mean Teacher have achieved state-of-the-art results in many important classification benchmarks. In this work, we expand the Mean Teacher approach to segmentation tasks and show that it can bring important improvements in a realistic small data regime using a publicly available multi-center dataset from the Magnetic Resonance Imaging (MRI) domain. We also devise a method to solve the problems that arise… 
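The Mean Teacher approach described in the abstract maintains a teacher model whose weights are an exponential moving average (EMA) of the student's weights, and penalizes disagreement between student and teacher predictions on unlabeled inputs. A minimal NumPy sketch of these two pieces; the function names, decay value, and toy shapes are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Mean Teacher: teacher weights are an exponential moving
    average of the student weights (no gradient flows here)."""
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_w, student_w)]

def consistency_loss(student_probs, teacher_probs):
    """Mean squared error between per-pixel class probabilities,
    the consistency target used on unlabeled images."""
    return float(np.mean((student_probs - teacher_probs) ** 2))

# toy example: a single 3x3 weight matrix
teacher = [np.zeros((3, 3))]
student = [np.ones((3, 3))]
teacher = ema_update(teacher, student, alpha=0.9)
# the teacher has moved 10% of the way toward the student
```

For segmentation, the same consistency loss is simply applied per pixel over the predicted class maps rather than per image-level prediction.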

Mixed-supervised segmentation: Confidence maximization helps knowledge distillation

TLDR
Results demonstrate that the method significantly outperforms other strategies for semantic segmentation within a mixed-supervision framework, as well as recent semi-supervised approaches, and also discusses an interesting link between Shannon-entropy minimization and standard pseudo-mask generation.

Uncertainty-aware Self-ensembling Model for Semi-supervised 3D Left Atrium Segmentation

TLDR
A novel uncertainty-aware semi-supervised framework for left atrium segmentation from 3D MR images that can effectively leverage the unlabeled data by encouraging consistent predictions of the same input under different perturbations.
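The uncertainty-aware idea summarized above can be sketched as: estimate per-pixel uncertainty from several stochastic forward passes (e.g. with Monte Carlo dropout), then apply the consistency loss only where the teacher is confident. The threshold and all names below are assumptions for illustration:

```python
import numpy as np

def uncertainty_mask(mc_probs, threshold=0.5):
    """mc_probs: (T, H, W, C) class probabilities from T stochastic
    forward passes. Uncertainty is the entropy of the mean
    prediction; the mask keeps pixels below the threshold."""
    mean_p = mc_probs.mean(axis=0)                          # (H, W, C)
    entropy = -np.sum(mean_p * np.log(mean_p + 1e-12), axis=-1)
    return entropy < threshold                              # (H, W)

# two passes over a 1x2 image with 2 classes: the passes agree on
# the first pixel and disagree completely on the second
mc_probs = np.array([[[[0.99, 0.01], [0.9, 0.1]]],
                     [[[0.99, 0.01], [0.1, 0.9]]]])
mask = uncertainty_mask(mc_probs)
```

The disagreeing pixel averages to a uniform distribution (entropy ln 2 ≈ 0.69) and is masked out of the consistency term.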

Transformation-Consistent Self-Ensembling Model for Semisupervised Medical Image Segmentation

TLDR
This article presents a new semisupervised method for medical image segmentation, where the network is optimized by a weighted combination of a common supervised loss only for the labeled inputs and a regularization loss for both the labeled and unlabeled data.

Mutual information deep regularization for semi-supervised segmentation

TLDR
Experimental results show the proposed clustering loss, based on mutual information and explicitly enforcing prediction consistency between nearby pixels in unlabeled images and under random perturbations of these images, to outperform recently proposed approaches for semi-supervised segmentation and to yield performance comparable to fully supervised training.

POPCORN: Progressive Pseudo-labeling with Consistency Regularization and Neighboring

TLDR
POPCORN is proposed, a novel method combining consistency regularization and pseudo-labeling, designed for multiple sclerosis lesion segmentation, that demonstrates competitive results compared to other state-of-the-art SSL strategies.

Boosting Semi-supervised Image Segmentation with Global and Local Mutual Information Regularization

TLDR
Experimental results show the method to outperform recently-proposed approaches for semi-supervised segmentation and provide an accuracy near to full supervision while training with very few annotated images.

Semi-Supervised Consistency Training for Image Segmentation in 3D CT Scans

TLDR
It is shown that the outperformance can be extended to high-data regimes by applying Stochastic Weight Averaging (SWA), which incurs zero additional training cost; it is also concluded that larger-than-realistic transformations are the most beneficial.
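Stochastic Weight Averaging, mentioned in the TLDR above, keeps a plain running average of weight snapshots collected along the training trajectory (typically under a cyclic or constant learning rate), which is why it adds essentially no training cost. A minimal sketch with assumed names:

```python
import numpy as np

class SWA:
    """Equal-weight running average of model weight snapshots,
    as in Stochastic Weight Averaging."""
    def __init__(self):
        self.avg = None
        self.n = 0

    def update(self, weights):
        self.n += 1
        if self.avg is None:
            self.avg = [w.copy() for w in weights]
        else:
            # incremental mean: avg += (w - avg) / n
            self.avg = [a + (w - a) / self.n
                        for a, w in zip(self.avg, weights)]

swa = SWA()
for step in range(1, 4):                  # pretend checkpoint values
    swa.update([np.full((2, 2), float(step))])
# average of snapshots 1.0, 2.0, 3.0 is 2.0
```

In practice the averaged weights replace the final weights at test time (after recomputing batch-norm statistics, where applicable).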

Efficient Combination of CNN and Transformer for Dual-Teacher Uncertainty-Aware Guided Semi-Supervised Medical Image Segmentation

TLDR
This method fuses a CNN and a Transformer to design a new Teacher-Student semi-supervised learning optimization strategy, which greatly improves both the utilization of large numbers of unlabeled medical images and the quality of model segmentation results.

Boundary-aware Information Maximization for Self-supervised Medical Image Segmentation

TLDR
A novel unsupervised pretraining framework that avoids a drawback of contrastive learning is proposed, and experimental results reveal the method's effectiveness in improving segmentation performance when few annotated images are available.

References

SHOWING 1-10 OF 24 REFERENCES

Semi-supervised Deep Learning for Fully Convolutional Networks

TLDR
This work lifts the concept of auxiliary manifold embedding for semi-supervised learning to FCNs with the help of Random Feature Embedding and leverages the proposed framework for the purpose of domain adaptation.

Self-ensembling for domain adaptation

TLDR
This paper explores the use of self-ensembling with random image augmentation – a technique that has achieved impressive results in the area of semi-supervised learning – for visual domain adaptation problems with state of the art results when performing adaptation between pairs of datasets.

Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results

TLDR
The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks, but it becomes unwieldy when learning large datasets, so Mean Teacher, a method that averages model weights instead of label predictions, is proposed.
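The contrast drawn in this TLDR is between averaging predictions (Temporal Ensembling) and averaging weights (Mean Teacher). Temporal Ensembling's per-sample prediction EMA can be sketched as follows; the bias-correction term mirrors the one in the original formulation, while the function and variable names are assumptions:

```python
import numpy as np

def temporal_ensemble(pred_history, alpha=0.6):
    """Temporal Ensembling target: EMA over one sample's predictions
    from successive epochs, with startup-bias correction."""
    Z = np.zeros_like(pred_history[0])
    target = Z
    for t, p in enumerate(pred_history, start=1):
        Z = alpha * Z + (1.0 - alpha) * p
        target = Z / (1.0 - alpha ** t)   # bias correction
    return target

# two epochs of softmax outputs for a single sample
history = [np.array([0.2, 0.8]), np.array([0.4, 0.6])]
tgt = temporal_ensemble(history)
```

Because one such running average must be stored per training sample, the memory cost grows with the dataset, which is the "unwieldy" aspect that Mean Teacher's weight averaging avoids.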

Transferable Semi-supervised Semantic Segmentation

TLDR
A novel transferable semi-supervised semantic segmentation model that can transfer the learned segmentation knowledge from a few strong categories with pixel-level annotations to unseen weak categories with only image-level annotations is proposed, significantly broadening the applicable territory of deep segmentation models.

U-Net: Convolutional Networks for Biomedical Image Segmentation

TLDR
It is shown that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.

Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks

TLDR
This simple and efficient semi-supervised learning method for deep neural networks trains the network in a supervised fashion with labeled and unlabeled data simultaneously and favors a low-density separation between classes.
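Pseudo-labeling as summarized above reduces to a few lines: take the argmax class of the model's prediction on an unlabeled sample as if it were a true label, commonly keeping only confident predictions. The names and the confidence threshold below are illustrative assumptions:

```python
import numpy as np

def pseudo_labels(probs, threshold=0.9):
    """probs: (N, C) softmax outputs on unlabeled samples.
    Returns (labels, mask): argmax classes and a boolean mask
    selecting only predictions above the confidence threshold."""
    labels = probs.argmax(axis=1)
    mask = probs.max(axis=1) >= threshold
    return labels, mask

probs = np.array([[0.95, 0.05],    # confident -> pseudo-label kept
                  [0.60, 0.40]])   # uncertain -> masked out
labels, mask = pseudo_labels(probs)
```

Training on these hard targets pushes the decision boundary away from the data, which is where the connection to low-density separation comes from.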

Spinal cord gray matter segmentation using deep dilated convolutions

TLDR
A modern, simple and end-to-end fully-automated human spinal cord gray matter segmentation method using Deep Learning, that works both on in vivo and ex vivo MRI acquisitions.

A survey on deep learning in medical image analysis

Semi-supervised Learning by Entropy Minimization

TLDR
This framework, which motivates minimum entropy regularization, enables to incorporate unlabeled data in the standard supervised learning, and includes other approaches to the semi-supervised problem as particular or limiting cases.
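Minimum-entropy regularization, as summarized above, adds the Shannon entropy of the model's predictions on unlabeled data to the supervised loss, rewarding confident (low-entropy) outputs. A minimal sketch with hypothetical names:

```python
import numpy as np

def entropy_loss(probs, eps=1e-12):
    """Mean Shannon entropy H(p) = -sum_c p_c log p_c over a
    batch of (N, C) predicted class distributions."""
    return float(np.mean(-np.sum(probs * np.log(probs + eps), axis=1)))

uniform = np.full((1, 2), 0.5)       # maximally uncertain: H = ln 2
peaked = np.array([[1.0, 0.0]])      # maximally confident: H ~ 0
```

The total objective is then the supervised loss on labeled data plus a small multiple of this entropy term on unlabeled data.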

Improved Techniques for Training GANs

TLDR
This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.