Corpus ID: 233423665

Semi-Supervised Semantic Segmentation with Pixel-Level Contrastive Learning from a Class-wise Memory Bank

@article{Alonso2021SemiSupervisedSS,
  title={Semi-Supervised Semantic Segmentation with Pixel-Level Contrastive Learning from a Class-wise Memory Bank},
  author={I{\~n}igo Alonso and Alberto Sabater and David Ferstl and Luis Montesano and Ana Cristina Murillo},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.13415}
}
This work presents a novel approach for semi-supervised semantic segmentation. The key element of this approach is our contrastive learning module that enforces the segmentation network to yield similar pixel-level feature representations for same-class samples across the whole dataset. To achieve this, we maintain a memory bank continuously updated with relevant and high-quality feature vectors from labeled data. In an end-to-end training, the features from both labeled and unlabeled data are…
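As a concrete illustration of the mechanism described above, the following is a minimal PyTorch sketch, not the authors' implementation: the names ClassMemoryBank and memory_contrastive_loss, the per-class queue size, and the use of mean cosine distance to same-class memory entries are all assumptions; the paper additionally filters for relevant, high-quality feature vectors before they enter the bank, which is reduced here to a plain FIFO queue.

```python
import torch
import torch.nn.functional as F

class ClassMemoryBank:
    """FIFO queue of L2-normalized pixel features, one queue per class.

    Simplification: the paper keeps only relevant, high-quality feature
    vectors from labeled data; here we just keep the most recent ones.
    """

    def __init__(self, num_classes: int, feat_dim: int, size_per_class: int = 256):
        self.size = size_per_class
        self.banks = [torch.empty(0, feat_dim) for _ in range(num_classes)]

    @torch.no_grad()
    def update(self, feats: torch.Tensor, labels: torch.Tensor) -> None:
        """feats: (N, D) pixel features from labeled images; labels: (N,) class ids."""
        feats = F.normalize(feats.detach(), dim=1)
        for c in labels.unique().tolist():
            new = feats[labels == c]
            self.banks[c] = torch.cat([new, self.banks[c].to(new.device)])[: self.size]

def memory_contrastive_loss(feats: torch.Tensor,
                            labels: torch.Tensor,
                            bank: ClassMemoryBank) -> torch.Tensor:
    """Pull each (pseudo-)labeled pixel feature toward same-class bank entries.

    Mean cosine distance to the class queue is a stand-in for the paper's
    exact pixel-level contrastive objective.
    """
    feats = F.normalize(feats, dim=1)
    losses = []
    for c in labels.unique().tolist():
        mem = bank.banks[c].to(feats.device)
        if mem.numel() == 0:
            continue  # nothing stored for this class yet
        sim = feats[labels == c] @ mem.t()           # (N_c, M) cosine similarities
        losses.append((1.0 - sim.mean(dim=1)).mean())
    if not losses:
        return feats.sum() * 0.0                     # zero loss, graph kept intact
    return torch.stack(losses).mean()

# Toy usage: 21 classes, 64-dim projected pixel embeddings, flattened to (N, D).
bank = ClassMemoryBank(num_classes=21, feat_dim=64)
feats = torch.randn(1024, 64, requires_grad=True)
labels = torch.randint(0, 21, (1024,))
bank.update(feats, labels)                            # labeled features fill the bank
memory_contrastive_loss(feats, labels, bank).backward()
```

Keeping the bank entries detached and L2-normalized means gradients flow only through the current batch's features, which is the usual memory-bank design choice and matches the end-to-end training the abstract describes.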

Citations

Domain Adaptive Semantic Segmentation with Regional Contrastive Consistency Regularization
TLDR: This work proposes a novel, fully end-to-end trainable approach, called regional contrastive consistency regularization (RCCR), for domain adaptive semantic segmentation, which outperforms state-of-the-art methods on two common UDA benchmarks.

Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation
TLDR: This work proposes to explicitly learn the task feature correlation to strengthen the target semantic predictions with the help of target depth estimation, and uses the depth prediction discrepancy from source and target depth decoders to approximate the pixel-wise adaptation difficulty.

Multi-dataset Pretraining: A Unified Model for Semantic Segmentation
TLDR: This paper proposes a unified framework, termed Multi-Dataset Pretraining, that takes full advantage of the fragmented annotations of different datasets; it consistently outperforms models pretrained on ImageNet by a considerable margin while using less than 10% of the samples for pretraining.

Weakly Supervised Semantic Segmentation by Pixel-to-Prototype Contrast
TLDR: Weakly supervised pixel-to-prototype contrast is proposed to provide pixel-level supervisory signals that narrow the gap between classification and segmentation; it can be seamlessly incorporated into existing WSSS models without changing the base networks and incurs no extra inference burden.

References

Showing 1-10 of 52 references.
Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation Method for Semantic Segmentation
TLDR: A semi-supervised approach named Alleviating Semantic-level Shift (ASS) is proposed, which promotes distribution consistency from both global and local views and can beat the oracle model trained on the whole target dataset.

DMT: Dynamic Mutual Training for Semi-Supervised Learning
arXiv preprint arXiv:2004.08514, 2020
ClassMix: Segmentation-Based Data Augmentation for Semi-Supervised Learning
TLDR: This work proposes a novel data augmentation mechanism called ClassMix, which generates augmentations by mixing unlabelled samples, leveraging the network's predictions to respect object boundaries, and attains state-of-the-art results.
Exploring Simple Siamese Representation Learning
Xinlei Chen, Kaiming He. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021
TLDR: Surprising empirical results are reported showing that simple Siamese networks can learn meaningful representations even using none of the following: (i) negative sample pairs, (ii) large batches, (iii) momentum encoders.
Semi-Supervised Semantic Segmentation With High- and Low-Level Consistency
TLDR: This work proposes an approach for semi-supervised semantic segmentation that learns from limited pixel-wise annotated samples while exploiting additional annotation-free images, and achieves significant improvement over existing methods, especially when trained with very few labeled samples.
Semi-supervised Semantic Segmentation with Directional Context-aware Consistency
TLDR: This paper proposes to maintain context-aware consistency between features of the same identity but with different contexts, making the representations robust to varying environments, and presents the Directional Contrastive Loss (DC Loss) to accomplish this consistency in a pixel-to-pixel manner.
Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning
TLDR: This work introduces Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning that performs on par with or better than the current state of the art on both transfer and semi-supervised benchmarks.
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
TLDR: This paper demonstrates the power of a simple combination of two common SSL methods, consistency regularization and pseudo-labeling, and shows that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks. (A minimal sketch of this recipe follows the reference list.)
Semi-Supervised Semantic Segmentation via Dynamic Self-Training and Class-Balanced Curriculum
TLDR: The method, Dynamic Self-Training and Class-Balanced Curriculum (DST-CBC), exploits inter-model disagreement via prediction confidence to construct a dynamic loss robust against pseudo-label noise, enabling it to extend pseudo-labeling to a class-balanced curriculum learning process.
Adversarial Learning for Semi-supervised Semantic Segmentation
TLDR: It is shown that the proposed discriminator can be used to improve semantic segmentation accuracy by coupling the adversarial loss with the standard cross-entropy loss of the proposed model.
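The FixMatch recipe cited above is compact enough to sketch. This is a hedged illustration, not the paper's released code: fixmatch_unlabeled_loss is a hypothetical helper, and the weak/strong augmentations are assumed to be applied by the caller; the confidence threshold tau = 0.95 is FixMatch's reported default.

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, weak_imgs, strong_imgs, tau: float = 0.95):
    """FixMatch-style loss on a batch of unlabeled images.

    weak_imgs / strong_imgs are the same images under weak and strong
    augmentation (applied by the caller). Pseudo-labels come from the
    weak view; only predictions with max probability >= tau contribute.
    Works for classification logits (N, C) and, unchanged, for
    segmentation logits (N, C, H, W).
    """
    with torch.no_grad():
        probs = torch.softmax(model(weak_imgs), dim=1)
        conf, pseudo = probs.max(dim=1)              # confidence and pseudo-label
        mask = (conf >= tau).float()                 # confidence gate
    loss = F.cross_entropy(model(strong_imgs), pseudo, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)
```

The consistency term is the cross-entropy between the strong view's prediction and the weak view's pseudo-label, so the gate and the pseudo-labels carry no gradient while the strongly augmented branch is trained end to end.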