Unbiased Subclass Regularization for Semi-Supervised Semantic Segmentation

Dayan Guan, Jiaxing Huang, Aoran Xiao and Shijian Lu. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Semi-supervised semantic segmentation learns from a small set of labelled images and a large set of unlabelled images, and has witnessed impressive progress with recent advances in deep neural networks. However, it often suffers from a severe class-bias problem while exploring the unlabelled images, largely due to the clear pixel-wise class imbalance in the labelled images. This paper presents an unbiased subclass regularization network (USRN) that alleviates the class imbalance issue by…
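The class-bias problem above commonly surfaces when confident pseudo-labels are harvested from unlabelled images: frequent classes dominate, rare classes are starved. A minimal sketch of one common mitigation is to relax the confidence threshold for rare classes; the relaxation scheme below is illustrative only and is not USRN's actual subclass regularization.

```python
import numpy as np

def class_balanced_pseudo_labels(probs, base_threshold=0.9):
    """Assign pseudo-labels only where confidence exceeds a per-class
    threshold that is relaxed for rare classes (illustrative scheme,
    not the USRN method itself).

    probs: (H, W, C) softmax outputs for one unlabelled image.
    Returns an (H, W) label map with -1 where no label is assigned.
    """
    preds = probs.argmax(axis=-1)          # hard predictions per pixel
    conf = probs.max(axis=-1)              # confidence per pixel
    num_classes = probs.shape[-1]
    # Estimate class frequency from the current predictions.
    freq = np.bincount(preds.ravel(), minlength=num_classes).astype(float)
    freq /= freq.sum()
    # Rarer classes get a lower threshold so they are not starved.
    thresholds = base_threshold * (freq / freq.max()) ** 0.5
    mask = conf > thresholds[preds]        # pixels that receive a label
    return np.where(mask, preds, -1)       # -1 = ignored in the loss
```

A pixel predicted as a rare class at 0.8 confidence can pass its relaxed threshold while a majority-class pixel still needs the full 0.9.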


Semi-supervised Semantic Segmentation with Prototype-based Consistency Regularization

A novel approach to regularize the distribution of within-class features to ease label propagation in semi-supervised semantic segmentation, which demonstrates superior performance over the state-of-the-art methods from extensive experimental evaluation on both Pascal VOC and Cityscapes benchmarks.

Semantic Segmentation with Active Semi-Supervised Representation Learning

This work extends the prior state-of-the-art S4AL algorithm by replacing its mean-teacher approach for semi-supervised learning with a self-training approach that improves learning with noisy labels, resulting in the ability to train an effective semantic segmentation algorithm with significantly less labeled data.

Boosting Semi-Supervised Semantic Segmentation with Probabilistic Representations

A Probabilistic Representation Contrastive Learning (PRCL) framework is proposed that improves representation quality by taking its probability into consideration and can tune the contribution of the ambiguous representations to tolerate the risk of inaccurate pseudo-labels.

Multi-View Correlation Consistency for Semi-Supervised Semantic Segmentation

This paper proposes multi-view correlation consistency (MVCC) learning: it considers rich pairwise relationships in self-correlation matrices and matches them across views to provide robust supervision and proposes a view-coherent data augmentation strategy that guarantees pixel-pixel correspondence between different views.

Augmentation Matters: A Simple-yet-Effective Approach to Semi-supervised Semantic Segmentation

A standard teacher-student framework is followed and a simple and clean approach that focuses mainly on data perturbations to boost the SSS performance is proposed, arguing that various data augmentations should be adjusted to better adapt to the semi-supervised scenarios.

Revisiting Weak-to-Strong Consistency in Semi-Supervised Semantic Segmentation

This work revisits the weak-to-strong consistency framework, popularized by FixMatch from semi-supervised classification, and presents a dual-stream perturbation technique, enabling two strong views to be simultaneously guided by a common weak view.
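The dual-stream idea above can be sketched as a FixMatch-style consistency loss: a shared weak view produces confidence-masked pseudo-labels that supervise two strongly-perturbed views at once. The function below is a toy numpy version; the threshold and naming are illustrative, not the paper's exact recipe.

```python
import numpy as np

def dual_stream_loss(weak_probs, strong_probs_a, strong_probs_b, threshold=0.95):
    """Cross-entropy of two strongly-augmented views against pseudo-labels
    from one shared weakly-augmented view, masked by weak-view confidence.
    All inputs are (..., C) softmax outputs over the same pixels."""
    pseudo = weak_probs.argmax(-1)                 # shared pseudo-labels
    mask = weak_probs.max(-1) >= threshold         # confident pixels only

    def ce(p):  # per-pixel cross-entropy against the pseudo-label
        picked = np.take_along_axis(p, pseudo[..., None], axis=-1)[..., 0]
        return -np.log(np.clip(picked, 1e-8, 1.0))

    if not mask.any():
        return 0.0
    return float(((ce(strong_probs_a) + ce(strong_probs_b)) * mask).sum()
                 / (2 * mask.sum()))
```

If the weak view is uncertain everywhere, no pixel contributes and the loss is zero, mirroring FixMatch's confidence masking.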

Domain Adaptive Video Segmentation via Temporal Pseudo Supervision

This work designs temporal pseudo supervision (TPS), a simple and effective method that explores the idea of consistency training for learning effective representations from unlabelled target videos by enforcing model consistency across augmented video frames which helps learn from more diverse target data.

Learning from Future: A Novel Self-Training Framework for Semantic Segmentation

A novel self-training strategy is proposed that allows the model to learn from the future, and two variants of the future-self-training (FST) framework are developed by peeping at the future both deeply and widely.

UGAN: Semi-supervised Medical Image Segmentation Using Generative Adversarial Network

  • Yuan Zheng, Beizhan Wang, Qingqi Hong
  • Computer Science
    2022 15th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)
  • 2022
This work proposes UGAN, i.e., generative adversarial network based on U-Net, which can adjust itself to different tasks based on the signature of the dataset and obtain good segmentation results.

Progressive Learning with Cross-Window Consistency for Semi-Supervised Semantic Segmentation

It is revealed that cross-window consistency (CWC) is helpful in comprehensively extracting auxiliary supervision from unlabeled data, and a novel CWC-driven progressive learning framework is proposed to optimize the deep network by mining weak-to-strong constraints from massive unlabeled data.

Semi-Supervised Semantic Segmentation via Adaptive Equalization Learning

A novel framework for semi-supervised semantic segmentation, named adaptive equalization learning (AEL), which adaptively balances the training of well and badly performed categories, with a confidence bank to dynamically track category-wise performance during training.

Semi-Supervised Semantic Segmentation With High- and Low-Level Consistency

This work proposes an approach for semi-supervised semantic segmentation that learns from limited pixel-wise annotated samples while exploiting additional annotation-free images, and achieves significant improvement over existing methods, especially when trained with very few labeled samples.

A Simple Baseline for Semi-supervised Semantic Segmentation with Strong Data Augmentation*

It is demonstrated that the devil is in the details: a set of simple designs and training techniques can collectively improve the performance of semi-supervised semantic segmentation significantly.

Semi-Supervised Semantic Segmentation with Pixel-Level Contrastive Learning from a Class-wise Memory Bank

The key element of this approach is the contrastive learning module that enforces the segmentation network to yield similar pixel-level feature representations for same-class samples across the whole dataset, maintaining a memory bank which is continuously updated with relevant and high-quality feature vectors from labeled data.

PseudoSeg: Designing Pseudo Labels for Semantic Segmentation

This work presents a simple and novel re-design of pseudo-labeling to generate well-calibrated structured pseudo labels for training with unlabeled or weakly-labeled data and demonstrates the effectiveness of the proposed pseudo-labeling strategy in both low-data and high-data regimes.

C3-SemiSeg: Contrastive Semi-supervised Segmentation via Cross-set Learning and Dynamic Class-balancing

This work introduces a novel C3-SemiSeg to improve consistency-based semi-supervised learning by exploiting better feature alignment under perturbations and enhancing the capability of learning discriminative features across images.

Semi-Supervised Semantic Image Segmentation With Self-Correcting Networks

This paper introduces a principled semi-supervised framework that only uses a small set of fully supervised images (having semantic segmentation labels and box labels) and a set of images with only object bounding box labels (which the authors call the weak-set).

Semi-Supervised Semantic Segmentation With Cross-Consistency Training

This work observes that for semantic segmentation, the low-density regions are more apparent within the hidden representations than within the inputs, and proposes cross-consistency training, where an invariance of the predictions is enforced over different perturbations applied to the outputs of the encoder.

Semi-supervised Segmentation Based on Error-Correcting Supervision

This work augments supervised segmentation models by allowing them to learn from unlabeled data by proposing a loss function that incorporates both the pseudo-labels as well as the predictive certainty of the correction network.

Learning Deep Representation for Imbalanced Classification

The representation learned by this approach, when combined with a simple k-nearest neighbor (kNN) algorithm, shows significant improvements over existing methods on both high- and low-level vision classification tasks that exhibit imbalanced class distribution.
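The "representation plus simple kNN" evaluation described above is straightforward to reproduce in principle: embed every sample with the learned encoder, then classify each query by majority vote among its nearest training embeddings. The sketch below assumes features are already extracted; the distance metric and vote rule are the standard ones, not details taken from the cited paper.

```python
import numpy as np

def knn_predict(train_feats, train_labels, query_feats, k=3):
    """Classify each query by majority vote among its k nearest training
    features under Euclidean distance. A minimal illustration of pairing
    learned representations with a kNN classifier."""
    # Pairwise squared distances, shape (num_queries, num_train).
    d = ((query_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(d, axis=1)[:, :k]   # indices of the k neighbors
    votes = train_labels[nearest]            # (num_queries, k) neighbor labels
    return np.array([np.bincount(v).argmax() for v in votes])
```

Because kNN has no trainable parameters, any accuracy gain under this protocol is attributable to the representation itself, which is why it is a common probe for imbalanced-learning methods.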