Co-Training for Unsupervised Domain Adaptation of Semantic Segmentation Models

  title={Co-Training for Unsupervised Domain Adaptation of Semantic Segmentation Models},
  author={Jose Luis G{\'o}mez and Gabriel Villalonga and Antonio M. L{\'o}pez},
  journal={Sensors (Basel, Switzerland)},
Semantic image segmentation is a core task for autonomous driving, commonly addressed with deep models. Since training these models suffers from the high cost of human-based image labeling, using synthetic images with automatically generated labels together with unlabeled real-world images is a promising alternative. This implies addressing an unsupervised domain adaptation (UDA) problem. In this paper, we propose a new co-training procedure for synth-to-real UDA of semantic segmentation models. It… 
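The core of such a co-training procedure is that two models exchange confident pseudo-labels on unlabeled target images. A minimal sketch of one pseudo-labeling step, assuming the common convention of softmax outputs of shape (C, H, W), a confidence threshold, and an ignore index of 255 (the function name and threshold are illustrative, not taken from the paper):

```python
import numpy as np

IGNORE = 255  # conventional ignore index for segmentation losses

def exchange_pseudo_labels(probs_a, probs_b, tau=0.9):
    """One co-training step on an unlabeled image.

    probs_a, probs_b: (C, H, W) softmax outputs of the two peer models.
    Each model's confident per-pixel predictions become pseudo-labels
    used to supervise the *other* model; low-confidence pixels are
    marked IGNORE so they contribute no loss.
    Returns (labels_for_model_b, labels_for_model_a).
    """
    def confident_labels(probs):
        conf = probs.max(axis=0)                 # per-pixel max probability
        lbl = probs.argmax(axis=0).astype(np.int64)
        lbl[conf < tau] = IGNORE                 # drop uncertain pixels
        return lbl

    return confident_labels(probs_a), confident_labels(probs_b)
```

In a full pipeline these pseudo-labels would be mixed with the labeled synthetic data when training each peer, so that the two views gradually agree on the target domain.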

The Cityscapes Dataset for Semantic Urban Scene Understanding

This work introduces Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling, and exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity.

ClassMix: Segmentation-Based Data Augmentation for Semi-Supervised Learning

This work proposes a novel data augmentation mechanism called ClassMix, which generates augmentations by mixing unlabelled samples, leveraging the network's predictions to respect object boundaries, and attains state-of-the-art results.
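The mixing idea behind ClassMix can be sketched as follows: predict on one unlabeled image, pick roughly half of the predicted classes, and paste those pixels onto a second image, mixing the pseudo-labels the same way. A minimal NumPy version, assuming (H, W, 3) images and (H, W) per-pixel predictions (function and argument names are illustrative):

```python
import numpy as np

def classmix(img_a, img_b, pred_a, pred_b, rng=None):
    """Mix two unlabeled samples ClassMix-style.

    Pixels belonging to a random half of the classes predicted on
    img_a are pasted onto img_b; pseudo-labels are mixed with the
    same binary mask, so pasted regions follow predicted object
    boundaries rather than a rectangular cutout.
    """
    rng = np.random.default_rng(rng)
    classes = np.unique(pred_a)
    chosen = rng.choice(classes, size=max(1, len(classes) // 2), replace=False)
    mask = np.isin(pred_a, chosen)               # (H, W) boolean paste mask
    mixed_img = np.where(mask[..., None], img_a, img_b)
    mixed_lbl = np.where(mask, pred_a, pred_b)
    return mixed_img, mixed_lbl, mask
```

The mixed image and mixed pseudo-label can then be used as an extra consistency-training pair for the student network.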

Multiple Fusion Adaptation: A Strong Framework for Unsupervised Semantic Segmentation Adaptation

This paper addresses the cross-domain semantic segmentation task, aiming to improve segmentation accuracy on the unlabeled target domain without incurring additional annotation, with a novel and effective Multiple Fusion Adaptation (MFA) method.

DSP: Dual Soft-Paste for Unsupervised Domain Adaptive Semantic Segmentation

A novel Dual Soft-Paste method that helps the model learn domain-invariant features from intermediate domains, leading to faster convergence and better performance.
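The paste operation underlying this idea can be sketched in one direction: copy pixels of selected source classes onto a target image with a soft blending weight, producing an intermediate-domain sample (the actual method pastes into both source and target images; the function name and the blending scheme below are a simplified assumption):

```python
import numpy as np

def soft_paste(src_img, src_lbl, tgt_img, tgt_lbl, paste_classes, alpha=0.8):
    """Soft-paste selected source-class pixels onto a target image.

    Inside the paste mask the result is alpha*src + (1-alpha)*tgt,
    so pasted content is softened rather than hard-copied; outside
    the mask the target image is kept. Labels are pasted hard.
    """
    mask = np.isin(src_lbl, paste_classes)            # (H, W) boolean
    m = mask[..., None].astype(src_img.dtype)         # (H, W, 1) blend weight
    mixed_img = alpha * m * src_img + (1 - alpha * m) * tgt_img
    mixed_lbl = np.where(mask, src_lbl, tgt_lbl)
    return mixed_img, mixed_lbl
```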

SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers

SegFormer is presented, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perceptron (MLP) decoders and shows excellent zero-shot robustness on Cityscapes-C.

Rethinking Ensemble-Distillation for Semantic Segmentation Based Unsupervised Domain Adaption

The experimental evidence validates that the proposed flexible ensemble-distillation framework for semantic segmentation based UDA tasks offers superior performance, robustness, and flexibility compared with contemporary baseline methods.

Co-Training for Deep Object Detection: Comparing Single-Modal and Multi-Modal Approaches

This paper focuses on the use of co-training, a semi-supervised learning (SSL) method, for obtaining self-labeled object bounding boxes (BBs), i.e., the GT to train deep object detectors, and assesses the goodness of multi-modal co-training by relying on two different views of an image, namely, appearance (RGB) and estimated depth (D).

Semi-supervised Domain Adaptation based on Dual-level Domain Mixing for Semantic Segmentation

This work focuses on a more practical setting of semi-supervised domain adaptation (SSDA), where both a small set of labeled target data and large amounts of labeled source data are available, and proposes a novel framework based on dual-level domain mixing.

Multi-Source Domain Adaptation with Collaborative Learning for Semantic Segmentation

This paper proposes a novel multi-source domain adaptation framework based on collaborative learning for semantic segmentation that significantly outperforms all previous state-of-the-art single-source and multi-source unsupervised domain adaptation methods.