Deep Co-Training with Task Decomposition for Semi-Supervised Domain Adaptation

@article{Yang2021DeepCW,
  title={Deep Co-Training with Task Decomposition for Semi-Supervised Domain Adaptation},
  author={Luyu Yang and Yan Wang and Mingfei Gao and Abhinav Shrivastava and Kilian Q. Weinberger and Wei-Lun Chao and Ser-Nam Lim},
  journal={2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021},
  pages={8886-8896}
}
Semi-supervised domain adaptation (SSDA) aims to adapt models trained on a labeled source domain to a different but related target domain, from which unlabeled data and a small set of labeled data are provided. Current methods that treat source and target supervision without distinction overlook their inherent discrepancy, resulting in a source-dominated model that fails to make effective use of the target supervision. In this paper, we argue that the labeled target data needs to be distinguished…
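
As a concrete illustration of the co-training idea the abstract alludes to, here is a generic sketch in which two learners trained on decomposed sub-tasks exchange confident pseudo-labels on unlabeled target data. The names `model_a`, `model_b`, and `CONF_THRESH` are illustrative stand-ins, not the paper's exact training recipe.

```python
import torch
import torch.nn.functional as F

CONF_THRESH = 0.9  # illustrative confidence cutoff, not from the paper

def cotrain_step(model_a, model_b, opt_a, opt_b, x_unlabeled):
    """One co-training exchange: each model pseudo-labels the unlabeled
    target batch for the *other* model, keeping only confident predictions."""
    with torch.no_grad():
        p_a = F.softmax(model_a(x_unlabeled), dim=1)
        p_b = F.softmax(model_b(x_unlabeled), dim=1)
    conf_a, y_a = p_a.max(dim=1)   # model A's pseudo-labels, used to train B
    conf_b, y_b = p_b.max(dim=1)   # model B's pseudo-labels, used to train A

    mask_a = conf_a >= CONF_THRESH
    mask_b = conf_b >= CONF_THRESH

    if mask_b.any():  # train A on B's confident pseudo-labels
        loss_a = F.cross_entropy(model_a(x_unlabeled[mask_b]), y_b[mask_b])
        opt_a.zero_grad()
        loss_a.backward()
        opt_a.step()
    if mask_a.any():  # train B on A's confident pseudo-labels
        loss_b = F.cross_entropy(model_b(x_unlabeled[mask_a]), y_a[mask_a])
        opt_b.zero_grad()
        loss_b.backward()
        opt_b.step()
```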

Semi-Supervised Domain Adaptation by Similarity based Pseudo-label Injection

This work pseudo-labels the unlabeled target samples by comparing their feature representations to those of labeled samples from both the source and target domains, mitigating challenges caused by the skewed label ratio, and uses a supervised contrastive loss on the labeled and pseudo-labeled data to align the source and target distributions.
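
A minimal sketch of similarity-based pseudo-label injection, assuming cosine similarity in feature space and a nearest-labeled-neighbor rule; the function name and threshold are illustrative, and the paper's actual injection criterion may differ.

```python
import numpy as np

def inject_pseudo_labels(feat_unlabeled, feat_labeled, y_labeled, thresh=0.8):
    """Assign each unlabeled target sample the label of its most similar
    labeled sample (source or target), keeping only confident matches.
    `thresh` is an illustrative cosine-similarity cutoff."""
    a = feat_unlabeled / np.linalg.norm(feat_unlabeled, axis=1, keepdims=True)
    b = feat_labeled / np.linalg.norm(feat_labeled, axis=1, keepdims=True)
    sim = a @ b.T                        # cosine similarity matrix
    nearest = sim.argmax(axis=1)         # most similar labeled sample
    keep = sim.max(axis=1) >= thresh     # confidence filter
    return y_labeled[nearest[keep]], keep
```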

Local Context-Aware Active Domain Adaptation

Extensive experiments show that LAS selects more informative samples than existing active selection strategies, and equipped with LMA, the full LADA method outperforms state-of-the-art ADA solutions on various benchmarks.

Learning Semantic Correspondence with Sparse Annotations

This paper proposes a teacher-student learning paradigm for generating dense pseudo-labels, develops two novel strategies for denoising pseudo-labels, and uses spatial priors around the sparse annotations to suppress the noisy pseudo-labels.
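
A hedged sketch of the spatial-prior idea: trust a dense pseudo-correspondence only if it falls near a sparse ground-truth annotation. The radius and function name are assumptions for illustration, not the paper's exact denoising rule.

```python
import numpy as np

def spatial_prior_mask(pseudo_pts, anno_pts, radius=16.0):
    """Keep dense pseudo-correspondences only near a sparse annotation;
    `radius` (in pixels) is an illustrative prior, not the paper's value.
    pseudo_pts: (N, 2) pseudo-labeled keypoints; anno_pts: (M, 2) annotations."""
    d = np.linalg.norm(pseudo_pts[:, None, :] - anno_pts[None, :, :], axis=2)
    return d.min(axis=1) <= radius  # True where a pseudo-label is trusted
```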

Combating Label Distribution Shift for Active Domain Adaptation

This work considers the problem of active domain adaptation to unlabeled target data, of which a subset is actively selected and labeled given a budget constraint, and devises a method that substantially outperforms existing methods in every adaptation scenario.

Optimal transport meets noisy label robust loss and MixUp regularization for domain adaptation

This work proposes to couple the MixUp regularization with a loss that is robust to noisy labels in order to improve domain adaptation performance and shows in an extensive ablation study that a combination of the two techniques is critical to achieve improved performance.
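
The two ingredients can be sketched separately under common formulations: generalized cross-entropy as one standard noise-robust loss (the paper's exact choice may differ) and classic MixUp; `q` and `alpha` are illustrative hyperparameters.

```python
import numpy as np

def gce_loss(probs, labels, q=0.7):
    """Generalized cross-entropy, robust to noisy labels:
    L_q = (1 - p_y^q) / q  (approaches CE as q -> 0, MAE at q = 1)."""
    p_y = probs[np.arange(len(labels)), labels]
    return np.mean((1.0 - p_y ** q) / q)

def mixup(x1, y1_onehot, x2, y2_onehot, alpha=0.2, rng=np.random):
    """MixUp: convex combination of two samples and their one-hot labels;
    `alpha` parameterizes the Beta distribution the mixing weight is drawn from."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1_onehot + (1 - lam) * y2_onehot
```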

Pick up the PACE: Fast and Simple Domain Adaptation via Ensemble Pseudo-Labeling

A fast and simple DA method consisting of three stages, domain alignment by covariance matching, pseudo-labeling, and ensembling, which exceeds previous state-of-the-art methods on most benchmark adaptation tasks without training a neural network.
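
The covariance-matching stage can be sketched in closed form, CORAL-style: whiten the source features, then re-color them with the target covariance. This is an illustration of covariance matching in general, not necessarily PACE's exact procedure.

```python
import numpy as np

def coral_align(source, target, eps=1e-5):
    """Closed-form covariance matching (CORAL-style). Assumes roughly
    centered features; `eps` is added for numerical stability."""
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])

    def mat_pow(m, p):
        # matrix power via eigendecomposition of a symmetric matrix
        w, v = np.linalg.eigh(m)
        return (v * np.maximum(w, eps) ** p) @ v.T

    # whiten with the source covariance, re-color with the target covariance
    return source @ mat_pow(cs, -0.5) @ mat_pow(ct, 0.5)
```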

Multi-level Consistency Learning for Semi-supervised Domain Adaptation

The proposed Multi-level Consistency Learning (MCL) framework for SSDA regularizes the consistency of different views of target domain samples at three levels: at the inter-domain level, it robustly and accurately aligns the source and target domains using a prototype-based optimal transport method.
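
A generic sketch of prototype-based optimal transport, using entropic-regularized Sinkhorn iterations between per-class mean features (prototypes); the uniform marginals, `reg`, and `n_iter` are assumptions, not MCL's exact formulation.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iter=100):
    """Entropic-regularized OT (Sinkhorn) between uniform marginals;
    returns the transport plan coupling the two prototype sets."""
    n, m = cost.shape
    K = np.exp(-cost / reg)
    a, b = np.ones(n) / n, np.ones(m) / m   # uniform marginals
    u, v = np.ones(n) / n, np.ones(m) / m
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)

# Illustrative usage: cost between class prototypes (per-class mean features)
# plan = sinkhorn(np.linalg.norm(src_protos[:, None] - tgt_protos[None], axis=2))
```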

Low-confidence Samples Matter for Domain Adaptation

This work proposes a novel contrastive learning method that processes low-confidence samples, encouraging the model to exploit the target data structure through instance discrimination, and combines it with cross-domain MixUp to augment the proposed contrastive loss.
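
A sketch of the instance-discrimination step under a standard InfoNCE formulation: two views of each low-confidence sample (e.g., those whose maximum softmax probability falls below a cutoff) are positives, and all other samples in the batch are negatives. The temperature and selection rule are illustrative.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE instance-discrimination loss over a batch of paired views.
    z1, z2: (N, D) features of two views of the same N samples."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                    # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True) # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))          # positives sit on the diagonal
```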

NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework

A simple and efficient learning framework TLM that does not rely on large-scale pretraining and uses task data as queries to retrieve a tiny subset of the general corpus and jointly optimizes the task objective and the language modeling objective from scratch.
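
A hedged sketch of the retrieval step, with TF-IDF as a stand-in for TLM's actual retriever and an illustrative `k`; the framework then jointly optimizes the task loss and a language-modeling loss on the retrieved subset (schematically, L = L_task + lambda * L_LM).

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

def retrieve_subset(task_texts, corpus_texts, k=1000):
    """Use task data as queries to pull a small, relevant slice of a
    general corpus; each corpus document is scored by its best match
    against any task query."""
    vec = TfidfVectorizer().fit(corpus_texts)
    scores = linear_kernel(vec.transform(task_texts),
                           vec.transform(corpus_texts)).max(axis=0)
    top = np.argsort(scores)[::-1][:k]
    return [corpus_texts[i] for i in top]
```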

Burn After Reading: Online Adaptation for Cross-domain Streaming Data

This paper proposes an online framework that "burns after reading", i.e. each online sample is immediately deleted after it is processed, and proposes a novel algorithm that addresses the most fundamental challenge of the online adaptation setting: the lack of diverse source-target data pairs.

References

Showing 1-10 of 89 references.

Attract, Perturb, and Explore: Learning a Feature Alignment Network for Semi-supervised Domain Adaptation

An SSDA framework that aims to align features via alleviation of the intra-domain discrepancy within the target domain is proposed and the incompatibility of the conventional adversarial perturbation methods with SSDA is demonstrated.

MixMatch: A Holistic Approach to Semi-Supervised Learning

This work unifies the current dominant approaches to semi-supervised learning to produce a new algorithm, MixMatch, that works by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using MixUp.
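
The label-guessing core of MixMatch fits in a few lines: average the model's predictions over K augmentations of an unlabeled example, then sharpen the averaged distribution toward low entropy with a temperature T (the value used here is illustrative).

```python
import numpy as np

def guess_labels(probs_per_aug, T=0.5):
    """MixMatch-style label guessing.
    probs_per_aug: (K, num_classes) predictions over K augmented views."""
    p = probs_per_aug.mean(axis=0)   # average over augmentations
    p = p ** (1.0 / T)               # temperature sharpening (lower entropy)
    return p / p.sum()               # renormalize to a distribution
```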

Semi-Supervised Domain Adaptation via Minimax Entropy

A novel Minimax Entropy (MME) approach that adversarially optimizes an adaptive few-shot model for the semi-supervised domain adaptation (SSDA) setting, establishing a new state of the art for SSDA.
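
A sketch of the opposing objectives, assuming a feature extractor F and classifier C: the classifier maximizes the entropy of unlabeled target predictions while the feature extractor minimizes it (the paper realizes the opposing signs with a gradient reversal layer); `lam` is an illustrative trade-off weight.

```python
import torch
import torch.nn.functional as F

def entropy(logits):
    """Mean Shannon entropy of the predicted class distributions."""
    logp = F.log_softmax(logits, dim=1)
    return -(logp.exp() * logp).sum(dim=1).mean()

def mme_losses(logits_labeled, y, logits_unlabeled, lam=0.1):
    """Minimax entropy sketch: returns (classifier loss, extractor loss).
    The classifier *maximizes* target entropy (moves class prototypes toward
    target features); the feature extractor *minimizes* it (clusters features)."""
    ce = F.cross_entropy(logits_labeled, y)
    h = entropy(logits_unlabeled)
    return ce - lam * h, ce + lam * h
```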

Moment Matching for Multi-Source Domain Adaptation

A new deep learning approach, Moment Matching for Multi-Source Domain Adaptation (M3SDA), which aims to transfer knowledge learned from multiple labeled source domains to an unlabeled target domain by dynamically aligning moments of their feature distributions.
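
The quantity being aligned can be sketched for a single source-target pair: first- and second-moment discrepancies between two feature sets. This simplifies to two domains; the paper averages such terms over all pairs among its multiple sources and the target.

```python
import numpy as np

def moment_distance(feats_a, feats_b):
    """First- and second-moment discrepancy between two feature sets
    (Frobenius norm for the second moment); training drives this toward zero."""
    m1 = np.linalg.norm(feats_a.mean(axis=0) - feats_b.mean(axis=0))
    m2 = np.linalg.norm((feats_a.T @ feats_a) / len(feats_a)
                        - (feats_b.T @ feats_b) / len(feats_b))
    return m1 + m2
```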

Deep Co-Training for Semi-Supervised Image Recognition

This paper presents Deep Co-Training, a deep learning based method inspired by the Co-Training framework, which outperforms the previous state-of-the-art methods by a large margin in semi-supervised image recognition.

Deep Hashing Network for Unsupervised Domain Adaptation

This is the first research effort to exploit the feature learning capabilities of deep neural networks to learn representative hash codes for the domain adaptation problem; it proposes a novel deep learning framework that exploits labeled source data and unlabeled target data to learn informative hash codes and accurately classify unseen target data.

Co-Training for Domain Adaptation

An algorithm, named CODA (Co-training for Domain Adaptation), that bridges the gap between source and target domains by slowly adding to the training set both the target features and the instances in which the current algorithm is most confident.
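
A self-training sketch of the instance-promotion side of this idea (the feature-selection side is omitted): repeatedly fit on the current labeled pool, then promote the target instances the classifier is most confident about. The classifier choice and schedule are illustrative, not CODA's exact algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def coda_style_self_training(x_src, y_src, x_tgt, rounds=5, per_round=10):
    """Gradually absorb confident target instances into the training set.
    Assumes x_tgt is non-empty; `rounds` and `per_round` are illustrative."""
    x_pool, y_pool = x_src.copy(), y_src.copy()
    remaining = x_tgt.copy()
    for _ in range(rounds):
        if len(remaining) == 0:
            break
        clf = LogisticRegression(max_iter=1000).fit(x_pool, y_pool)
        probs = clf.predict_proba(remaining)
        top = np.argsort(probs.max(axis=1))[::-1][:per_round]  # most confident
        x_pool = np.vstack([x_pool, remaining[top]])
        y_pool = np.concatenate([y_pool, probs[top].argmax(axis=1)])
        remaining = np.delete(remaining, top, axis=0)
    return clf
```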

Opposite Structure Learning for Semi-supervised Domain Adaptation

A novel framework for semi-supervised domain adaptation by unifying the learning of opposite structures (UODA) is proposed, which progressively updates the measurement of distance and the feature representation on both domains via an adversarial training paradigm.

A Closer Look at Memorization in Deep Networks

The analysis suggests that notions of effective capacity that are dataset-independent are unlikely to explain the generalization performance of deep networks trained with gradient-based methods, because the training data itself plays an important role in determining the degree of memorization.

Domain-Adversarial Training of Neural Networks

A new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions, which can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer.
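
The gradient reversal layer itself is small enough to sketch in full; below is a standard PyTorch-style implementation (not necessarily the authors' original code).

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity on the forward pass, multiplies the
    gradient by -lambda on the backward pass, so the feature extractor is
    trained to *fool* the domain classifier."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # no gradient for lam

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)
```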
...