• Corpus ID: 235485151

Gradual Domain Adaptation via Self-Training of Auxiliary Models

@article{Zhang2021GradualDA,
  title={Gradual Domain Adaptation via Self-Training of Auxiliary Models},
  author={Yabin Zhang and Bin Deng and Kui Jia and Lei Zhang},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.09890}
}
Domain adaptation becomes more challenging as the gap between source and target domains grows. Motivated by an empirical analysis of how reliable labeled source data remains for increasingly distant target domains, we propose self-training of auxiliary models (AuxSelfTrain), which learns models for intermediate domains and gradually combats the distancing shifts across domains. We introduce evolving intermediate domains as combinations of a decreasing proportion of source data and an increasing… 
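
The abstract's construction of evolving intermediate domains, as mixtures containing a decreasing proportion of source data and an increasing proportion of target data, can be sketched as below. This is a minimal illustration of the mixing idea, not the paper's actual pipeline; all function and variable names here are hypothetical.

```python
import numpy as np

def intermediate_domain(source, target, t, num_steps, rng):
    """Sample an intermediate domain as a mixture of source and target data.

    At step t (0 <= t <= num_steps), the mixture keeps a (1 - t/num_steps)
    proportion of source examples and a t/num_steps proportion of target
    examples, so successive domains drift gradually from source to target.
    """
    alpha = t / num_steps                    # fraction of target data at this step
    n = min(len(source), len(target))
    n_tgt = int(round(alpha * n))
    n_src = n - n_tgt
    src_idx = rng.choice(len(source), size=n_src, replace=False)
    tgt_idx = rng.choice(len(target), size=n_tgt, replace=False)
    return np.concatenate([source[src_idx], target[tgt_idx]])

rng = np.random.default_rng(0)
source = np.zeros((100, 2))   # stand-in source features
target = np.ones((100, 2))    # stand-in target features
domains = [intermediate_domain(source, target, t, 4, rng) for t in range(5)]
```

Each successive element of `domains` contains a larger share of target examples, so a model self-trained along the sequence only ever faces a small shift at a time.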


Gradual Domain Adaptation via Normalizing Flows

This work generates pseudo intermediate domains from normalizing flows and then uses them for gradual domain adaptation, which mitigates the problem explained above and improves classification performance.

Cost-effective Framework for Gradual Domain Adaptation with Multifidelity

To resolve the trade-off between cost and accuracy, this work proposes a framework that combines multifidelity with active domain adaptation, and evaluates it in experiments on real-world datasets.

Introducing Intermediate Domains for Effective Self-Training during Test-Time

This work addresses two problems that arise when applying self-training in the test-time adaptation setting, where adapting a model to long test sequences spanning multiple domains can lead to error accumulation; it creates intermediate domains that divide the current domain shift into a more gradual one.

Gradual Test-Time Adaptation by Self-Training and Style Transfer

This work shows the natural connection between gradual domain adaptation and test-time adaptation, and proposes GTTA, a new method based on self-training and style transfer that explicitly exploits gradual domain shifts and sets a new standard in this area.

References

SHOWING 1-10 OF 57 REFERENCES

Graph Adaptive Knowledge Transfer for Unsupervised Domain Adaptation

A novel Graph Adaptive Knowledge Transfer model is developed to jointly optimize target labels and domain-free features in a unified framework and hence the marginal and conditional disparities across different domains will be better alleviated.

Contrastive Adaptation Network for Unsupervised Domain Adaptation

This paper proposes the Contrastive Adaptation Network (CAN), which optimizes a new metric explicitly modeling the intra-class and inter-class domain discrepancy, and designs an alternating update strategy for training CAN in an end-to-end manner.

Unsupervised Domain Adaptation via Structurally Regularized Deep Clustering

  • Hui Tang, Ke Chen, K. Jia
  • Computer Science
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
This work describes the proposed method as Structurally Regularized Deep Clustering (SRDC), which enhances target discrimination with clustering of intermediate network features and enhances structural regularization with soft selection of less divergent source examples.

Blending-Target Domain Adaptation by Adversarial Meta-Adaptation Networks

Empirical results show that BTDA is a challenging transfer setup for most existing DA algorithms, yet AMEAN significantly outperforms these state-of-the-art baselines and effectively restrains negative transfer effects in BTDA.

Understanding Self-Training for Gradual Domain Adaptation

This work proves the first non-vacuous upper bound on the error of self-training with gradual shifts, in settings where directly adapting to the target domain can result in unbounded error.
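
The gradual self-training procedure analyzed in that reference, which repeatedly pseudo-labels the next domain with the current model and retrains on those pseudo-labels, can be sketched on toy data. A nearest-centroid classifier stands in for an arbitrary learner here; the function names and the drifting-Gaussian demo are illustrative assumptions, not the reference's code.

```python
import numpy as np

def fit_centroids(X, y):
    """Train a minimal nearest-centroid classifier (stand-in for any learner)."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    """Assign each point to the nearest class centroid."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

def gradual_self_train(X_src, y_src, domains):
    """Pseudo-label each unlabeled intermediate domain with the current
    model, then refit on those pseudo-labels, one small shift at a time."""
    model = fit_centroids(X_src, y_src)
    for X in domains:
        pseudo = predict(model, X)
        model = fit_centroids(X, pseudo)
    return model

# Toy demo: two classes drift rightward; gradual self-training tracks them.
rng = np.random.default_rng(1)
def make_domain(shift, n=200):
    X0 = rng.normal([shift, 0.0], 0.2, size=(n, 2))
    X1 = rng.normal([shift + 4.0, 0.0], 0.2, size=(n, 2))
    return np.vstack([X0, X1]), np.repeat([0, 1], n)

X_src, y_src = make_domain(0.0)
domains = [make_domain(s)[0] for s in (1.5, 3.0, 4.5, 6.0)]
model = gradual_self_train(X_src, y_src, domains)
X_tgt, y_tgt = make_domain(6.0)
accuracy = (predict(model, X_tgt) == y_tgt).mean()
```

Because each intermediate shift (1.5) is small relative to the class separation (4.0), every round of pseudo-labeling stays accurate, whereas adapting directly from the source model to the final domain would misclassify the drifted class 0 as class 1.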

Moment Matching for Multi-Source Domain Adaptation

A new deep learning approach, Moment Matching for Multi-Source Domain Adaptation (M3SDA), which aims to transfer knowledge learned from multiple labeled source domains to an unlabeled target domain by dynamically aligning moments of their feature distributions.
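
The moment-alignment idea behind this family of methods, measuring how far apart the feature statistics of two domains are, can be illustrated with a simple discrepancy over the first and second moments. This is a sketch of the general statistic such methods minimize, not the exact M3SDA objective; the names are hypothetical.

```python
import numpy as np

def moment_distance(feats_a, feats_b):
    """Discrepancy between the first and (uncentered) second moments of
    two feature batches; moment-matching methods drive this toward zero."""
    m1 = np.linalg.norm(feats_a.mean(axis=0) - feats_b.mean(axis=0))
    c_a = feats_a.T @ feats_a / len(feats_a)   # E[x x^T] for domain A
    c_b = feats_b.T @ feats_b / len(feats_b)   # E[x x^T] for domain B
    return m1 + np.linalg.norm(c_a - c_b)

rng = np.random.default_rng(0)
feats_src = rng.normal(0.0, 1.0, size=(500, 8))
feats_tgt = rng.normal(0.5, 1.0, size=(500, 8))  # mean-shifted target features
gap = moment_distance(feats_src, feats_tgt)
```

Identical batches give a distance of exactly zero, while a mean shift between domains yields a strictly positive gap that a training loss can penalize.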

Attract, Perturb, and Explore: Learning a Feature Alignment Network for Semi-supervised Domain Adaptation

This work proposes an SSDA framework that aligns features by alleviating the intra-domain discrepancy within the target domain, and demonstrates the incompatibility of conventional adversarial perturbation methods with SSDA.

Instance Adaptive Self-Training for Unsupervised Domain Adaptation

To effectively improve the quality of pseudo-labels, a novel pseudo-label generation strategy with an instance-adaptive selector is developed, and region-guided regularization is proposed to smooth the pseudo-label regions and sharpen the non-pseudo-label regions.

Semi-supervised Domain Adaptation with Subspace Learning for visual recognition

A novel domain adaptation framework, named Semi-supervised Domain Adaptation with Subspace Learning (SDASL), which jointly explores invariant low-dimensional structures across domains to correct data distribution mismatch and leverages available unlabeled target examples to exploit the underlying intrinsic information in the target domain.

DLOW: Domain Flow for Adaptation and Generalization

A domain flow generation model to bridge two different domains by generating a continuous sequence of intermediate domains flowing from one domain to the other and demonstrating the effectiveness of the model for both cross-domain semantic segmentation and the style generalization tasks on benchmark datasets is presented.
...