Corpus ID: 222290968

Distributionally Robust Learning for Unsupervised Domain Adaptation

@article{Wang2020DistributionallyRL,
  title={Distributionally Robust Learning for Unsupervised Domain Adaptation},
  author={Haoxuan Wang and Anqi Liu and Zhiding Yu and Yisong Yue and Anima Anandkumar},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.05784}
}
We propose a distributionally robust learning (DRL) method for unsupervised domain adaptation (UDA) that scales to modern computer vision benchmarks. DRL can be naturally formulated as a competitive two-player game between a predictor and an adversary that is allowed to corrupt the labels, subject to certain constraints, and reduces to incorporating a density ratio between the source and target domains (under the standard log loss). This formulation motivates the use of two neural networks that…
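As a rough illustration of the idea mentioned in the abstract, that under the log loss the robust formulation reduces to incorporating a source/target density ratio, the sketch below shows a generic density-ratio (importance) weighted cross-entropy. This is a minimal sketch and not the paper's implementation; the function names and the discriminator-based ratio estimate are assumptions made here for illustration only.

```python
# Minimal sketch (assumption, not the authors' method): re-weight the source
# cross-entropy by an estimated density ratio r(x) = p_target(x) / p_source(x),
# so the labeled source loss is trained toward the target distribution.

import torch
import torch.nn.functional as F

def density_ratio_weighted_log_loss(logits, labels, density_ratio):
    """Cross-entropy on labeled source examples, re-weighted per example.

    logits:        (N, C) predictor outputs on source inputs
    labels:        (N,)   source labels
    density_ratio: (N,)   estimated p_target(x) / p_source(x)
    """
    per_example_loss = F.cross_entropy(logits, labels, reduction="none")  # (N,)
    # Detach the ratio so gradients flow only through the predictor,
    # not through whatever model produced the ratio estimate.
    return (density_ratio.detach() * per_example_loss).mean()

def density_ratio_from_discriminator(domain_logit):
    """Hypothetical helper: if d(x) = sigmoid(domain_logit) estimates
    P(target | x) under a balanced source/target mix, then
    r(x) = d(x) / (1 - d(x)) is a plug-in density-ratio estimate."""
    d = torch.sigmoid(domain_logit)
    return d / (1.0 - d).clamp_min(1e-6)
```

In this sketch the ratio could come from any density-ratio estimator (for example, a separately trained domain discriminator as above); how the ratio is estimated and constrained is exactly where the paper's two-network, two-player formulation would differ from this plain importance-weighting baseline.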

