Localized Adversarial Domain Generalization

  Wei Zhu, Le Lu, Jing Xiao, Mei Han, Jiebo Luo, and Adam P. Harrison.
  2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  Published 9 May 2022.
Deep learning methods can struggle to handle domain shifts not seen in training data, causing them to generalize poorly to unseen domains. This has drawn research attention to domain generalization (DG), which aims to improve a model's ability to generalize to out-of-distribution data. Adversarial domain generalization is a popular approach to DG, but conventional approaches (1) struggle to sufficiently align features so that local neighborhoods are mixed across domains; and (2) can suffer… 



Domain Generalization with Adversarial Feature Learning

This paper presents a novel framework based on adversarial autoencoders that learns a generalized latent feature representation across domains for domain generalization, and proposes an algorithm to jointly train the different components of the framework.

Adversarial target-invariant representation learning for domain generalization

This paper proposes a process that enforces pair-wise domain invariance while training a feature extractor over a diverse set of domains, and shows that this process ensures invariance to any distribution that can be expressed as a mixture of the training domains.

Deep Domain-Adversarial Image Generation for Domain Generalisation

This paper proposes a novel DG approach, Deep Domain-Adversarial Image Generation, which augments the source training data with generated unseen-domain data to make the label classifier more robust to unknown domain changes.

Domain Adversarial Neural Networks for Domain Generalization: When It Works and How to Improve

This investigation suggests that applying DANN to domain generalization may not be as straightforward as it seems, and designs an algorithmic extension to DANN for the domain generalization case.
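The core mechanism in DANN is the gradient-reversal layer. Below is a minimal stand-alone NumPy sketch (an illustration with hypothetical values, not the paper's implementation): the forward pass is the identity, while the backward pass flips and scales the gradient flowing from the domain classifier, so the feature extractor learns to fool it.

```python
import numpy as np

def grl_forward(x):
    """Gradient-reversal layer, forward pass: identity."""
    return x

def grl_backward(grad_output, lam=1.0):
    """Backward pass: flip the sign (and scale by lam) of the gradient
    coming from the domain classifier, so the feature extractor is
    updated to *maximize* the domain-classification loss."""
    return -lam * grad_output

features = np.array([0.5, -1.2, 3.0])
domain_grad = np.array([0.1, 0.2, -0.3])

print(grl_forward(features))               # activations pass through unchanged
print(grl_backward(domain_grad, lam=0.5))  # gradients reversed and scaled
```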

Dual Mixup Regularized Learning for Adversarial Domain Adaptation

A dual mixup regularized learning (DMRL) method for UDA is proposed, which not only guides the classifier toward consistent predictions on in-between samples, but also enriches the intrinsic structures of the latent space.
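The mixup operation underlying DMRL can be sketched as follows (a generic mixup illustration, not the paper's code): two samples and their one-hot labels are convexly combined with a Beta-distributed mixing coefficient, producing the "in-between" samples on which consistent predictions are encouraged.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Convexly combine two samples and their one-hot labels with a
    Beta(alpha, alpha)-distributed mixing coefficient lam."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x1, y1 = np.array([1.0, 0.0]), np.array([1.0, 0.0])
x2, y2 = np.array([0.0, 1.0]), np.array([0.0, 1.0])
x_mix, y_mix = mixup(x1, y1, x2, y2, rng=np.random.default_rng(0))
print(x_mix, y_mix)  # a point on the segment between x1 and x2, with a soft label
```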

Generalizing Across Domains via Cross-Gradient Training

Empirical evaluation on three different applications establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains than generic instance-perturbation methods, and that (2) data augmentation is a more stable and accurate method than domain adversarial training.
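The domain-guided perturbation in cross-gradient training augments an input by stepping along the input-gradient of the *domain* classifier's loss, simulating a small domain shift. A toy sketch with a linear domain classifier and an analytic log-loss gradient (hypothetical names and data; the actual method uses a learned network):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def domain_guided_perturbation(x, w, eps=0.1):
    """Perturb x along the input-gradient of a linear domain
    classifier's log-loss, L = -log sigmoid(w @ x), i.e. in the
    direction that most increases the domain loss."""
    p = sigmoid(w @ x)
    grad = -(1.0 - p) * w  # analytic dL/dx for the logistic log-loss
    return x + eps * grad

x = np.array([1.0, 2.0])
w = np.array([0.5, -0.5])
print(domain_guided_perturbation(x, w, eps=0.1))  # domain-shifted copy of x
```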

Conditional Adversarial Domain Adaptation

Conditional adversarial domain adaptation is presented, a principled framework that conditions the adversarial adaptation models on discriminative information conveyed in the classifier predictions to guarantee the transferability.

Deep Domain Generalization via Conditional Invariant Adversarial Networks

This work proposes an end-to-end conditional invariant deep domain generalization approach by leveraging deep neural networks for domain-invariant representation learning and proves the effectiveness of the proposed method.

Domain Generalization Using a Mixture of Multiple Latent Domains

This paper proposes a method that iteratively divides samples into latent domains via clustering and trains a domain-invariant feature extractor, shared among the divided latent domains, through adversarial learning; it outperforms conventional domain generalization methods, including those that utilize domain labels.
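The clustering step in the summary above can be sketched as a nearest-centroid assignment over extracted features (a minimal illustration with hypothetical data, not the paper's pipeline); the resulting pseudo-domain labels then stand in for the missing domain labels during adversarial training.

```python
import numpy as np

def assign_latent_domains(features, centroids):
    """Assign each sample the pseudo-domain label of its nearest
    centroid (one k-means assignment step)."""
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Two well-separated groups of (hypothetical) style features.
feats = np.array([[0.1, 0.0], [0.2, 0.1], [5.0, 5.1], [4.9, 5.0]])
cents = np.array([[0.0, 0.0], [5.0, 5.0]])
print(assign_latent_domains(feats, cents))  # one pseudo-domain label per sample
```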

Deeper, Broader and Artier Domain Generalization

This paper builds upon the favorable domain shift-robust properties of deep learning methods, and develops a low-rank parameterized CNN model for end-to-end DG learning that outperforms existing DG alternatives.