Localized Adversarial Domain Generalization

  • Wei Zhu, Le Lu, Jing Xiao, Mei Han, Jiebo Luo, Adam P. Harrison
  • Published 9 May 2022
  • Computer Science
  • 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Deep learning methods can struggle to handle domain shifts not seen in the training data, which can cause them to generalize poorly to unseen domains. This has drawn research attention to domain generalization (DG), which aims to improve a model's generalization ability on out-of-distribution data. Adversarial domain generalization is a popular approach to DG, but conventional approaches (1) struggle to sufficiently align features so that local neighborhoods are mixed across domains; and (2) can suffer… 

Gradient Estimation for Unseen Domain Risk Minimization with Pre-Trained Models

  • Byunggyu Lew, Donghyun Son, Buru Chang
  • Computer Science, Psychology
  • 2023
This work proposes a new domain generalization method that estimates unobservable gradients that reduce potential risks in unseen domains, using a large-scale pre-trained model, and allows the pre-trained model to learn task-specific knowledge further while preserving its generalization ability with the estimated gradients.

Adversarial target-invariant representation learning for domain generalization

This paper proposes a process that enforces pair-wise domain invariance while training a feature extractor over a diverse set of domains, and shows that this process ensures invariance to any distribution that can be expressed as a mixture of the training domains.

Deep Domain-Adversarial Image Generation for Domain Generalisation

This paper proposes a novel DG approach, Deep Domain-Adversarial Image Generation, which augments the source training data with generated unseen-domain data to make the label classifier more robust to unknown domain changes.

Domain Adversarial Neural Networks for Domain Generalization: When It Works and How to Improve

This investigation suggests that the application of DANN to domain generalization may not be as straightforward as it seems, and designs an algorithmic extension to DANN in the domain generalization case.

Dual Mixup Regularized Learning for Adversarial Domain Adaptation

A dual mixup regularized learning (DMRL) method for UDA is proposed, which not only guides the classifier in enhancing consistent predictions in-between samples, but also enriches the intrinsic structures of the latent space.
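The mixup operation underlying this family of methods is simple to state: interpolate a pair of samples and their labels with a coefficient drawn from a Beta distribution. The sketch below is a minimal, generic mixup in numpy, not DMRL itself (the dual category- and domain-level variants in the paper add further structure); the function name and `alpha` default are illustrative choices.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mix a pair of samples and their (one-hot) labels.

    Draws lam ~ Beta(alpha, alpha) and returns the convex
    combination of both inputs and both labels.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2
    y = lam * y1 + (1 - lam) * y2
    return x, y, lam
```

Training on such interpolated pairs encourages the classifier to behave linearly between samples, which is the "consistent predictions in-between samples" property the summary refers to.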

Generalizing Across Domains via Cross-Gradient Training

Empirical evaluation on three different applications establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbation methods, and that (2) data augmentation is a more stable and accurate method than domain adversarial training.
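The core idea of a domain-guided perturbation is to move an input along the gradient of a *domain* classifier's loss, producing an augmented sample that looks like it came from a slightly different domain. Below is a minimal sketch under simplifying assumptions: the domain classifier is a hand-rolled logistic regression (so its input gradient has a closed form), and `domain_guided_perturbation`, `w`, `b`, and `eps` are illustrative names, not the paper's API.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def domain_guided_perturbation(x, w, b, d, eps=0.5):
    """Perturb x along the input gradient of a logistic domain
    classifier's cross-entropy loss.

    x: input features; w, b: domain classifier weights/bias;
    d: binary domain label in {0, 1}; eps: step size.
    """
    p = sigmoid(x @ w + b)
    # For logistic regression, d(cross-entropy)/dx = (p - d) * w
    grad_x = (p - d) * w
    return x + eps * grad_x
```

Training the label classifier on both `x` and the perturbed sample is the data-augmentation strategy the summary contrasts with domain adversarial training.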

Conditional Adversarial Domain Adaptation

Conditional adversarial domain adaptation is presented, a principled framework that conditions adversarial adaptation models on the discriminative information conveyed in classifier predictions to guarantee transferability.
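One concrete way to condition a domain discriminator on classifier predictions, used in this line of work, is multilinear conditioning: feed the discriminator the per-sample outer product of the feature vector and the softmax prediction vector, so it sees feature-class interactions rather than features alone. A minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def multilinear_map(features, predictions):
    """Per-sample outer product of features and class predictions,
    flattened to a single conditioning vector per sample.

    features: (batch, d_f), predictions: (batch, d_c)
    returns: (batch, d_f * d_c)
    """
    return np.einsum('bi,bj->bij', features, predictions).reshape(len(features), -1)
```

The domain discriminator is then trained on this joint representation instead of the raw features, which is what ties the adversarial alignment to the discriminative structure of the task.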

Deep Domain Generalization via Conditional Invariant Adversarial Networks

This work proposes an end-to-end conditional invariant deep domain generalization approach by leveraging deep neural networks for domain-invariant representation learning and demonstrates the effectiveness of the proposed method.

Domain-Adversarial Training of Neural Networks

A new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions, which can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer.
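The gradient reversal layer (GRL) is the key mechanism here: it is the identity in the forward pass, but multiplies gradients by a negative factor in the backward pass, so the feature extractor is pushed to *maximize* the domain classifier's loss. Below is a framework-free sketch of the two passes (in PyTorch this would be a custom `autograd.Function`); the class and parameter names are illustrative.

```python
class GradientReversal:
    """Gradient reversal layer: identity forward, gradient scaled
    by -lam on the way back, so upstream layers learn
    domain-confusing features."""

    def __init__(self, lam=1.0):
        self.lam = lam  # reversal strength, often annealed during training

    def forward(self, x):
        # Forward pass leaves activations untouched
        return x

    def backward(self, grad_output):
        # Backward pass flips the sign (and scales) the gradient
        return -self.lam * grad_output
```

Placing this layer between the feature extractor and the domain classifier lets a single backprop pass train the classifier to distinguish domains while training the features to erase that distinction.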

Deeper, Broader and Artier Domain Generalization

This paper builds upon the favorable domain shift-robust properties of deep learning methods, and develops a low-rank parameterized CNN model for end-to-end DG learning that outperforms existing DG alternatives.

Bridging Theory and Algorithm for Domain Adaptation

Margin Disparity Discrepancy is introduced, a novel measurement with rigorous generalization bounds, tailored to the distribution comparison with the asymmetric margin loss, and to the minimax optimization for easier training.