Domain Generalization via Gradient Surgery
@article{Mansilla2021DomainGV, title={Domain Generalization via Gradient Surgery}, author={Lucas Mansilla and Rodrigo Echeveste and D.H. Milone and Enzo Ferrante}, journal={2021 IEEE/CVF International Conference on Computer Vision (ICCV)}, year={2021}, pages={6610-6618} }
In real-life applications, machine learning models often face scenarios where the data distribution changes between training and test domains. When the aim is to make predictions on distributions different from those seen at training, we face a domain generalization problem. Methods that address this issue learn a model using data from multiple source domains and then apply it to an unseen target domain. Our hypothesis is that when training with multiple domains…
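As a rough illustration of the gradient-surgery idea in the abstract, the sketch below combines per-domain gradients by keeping only the components whose signs agree across source domains and zeroing out conflicting ones. The toy model, data, learning rate, and the exact agreement rule are illustrative assumptions, not necessarily the procedure used in the paper.

```python
# Minimal sketch of sign-agreement "gradient surgery" across source domains:
# per-domain gradients are summed only where their signs agree, and conflicting
# components are zeroed out. Model, data and the agreement rule are assumptions.
import torch
import torch.nn as nn

def sign_agreement_step(model, criterion, domain_batches, lr=1e-3):
    params = list(model.parameters())
    per_domain_grads = []
    for x, y in domain_batches:                          # one (inputs, labels) batch per source domain
        loss = criterion(model(x), y)
        grads = torch.autograd.grad(loss, params)
        per_domain_grads.append([g.detach() for g in grads])

    with torch.no_grad():
        for i, p in enumerate(params):
            stacked = torch.stack([g[i] for g in per_domain_grads])   # (n_domains, *param_shape)
            signs = torch.sign(stacked)
            agree = (signs == signs[0]).all(dim=0) & (signs[0] != 0)  # consistent, non-zero sign
            p -= lr * stacked.sum(dim=0) * agree                      # update agreeing components only

# Toy usage with random data from three source domains.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
criterion = nn.CrossEntropyLoss()
batches = [(torch.randn(8, 16), torch.randint(0, 4, (8,))) for _ in range(3)]
sign_agreement_step(model, criterion, batches)
```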
20 Citations
CNN Feature Map Augmentation for Single-Source Domain Generalization
- 2023
Computer Science
ArXiv
This work focuses on producing a model that remains robust under data distribution shift, proposing an alternative regularization technique for convolutional neural network architectures in the single-source DG image classification setting based on augmenting intermediate feature maps of CNNs.
Gradient Estimation for Unseen Domain Risk Minimization with Pre-Trained Models
- 2023
Computer Science, Psychology
ArXiv
This work proposes a new domain generalization method that estimates unobservable gradients which reduce potential risks in unseen domains, using a large-scale pre-trained model, allowing the pre-trained model to learn task-specific knowledge further while preserving its generalization ability with the estimated gradients.
PGrad: Learning Principal Gradients For Domain Generalization
- 2023
Computer Science
ArXiv
This work develops a novel DG training strategy, PGrad, to learn a robust gradient direction, improving models' generalization ability on unseen domains by aggregating the principal directions of a sampled roll-out optimization trajectory that measures the training dynamics across all training domains.
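The summary above describes aggregating principal directions of a sampled roll-out trajectory. Below is a speculative sketch of that idea, under the assumptions that the roll-out is one SGD step per training domain and that the principal direction is obtained from an SVD of the parameter deltas; it is not the published PGrad algorithm.

```python
# Speculative sketch of a "principal gradient" style update: roll out a few SGD
# steps across the training domains, stack the flattened parameter deltas, and use
# the top singular direction of that trajectory as a robust update direction.
# Roll-out length, scaling and sign convention are assumptions for illustration.
import copy
import torch
import torch.nn as nn

def principal_gradient_step(model, criterion, domain_batches, inner_lr=1e-2, outer_lr=1e-2):
    start = nn.utils.parameters_to_vector(model.parameters()).detach()
    rollout = copy.deepcopy(model)
    opt = torch.optim.SGD(rollout.parameters(), lr=inner_lr)

    deltas = []
    for x, y in domain_batches:                       # one inner step per source domain
        opt.zero_grad()
        criterion(rollout(x), y).backward()
        opt.step()
        current = nn.utils.parameters_to_vector(rollout.parameters()).detach()
        deltas.append(current - start)

    traj = torch.stack(deltas)                        # (n_steps, n_params)
    _, _, vh = torch.linalg.svd(traj, full_matrices=False)
    direction = vh[0]                                 # top principal direction of the trajectory
    if torch.dot(direction, traj.mean(dim=0)) < 0:    # align with the average progress direction
        direction = -direction

    new_params = start + outer_lr * direction * traj.norm(dim=1).mean()
    nn.utils.vector_to_parameters(new_params, model.parameters())
```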
Fishr: Invariant Gradient Variances for Out-of-distribution Generalization
- 2022
Computer Science
ICML
This paper introduces a new regularization, named Fishr, that enforces domain invariance in the space of the gradients of the loss: specifically, the domain-level variances of gradients are matched across training domains.
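A minimal sketch of the gradient-variance matching idea described above follows. For tractability it computes per-sample gradients only with respect to a linear classifier head; restricting to the head, the per-sample loop, and the penalty form are illustrative assumptions rather than the paper's exact recipe.

```python
# Rough sketch of gradient-variance matching in the spirit of the summary above:
# compute the variance of per-sample gradients of the loss w.r.t. the classifier
# head within each domain, then penalize the spread of those variances across
# domains. Head-only gradients and the penalty form are illustrative choices.
import torch
import torch.nn.functional as F

def gradient_variance_penalty(features_per_domain, labels_per_domain, head):
    """features_per_domain: list of (batch, dim) tensors, one per source domain."""
    domain_variances = []
    for feats, labels in zip(features_per_domain, labels_per_domain):
        per_sample_grads = []
        for f, y in zip(feats, labels):               # per-sample gradients w.r.t. head weights
            loss = F.cross_entropy(head(f.unsqueeze(0)), y.unsqueeze(0))
            (g,) = torch.autograd.grad(loss, head.weight, create_graph=True)
            per_sample_grads.append(g.flatten())
        grads = torch.stack(per_sample_grads)         # (batch, n_head_params)
        domain_variances.append(grads.var(dim=0))     # gradient variance within this domain
    variances = torch.stack(domain_variances)         # (n_domains, n_head_params)
    return ((variances - variances.mean(dim=0)) ** 2).sum(dim=1).mean()

# Usage sketch: total_loss = task_loss + penalty_weight * gradient_variance_penalty(...)
```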
Learning Gradient-based Mixup towards Flatter Minima for Domain Generalization
- 2022
Computer Science
ArXiv
This work carefully designs a policy to generate instance weights, named Flatness-aware Gradient-based Mixup (FGMix), which employs a gradient-based similarity to assign greater weights to instances that carry more invariant information, and learns the similarity function towards flatter minima for better generalization.
FIXED: Frustratingly Easy Domain Generalization with Mixup
- 2022
Computer Science
ArXiv
This work proposes a simple yet effective enhancement for Mixup-based DG, namely domain-invariant Feature mIXup (FIX), which learns domain-invariant representations for Mixup and significantly outperforms nine state-of-the-art related methods.
Learning to Learn and Remember Super Long Multi-Domain Task Sequence
- 2022
Computer Science
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
This work proposes a simple yet effective learning-to-learn approach, i.e., a meta optimizer, to mitigate the catastrophic forgetting (CF) problem in SDML, and constructs a challenging, large-scale benchmark of 10 heterogeneous domains with a super long task sequence of 100K tasks.
Semi-Supervised Domain Generalization with Evolving Intermediate Domain
- 2021
Computer Science
A novel paradigm of DG is introduced, termed Semi-Supervised Domain Generalization (SSDG), to explore how the labeled and unlabeled source domains can interact, and two settings are established, including close-set and open-set SSDG.
Semi-Supervised Domain Generalization in Real World: New Benchmark and Strong Baseline
- 2021
Computer Science
ArXiv
This paper introduces a novel paradigm of DG, termed semi-supervised domain generalization, to study how the labeled and unlabeled domains can interact, and establishes two benchmarks including close-set and open-set SSDG.
Domain-general Crowd Counting in Unseen Scenarios
- 2022
Computer Science
ArXiv
This paper introduces a dynamic sub-domain division scheme that divides the source domain into multiple sub-domains so that a meta-learning framework for domain generalization can be initiated, and designs domain-invariant and domain-specific crowd memory modules to re-encode image features.
36 References
Best Sources Forward: Domain Generalization through Source-Specific Nets
- 2018
Computer Science
2018 25th IEEE International Conference on Image Processing (ICIP)
This work designs a deep network with multiple domain-specific classifiers, each associated with a source domain, and introduces a domain-agnostic component supporting the final classifier.
Generalizing Across Domains via Cross-Gradient Training
- 2018
Computer Science
ICLR
Empirical evaluation on three different applications establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains compared to generic instance perturbation methods, and that (2) data augmentation is a more stable and accurate method than domain adversarial training.
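A compact sketch of the domain-guided perturbation idea summarized above: inputs are nudged along the gradient of a domain classifier's loss and the perturbed inputs are used as extra training examples for the label classifier. The step size, single-step perturbation, and loss weighting are illustrative assumptions.

```python
# Illustrative sketch of domain-guided input perturbation: move each input a small
# step along the gradient of the *domain* classifier's loss, then also train the
# label classifier on the perturbed (domain-shifted) inputs. The step size eps and
# single-step perturbation are assumptions, not the paper's exact recipe.
import torch
import torch.nn.functional as F

def domain_guided_augment(x, domain_labels, domain_classifier, eps=0.5):
    x = x.clone().detach().requires_grad_(True)
    domain_loss = F.cross_entropy(domain_classifier(x), domain_labels)
    (grad_x,) = torch.autograd.grad(domain_loss, x)
    return (x + eps * grad_x).detach()                # input nudged toward other domains

def training_step(x, y, domain_labels, label_classifier, domain_classifier, alpha=0.5):
    x_aug = domain_guided_augment(x, domain_labels, domain_classifier)
    loss = F.cross_entropy(label_classifier(x), y)
    loss_aug = F.cross_entropy(label_classifier(x_aug), y)
    return (1 - alpha) * loss + alpha * loss_aug
```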
Domain Generalization with Adversarial Feature Learning
- 2018
Computer Science
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
This paper presents a novel framework based on adversarial autoencoders to learn a generalized latent feature representation across domains for domain generalization, and proposes an algorithm to jointly train the different components of the framework.
Improve Unsupervised Domain Adaptation with Mixup Training
- 2020
Computer Science
ArXiv
This work proposes to enforce training constraints across domains using a mixup formulation to directly improve generalization performance on target data, and proposes a feature-level consistency regularizer to facilitate the inter-domain constraint.
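A bare-bones sketch of the cross-domain mixup formulation mentioned above: examples from two domains are interpolated and the losses weighted accordingly. Mixing at the input level, sampling the coefficient from Beta(alpha, alpha), and using hard labels (in the unsupervised setting, the target labels would be pseudo-labels) are assumptions of this sketch.

```python
# Bare-bones sketch of cross-domain mixup: interpolate inputs from two domains and
# weight the two cross-entropy terms by the same mixing coefficient. Input-level
# mixing and the Beta(alpha, alpha) sampling are illustrative choices.
import torch
import torch.nn.functional as F

def cross_domain_mixup_loss(model, batch_a, batch_b, alpha=0.2):
    (x_a, y_a), (x_b, y_b) = batch_a, batch_b         # batches from two different domains
    lam = torch.distributions.Beta(alpha, alpha).sample()
    x_mix = lam * x_a + (1 - lam) * x_b
    logits = model(x_mix)
    return lam * F.cross_entropy(logits, y_a) + (1 - lam) * F.cross_entropy(logits, y_b)
```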
Generalizing to Unseen Domains via Adversarial Data Augmentation
- 2018
Computer Science, Mathematics
NeurIPS
This work proposes an iterative procedure that augments the dataset with examples from a fictitious target domain that is "hard" under the current model, and shows that the method amounts to adaptive data augmentation, appending adversarial examples at each iteration.
Learning to Generalize: Meta-Learning for Domain Generalization
- 2018
Computer Science
AAAI
A novel meta-learning method for domain generalization that trains models with good generalization ability to novel domains and achieves state-of-the-art results on a recent cross-domain image classification benchmark, as well as demonstrating its potential on two classic reinforcement learning tasks.
Deeper, Broader and Artier Domain Generalization
- 2017
Computer Science
2017 IEEE International Conference on Computer Vision (ICCV)
This paper builds upon the favorable domain shift-robust properties of deep learning methods, and develops a low-rank parameterized CNN model for end-to-end DG learning that outperforms existing DG alternatives.
Domain Generalization via Model-Agnostic Learning of Semantic Features
- 2019
Computer Science
NeurIPS
This work investigates the challenging problem of domain generalization, i.e., training a model on multi-domain source data such that it can directly generalize to target domains with unknown statistics, and adopts a model-agnostic learning paradigm with gradient-based meta-train and meta-test procedures to expose the optimization to domain shift.
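The gradient-based meta-train/meta-test procedure mentioned above can be sketched as follows: take a virtual inner step on the meta-train domains, then evaluate a held-out meta-test domain at the updated parameters so that optimization is exposed to domain shift. The single inner step, the loss weighting beta, and the use of torch.func.functional_call (PyTorch 2.x) are assumptions of this sketch, not the paper's exact algorithm.

```python
# Hedged sketch of a gradient-based meta-train/meta-test split over source domains.
# The meta-test loss is evaluated at virtually updated parameters, so it backprops
# through the inner step. Inner step count and beta are illustrative assumptions.
import torch
import torch.nn.functional as F

def meta_train_test_loss(model, domain_batches, inner_lr=1e-2, beta=1.0):
    meta_train, meta_test = domain_batches[:-1], domain_batches[-1]   # hold out one domain

    names = [n for n, _ in model.named_parameters()]
    params = [p for _, p in model.named_parameters()]
    train_loss = sum(F.cross_entropy(model(x), y) for x, y in meta_train) / len(meta_train)
    grads = torch.autograd.grad(train_loss, params, create_graph=True)

    # Virtual update kept in the graph so the meta-test loss backprops through it.
    updated = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}
    x_te, y_te = meta_test
    test_loss = F.cross_entropy(torch.func.functional_call(model, updated, (x_te,)), y_te)
    return train_loss + beta * test_loss
```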
Domain Generalization for Object Recognition with Multi-task Autoencoders
- 2015
Computer Science
2015 IEEE International Conference on Computer Vision (ICCV)
This work proposes a new feature learning algorithm, Multi-Task Autoencoder (MTAE), that provides good generalization performance for cross-domain object recognition, and evaluates the algorithm on benchmark image recognition datasets, where the task is to learn features from multiple datasets and then predict image labels for unseen datasets.
In Search of Lost Domain Generalization
- 2021
Computer Science
ICLR
This paper implements DomainBed, a testbed for domain generalization including seven multi-domain datasets, nine baseline algorithms, and three model selection criteria, and finds that, when carefully implemented, empirical risk minimization shows state-of-the-art performance across all datasets.