Corpus ID: 202768984

Domain Generalization via Model-Agnostic Learning of Semantic Features

@inproceedings{Dou2019DomainGV,
  title={Domain Generalization via Model-Agnostic Learning of Semantic Features},
  author={Qi Dou and Daniel Coelho de Castro and Konstantinos Kamnitsas and Ben Glocker},
  booktitle={NeurIPS},
  year={2019}
}
Generalization capability to unseen domains is crucial for machine learning models when deployed in real-world conditions. We investigate the challenging problem of domain generalization, i.e., training a model on multi-domain source data such that it can directly generalize to target domains with unknown statistics. We adopt a model-agnostic learning paradigm with gradient-based meta-train and meta-test procedures to expose the optimization to domain shift. Further, we introduce two… 
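The episodic meta-train/meta-test procedure mentioned in the abstract can be sketched in a few lines. This is a hypothetical, simplified illustration (not the authors' implementation): the source domains are split each episode into meta-train and meta-test sets, a virtual gradient step is taken on the meta-train domains, and the meta-test gradient is evaluated at the virtually updated parameters so that the final update also improves performance under the simulated domain shift. The names `meta_step` and `grad_fn` are assumptions introduced for this sketch.

```python
# Simplified MAML-style episodic training for domain generalization.
# `params` is a dict of scalar parameters; `grad_fn(params, domains)`
# returns a dict of gradients of the loss over the given domains.
import random

def meta_step(params, domains, grad_fn, inner_lr=0.01, outer_lr=0.001):
    """One episode: split source domains into meta-train / meta-test,
    take a virtual inner step on meta-train, then update with the
    combined meta-train and meta-test gradients."""
    domains = list(domains)
    random.shuffle(domains)
    meta_train, meta_test = domains[:-1], domains[-1]

    # Gradient of the loss on the meta-train domains.
    g_train = grad_fn(params, meta_train)

    # Virtual parameter update (inner step), not yet committed.
    params_virtual = {k: v - inner_lr * g_train[k] for k, v in params.items()}

    # Meta-test gradient evaluated at the virtually updated parameters,
    # exposing the optimization to the held-out (simulated unseen) domain.
    g_test = grad_fn(params_virtual, [meta_test])

    # Outer update combines both directions.
    return {k: params[k] - outer_lr * (g_train[k] + g_test[k])
            for k in params}
```

With a toy quadratic loss per domain, repeated calls to `meta_step` move the parameters toward values that do well on all source domains, which is the intuition behind exposing the optimizer to domain shift during training.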


Domain Generalizer: A Few-shot Meta Learning Framework for Domain Generalization in Medical Imaging
TLDR
The method learns a domain-agnostic feature representation to improve generalization of models to the unseen test distribution and employs few-shot learning, i.e. training the generalized model using very few examples from the unseen domain, to quickly adapt the model to a new unseen data distribution.
Learning Transferrable and Interpretable Representations for Domain Generalization
TLDR
This paper aims to learn a domain transformation space via a domain transformer network (DTN) which explicitly mines the relationship among multiple domains and constructs transferable feature representations for downstream tasks by interpreting each feature as a semantically weighted combination of multiple domain-specific features.
Generalizable Semantic Segmentation via Model-agnostic Learning and Target-specific Normalization
TLDR
A novel domain generalization framework for the generalizable semantic segmentation task is proposed, which enhances the generalization ability of the model from two different views, including the training paradigm and the data-distribution discrepancy.
Open Domain Generalization with Domain-Augmented Meta-Learning
TLDR
Experimental results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods for unseen domain recognition.
Robust Domain-Free Domain Generalization with Class-Aware Alignment
TLDR
Domain-Free Domain Generalization (DFDG), a model-agnostic method to achieve better generalization performance on the unseen test domain without the need for source domain labels, is proposed.
COLUMBUS: Automated Discovery of New Multi-Level Features for Domain Generalization via Knowledge Corruption
TLDR
This work proposes COLUMBUS, a method that enforces new feature discovery via a targeted corruption of the most relevant input and multi-level representations of the data that achieves new state-of-the-art results by outperforming 18 DG algorithms on multiple DG benchmark datasets in the DOMAINBED framework.
Compound Domain Generalization via Meta-Knowledge Encoding
TLDR
This paper presents COmpound domain generalization via Meta-knowledge ENcoding (COMEN), a general approach to automatically discover and model latent domains in two steps, and introduces Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
Domain Generalization via Feature Variation Decorrelation
TLDR
This work proposes to linearly disentangle the variation out of each sample in feature space and impose a novel class decorrelation regularization on the feature variation, achieving state-of-the-art performance on all widely used domain generalization benchmarks.
Learning Generalizable Features Across Domains by Disentangling Representations
TLDR
The proposed Mutual-Information-based Disentangled Neural Networks (MIDNet) is developed as a semi-supervised learning paradigm to alleviate the dependency on labeled data; it outperforms the state-of-the-art and achieves the expected performance with sparsely labeled data.
Towards Data-Free Domain Generalization
TLDR
This work proposes DEKAN, an approach that extracts and fuses domain-specific knowledge from the available teacher models into a student model robust to domain shift, and achieves the first state-of-the-art results in data-free domain generalization by significantly outperforming ensemble and data-free knowledge distillation baselines.
...
...

References

SHOWING 1-10 OF 56 REFERENCES
Feature-Critic Networks for Heterogeneous Domain Generalization
TLDR
This work considers a more challenging setting of heterogeneous domain generalisation, where the unseen domains do not share label space with the seen ones, and the goal is to train a feature representation that is useful off-the-shelf for novel data and novel categories.
Episodic Training for Domain Generalization
TLDR
Using the Visual Decathlon benchmark, it is demonstrated that the episodic-DG training improves the performance of such a general purpose feature extractor by explicitly training a feature for robustness to novel problems, showing that DG training can benefit standard practice in computer vision.
Learning to Generalize: Meta-Learning for Domain Generalization
TLDR
A novel meta-learning method for domain generalization that trains models with good generalization ability to novel domains and achieves state-of-the-art results on a recent cross-domain image classification benchmark, as well as demonstrating its potential on two classic reinforcement learning tasks.
Generalizing Across Domains via Cross-Gradient Training
TLDR
Empirical evaluation on three different applications establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbation methods, and that (2) data augmentation is a more stable and accurate method than domain adversarial training.
MetaReg: Towards Domain Generalization using Meta-Regularization
TLDR
Experimental validations on computer vision and natural language datasets indicate that encoding the notion of domain generalization in a novel regularization function, within a learning-to-learn (meta-learning) framework, can learn regularizers that achieve good cross-domain generalization.
Domain Generalization with Adversarial Feature Learning
TLDR
This paper presents a novel framework based on adversarial autoencoders to learn a generalized latent feature representation across domains for domain generalization, and proposed an algorithm to jointly train different components of the proposed framework.
Generalizing to Unseen Domains via Adversarial Data Augmentation
TLDR
This work proposes an iterative procedure that augments the dataset with examples from a fictitious target domain that is "hard" under the current model, and shows that the method is an adaptive data augmentation method where the authors append adversarial examples at each iteration.
Domain Generalization for Object Recognition with Multi-task Autoencoders
TLDR
This work proposes a new feature learning algorithm, Multi-Task Autoencoder (MTAE), that provides good generalization performance for cross-domain object recognition and evaluates the performance of the algorithm on benchmark image recognition datasets, where the task is to learn features from multiple datasets and to then predict the image label from unseen datasets.
Deeper, Broader and Artier Domain Generalization
TLDR
This paper builds upon the favorable domain shift-robust properties of deep learning methods, and develops a low-rank parameterized CNN model for end-to-end DG learning that outperforms existing DG alternatives.
Unified Deep Supervised Domain Adaptation and Generalization
TLDR
This work provides a unified framework for addressing the problem of visual supervised domain adaptation and generalization with deep models by reverting to point-wise surrogates of distribution distances and similarities by exploiting the Siamese architecture.
...
...