Episodic Training for Domain Generalization

@inproceedings{Li2019EpisodicTF,
  title={Episodic Training for Domain Generalization},
  author={Da Li and Jianshu Zhang and Yongxin Yang and Cong Liu and Yi-Zhe Song and Timothy M. Hospedales},
  booktitle={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2019},
  pages={1446--1455}
}
Domain generalization (DG) is the challenging and topical problem of learning models that generalize to novel testing domains with different statistics than a set of known training domains. Key method: we decompose a deep network into feature extractor and classifier components, and then train each component by simulating its interaction with a partner that is badly tuned for the current domain.
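The episodic scheme described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the linear feature extractor and classifier, the toy dimensions, and the loss weighting are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_ce(logits, y):
    """Cross-entropy loss for integer labels y."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].mean()

# Toy setup: 2-class problem, 5-d inputs, 3-d feature space.
W_feat = rng.normal(size=(5, 3))   # feature extractor (trainable)
W_cls = rng.normal(size=(3, 2))    # classifier (trainable)

x = rng.normal(size=(8, 5))        # a batch from one source domain
y = rng.integers(0, 2, size=8)     # class labels

# Regular loss: the extractor paired with its own classifier.
loss_regular = softmax_ce(x @ W_feat @ W_cls, y)

# Episodic loss for the extractor: pair it with a frozen, randomly
# initialized classifier that is "badly tuned" for this domain,
# forcing the features to be robust on their own.
W_cls_random = rng.normal(size=(3, 2))
loss_episodic = softmax_ce(x @ W_feat @ W_cls_random, y)

total_loss = loss_regular + loss_episodic
```

In the paper the "badly tuned" partner for one component is, more generally, a component trained on a different domain; the frozen random classifier above is the simplest instance of that idea.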

Citations

Batch Normalization Embeddings for Deep Domain Generalization
TLDR
This work explicitly trains domain-dependent representations by using ad-hoc batch normalization layers to collect each domain's statistics independently, and proposes to use these statistics to map domains into a shared latent space, where membership in a domain can be measured by a distance function.
Improving Multi-Domain Generalization through Domain Re-labeling
TLDR
MulDEns, a general approach for multi-domain generalization, uses an ERM-based deep ensembling backbone and performs implicit domain re-labeling through a meta-optimization algorithm; it consistently outperforms ERM by significant margins.
Domain Adaptive Ensemble Learning
TLDR
Extensive experiments show that DAEL improves the state-of-the-art on both problems, often by significant margins.
Domain Generalization with MixStyle
TLDR
A novel approach based on probabilistically mixing instance-level feature statistics of training samples across source domains, motivated by the observation that visual domain is closely related to image style; novel domains are thereby synthesized implicitly, improving the generalizability of the trained model.
Deep Domain Generalization with Feature-norm Network
TLDR
This paper introduces an end-to-end feature-norm network (FNN) which is robust to negative transfer, as it does not need to match the feature distribution among the source domains, and introduces a collaborative feature-norm network (CFNN) to further improve the generalization capability of FNN.
Learning to Generate Novel Domains for Domain Generalization
TLDR
This paper employs a data generator to synthesize data from pseudo-novel domains to augment the source domains, and outperforms current state-of-the-art DG methods on four benchmark datasets.
MixStyle Neural Networks for Domain Generalization and Adaptation
TLDR
This work addresses domain generalization with MixStyle, a plug-and-play, parameter-free module that is simply inserted to shallow CNN layers and requires no modification to training objectives, and probabilistically mixes feature statistics between instances.
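The statistic-mixing idea behind MixStyle, as described in the two summaries above, can be sketched as follows. This is a simplified NumPy version for illustration; the actual module operates on CNN feature maps inside a deep network during training, and the Beta(0.1, 0.1) mixing distribution is the choice reported for the method, not something specific to this sketch.

```python
import numpy as np

def mixstyle(x, alpha=0.1, rng=None):
    """Mix channel-wise feature statistics between instances.

    x: feature maps of shape (batch, channels, height, width).
    Returns features whose per-channel mean/std have been
    interpolated with those of a randomly permuted batch.
    """
    rng = rng or np.random.default_rng()
    b = x.shape[0]
    mu = x.mean(axis=(2, 3), keepdims=True)          # (b, c, 1, 1)
    sig = x.std(axis=(2, 3), keepdims=True) + 1e-6
    x_norm = (x - mu) / sig                          # style-removed features

    perm = rng.permutation(b)                        # partner instances
    lam = rng.beta(alpha, alpha, size=(b, 1, 1, 1))  # mixing weights
    mu_mix = lam * mu + (1 - lam) * mu[perm]
    sig_mix = lam * sig + (1 - lam) * sig[perm]
    return x_norm * sig_mix + mu_mix                 # re-style with mixed stats

x = np.random.default_rng(0).normal(size=(4, 3, 8, 8))
out = mixstyle(x, rng=np.random.default_rng(1))
```

Because only the per-channel mean and standard deviation are interpolated, the content of each feature map is preserved while its "style" is perturbed toward that of another instance.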
Robust Domain-Free Domain Generalization with Class-Aware Alignment
TLDR
Domain-Free Domain Generalization (DFDG) is proposed: a model-agnostic method that achieves better generalization performance on the unseen test domain without requiring source domain labels.
Mode-Guided Feature Augmentation for Domain Generalization
TLDR
This paper proposes a simple and efficient DG approach that augments the source domain(s) by hypothesizing a favourable correlation between the source and target domains' major modes of variation; by exploring those modes in the source domain, the authors realize meaningful alterations to the background, appearance, pose and texture of object classes.
A Simple Feature Augmentation for Domain Generalization
TLDR
This work finds that an extremely simple technique of perturbing the feature embedding with Gaussian noise during training leads to a classifier with domain-generalization performance comparable to existing state of the art, and argues that feature augmentation is a more promising direction for DG.
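The perturbation described in the summary above is deliberately simple; a hedged NumPy sketch follows (the noise scale and the exact layer at which noise is injected are design choices, not values taken from the paper).

```python
import numpy as np

def augment_features(feats, sigma=0.1, rng=None):
    """Perturb feature embeddings with additive Gaussian noise.

    Applied only during training; at test time the features
    are passed to the classifier unchanged.
    """
    rng = rng or np.random.default_rng()
    return feats + rng.normal(scale=sigma, size=feats.shape)

feats = np.ones((2, 4))                       # stand-in feature embeddings
noisy = augment_features(feats, sigma=0.1,
                         rng=np.random.default_rng(0))
```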
...

References

SHOWING 1-10 OF 56 REFERENCES
Feature-Critic Networks for Heterogeneous Domain Generalization
TLDR
This work considers a more challenging setting of heterogeneous domain generalisation, where the unseen domains do not share label space with the seen ones, and the goal is to train a feature representation that is useful off-the-shelf for novel data and novel categories.
Best Sources Forward: Domain Generalization through Source-Specific Nets
TLDR
This work designs a deep network with multiple domain-specific classifiers, each associated with a source domain, and introduces a domain-agnostic component supporting the final classifier.
Deeper, Broader and Artier Domain Generalization
TLDR
This paper builds upon the favorable domain shift-robust properties of deep learning methods, and develops a low-rank parameterized CNN model for end-to-end DG learning that outperforms existing DG alternatives.
Domain-Adversarial Training of Neural Networks
TLDR
A new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions, which can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer.
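The gradient reversal layer mentioned in the summary above is the identity in the forward pass and negates (and scales) gradients in the backward pass. A minimal NumPy sketch of that rule follows; real frameworks register this as a custom autograd operation, and the `lam` scaling name here is an illustrative choice.

```python
import numpy as np

class GradientReversal:
    """Identity on the forward pass; backward pass multiplies
    incoming gradients by -lam."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        # The reversed gradient pushes the feature extractor to
        # *confuse* the downstream domain classifier, yielding
        # domain-invariant features.
        return -self.lam * grad_output

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
y = grl.forward(x)
g = grl.backward(np.ones_like(x))
```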
Unsupervised Domain Adaptation by Backpropagation
TLDR
The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.
Learning to Generalize: Meta-Learning for Domain Generalization
TLDR
A novel meta-learning method for domain generalization that trains models with good generalization ability to novel domains; it achieves state-of-the-art results on a recent cross-domain image classification benchmark, as well as demonstrating its potential on two classic reinforcement learning tasks.
DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
TLDR
DeCAF, an open-source implementation of deep convolutional activation features, along with all associated network parameters, are released to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.
Unsupervised Domain Adaptation with Residual Transfer Networks
TLDR
Empirical evidence shows that this new approach to domain adaptation in deep networks, which can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain, outperforms state-of-the-art methods on standard domain adaptation benchmarks.
Domain Separation Networks
TLDR
The novel architecture results in a model that outperforms the state-of-the-art on a range of unsupervised domain adaptation scenarios and additionally produces visualizations of the private and shared representations enabling interpretation of the domain adaptation process.
Generalizing Across Domains via Cross-Gradient Training
TLDR
Empirical evaluation on three different applications establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbations methods, and that (2) data augmentation is a more stable and accurate method than domain adversarial training.
...