Corpus ID: 246867139

Learning to Generalize across Domains on Single Test Samples

Zehao Xiao, Xiantong Zhen, Ling Shao, Cees G. M. Snoek
We strive to learn a model from a set of source domains that generalizes well to unseen target domains. The main challenge in this domain generalization scenario is the unavailability of any target-domain data during training, which means the learned model is never explicitly adapted to the unseen target domains. We propose learning to generalize across domains on single test samples. We leverage a meta-learning paradigm to train our model to acquire the ability of adaptation with single… 
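The abstract is truncated, but the core idea it states, adapting a model to each unseen target domain from a single unlabeled test sample, is often realized in the test-time adaptation literature via objectives such as entropy minimization. The sketch below is an illustrative assumption, not the paper's actual meta-learning procedure: a linear classifier whose weights take a few entropy-minimization steps on one unlabeled sample (`adapt_on_single_sample` and the toy model are hypothetical names).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    return float(-np.sum(p * np.log(p + 1e-12)))

def adapt_on_single_sample(W, x, lr=0.1, steps=1):
    """Take a few entropy-minimization steps on one unlabeled test
    sample x and return adapted weights.  For p = softmax(W @ x):
        grad_z H = -p * (log p + H),   grad_W H = outer(grad_z, x).
    """
    W = W.copy()
    for _ in range(steps):
        p = softmax(W @ x)
        H = entropy(p)
        grad_z = -p * (np.log(p + 1e-12) + H)
        W -= lr * np.outer(grad_z, x)
    return W

# Toy check: adapting on the sample sharpens the prediction for it.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4)) * 0.1          # small weights -> near-uniform p
x = rng.normal(size=4)                     # one unlabeled "test sample"
before = entropy(softmax(W @ x))
after = entropy(softmax(adapt_on_single_sample(W, x, lr=0.5, steps=5) @ x))
```

In the paper's setting a meta-learning loop would train the initial weights so that this single-sample adaptation step helps; the sketch only shows the inner adaptation, under the stated assumptions.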
Hierarchical Variational Memory for Few-shot Learning Across Domains
Proposes learning to weigh prototypes in a data-driven way, which further improves generalization performance, and demonstrates the effectiveness of hierarchical variational memory in handling both the domain-shift and few-shot learning problems.
2022
Learning to Generalize: Meta-Learning for Domain Generalization
A novel meta-learning method for domain generalization that trains models with good generalization ability to novel domains, achieving state-of-the-art results on a recent cross-domain image classification benchmark and demonstrating its potential on two classic reinforcement learning tasks.
Learning to Generate Novel Domains for Domain Generalization
This paper employs a data generator to synthesize data from pseudo-novel domains to augment the source domains, and outperforms current state-of-the-art DG methods on four benchmark datasets.
Adaptive Methods for Real-World Domain Generalization
This work proposes a domain-adaptive approach consisting of two steps: (a) learning a discriminative domain embedding from unsupervised training examples, and (b) using this domain embedding as supplementary information to build a domain-adaptive model that takes both the input and its domain into account when making predictions.
Episodic Training for Domain Generalization
Using the Visual Decathlon benchmark, it is demonstrated that the episodic-DG training improves the performance of such a general purpose feature extractor by explicitly training a feature for robustness to novel problems, showing that DG training can benefit standard practice in computer vision.
Generalizing Across Domains via Cross-Gradient Training
Empirical evaluation on three different applications establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains than generic instance-perturbation methods, and (2) data augmentation is a more stable and accurate method than domain adversarial training.
Learning to Learn Single Domain Generalization
A new method named adversarial domain augmentation is proposed to solve the Out-of-Distribution (OOD) generalization problem by leveraging adversarial training to create "fictitious" yet "challenging" populations, from which a model can learn to generalize with theoretical guarantees.
Learning to Learn with Variational Information Bottleneck for Domain Generalization
A probabilistic meta-learning model for domain generalization is introduced, in which classifier parameters shared across domains are modeled as distributions, which enables better handling of prediction uncertainty on unseen domains.
Learning to Generalize One Sample at a Time with Self-Supervision
This paper proposes to use self-supervised learning to achieve domain generalization and adaptation, treating the learning of regularities from non-annotated data as an auxiliary task and casting the problem within a principled auxiliary learning framework.
Domain Generalization via Model-Agnostic Learning of Semantic Features
This work investigates the challenging problem of domain generalization, i.e., training a model on multi-domain source data such that it can directly generalize to target domains with unknown statistics, and adopts a model-agnostic learning paradigm with gradient-based meta-train and meta-test procedures to expose the optimization to domain shift.
Generalizing to Unseen Domains via Adversarial Data Augmentation
This work proposes an iterative procedure that augments the dataset with examples from a fictitious target domain that is "hard" under the current model, showing that the method amounts to adaptive data augmentation in which adversarial examples are appended at each iteration.
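The iterative "hard example" idea shared by the adversarial-augmentation entries above can be sketched concretely. The snippet below is a minimal assumption-laden illustration, not any paper's exact algorithm: gradient ascent on a logistic loss with respect to the input crafts a fictitious example that is harder for a frozen model, which would then be appended to the training set (`adversarial_augment` and the toy model are hypothetical).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logloss(w, x, y):
    """Logistic loss of a linear model w on example (x, y)."""
    p = sigmoid(w @ x)
    return float(-(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12)))

def adversarial_augment(w, x, y, eps=0.5, steps=5):
    """Craft a 'hard' fictitious example by normalized gradient ascent
    on the loss w.r.t. the input, with the model weights w held fixed."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv)
        grad_x = (p - y) * w                # d(logistic loss)/dx
        x_adv += eps * grad_x / (np.linalg.norm(grad_x) + 1e-12)
    return x_adv

w = np.array([1.0, -2.0, 0.5])              # frozen "current model"
x = np.array([0.3, -0.1, 0.2])              # source-domain example, label 1
x_hard = adversarial_augment(w, x, y=1.0)
loss_before = logloss(w, x, 1.0)
loss_after = logloss(w, x_hard, 1.0)
```

In the full procedures described above, such crafted examples are added to the training pool and the model is retrained, alternating generation and training; the sketch shows only one generation step under the stated assumptions.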