Corpus ID: 246867139

Learning to Generalize across Domains on Single Test Samples

@article{Xiao2022LearningTG,
  title={Learning to Generalize across Domains on Single Test Samples},
  author={Zehao Xiao and Xiantong Zhen and Ling Shao and Cees G. M. Snoek},
  journal={ArXiv},
  year={2022},
  volume={abs/2202.08045}
}
We strive to learn a model from a set of source domains that generalizes well to unseen target domains. The main challenge in such a domain generalization scenario is the unavailability of any target domain data during training, resulting in the learned model not being explicitly adapted to the unseen target domains. We propose learning to generalize across domains on single test samples. We leverage a meta-learning paradigm to learn our model to acquire the ability of adaptation with single… 
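To make the abstract's idea concrete, here is a minimal toy sketch of meta-learning to adapt on a single unlabeled test sample. Everything in it is an illustrative assumption, not the authors' method: the data, the linear model, the first-order MAML-style update, and the entropy-minimization inner loss are all stand-ins (the paper itself formulates adaptation with a variational objective).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def entropy(W, x):
    """Unsupervised inner loss on a single unlabeled sample."""
    p = softmax(W @ x)
    return float(-(p * np.log(p + 1e-12)).sum())

def cross_entropy(W, x, y):
    """Supervised outer loss, used once the label is available."""
    return float(-np.log(softmax(W @ x)[y] + 1e-12))

def num_grad(f, W, eps=1e-5):
    """Finite-difference gradient; fine for a 2x2 toy model."""
    g = np.zeros_like(W)
    for i in np.ndindex(W.shape):
        W[i] += eps; hi = f(W)
        W[i] -= 2 * eps; lo = f(W)
        W[i] += eps
        g[i] = (hi - lo) / (2 * eps)
    return g

def sample_domain():
    """Toy 2-class problem; each 'domain' shifts the features."""
    shift = rng.normal(scale=0.5, size=2)
    y = int(rng.integers(2))
    x = rng.normal(size=2) + (2 * y - 1) + shift
    return x, y

W = rng.normal(scale=0.1, size=(2, 2))
alpha, beta = 0.5, 0.1  # inner / outer learning rates

for episode in range(200):
    x, y = sample_domain()
    # Inner step: adapt on the single sample without using its label.
    W_adapt = W - alpha * num_grad(lambda w: entropy(w, x), W.copy())
    # Outer step (first-order approximation): evaluate the adapted model
    # with the label and push that gradient into the shared initialization.
    W -= beta * num_grad(lambda w: cross_entropy(w, x, y), W_adapt.copy())

# Test time: one unsupervised step on the lone target-domain sample.
x_t, y_t = sample_domain()
before = entropy(W, x_t)
W_t = W - alpha * num_grad(lambda w: entropy(w, x_t), W.copy())
after = entropy(W_t, x_t)
```

The point of the sketch is only the episode structure: the inner update sees one sample and no label, mirroring what the model will face at test time, while the outer update teaches the initialization to benefit from that single-sample adaptation.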
Hierarchical Variational Memory for Few-shot Learning Across Domains (2022)
TLDR: Proposes learning to weigh prototypes in a data-driven way, which further improves generalization, and demonstrates the effectiveness of hierarchical variational memory in handling both domain shift and few-shot learning.

References

Showing 1-10 of 60 references
Learning to Generalize: Meta-Learning for Domain Generalization
TLDR: A novel meta-learning procedure that trains models with good generalization ability to novel domains; it achieves state-of-the-art results on a recent cross-domain image classification benchmark and demonstrates potential on two classic reinforcement learning tasks.
Learning to Generate Novel Domains for Domain Generalization
TLDR: Employs a data generator to synthesize data from pseudo-novel domains to augment the source domains, outperforming current state-of-the-art DG methods on four benchmark datasets.
Adaptive Methods for Real-World Domain Generalization
TLDR: Proposes a domain-adaptive approach consisting of two steps: (a) first learn a discriminative domain embedding from unsupervised training examples, and (b) use this domain embedding as supplementary information to build a domain-adaptive model that takes both the input and its domain into account when making predictions.
Episodic Training for Domain Generalization
TLDR: Using the Visual Decathlon benchmark, demonstrates that episodic DG training improves the performance of a general-purpose feature extractor by explicitly training features for robustness to novel problems, showing that DG training can benefit standard practice in computer vision.
Generalizing Across Domains via Cross-Gradient Training
TLDR: Empirical evaluation on three different applications establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains than generic instance-perturbation methods, and (2) data augmentation is a more stable and accurate method than domain-adversarial training.
Learning to Learn Single Domain Generalization
TLDR: Proposes a new method named adversarial domain augmentation to solve the out-of-distribution (OOD) generalization problem, leveraging adversarial training to create "fictitious" yet "challenging" populations from which a model can learn to generalize with theoretical guarantees.
Learning to Balance Specificity and Invariance for In and Out of Domain Generalization
TLDR: Introduces Domain-specific Masks for Generalization, a model for improving both in-domain and out-of-domain generalization performance. The masks are encouraged to learn a balance of domain-invariant and domain-specific features, enabling a model that benefits from the predictive power of specialized features while retaining the universal applicability of domain-invariant features.
Learning to Learn with Variational Information Bottleneck for Domain Generalization
TLDR: Introduces a probabilistic meta-learning model for domain generalization in which classifier parameters shared across domains are modeled as distributions, enabling better handling of prediction uncertainty on unseen domains.
Learning to Generalize One Sample at a Time with Self-Supervision
TLDR: Proposes using self-supervised learning to achieve domain generalization and adaptation, treating the learning of regularities from non-annotated data as an auxiliary task and casting the problem within a principled auxiliary learning framework.
Domain Generalization via Model-Agnostic Learning of Semantic Features
TLDR: Investigates the challenging problem of domain generalization, i.e., training a model on multi-domain source data such that it can generalize directly to target domains with unknown statistics, and adopts a model-agnostic learning paradigm with gradient-based meta-train and meta-test procedures to expose the optimization to domain shift.