Corpus ID: 232092505

Domain Generalization via Inference-time Label-Preserving Target Projections

Prashant Pandey, Mrigank Raman, Sumanth Varambally, A. P. Prathosh
Generalization of machine learning models trained on a set of source domains to unseen target domains with different statistics is a challenging problem. While many approaches have been proposed to solve it, they utilize only source data during training and do not take advantage of the fact that a single target example is available at the time of inference. Motivated by this, we propose a method that effectively uses the target sample during inference beyond mere classification. Our…
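The abstract above leaves the projection mechanism unspecified. As a hedged illustration only (the toy data, the linear classifier, and the nearest-neighbour "projection" are all assumptions of this sketch, not the paper's actual model), the inference-time label-preserving idea can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (everything here is illustrative, not the paper's actual model):
# source features, a fixed linear two-class classifier, one target feature.
src_feats = rng.normal(size=(100, 8))
W = rng.normal(size=(8, 2))

def predict(z):
    return int(np.argmax(z @ W))

x_t = rng.normal(size=8)      # the single target example seen at inference
y_hat = predict(x_t)          # label predicted for the raw target feature

# "Label-preserving projection" stand-in: among source features that the
# classifier assigns the same label as x_t, take the nearest one and use it
# in place of the raw target feature.
same = np.array([predict(z) == y_hat for z in src_feats])
cands = src_feats[same]
proj = cands[np.argmin(np.linalg.norm(cands - x_t, axis=1))]
```

The point of the sketch is only that the projected point lies on the source feature manifold while keeping the predicted label fixed; the actual paper learns this projection rather than using a nearest neighbour.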
Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets
This paper shows how bringing recent results on equivariant representation learning (for studying symmetries in neural networks) instantiated on structured spaces together with a simple use of classical results on causal inference provides an effective practical solution.
Domain Generalization in Vision: A Survey
A comprehensive literature review is provided to summarize the developments in DG for computer vision over the past decade, with a thorough review of existing methods and a categorization based on their methodologies and motivations.
Domain Generalization: A Survey
For the first time, a comprehensive literature review in DG is provided, summarizing the developments over the past decade and thoroughly reviewing existing methods and theories.
Generalizing to Unseen Domains: A Survey on Domain Generalization
This paper provides a formal definition of domain generalization, discusses several related fields, and categorizes recent algorithms into three classes, each presented in detail: data manipulation, representation learning, and learning strategy, each of which contains several popular algorithms.
Improving the Generalization of Meta-learning on Unseen Domains via Adversarial Shift
This work proposes a model-agnostic shift layer to learn how to simulate the domain shift and generate pseudo tasks, and develops a new adversarial learning-to-learn mechanism to train it.


Auto-Encoding Variational Bayes
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
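The TLDR above names the core algorithm; the piece that makes the stochastic objective differentiable is the reparameterization trick. A minimal sketch (the numeric values are illustrative, not learned encoder outputs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Instead of sampling z ~ N(mu, sigma^2) directly (which blocks gradients),
# sample eps ~ N(0, I) and compute z deterministically from (mu, log_var, eps).
mu = np.array([0.5, -1.0])        # encoder mean (illustrative values)
log_var = np.array([0.0, 0.2])    # encoder log-variance (illustrative values)

eps = rng.standard_normal(2)      # noise independent of the parameters
z = mu + np.exp(0.5 * log_var) * eps   # differentiable w.r.t. mu and log_var

# KL(q(z|x) || N(0, I)) has a closed form for diagonal Gaussians:
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

Because `z` is a deterministic function of the parameters given `eps`, gradients of a downstream reconstruction loss flow back into `mu` and `log_var`.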
Unbiased Metric Learning: On the Utilization of Multiple Datasets and Web Images for Softening Bias
This work proposes Unbiased Metric Learning (UML), a metric learning approach that learns a set of less biased candidate distance metrics on training examples from multiple biased datasets, based on structural SVM.
Deep Domain-Adversarial Image Generation for Domain Generalisation
This paper proposes a novel DG approach, Deep Domain-Adversarial Image Generation, which augments the source training data with generated unseen-domain data to make the label classifier more robust to unknown domain changes.
Deep Hashing Network for Unsupervised Domain Adaptation
This is the first research effort to exploit the feature-learning capabilities of deep neural networks to learn representative hash codes for the domain adaptation problem; it proposes a novel deep learning framework that exploits labeled source data and unlabeled target data to learn informative hash codes that accurately classify unseen target data.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
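The value function the two models play over can be made concrete on illustrative discriminator scores (the numbers below are assumptions for the sketch, not trained-model outputs):

```python
import numpy as np

# D's scores (probability "real") on a few real and generated samples.
d_real = np.array([0.9, 0.8, 0.95])
d_fake = np.array([0.1, 0.2, 0.05])

# The GAN value function V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))],
# which D maximizes and G minimizes (through its effect on d_fake).
v = np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# At the theoretical equilibrium D(x) = 1/2 everywhere, so V = -log 4.
v_equilibrium = 2.0 * np.log(0.5)
```

A confident discriminator (as assumed above) pushes V well above the equilibrium value; generator training pushes it back down toward -log 4.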
Analysis of Representations for Domain Adaptation
The theory illustrates the tradeoffs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model which explicitly minimizes the difference between the source and target domains, while at the same time maximizing the margin of the training set.
Learning to Learn Single Domain Generalization
A new method named adversarial domain augmentation is proposed to solve the Out-of-Distribution (OOD) generalization problem by leveraging adversarial training to create "fictitious" yet "challenging" populations, from which a model can learn to generalize with theoretical guarantees.
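A hedged sketch of generating a "fictitious" yet "challenging" sample via an adversarial, FGSM-style input perturbation (the linear model, inputs, and step size are illustrative assumptions, not the paper's actual augmentation procedure):

```python
import numpy as np

# Illustrative linear model standing in for the task network.
w = np.array([1.0, -2.0, 0.5])

def loss(x, y):
    return (w @ x - y) ** 2          # squared error

def loss_grad_x(x, y):
    return 2.0 * (w @ x - y) * w     # gradient of the loss w.r.t. the input

x, y, eps = np.array([0.2, 0.1, -0.3]), 1.0, 0.1

# Perturb the input in the direction that increases the loss, yielding a
# harder sample from a simulated distribution shift.
x_adv = x + eps * np.sign(loss_grad_x(x, y))
```

Training on `x_adv` alongside `x` is the generic "adversarial augmentation" idea; the paper learns the shift rather than computing it with a fixed-size gradient step.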
Deeper, Broader and Artier Domain Generalization
This paper builds on the favorable domain-shift robustness of deep learning methods and develops a low-rank parameterized CNN model for end-to-end DG learning that outperforms existing DG alternatives.
Domain Generalization Using a Mixture of Multiple Latent Domains
This paper proposes a method that iteratively divides samples into latent domains via clustering and trains a domain-invariant feature extractor shared among the divided latent domains through adversarial learning; it outperforms conventional domain generalization methods, including those that utilize domain labels.
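The latent-domain discovery step can be sketched with plain k-means over synthetic features (the data, the seeding, and the choice of k are illustrative assumptions, not the paper's clustering procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic features from three unlabeled "latent domains" (offset 5 per dim).
feats = rng.normal(size=(60, 4)) + np.repeat(np.arange(3), 20)[:, None] * 5.0

k = 3
centers = feats[[0, 20, 40]]             # one seed point per true group

for _ in range(10):                      # plain k-means iterations
    dists = np.linalg.norm(feats[:, None] - centers[None], axis=2)
    assign = np.argmin(dists, axis=1)
    centers = np.array([feats[assign == j].mean(axis=0) for j in range(k)])

# Cluster ids act as pseudo domain labels for later adversarial training.
pseudo_domains = assign
```

Each sample's cluster id then plays the role a ground-truth domain label would play in adversarial feature alignment.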
Efficient Domain Generalization via Common-Specific Low-Rank Decomposition
It is shown that CSD either matches or beats state-of-the-art approaches for domain generalization based on domain erasure, domain-perturbed data augmentation, and meta-learning.
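A hedged sketch of a common-specific low-rank weight decomposition in the spirit of CSD (all names, shapes, and the rank below are illustrative assumptions, not the paper's parameterization):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each domain's classifier weight = shared common component + a low-rank
# domain-specific correction built from a small shared basis.
d_feat, n_cls, n_dom, rank = 16, 5, 3, 2

w_common = rng.normal(size=(d_feat, n_cls))      # shared across domains
basis = rng.normal(size=(rank, d_feat, n_cls))   # low-rank specific basis
gamma = rng.normal(size=(n_dom, rank))           # per-domain coefficients

# Weight actually used when training on source domain d:
w_domain = [w_common + np.tensordot(gamma[d], basis, axes=1)
            for d in range(n_dom)]

# At test time on an unseen domain, only the common component is used.
w_test = w_common
```

The design choice the decomposition encodes: domain-specific signal is confined to a rank-limited subspace, so discarding it at test time removes domain-specific nuisance while keeping the shared predictor.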