Corpus ID: 219687652

Domain Generalization using Causal Matching

@article{Mahajan2021DomainGU,
  title={Domain Generalization using Causal Matching},
  author={Divyat Mahajan and Shruti Tople and Amit Sharma},
  journal={ArXiv},
  year={2021},
  volume={abs/2006.07500}
}
Learning invariant representations has been proposed as a key technique for addressing the domain generalization problem. However, the question of identifying the right conditions for invariance remains unanswered. In this work, we propose a causal interpretation of domain generalization that defines domains as interventions under a data-generating process. Based on a general causal model for data from multiple domains, we show that prior methods for learning an invariant representation…
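
As an illustration of the matching idea behind the paper, here is a minimal sketch of a class-conditional matching regularizer, assuming matched cross-domain pairs have already been constructed; the pair-construction step and the helper names below are hypothetical, not the paper's exact formulation.

import torch.nn.functional as F

def matching_penalty(feats_d1, feats_d2):
    # Mean squared L2 distance between representations of matched
    # cross-domain pairs (row i of each tensor is one pair).
    # How pairs are chosen (e.g., by class label or learned matches)
    # is assumed to happen upstream.
    return ((feats_d1 - feats_d2) ** 2).sum(dim=1).mean()

def objective(logits, labels, feats_d1, feats_d2, lam=0.1):
    # Standard classification loss plus the matching regularizer;
    # `lam` is an assumed trade-off weight, not a value from the paper.
    return F.cross_entropy(logits, labels) + lam * matching_penalty(feats_d1, feats_d2)

Citations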
Hierarchical Variational Auto-Encoding for Unsupervised Domain Generalization
TLDR: A new generative model is proposed that solves domain generalization problems in an interpretable manner without requiring domain labels during training, and it learns representations that disentangle domain-specific information from class-label-specific information even in complex settings where domain structure is not observed during training.
Causal-based Time Series Domain Generalization for Vehicle Intention Prediction
  • Yeping Hu, Xiaogang Jia, Masayoshi Tomizuka, W. Zhan
  • Computer Science, Mathematics
  • 2021
Accurately predicting possible behaviors of traffic participants is an essential capability for autonomous vehicles. Since autonomous vehicles need to navigate in dynamically changing environments, …
Does Learning Stable Features Provide Privacy Benefits for Machine Learning Models?
Privacy attacks such as membership and attribute inference are a serious concern when using machine learning models, and more so when these models are used over data distributions different than …
Generalizing to Unseen Domains: A Survey on Domain Generalization
TLDR: This paper provides a formal definition of domain generalization, discusses several related fields, and categorizes recent algorithms into three classes (data manipulation, representation learning, and learning strategy), presenting each in detail.
Predictive Modeling in the Presence of Nuisance-Induced Spurious Correlations
TLDR: To build predictive models that perform well regardless of the nuisance-label relationship, Nuisance-Randomized Distillation (NURD) is developed, and it is proved that representations in the identified set always perform better than chance, while representations outside of it may not.
A Framework for Self-Supervised Federated Domain Adaptation
TLDR: A multi-domain model generalization balance (MDMGB) is proposed to aggregate the models from multiple source domains in each round of communication to solve the distributed multi-source domain adaptation problem.
Empirical or Invariant Risk Minimization? A Sample Complexity Perspective
Recently, invariant risk minimization (IRM) was proposed as a promising solution to address out-of-distribution (OOD) generalization. However, it is unclear when IRM should be preferred over the …
Adaptive Methods for Real-World Domain Generalization
TLDR: This work proposes a domain-adaptive approach consisting of two steps: (a) first learn a discriminative domain embedding from unsupervised training examples, and (b) use this domain embedding as supplementary information to build a domain-adaptive model that takes both the input and its domain into account while making predictions.
An Information-theoretic Approach to Distribution Shifts
TLDR: The problem of data shift is described from a novel information-theoretic perspective by identifying and describing the different sources of error and comparing some of the most promising objectives explored in the recent domain generalization and fair classification literature.
COLUMBUS: Automated Discovery of New Multi-Level Features for Domain Generalization via Knowledge Corruption
TLDR: This work proposes COLUMBUS, a method that enforces new feature discovery via targeted corruption of the most relevant input and multi-level representations of the data, and achieves new state-of-the-art results by outperforming 18 DG algorithms on multiple DG benchmark datasets in the DomainBed framework.

References

Showing 1-10 of 67 references
Deep CORAL: Correlation Alignment for Deep Domain Adaptation
TLDR: This paper extends CORAL to learn a nonlinear transformation that aligns correlations of layer activations in deep neural networks (Deep CORAL), and shows state-of-the-art performance on standard benchmark datasets.
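
For context, the correlation-alignment loss described here is the squared Frobenius distance between source and target feature covariances; a minimal sketch follows, where the tensor shapes and the batch-covariance helper are assumptions of this sketch.

import torch

def coral_loss(source, target):
    # Deep CORAL-style loss: squared Frobenius distance between the
    # feature covariance matrices of a source batch (n_s, d) and a
    # target batch (n_t, d), scaled by 1 / (4 d^2) as in the paper.
    def covariance(x):
        x = x - x.mean(dim=0, keepdim=True)
        return x.t() @ x / (x.size(0) - 1)
    d = source.size(1)
    diff = covariance(source) - covariance(target)
    return (diff ** 2).sum() / (4 * d * d)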
Conditional variance penalties and domain shift robustness
TLDR: Using a causal framework, the proposed conditional variance regularization (CoRe) is shown to protect asymptotically against shifts in the distribution of the style variables, and it substantially improves predictive accuracy in settings where domain changes occur in terms of image quality, brightness, and color.
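
A minimal sketch of a conditional-variance penalty in this spirit, assuming examples arrive grouped by a shared (label, identity) indicator; the `group_ids` tensor and the loop structure are assumptions of this sketch, not the paper's exact estimator.

import torch

def conditional_variance_penalty(losses, group_ids):
    # Sum of per-group variances of the per-example loss, where each
    # group contains examples sharing the same label and identity
    # (e.g., the same object photographed under different styles).
    penalty = losses.new_zeros(())
    for g in group_ids.unique():
        group = losses[group_ids == g]
        if group.numel() > 1:  # variance needs at least two examples
            penalty = penalty + group.var()
    return penalty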
In Search of Lost Domain Generalization
TLDR: This paper implements DomainBed, a testbed for domain generalization including seven multi-domain datasets, nine baseline algorithms, and three model selection criteria, and finds that, when carefully implemented, empirical risk minimization shows state-of-the-art performance across all datasets.
Efficient Domain Generalization via Common-Specific Low-Rank Decomposition
TLDR: It is shown that CSD either matches or beats state-of-the-art approaches for domain generalization based on domain erasure, domain-perturbed data augmentation, and meta-learning.
The Pitfalls of Simplicity Bias in Neural Networks
TLDR: It is demonstrated that common approaches for improving generalization and robustness (ensembles and adversarial training) do not mitigate simplicity bias (SB) and its shortcomings, and a collection of piecewise-linear and image-based datasets is introduced that naturally incorporates a precise notion of simplicity and captures the subtleties of neural networks trained on real datasets.
Domain Generalization via Model-Agnostic Learning of Semantic Features
TLDR: This work investigates the challenging problem of domain generalization, i.e., training a model on multi-domain source data such that it can directly generalize to target domains with unknown statistics, and adopts a model-agnostic learning paradigm with gradient-based meta-train and meta-test procedures to expose the optimization to domain shift.
Invariant Risk Minimization
TLDR: This work introduces Invariant Risk Minimization (IRM), a learning paradigm to estimate invariant correlations across multiple training distributions, and shows how the invariances learned by IRM relate to the causal structures governing the data and enable out-of-distribution generalization.
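
As a concrete illustration, the widely used IRMv1 relaxation penalizes the gradient of each environment's risk with respect to a fixed scalar classifier; a minimal sketch follows, where the batch tensors and the per-environment call pattern are assumptions of this sketch.

import torch
import torch.nn.functional as F

def irmv1_penalty(logits, labels):
    # Squared gradient norm of the environment risk with respect to a
    # frozen scalar "dummy" classifier w = 1.0; summed over training
    # environments, this term is added to the average risk with a weight.
    w = torch.tensor(1.0, requires_grad=True)
    loss = F.cross_entropy(logits * w, labels)
    grad = torch.autograd.grad(loss, [w], create_graph=True)[0]
    return grad ** 2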
Generalizing Across Domains via Cross-Gradient Training
TLDR: Empirical evaluation on three different applications establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains than generic instance-perturbation methods, and that (2) data augmentation is a more stable and accurate method than domain adversarial training.
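
A minimal sketch of such a domain-guided perturbation, assuming an auxiliary domain classifier is available; the function names, the step size `eps`, and the single-step form are assumptions of this sketch.

import torch
import torch.nn.functional as F

def domain_guided_perturbation(x, domain_classifier, domain_labels, eps=0.5):
    # Perturb the input along the gradient of the *domain* loss, creating
    # an augmented example that resembles a nearby domain while keeping
    # its class label unchanged.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(domain_classifier(x), domain_labels)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad).detach()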
MetaReg: Towards Domain Generalization using Meta-Regularization
TLDR: Experimental validations on computer vision and natural language datasets indicate that encoding the notion of domain generalization in a novel regularization function, learned within a learning-to-learn (meta-learning) framework, yields regularizers that achieve good cross-domain generalization.
Unified Deep Supervised Domain Adaptation and Generalization
TLDR: This work provides a unified framework for addressing the problem of visual supervised domain adaptation and generalization with deep models by reverting to point-wise surrogates of distribution distances and similarities, exploiting a Siamese architecture.