Corpus ID: 234336778

A Bit More Bayesian: Domain-Invariant Learning with Uncertainty

Zehao Xiao, Jiayi Shen, Xiantong Zhen, Ling Shao, Cees Snoek
Domain generalization is challenging due to domain shift and the uncertainty caused by the inaccessibility of target-domain data. In this paper, we address both challenges with a probabilistic framework based on variational Bayesian inference, incorporating uncertainty into the neural network weights. We couple domain invariance with variational Bayesian inference in a probabilistic formulation, which enables us to explore domain-invariant learning in a principled way. Specifically, we derive…
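The abstract's core mechanism, placing a distribution over network weights and sampling them stochastically with a KL complexity term, can be illustrated with a minimal variational Bayesian layer. This is a generic sketch of weight-level variational inference, not the paper's actual architecture; all function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bayesian_linear(x, mu, rho):
    """Sample weights W ~ N(mu, sigma^2) via the reparameterization
    trick and apply them to input x. sigma = softplus(rho) keeps the
    standard deviation positive and unconstrained during optimization."""
    sigma = np.log1p(np.exp(rho))          # softplus
    eps = rng.standard_normal(mu.shape)    # noise, independent of parameters
    w = mu + sigma * eps                   # reparameterized weight sample
    return x @ w, sigma

def kl_to_standard_normal(mu, sigma):
    """KL(N(mu, sigma^2) || N(0, 1)), summed over all weights.
    This is the complexity term added to the training objective."""
    return 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma))

# Toy usage: a stochastic 4 -> 3 layer on a batch of 2 inputs.
x = rng.standard_normal((2, 4))
mu = np.zeros((4, 3))
rho = np.full((4, 3), -3.0)               # small initial sigma
y, sigma = sample_bayesian_linear(x, mu, rho)
kl = kl_to_standard_normal(mu, sigma)
```

Each forward pass draws a fresh weight sample, so repeated predictions on the same input expose the model's weight uncertainty.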



Learning to Learn with Variational Information Bottleneck for Domain Generalization
A probabilistic meta-learning model for domain generalization is introduced, in which classifier parameters shared across domains are modeled as distributions, which enables better handling of prediction uncertainty on unseen domains.
Domain Generalization via Entropy Regularization
An entropy regularization term is proposed that measures the dependency between the learned features and the class labels; it yields classifiers with better generalization capability and is guaranteed to learn conditional-invariant features across all source domains.
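The regularizer summarized above is defined on the dependency between features and labels; as a generic illustration (not the paper's exact term), a common building block for such regularizers is the Shannon entropy of the classifier's predictive distribution. Function names here are ours.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def prediction_entropy(logits):
    """Mean Shannon entropy H(p) of the predictive distributions.
    Adding or subtracting this term in the loss controls how
    concentrated the label information in the features is."""
    p = softmax(logits)
    return -np.mean(np.sum(p * np.log(p + 1e-12), axis=1))

# Uniform logits over 3 classes give the maximum entropy log(3).
ent = prediction_entropy(np.zeros((4, 3)))
```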
Domain Generalization via Model-Agnostic Learning of Semantic Features
This work investigates the challenging problem of domain generalization, i.e., training a model on multi-domain source data such that it can directly generalize to target domains with unknown statistics, and adopts a model-agnostic learning paradigm with gradient-based meta-train and meta-test procedures to expose the optimization to domain shift. Expand
Learning Priors for Invariance
The proposed method is akin to posterior variational inference: it chooses a parametric family and optimizes to find the member of the family that makes the model robust to a given transformation, and it demonstrates the method's utility for dropout and rotation transformations.
Auto-Encoding Variational Bayes
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
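The two ingredients of this algorithm that later variational methods build on are the reparameterization trick and the analytic KL between a diagonal Gaussian posterior and a standard normal prior. A minimal sketch of both (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def reparameterize(mu, log_var):
    """Draw z ~ N(mu, exp(log_var)) as z = mu + sigma * eps, so the
    sample is a differentiable function of (mu, log_var) and gradients
    can flow through the sampling step."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def gaussian_kl(mu, log_var):
    """Analytic KL(q(z|x) || N(0, I)) per example, summed over
    latent dimensions."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=1)

# Toy encoder outputs for a batch of 5 inputs with a 2-D latent space.
mu = rng.standard_normal((5, 2))
log_var = 0.1 * rng.standard_normal((5, 2))
z = reparameterize(mu, log_var)
kl = gaussian_kl(mu, log_var)
```

The ELBO is then the reconstruction log-likelihood at `z` minus `kl`, averaged over the batch.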
Domain Generalization via Invariant Feature Representation
Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains whilst preserving the functional relationship between input and output variables, is proposed.
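DICA itself solves a kernel eigenvalue problem, but the cross-domain dissimilarity it minimizes can be illustrated with a simpler related quantity: the squared maximum mean discrepancy (MMD), i.e. the distance between kernel mean embeddings of two domains. This is a hedged sketch of MMD, not DICA's actual objective; names are ours.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian (RBF) kernel matrix between sample sets a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    """Squared MMD: distance between the kernel mean embeddings of
    the empirical distributions of x and y. Zero when x and y are
    identical samples, large when the domains differ."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

# Toy usage: two "domains", one shifted relative to the other.
rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, (20, 2))   # samples from domain A
y = rng.normal(3.0, 1.0, (20, 2))   # shifted samples from domain B
same = mmd2(x, x)
diff = mmd2(x, y)
```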
Learning to Optimize Domain Specific Normalization for Domain Generalization
The state-of-the-art accuracy of the algorithm on the standard domain generalization benchmarks is demonstrated, as well as its viability for further tasks such as multi-source domain adaptation and domain generalization in the presence of label noise.
Unified Deep Supervised Domain Adaptation and Generalization
This work provides a unified framework for addressing the problem of visual supervised domain adaptation and generalization with deep models, reverting to point-wise surrogates of distribution distances and similarities by exploiting a Siamese architecture.
Learning to Generalize: Meta-Learning for Domain Generalization
A novel meta-learning procedure that trains models with good generalization ability to novel domains for domain generalization; it achieves state-of-the-art results on a recent cross-domain image classification benchmark, as well as demonstrating its potential on two classic reinforcement learning tasks.
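The meta-train/meta-test idea behind this procedure can be sketched with a first-order approximation: adapt parameters on held-in (meta-train) domains, then also follow the gradient of the held-out (meta-test) loss evaluated at the adapted parameters. The full method backpropagates through the inner step (second-order); this sketch drops that term, and all names and the toy losses are ours.

```python
import numpy as np

def meta_step(theta, grad_train, grad_test, alpha=0.1, beta=0.1, gamma=1.0):
    """One first-order meta-learning step in the meta-train/meta-test
    spirit: take an inner gradient step on the meta-train loss, then
    update theta with the combined meta-train and (adapted) meta-test
    gradients. Second-order terms are ignored."""
    theta_adapted = theta - alpha * grad_train(theta)            # inner step
    g = grad_train(theta) + gamma * grad_test(theta_adapted)     # combined grad
    return theta - beta * g                                      # outer step

# Toy example: scalar quadratic losses with different minima per "domain".
grad_train = lambda th: 2.0 * (th - 1.0)   # meta-train loss (th - 1)^2
grad_test = lambda th: 2.0 * (th + 1.0)    # meta-test loss  (th + 1)^2
theta = 5.0
for _ in range(200):
    theta = meta_step(theta, grad_train, grad_test)
```

The iterate settles between the two domain optima rather than overfitting to the meta-train minimum, which is the behavior the meta-test term is meant to encourage.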
Domain Generalization with Adversarial Feature Learning
This paper presents a novel framework based on adversarial autoencoders to learn a generalized latent feature representation across domains for domain generalization, and proposes an algorithm to jointly train the different components of the framework.