Self-Balanced Learning for Domain Generalization

@inproceedings{Kim2021SelfBalancedLF,
  title={Self-Balanced Learning for Domain Generalization},
  author={Jin Young Kim and Jiyoung Lee and Jungin Park and Dongbo Min and Kwanghoon Sohn},
  booktitle={2021 IEEE International Conference on Image Processing (ICIP)},
  year={2021},
  pages={779--783}
}
  • Published 31 August 2021
Domain generalization aims to learn a prediction model on multi-domain source data such that the model can generalize to a target domain with unknown statistics. Most existing approaches have been developed under the assumption that the source data is well-balanced in terms of both domain and class. However, real-world training data collected with different composition biases often exhibits severe distribution gaps for domain and class, leading to substantial performance degradation. In this… 
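The domain/class imbalance described in the abstract can be made concrete with a small sampling sketch. The snippet below is not the paper's self-balancing scheme; it is a minimal, generic illustration of group-balanced sampling over a hypothetical `(x, domain, label)` tuple layout, which equalizes draws across (domain, class) groups regardless of their raw frequencies:

```python
import random
from collections import defaultdict

def balanced_sampler(samples, rng=None):
    """Yield samples with equal probability per (domain, class) group.

    `samples` is a list of (x, domain, label) tuples (an illustrative
    layout, not the paper's data format). Samples are grouped by
    (domain, label); each draw picks a group uniformly, then a sample
    uniformly within it, so rare domain/class combinations are not
    under-represented relative to frequent ones.
    """
    rng = rng or random.Random()
    groups = defaultdict(list)
    for s in samples:
        groups[(s[1], s[2])].append(s)  # key by (domain, label)
    keys = list(groups)
    while True:
        key = rng.choice(keys)          # uniform over groups
        yield rng.choice(groups[key])   # uniform within the group
```

Under such a sampler a (domain, class) group with 10 examples is drawn as often as one with 90, which is the kind of rebalancing that the distribution gaps described above motivate.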
2 Citations

Generalizing to Unseen Domains: A Survey on Domain Generalization

TLDR
This paper provides a formal definition of domain generalization, discusses several related fields, and categorizes recent algorithms into three classes, presenting each in detail: data manipulation, representation learning, and learning strategy, each of which contains several popular algorithms.

Pin the Memory: Learning to Generalize Semantic Segmentation

TLDR
The second-order gradient flow of the method is derived, experimental details are described, and an additional ablation study analyzes the memory update.

References

Showing 1–10 of 26 references

Learning to Generalize: Meta-Learning for Domain Generalization

TLDR
A novel meta-learning method for domain generalization that trains models with good generalization ability to novel domains; it achieves state-of-the-art results on a recent cross-domain image classification benchmark, as well as demonstrating its potential on two classic reinforcement learning tasks.

Domain Generalization via Model-Agnostic Learning of Semantic Features

TLDR
This work investigates the challenging problem of domain generalization, i.e., training a model on multi-domain source data such that it can directly generalize to target domains with unknown statistics, and adopts a model-agnostic learning paradigm with gradient-based meta-train and meta-test procedures to expose the optimization to domain shift.

Analysis of Representations for Domain Adaptation

TLDR
The theory illustrates the tradeoffs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model which explicitly minimizes the difference between the source and target domains, while at the same time maximizing the margin of the training set.

Domain Generalization Using a Mixture of Multiple Latent Domains

TLDR
This paper proposes a method that iteratively divides samples into latent domains via clustering and trains a domain-invariant feature extractor shared among the divided latent domains through adversarial learning; it outperforms conventional domain generalization methods, including those that utilize domain labels.

Episodic Training for Domain Generalization

TLDR
Using the Visual Decathlon benchmark, it is demonstrated that episodic DG training improves the performance of such a general-purpose feature extractor by explicitly training features for robustness to novel problems, showing that DG training can benefit standard practice in computer vision.

Learning to Optimize Domain Specific Normalization for Domain Generalization

TLDR
The state-of-the-art accuracy of the algorithm on the standard domain generalization benchmarks is demonstrated, as well as its viability for further tasks such as multi-source domain adaptation and domain generalization in the presence of label noise.

Generalizing to Unseen Domains via Adversarial Data Augmentation

TLDR
This work proposes an iterative procedure that augments the dataset with examples from a fictitious target domain that is "hard" under the current model, and shows that the approach amounts to adaptive data augmentation in which adversarial examples are appended at each iteration.

Domain Generalization with Adversarial Feature Learning

TLDR
This paper presents a novel framework based on adversarial autoencoders that learns a generalized latent feature representation across domains for domain generalization, and proposes an algorithm to jointly train the different components of the framework.

Unified Deep Supervised Domain Adaptation and Generalization

TLDR
This work provides a unified framework for addressing visual supervised domain adaptation and generalization with deep models, reverting to point-wise surrogates of distribution distances and similarities and exploiting a Siamese architecture.

Deeper, Broader and Artier Domain Generalization

TLDR
This paper builds upon the favorable domain shift-robust properties of deep learning methods, and develops a low-rank parameterized CNN model for end-to-end DG learning that outperforms existing DG alternatives.