Discovery of New Multi-Level Features for Domain Generalization via Knowledge Corruption

@inproceedings{frikha2022discovery,
  title={Discovery of New Multi-Level Features for Domain Generalization via Knowledge Corruption},
  author={A. Frikha and Denis Krompass and Volker Tresp},
  booktitle={2022 26th International Conference on Pattern Recognition (ICPR)},
  year={2022}
}
Machine learning models that can generalize to unseen domains are essential when applied in real-world scenarios involving strong domain shifts. We address the challenging domain generalization (DG) problem, where a model trained on a set of source domains is expected to generalize well to unseen domains without any exposure to their data. The main challenge of DG is that the features learned from the source domains are not necessarily present in the unseen target domains, leading to…

Towards Data-Free Domain Generalization

DKAN is proposed, an approach that extracts and fuses domain-specific knowledge from the available teacher models into a student model robust to domain shift; it achieves the first state-of-the-art results in data-free domain generalization (DFDG), outperforming data-free knowledge distillation and ensemble baselines.

FRAug: Tackling Federated Learning with Non-IID Features via Representation Augmentation

This work addresses the recently proposed feature shift problem, where the clients have different feature distributions while the label distribution is the same, and proposes Federated Representation Augmentation (FRAug) to tackle this practical and challenging problem.

Learning to Generate Novel Domains for Domain Generalization

This paper employs a data generator to synthesize data from pseudo-novel domains to augment the source domains, and outperforms current state-of-the-art DG methods on four benchmark datasets.

Domain Generalization via Model-Agnostic Learning of Semantic Features

This work investigates the challenging problem of domain generalization, i.e., training a model on multi-domain source data such that it can directly generalize to target domains with unknown statistics, and adopts a model-agnostic learning paradigm with gradient-based meta-train and meta-test procedures to expose the optimization to domain shift.

Domain Generalization by Marginal Transfer Learning

This work lays the learning theoretic foundations of domain generalization, building on the earlier conference paper where the problem of DG was introduced, and presents two formal models of data generation, corresponding notions of risk, and distribution-free generalization error analysis.

Generalizing to Unseen Domains via Adversarial Data Augmentation

This work proposes an iterative procedure that augments the dataset with examples from a fictitious target domain that is "hard" under the current model, yielding an adaptive data augmentation scheme in which adversarial examples are appended at each iteration.
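The iterative "hard example" idea above can be sketched in a few lines (a minimal NumPy illustration assuming a logistic-regression model; `adversarial_augment`, the step size `eta`, and the step count are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def adversarial_augment(X, y, w, eta=0.1, steps=5):
    """Generate 'hard' fictitious-domain examples by gradient ascent on
    the per-example logistic loss w.r.t. the inputs, then append them
    to the original dataset (hedged sketch, not the paper's method)."""
    X_adv = X.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X_adv @ w))       # model predictions
        grad_x = (p - y)[:, None] * w[None, :]     # d(loss)/d(input)
        X_adv += eta * grad_x                      # ascend: make examples harder
    # augmented training set = original examples plus their hard variants
    return np.vstack([X, X_adv]), np.concatenate([y, y])
```

In the full procedure, the model would then be retrained on the augmented set and the generate-append-train loop repeated, which is what makes the augmentation adaptive.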

In Search of Lost Domain Generalization

This paper implements DomainBed, a testbed for domain generalization including seven multi-domain datasets, nine baseline algorithms, and three model selection criteria, and finds that, when carefully implemented, empirical risk minimization shows state-of-the-art performance across all datasets.

Generalizing Across Domains via Cross-Gradient Training

Empirical evaluation on three different applications establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains than generic instance-perturbation methods, and (2) data augmentation is a more stable and accurate method than domain adversarial training.
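Domain-guided perturbation can be illustrated as follows (a hedged NumPy sketch in the spirit of cross-gradient training: the input is stepped along the gradient of a *domain* classifier's loss so the label classifier trains on inputs that resemble other domains; the logistic domain classifier `w_dom` and the function name are illustrative assumptions):

```python
import numpy as np

def domain_guided_perturbation(x, w_dom, y_dom, eps=0.5):
    """Perturb input x along the gradient of the domain classifier's
    logistic loss w.r.t. x, shifting it toward another domain (sketch)."""
    p = 1.0 / (1.0 + np.exp(-x @ w_dom))   # P(domain = 1 | x)
    grad_x = (p - y_dom) * w_dom           # d(domain loss)/dx
    return x + eps * grad_x                # step away from the true domain
```

The label classifier would then be trained on both `x` and its perturbed variant, using the original label, so that label predictions become less sensitive to domain-specific directions in input space.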

A Survey of Unsupervised Deep Domain Adaptation

This survey compares single-source and typically homogeneous unsupervised deep domain adaptation approaches, which combine the powerful hierarchical representations learned by deep networks with domain adaptation to reduce reliance on potentially costly target-data labels.

SAND-mask: An Enhanced Gradient Masking Strategy for the Discovery of Invariances in Domain Generalization

A masking strategy is proposed that determines a continuous weight for each edge of the network based on the agreement of the gradients flowing through it, thereby controlling the amount of update the edge receives at each optimization step; the method significantly improves the state-of-the-art accuracy on the Colored MNIST dataset.
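The gradient-agreement masking idea can be sketched as follows (a simplified NumPy illustration, not the exact SAND-mask formula; the sign-agreement score, the sharpness `k`, and the threshold `tau` are illustrative assumptions):

```python
import numpy as np

def agreement_mask(domain_grads, k=10.0, tau=0.5):
    """Continuous per-parameter mask from cross-domain gradient agreement.
    domain_grads: array of shape (n_domains, n_params). Parameters whose
    gradients point the same way across domains get mask ~1; parameters
    with conflicting gradients get mask ~0 (hedged sketch)."""
    signs = np.sign(domain_grads)
    agreement = np.abs(signs.mean(axis=0))   # in [0, 1]
    mask = np.tanh(k * (agreement - tau))    # sharp but continuous gate
    return np.clip(mask, 0.0, 1.0)

def masked_update(domain_grads, lr=0.01):
    """Average gradient, gated by the agreement mask."""
    g = domain_grads.mean(axis=0)
    return -lr * agreement_mask(domain_grads) * g
```

Gating updates this way suppresses directions that help only a subset of source domains, nudging optimization toward invariances shared by all of them.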

Domain Generalization: A Survey

A comprehensive literature review in DG is provided to summarize the developments over the past decade and cover the background by formally defining DG and relating it to other relevant fields like domain adaptation and transfer learning.

Generalizing to unseen domains via distribution matching

This work focuses on domain generalization: a formalization where the data generating process at test time may yield samples from never-before-seen domains (distributions), and relies on a simple lemma to derive a generalization bound for this setting.