Tackling Long-Tailed Category Distribution Under Domain Shifts

@inproceedings{Gu2022TacklingLC,
  title={Tackling Long-Tailed Category Distribution Under Domain Shifts},
  author={Xiao Gu and Yao Guo and Zeju Li and Jianing Qiu and Qi Dou and Yuxuan Liu and Benny P. L. Lo and Guang-Zhong Yang},
  booktitle={European Conference on Computer Vision},
  year={2022}
}
Machine learning models fail to perform well on real-world applications when 1) the category distribution P(Y) of the training dataset suffers from long-tailed distribution and 2) the test data is drawn from different conditional distributions P(X|Y). Existing approaches cannot handle the scenario where both issues exist, which however is common for real-world applications. In this study, we took a step forward and looked into the problem of long-tailed classification under domain…
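To make the first half of the problem setting concrete, here is a minimal sketch (our illustration, not the authors' protocol) of a long-tailed label distribution P(Y), using the common convention of exponentially decaying class counts parameterized by an imbalance ratio between the head and tail classes:

```python
# Hypothetical illustration of a long-tailed P(Y): per-class sample
# counts decay exponentially from the head class to the tail class.
def long_tailed_counts(n_classes=10, n_head=1000, imbalance_ratio=100):
    """Per-class counts with exponential decay; counts[0] is the head
    class and counts[-1] is the tail class, n_head / imbalance_ratio."""
    decay = imbalance_ratio ** (-1.0 / (n_classes - 1))
    return [round(n_head * decay ** c) for c in range(n_classes)]

counts = long_tailed_counts()
print(counts[0], counts[-1])  # head class has 100x the tail class
```

The second half of the setting, a shift in P(X|Y) between training and test, is what makes plain re-balancing on such counts insufficient.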


Revisiting Self-Supervised Contrastive Learning for Facial Expression Recognition

This paper revisits the use of self-supervised contrastive learning and explores three core strategies to enforce expression-specific representations and to minimize the interference from other facial attributes, such as identity and face styling.

References

Showing 10 of 45 references.

MetaSAug: Meta Semantic Augmentation for Long-Tailed Visual Recognition

This paper addresses the issue of imbalance in real-world training data by augmenting minority classes with a recently proposed implicit semantic data augmentation (ISDA) algorithm, which produces diversified augmented samples by translating deep features along many semantically meaningful directions.
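A minimal sketch of the ISDA idea as summarized above (assumed interface, not the authors' code): augment a deep feature by adding Gaussian noise drawn from a class-conditional covariance, i.e., translate the feature along semantically meaningful directions for that class.

```python
import numpy as np

rng = np.random.default_rng(0)

def isda_augment(feature, class_cov, strength=0.5, n_aug=4, rng=rng):
    """Sample augmented features f' = f + eps, eps ~ N(0, strength * cov_c),
    where cov_c is the (estimated) covariance of class c's deep features."""
    return rng.multivariate_normal(feature, strength * class_cov, size=n_aug)

dim = 8
f = rng.normal(size=dim)         # one deep feature of a minority-class sample
cov = np.eye(dim) * 0.1          # stand-in for the class-conditional covariance
aug = isda_augment(f, cov)
print(aug.shape)                 # (4, 8): four semantically perturbed copies
```

In the actual algorithm the covariance is estimated online from features of each class, so the noise directions reflect real intra-class variation rather than an isotropic stand-in.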

Decoupling Representation and Classifier for Long-Tailed Recognition

It is shown that it is possible to outperform carefully designed losses, sampling strategies, and even complex modules with memory, by using a straightforward approach that decouples representation and classification.
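The decoupled recipe summarized above is two-stage: stage 1 learns the representation on the natural (instance-balanced) data stream; stage 2 freezes the backbone and re-trains only the classifier with class-balanced sampling. A minimal sketch (our simplification) of the stage-2 sampler, which draws every class with equal probability regardless of its frequency:

```python
import numpy as np
from collections import Counter

def class_balanced_probs(labels):
    """Per-sample sampling probabilities that equalize class frequency:
    each sample's weight is 1/count(its class), then normalized."""
    counts = Counter(labels)
    weights = np.array([1.0 / counts[y] for y in labels])
    return weights / weights.sum()

labels = [0] * 90 + [1] * 9 + [2] * 1        # long-tailed toy label list
p = class_balanced_probs(labels)
# Each class now receives exactly 1/3 of the total sampling mass.
print(p[:90].sum(), p[90:99].sum(), p[99:].sum())
```

Sampling stage-2 minibatches with these probabilities corrects the classifier's head-class bias without disturbing the already-learned features.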

From generalized zero-shot learning to long-tail with class descriptors

Dragon, a late-fusion architecture for long-tail learning with class descriptors, learns to correct the bias towards head classes on a sample-by-sample basis and fuses information from class descriptions to improve tail-class accuracy.

Rethinking Class-Balanced Methods for Long-Tailed Visual Recognition From a Domain Adaptation Perspective

This work connects existing class-balanced methods for long-tailed classification to target shift to reveal that these methods implicitly assume that the training data and test data share the same class-conditioned distribution, which does not hold in general and especially for the tail classes.
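Under the target-shift reading sketched above, class-balanced re-weighting amounts to scaling each class by the ratio of an assumed test prior to the empirical training prior. A minimal illustration (our sketch, with a uniform test prior as the assumption):

```python
import numpy as np

def target_shift_weights(train_counts, test_prior=None):
    """Per-class importance weights w_c = P_test(c) / P_train(c).
    With a uniform test prior this recovers inverse-frequency weighting
    up to a constant factor."""
    train_prior = np.asarray(train_counts, dtype=float)
    train_prior /= train_prior.sum()
    if test_prior is None:                      # assume a balanced test set
        test_prior = np.full(len(train_counts), 1.0 / len(train_counts))
    return test_prior / train_prior

w = target_shift_weights([900, 90, 10])         # head, mid, tail counts
print(w)                                        # tail gets the largest weight
```

The paper's point is that these weights are only valid when P(X|Y) is shared between domains; when it is not, the correction applied to tail classes is computed under a false premise.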

Zero-Shot Domain Generalization

This work proposes a simple strategy which effectively exploits semantic information of classes, to adapt existing DG methods to meet the demands of Zero-Shot Domain Generalization, and evaluates the proposed methods on the CIFAR-10, CIFAR-100, F-MNIST and PACS datasets.

Open Domain Generalization with Domain-Augmented Meta-Learning

Experimental results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods for unseen domain recognition.

Domain Generalization via Model-Agnostic Learning of Semantic Features

This work investigates the challenging problem of domain generalization, i.e., training a model on multi-domain source data such that it can directly generalize to target domains with unknown statistics, and adopts a model-agnostic learning paradigm with gradient-based meta-train and meta-test procedures to expose the optimization to domain shift.
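The gradient-based meta-train/meta-test episode described above can be sketched on a toy 1-D least-squares model (a hypothetical simplification, not the paper's implementation): take a virtual gradient step on a meta-train domain, then add the meta-test domain's loss evaluated at the adapted parameters, so the update is penalized if it does not transfer.

```python
import numpy as np

def loss(w, x, y):
    return np.mean((w * x - y) ** 2)

def grad(w, x, y):
    return np.mean(2 * (w * x - y) * x)

def episode(w, train_dom, test_dom, inner_lr=0.1):
    """One meta-learning episode: inner gradient step on the meta-train
    domain; outer objective = train loss + test loss at adapted weights."""
    w_adapted = w - inner_lr * grad(w, *train_dom)
    return loss(w, *train_dom) + loss(w_adapted, *test_dom)

# Two toy 'domains' sharing the same underlying slope but shifted labels.
x = np.linspace(0, 1, 20)
dom_a = (x, 2 * x + 0.1)      # domain A
dom_b = (x, 2 * x - 0.1)      # domain B, held out as meta-test
print(episode(1.0, dom_a, dom_b))
```

Minimizing the episodic objective over many random train/test domain splits exposes the optimization to simulated domain shift, which is the core of the model-agnostic scheme.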

Disentangling Label Distribution for Long-tailed Visual Recognition

A novel method, the LAbel distribution DisEntangling (LADE) loss, based on the optimal bound of the Donsker-Varadhan representation, achieves state-of-the-art performance on benchmark datasets and outperforms existing methods under various shifted target label distributions.
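For context, the Donsker-Varadhan representation that the LADE loss builds on expresses the KL divergence as a supremum over test functions T:

```latex
\mathrm{KL}(P \,\|\, Q) \;=\; \sup_{T}\; \mathbb{E}_{x \sim P}\!\left[T(x)\right] \;-\; \log \mathbb{E}_{x \sim Q}\!\left[e^{T(x)}\right]
```

LADE exploits the bound attained at the optimal T to disentangle the source label distribution from the learned model, so a different target label distribution can be plugged in at test time.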

Generalizing to Unseen Domains: A Survey on Domain Generalization

This paper provides a formal definition of domain generalization, discusses several related fields, and categorizes recent algorithms into three classes, presenting each in detail: data manipulation, representation learning, and learning strategy, each of which contains several popular algorithms.

Episodic Training for Domain Generalization

Using the Visual Decathlon benchmark, it is demonstrated that the episodic-DG training improves the performance of such a general purpose feature extractor by explicitly training a feature for robustness to novel problems, showing that DG training can benefit standard practice in computer vision.