Deep Representation Learning on Long-Tailed Data: A Learnable Embedding Augmentation Perspective

@article{Liu2020DeepRL,
  title={Deep Representation Learning on Long-Tailed Data: A Learnable Embedding Augmentation Perspective},
  author={Jialun Liu and Yifan Sun and Chuchu Han and Zhaopeng Dou and Wenhui Li},
  journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020},
  pages={2967-2976}
}
  • Jialun Liu, Yifan Sun, Wenhui Li
  • Published 25 February 2020
  • Computer Science
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
This paper considers learning deep features from long-tailed data. We observe that in the deep feature space, the head classes and the tail classes present different distribution patterns. The head classes have a relatively large spatial span, while the tail classes have a significantly small spatial span, due to the lack of intra-class diversity. This uneven distribution between head and tail classes distorts the overall feature space, which compromises the discriminative ability of the… 
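
The "spatial span" contrast the abstract describes can be probed with a simple proxy: the mean distance from each class's features to its class mean. The snippet below is a minimal sketch of that measurement, assuming a hypothetical feature matrix `features` and label vector `labels` rather than anything taken from the paper itself.

```python
import numpy as np

def per_class_spread(features: np.ndarray, labels: np.ndarray) -> dict:
    """Average distance of each class's features to the class mean:
    a rough proxy for the 'spatial span' described in the abstract.
    Head classes (many samples) typically show a larger spread than
    tail classes (few samples)."""
    spread = {}
    for c in np.unique(labels):
        class_feats = features[labels == c]            # (n_c, d) features of class c
        center = class_feats.mean(axis=0)              # class mean
        spread[int(c)] = float(np.linalg.norm(class_feats - center, axis=1).mean())
    return spread
```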

Feature Cloud: Improving Deep Visual Recognition With Probabilistic Feature Augmentation

The proposed feature cloud effectively transfers the within-class diversity from the head classes onto the tail classes, maintaining an effect of probabilistic feature augmentation, and is capable of bringing general improvement to long-tailed visual recognition on two fundamental tasks.
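
Below is a minimal sketch of this kind of probabilistic feature augmentation, assuming Gaussian perturbations whose per-dimension variance is estimated on head classes; it is a simplified stand-in, not the paper's exact feature-cloud construction.

```python
import torch

def augment_tail_features(tail_feats: torch.Tensor,
                          head_variance: torch.Tensor,
                          num_samples: int = 4) -> torch.Tensor:
    """Build a 'cloud' of virtual features around each tail-class feature.

    tail_feats:    (n, d) deep features from one tail class
    head_variance: (d,)   per-dimension feature variance estimated on head classes
    Returns (n * num_samples, d) perturbed features: zero-mean Gaussian noise
    scaled by the head-class variance is added to each tail feature, so the
    head classes' intra-class diversity is transferred onto the tail class.
    """
    n, d = tail_feats.shape
    noise = torch.randn(n, num_samples, d) * head_variance.sqrt()  # (n, k, d)
    cloud = tail_feats.unsqueeze(1) + noise                        # (n, k, d)
    return cloud.reshape(-1, d)
```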

Feature generation for long-tail classification

This paper attempts to generate meaningful features by estimating the tail category's distribution, creating calibrated distributions from which additional features are sampled and subsequently used to train the classifier, and establishes a new state of the art.
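
The sketch below illustrates calibrated sampling of this kind, assuming per-class Gaussian statistics and a nearest-head-class calibration rule; the paper's actual calibration procedure may differ.

```python
import numpy as np

def sample_calibrated_features(tail_feats, head_stats, k=3, num_new=100):
    """Sample extra features for a tail class from a calibrated Gaussian.

    tail_feats: (n, d) array of the few observed tail-class features
    head_stats: list of (mean, var) pairs, one per head class
    The tail-class mean is kept, while the variance is calibrated by
    averaging the variances of the k head classes whose means lie closest.
    """
    mu = tail_feats.mean(axis=0)
    dists = [np.linalg.norm(mu - mean) for mean, _ in head_stats]
    nearest = np.argsort(dists)[:k]
    var = np.mean([head_stats[i][1] for i in nearest], axis=0)
    return np.random.normal(loc=mu, scale=np.sqrt(var),
                            size=(num_new, mu.shape[0]))
```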

Deep Long-Tailed Learning: A Survey

A comprehensive survey on recent advances in deep long-tailed learning is provided, highlighting important applications of deep long-tailed learning and identifying several promising directions for future research.

Improving Tail-Class Representation with Centroid Contrastive Learning

Interpolative centroid contrastive learning (ICCL) is proposed to improve long-tailed representation learning and shows a significant accuracy gain on the iNaturalist 2018 dataset with a real-world long-tailed distribution.

Distributional Robustness Loss for Long-tail Learning

This work proposes a new loss based on robustness theory, which encourages the model to learn high-quality representations for both head and tail classes, and finds that training with the robustness loss increases the recognition accuracy of tail classes while largely maintaining the accuracy of head classes.

A Survey on Long-Tailed Visual Recognition

This survey focuses on the problems caused by long-tailed data distributions, sorts out the representative long-tailed visual recognition datasets, summarizes some mainstream long-tail studies, and quantitatively studies 20 widely used, large-scale visual datasets proposed in the last decade.

Where Are the Bottlenecks in Long-Tailed Classification?

  • Computer Science
  • 2021
It is shown that the long-tailed representations are volatile and brittle with respect to the true data distribution, and an explanation is provided for why data augmentation helps long-tailed classification despite leaving the dataset imbalance unchanged.

Long-Tailed Classification with Gradual Balanced Loss and Adaptive Feature Generation

The real-world data distribution is essentially long-tailed, which poses a great challenge to deep models. In this work, we propose a new method, Gradual Balanced Loss and Adaptive Feature Generation.

Class-Balanced Distillation for Long-Tailed Visual Recognition

This work introduces a new training method, referred to as Class-Balanced Distillation (CBD), that leverages knowledge distillation to enhance feature representations and consistently outperforms the state of the art on long-tailed recognition benchmarks such as ImageNet-LT, iNaturalist17 and iNaturalist18.
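
For reference, a generic distillation objective of the kind such a method builds on is sketched below; it is not CBD's exact recipe, and the temperature `T` and mixing weight `alpha` are illustrative choices.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Generic knowledge-distillation objective: cross-entropy on the labels
    plus a KL term that matches the teacher's temperature-softened predictions.
    Class-balanced methods pair a loss of this kind with a class-balanced
    sampler for the student."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return alpha * ce + (1.0 - alpha) * kd
```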
...

References

SHOWING 1-10 OF 47 REFERENCES

Unequal-Training for Deep Face Recognition With Long-Tailed Noisy Data

A training strategy is proposed that treats the head data and the tail data in an unequal way, accompanied by noise-robust loss functions, to take full advantage of their respective characteristics; it achieves the best result on MegaFace Challenge 2 given a large-scale noisy training data set.

A Discriminative Feature Learning Approach for Deep Face Recognition

This paper proposes a new supervision signal, called center loss, for face recognition task, which simultaneously learns a center for deep features of each class and penalizes the distances between the deep features and their corresponding class centers.
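
Concretely, the center loss adds L_C = 1/2 * sum_i ||x_i - c_{y_i}||^2 to the softmax objective. A compact PyTorch-style sketch follows, treating the centers as ordinary learnable parameters, whereas the paper updates them with a dedicated rule.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Center loss: L_C = 1/2 * sum_i ||x_i - c_{y_i}||^2.
    Learns one center per class and pulls each deep feature toward the
    center of its class; it is used jointly with the softmax loss."""
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        return 0.5 * (features - self.centers[labels]).pow(2).sum(dim=1).mean()
```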

Deep Learning Face Representation from Predicting 10,000 Classes

It is argued that DeepID can be effectively learned through challenging multi-class face identification tasks, whilst it can be generalized to other tasks (such as verification) and new identities unseen in the training set.

SphereFace: Deep Hypersphere Embedding for Face Recognition

This paper proposes the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features for the deep face recognition (FR) problem under the open-set protocol.
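
A simplified sketch of the multiplicative angular margin follows; the actual A-Softmax replaces cos(m*theta) with a piecewise-monotonic psi(theta) to keep the loss well behaved over the full angle range and anneals the margin during training, details omitted here.

```python
import torch
import torch.nn.functional as F

def a_softmax_logits(features, weights, labels, m=4):
    """Simplified multiplicative angular margin. Class weights are
    L2-normalized and biases dropped, so a logit is ||x|| * cos(theta_j);
    the target class instead uses ||x|| * cos(m * theta)."""
    norm_x = features.norm(dim=1, keepdim=True)                 # ||x||, shape (N, 1)
    cos = F.normalize(features) @ F.normalize(weights).t()      # (N, C) cosines
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    target = F.one_hot(labels, cos.size(1)).bool()
    logits = torch.where(target, torch.cos(m * theta), cos)
    return norm_x * logits
```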

Low-shot Learning via Covariance-Preserving Adversarial Augmentation Networks

A novel Generative Adversarial Network is designed to model the latent distribution of each novel class given its related base counterparts, leading to substantial improvements on the ImageNet benchmark over the state of the art.

ArcFace: Additive Angular Margin Loss for Deep Face Recognition

This paper presents arguably the most extensive experimental evaluation against all recent state-of-the-art face recognition methods on ten face recognition benchmarks, and shows that ArcFace consistently outperforms the state of the art and can be easily implemented with negligible computational overhead.
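
The additive angular margin amounts to a one-line change to the target-class logit, sketched below with typical values for the scale s and margin m; the numerical easing used in public implementations is left out.

```python
import torch
import torch.nn.functional as F

def arcface_logits(features, weights, labels, s=64.0, m=0.5):
    """Additive angular margin: all logits are cosines of the angle between
    the normalized feature and class weight, but the target class uses
    cos(theta + m); everything is then scaled by s."""
    cos = F.normalize(features) @ F.normalize(weights).t()      # (N, C) cosines
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    target = F.one_hot(labels, cos.size(1)).bool()
    logits = torch.where(target, torch.cos(theta + m), cos)
    return s * logits
```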

Joint Discriminative and Generative Learning for Person Re-Identification

This paper proposes a joint learning framework that couples re-id learning and data generation end-to-end and renders significant improvement over the baseline without using generated data, leading to the state-of-the-art performance on several benchmark datasets.

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
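
The core construction is a block whose stacked layers predict a residual F(x) and whose output is F(x) + x. A minimal basic block is sketched below, omitting the strided/downsampling variant.

```python
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Basic residual block: the stacked layers learn a residual F(x)
    and the block outputs F(x) + x, so the identity mapping is always
    available and very deep networks remain easy to optimize."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)                    # skip connection
```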

CosFace: Large Margin Cosine Loss for Deep Face Recognition

  • H. Wang, Yitong Wang, Wei Liu
  • Computer Science
    2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
  • 2018
This paper reformulates the softmax loss as a cosine loss by L2 normalizing both features and weight vectors to remove radial variations, based on which a cosine margin term is introduced to further maximize the decision margin in the angular space, and achieves minimum intra-class variance and maximum inter-class variance by virtue of normalization and cosine decision margin maximization.
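
This comes down to subtracting a fixed margin from the target-class cosine before scaling; a minimal sketch with typical hyperparameter values follows.

```python
import torch.nn.functional as F

def cosface_logits(features, weights, labels, s=30.0, m=0.35):
    """Large margin cosine loss logits: features and class weights are
    L2-normalized (removing radial variations), a fixed margin m is
    subtracted from the target-class cosine, and the result is scaled by s."""
    cos = F.normalize(features) @ F.normalize(weights).t()      # (N, C) cosines
    margin = F.one_hot(labels, cos.size(1)).float() * m
    return s * (cos - margin)
```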

Exploring the Limits of Weakly Supervised Pretraining

This paper presents a unique study of transfer learning with large convolutional networks trained to predict hashtags on billions of social media images and shows improvements on several image classification and object detection tasks, and reports the highest ImageNet-1k single-crop, top-1 accuracy to date.