Open Long-Tailed Recognition in a Dynamic World

  Ziwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, Stella X. Yu
  IEEE Transactions on Pattern Analysis and Machine Intelligence

Real-world data often exhibit a long-tailed and open-ended (i.e., containing unseen classes) distribution. A practical recognition system must balance majority (head) and minority (tail) classes, generalize across the distribution, and acknowledge novelty upon encountering instances of unseen (open) classes. We define Open Long-Tailed Recognition++ (OLTR++) as learning from such naturally distributed data and optimizing classification accuracy over a balanced test set which includes…

Data Augmentation by Selecting Mixed Classes Considering Distance Between Classes

This study proposes a data augmentation method that computes distances between classes from class probabilities, selects suitable classes to mix during training, and improves recognition performance on both general and long-tailed image recognition datasets.
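The class-selection step described above can be sketched as follows. This is a minimal illustration under assumed details (mean predicted-probability vectors as class representatives, Euclidean distance, nearest class as mixing partner); the paper's exact distance definition and selection rule may differ.

```python
import numpy as np

def class_distance_matrix(probs, labels, num_classes):
    """Pairwise distances between classes, represented by the mean
    predicted-probability vector of each class (an assumed representative)."""
    centroids = np.stack(
        [probs[labels == c].mean(axis=0) for c in range(num_classes)]
    )
    diff = centroids[:, None, :] - centroids[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def select_mix_partner(dist, cls):
    """Pick the closest other class as the partner to mix with."""
    d = dist[cls].copy()
    d[cls] = np.inf  # never mix a class with itself
    return int(np.argmin(d))
```

The resulting partner index can then drive a Mixup-style interpolation restricted to the selected class pair.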



Decoupling Representation and Classifier for Long-Tailed Recognition

It is shown that a straightforward approach that decouples representation learning from classification can outperform carefully designed losses, sampling strategies, and even complex modules with memory.

Long-tailed Recognition by Routing Diverse Distribution-Aware Experts

RoutIng Diverse Experts (RIDE) reduces both the bias and the variance of a long-tailed classifier and significantly outperforms state-of-the-art methods by 5% to 7% on all benchmarks, including CIFAR100-LT, ImageNet-LT, and iNaturalist.

BBN: Bilateral-Branch Network With Cumulative Learning for Long-Tailed Visual Recognition

A unified Bilateral-Branch Network (BBN) is proposed to take care of both representation learning and classifier learning simultaneously, where each branch performs its own duty separately.

Learning to Model the Tail

Results on image classification datasets (SUN, Places, and ImageNet) tuned for the long-tailed setting significantly outperform common heuristics such as data resampling or reweighting.

Learning Deep Representation for Imbalanced Classification

The representation learned by this approach, when combined with a simple k-nearest neighbor (kNN) algorithm, shows significant improvements over existing methods on both high- and low-level vision classification tasks that exhibit imbalanced class distribution.

Towards Open Set Deep Networks

  • Abhijit Bendale, T. Boult
  • 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016
The proposed OpenMax model significantly outperforms the open-set recognition accuracy of basic deep networks as well as deep networks with thresholding of SoftMax probabilities, and it is proved that the OpenMax concept provides bounded open-space risk, thereby formally providing an open-set recognition solution.
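The SoftMax-thresholding baseline that OpenMax is compared against can be sketched as below. Note this is only the baseline, not OpenMax itself, which additionally recalibrates activations with per-class Weibull models; the threshold value here is an arbitrary illustration.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict_open_set(logits, threshold=0.5):
    """Reject an input as 'unknown' (label -1) when the top softmax
    probability falls below the threshold; otherwise predict normally."""
    p = softmax(logits)
    conf = p.max(axis=-1)
    pred = p.argmax(axis=-1)
    pred[conf < threshold] = -1
    return pred
```

A confident sample keeps its argmax label, while a near-uniform prediction is flagged as an open-set input.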

Learning and the Unknown: Surveying Steps toward Open World Recognition

This paper summarizes the state of the art, core ideas, and results, and explains why, despite efforts to date, current techniques remain genuinely insufficient for handling unknown inputs, especially for deep networks.

Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss

A theoretically principled label-distribution-aware margin (LDAM) loss, motivated by minimizing a margin-based generalization bound, is proposed; it replaces the standard cross-entropy objective during training and can be combined with prior strategies for class-imbalanced training such as re-weighting or re-sampling.
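The core of the LDAM idea is to subtract a per-class margin m_j = C / n_j^{1/4} (larger for rarer classes) from the true-class logit before cross-entropy. A minimal NumPy sketch, assuming a simple constant C and precomputed class counts:

```python
import numpy as np

def ldam_logits(logits, labels, class_counts, C=0.5):
    """Subtract the per-class margin m_j = C / n_j**0.25 from each sample's
    true-class logit; rarer classes (smaller n_j) get larger margins."""
    margins = C / np.asarray(class_counts, dtype=float) ** 0.25
    adjusted = logits.copy()
    adjusted[np.arange(len(labels)), labels] -= margins[labels]
    return adjusted

def cross_entropy(logits, labels):
    """Mean cross-entropy from raw logits (stable log-softmax)."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()
```

Training minimizes `cross_entropy(ldam_logits(...), labels)`, which forces a larger decision margin around tail classes than plain cross-entropy would.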

Long-tail learning via logit adjustment

These techniques revisit the classic idea of logit adjustment based on label frequencies, applied either post hoc to a trained model or enforced in the loss during training, to encourage a large relative margin between the logits of rare and dominant labels.
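The post-hoc variant amounts to subtracting a scaled log-prior from each class logit at test time, so that classes favored only because they are frequent lose their advantage. A minimal sketch, with `tau` as the scaling temperature:

```python
import numpy as np

def posthoc_logit_adjustment(logits, class_counts, tau=1.0):
    """Subtract tau * log(prior) from each class logit, which boosts
    rare classes relative to dominant ones at prediction time."""
    priors = np.asarray(class_counts, dtype=float)
    priors = priors / priors.sum()
    return logits - tau * np.log(priors)
```

With `tau=0` the model's original predictions are recovered; increasing `tau` shifts decisions toward the tail.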

Factors in Finetuning Deep Model for Object Detection with Long-Tail Distribution

This paper investigates the many factors that influence finetuning performance for object detection and proposes a hierarchical feature learning scheme that clusters objects into visually similar class groups and learns deep representations for each group separately.