BBN: Bilateral-Branch Network With Cumulative Learning for Long-Tailed Visual Recognition

@article{Zhou2020BBN,
  title={BBN: Bilateral-Branch Network With Cumulative Learning for Long-Tailed Visual Recognition},
  author={Boyan Zhou and Quan Cui and Xiu-Shen Wei and Zhao-Min Chen},
  journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020}
}
  • Published 5 December 2019
  • Computer Science
Our work focuses on the challenging but natural visual recognition task of learning from a long-tailed data distribution (i.e., a few classes occupy most of the data, while most classes have very few samples). In the literature, class re-balancing strategies (e.g., re-weighting and re-sampling) are the prominent and effective methods proposed to alleviate the extreme imbalance in long-tailed problems. In this paper, we first discover that these re-balancing methods achieving… 
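The two re-balancing strategies named in the abstract have simple canonical forms. The sketch below is illustrative, not the paper's implementation: inverse-frequency re-weighting scales each class's loss contribution, and class-balanced re-sampling draws classes uniformly rather than instances uniformly; all function names are assumptions.

```python
import random
from collections import Counter

def class_balanced_weights(labels):
    """Inverse-frequency re-weighting: rare classes get larger loss weights.

    Normalized so that the average weight over classes is 1.
    """
    counts = Counter(labels)
    n_classes, total = len(counts), len(labels)
    return {c: total / (n_classes * n) for c, n in counts.items()}

def class_balanced_resample(samples, labels, rng=None):
    """Class-balanced re-sampling: pick a class uniformly at random,
    then pick one of that class's samples, so head and tail classes
    appear equally often in the resampled epoch."""
    rng = rng or random.Random(0)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    classes = sorted(by_class)
    out = []
    for _ in range(len(samples)):
        c = rng.choice(classes)          # uniform over classes, not instances
        out.append((rng.choice(by_class[c]), c))
    return out
```

With 90 "head" and 10 "tail" examples, the tail class gets weight 5.0 versus roughly 0.56 for the head class, a 9x ratio matching the imbalance.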


ResLT: Residual Learning for Long-tailed Recognition

This work designs an effective residual fusion mechanism: one main branch is optimized to recognize images from all classes, while two residual branches are gradually fused and optimized to enhance recognition of medium-plus-tail classes and tail classes, respectively.

Self Supervision to Distillation for Long-Tailed Visual Recognition

This work shows that soft labels can serve as a powerful means of incorporating label correlation into a multi-stage training scheme for long-tailed recognition, and proposes a new distillation label generation module guided by self-supervision.
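The soft-label idea above boils down to training the student against a teacher's softened distribution instead of a one-hot target. A generic distillation loss in plain Python, not the paper's exact module (the temperature default is an assumption):

```python
import math

def soft_label_cross_entropy(student_logits, teacher_probs, temperature=2.0):
    """Cross-entropy between a teacher's soft label and the student's
    temperature-scaled softmax: -sum_c q_c * log p_c."""
    scaled = [z / temperature for z in student_logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    student_probs = [e / total for e in exps]
    return -sum(q * math.log(p) for q, p in zip(teacher_probs, student_probs))
```

Unlike a one-hot target, the teacher distribution spreads mass over correlated labels, which is how label correlation enters the multi-stage scheme.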

Attentive Feature Augmentation for Long-Tailed Visual Recognition

Experimental results show that the proposed Long-Tailed Visual Recognition framework achieves superior performance over the state-of-the-art methods when trained with imbalanced datasets.

Feature-Balanced Loss for Long-Tailed Visual Recognition

This paper addresses the long-tailed problem from feature space and proposes the feature-balanced loss, which encourages larger feature norms of tail classes by giving them relatively stronger stimuli.

Bag of Tricks for Long-Tailed Visual Recognition with Deep Convolutional Neural Networks

This paper collects existing tricks in long-tailed visual recognition and performs extensive, systematic experiments to provide a detailed experimental guideline and an effective combination of these tricks; it also proposes a novel data augmentation approach for long-tailed recognition based on class activation maps.

DBN-Mix: Training Dual Branch Network Using Bilateral Mixup Augmentation for Long-Tailed Visual Recognition

An effective data augmentation method, referred to as bilateral mixup augmentation, is proposed to improve the performance of long-tailed visual recognition, together with class-wise temperature scaling, which scales the logits differently per class in the training phase.
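Both ingredients above have compact forms. The sketch below shows standard mixup between two samples plus per-class logit temperature scaling; treating `x1` as coming from a uniform sampler and `x2` from a class-reversed sampler is a simplification of the paper's bilateral setup, and all names are illustrative.

```python
import random

def mixup(x1, y1_onehot, x2, y2_onehot, alpha=1.0, rng=None):
    """Mixup: convex combination of two inputs and their one-hot labels,
    with the mixing coefficient drawn from Beta(alpha, alpha)."""
    rng = rng or random.Random(0)
    lam = rng.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1_onehot, y2_onehot)]
    return x, y, lam

def classwise_temperature(logits, temps):
    """Scale each class logit by its own temperature (a larger T_c softens
    that class's contribution to the softmax)."""
    return [z / t for z, t in zip(logits, temps)]
```

Mixed labels stay valid probability vectors (their entries sum to 1), so the usual cross-entropy applies unchanged.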

Leveraging Angular Information Between Feature and Classifier for Long-tailed Learning: A Prediction Reformulation Approach

Deep neural networks still struggle on long-tailed image datasets; one reason is that imbalanced training data across categories leads to imbalanced trained model parameters.

Balanced Meta-Softmax for Long-Tailed Visual Recognition

Balanced Softmax is presented, an elegant unbiased extension of Softmax, to accommodate the label distribution shift between training and testing, and it is demonstrated that Balanced Meta-Softmax outperforms state-of-the-art long-tailed classification solutions on both visual recognition and instance segmentation tasks.
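Balanced Softmax has a closed form: shift each logit by the log of its class's training-set count before applying the softmax, which compensates for the label-distribution shift between a long-tailed training set and a balanced test set. A minimal sketch:

```python
import math

def balanced_softmax(logits, class_counts):
    """Balanced Softmax: p_c proportional to n_c * exp(z_c), i.e. the
    ordinary softmax applied to logits shifted by log(n_c)."""
    shifted = [z + math.log(n) for z, n in zip(logits, class_counts)]
    m = max(shifted)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in shifted]
    total = sum(exps)
    return [e / total for e in exps]
```

With equal class counts it reduces to the ordinary softmax; with counts [90, 10] and equal logits it recovers the training prior [0.9, 0.1], so the cross-entropy gradient pushes the unshifted logits toward the balanced posterior.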

Balanced Contrastive Learning for Long-Tailed Visual Recognition

To correct the optimization behavior of supervised contrastive learning (SCL) and further improve the performance of long-tailed visual recognition, a novel loss for balanced contrastive learning (BCL) is proposed that satisfies the condition of forming a regular simplex and assists the optimization of cross-entropy.

Unequal-Training for Deep Face Recognition With Long-Tailed Noisy Data

A training strategy is proposed that treats head data and tail data unequally, accompanied by noise-robust loss functions, to take full advantage of their respective characteristics; it achieves the best result on MegaFace Challenge 2 given a large-scale noisy training dataset.

Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss

A theoretically-principled label-distribution-aware margin (LDAM) loss motivated by minimizing a margin-based generalization bound is proposed that replaces the standard cross-entropy objective during training and can be applied with prior strategies for training with class-imbalance such as re-weighting or re-sampling.
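The LDAM margins have a known form: each class j gets a margin proportional to n_j^(-1/4), so rarer classes are pushed further from the decision boundary, and the margin is subtracted from the target logit before the cross-entropy. A minimal sketch; normalizing so the rarest class receives `max_margin` is a common convention, not part of the bound itself:

```python
import math

def ldam_margins(class_counts, max_margin=0.5):
    """Class margins Delta_j proportional to n_j^(-1/4), scaled so the
    rarest class gets exactly max_margin."""
    raw = [n ** -0.25 for n in class_counts]
    scale = max_margin / max(raw)
    return [scale * r for r in raw]

def ldam_loss(logits, target, margins):
    """Cross-entropy computed after subtracting the target class's margin
    from its logit, enforcing a class-dependent decision margin."""
    z = list(logits)
    z[target] -= margins[target]
    m = max(z)                            # log-sum-exp with max subtracted
    logsumexp = m + math.log(sum(math.exp(v - m) for v in z))
    return logsumexp - z[target]
```

With all margins zero this reduces to the standard cross-entropy, which is why LDAM composes cleanly with the re-weighting and re-sampling strategies mentioned above.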

Learning Deep Representation for Imbalanced Classification

The representation learned by this approach, when combined with a simple k-nearest neighbor (kNN) algorithm, shows significant improvements over existing methods on both high- and low-level vision classification tasks that exhibit imbalanced class distribution.
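The classification step described above is just majority vote among the nearest learned embeddings. A minimal pure-Python version of that kNN step (the function name and Euclidean metric are assumptions; the paper's contribution is the representation, not the classifier):

```python
from collections import Counter

def knn_predict(query, embeddings, labels, k=3):
    """Classify a query embedding by majority vote among its k nearest
    neighbors under Euclidean distance."""
    dists = sorted(
        (sum((q - e) ** 2 for q, e in zip(emb, query)) ** 0.5, lbl)
        for emb, lbl in zip(embeddings, labels)
    )
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]
```

The point of pairing kNN with a learned representation is that the classifier itself has no class-frequency bias: every neighbor votes equally, so performance hinges on how well the embedding separates tail classes.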

The Devil is in the Tails: Fine-grained Classification in the Wild

This work analyzes fine-grained classification in the wild using eBird, a large fine-grained classification dataset, and a state-of-the-art deep network classification algorithm, and finds that peak classification performance on well-represented categories is excellent while transfer learning is virtually absent in current methods.

Factors in Finetuning Deep Model for Object Detection with Long-Tail Distribution

This paper investigates many factors that influence finetuning performance for object detection and proposes a hierarchical feature learning scheme that clusters objects into visually similar class groups and learns deep representations for these groups separately.

Piecewise Classifier Mappings: Learning Fine-Grained Learners for Novel Categories With Few Examples

An end-to-end trainable deep network is proposed that is inspired by state-of-the-art fine-grained recognition models and tailored for the few-shot fine-grained (FSFG) task; it generates the decision boundary by learning a set of more attainable sub-classifiers in a parameter-economic way.

Learning to Model the Tail

Results are reported on image classification datasets (SUN, Places, and ImageNet) tuned for the long-tailed setting that significantly outperform common heuristics such as data resampling or reweighting.

Range Loss for Deep Face Recognition with Long-Tailed Training Data

This paper investigates how long-tailed data impact the training of face CNNs and develops a novel loss function, called range loss, to effectively utilize the tailed data in the training process; experiments demonstrate the effectiveness of the proposed range loss in overcoming the long-tail effect.

Large Scale Fine-Grained Categorization and Domain-Specific Transfer Learning

This work proposes a measure to estimate domain similarity via Earth Mover's Distance and demonstrates that transfer learning benefits from pre-training on a source domain that is similar to the target domain by this measure.
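For one-dimensional empirical distributions of equal size, the Earth Mover's Distance reduces to matching sorted samples pairwise and averaging the transport cost. A toy illustration of that special case (the paper computes the distance between domain feature representations, which is more involved):

```python
def emd_1d(a, b):
    """Wasserstein-1 distance between two equal-size 1-D empirical
    distributions: sort both samples and average the pairwise gaps."""
    assert len(a) == len(b), "equal-size special case only"
    sa, sb = sorted(a), sorted(b)
    return sum(abs(x - y) for x, y in zip(sa, sb)) / len(a)
```

Identical samples give distance 0, and shifting every sample by a constant c gives distance exactly |c|, which is the "cost of moving the mass" intuition behind using EMD as a domain-similarity measure.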

Learning to Reweight Examples for Robust Deep Learning

This work proposes a novel meta-learning algorithm that learns to assign weights to training examples based on their gradient directions that can be easily implemented on any type of deep network, does not require any additional hyperparameter tuning, and achieves impressive performance on class imbalance and corrupted label problems where only a small amount of clean validation data is available.