Neural Collapse Inspired Attraction-Repulsion-Balanced Loss for Imbalanced Learning

@article{Xie2022NeuralCI,
  title={Neural Collapse Inspired Attraction-Repulsion-Balanced Loss for Imbalanced Learning},
  author={Liang Xie and Yibo Yang and Deng Cai and Xiaofei He},
  journal={ArXiv},
  year={2022},
  volume={abs/2204.08735}
}
Class-imbalanced distributions are common in real-world engineering. However, mainstream optimization algorithms that seek to minimize error trap deep learning models in sub-optima under extreme class imbalance, which severely harms classification precision, especially on the minority classes. The essential reason is that the gradients of the classifier weights are imbalanced among the components contributed by the different classes. In this paper, we propose Attraction-Repulsion-Balanced…
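
The full description of the proposed ARB-Loss is truncated above, but the gradient-imbalance observation it builds on can be illustrated directly. The toy script below is my own illustration, not the paper's loss; the class sizes and feature model are made up. It splits the softmax cross-entropy gradient of each classifier weight into an attraction component (from that class's own samples) and a repulsion component (from all other samples), so the per-class imbalance between the two can be compared.

import numpy as np

rng = np.random.default_rng(0)
num_classes, dim = 3, 8
counts = [500, 50, 5]                               # made-up head/medium/tail class sizes
W = 0.1 * rng.normal(size=(num_classes, dim))       # linear classifier weights

features, labels = [], []
for c, n in enumerate(counts):
    mu = rng.normal(size=dim)                       # class mean
    features.append(mu + 0.3 * rng.normal(size=(n, dim)))
    labels += [c] * n
X, y = np.vstack(features), np.array(labels)

logits = X @ W.T
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)                   # softmax probabilities

for k in range(num_classes):
    own = (y == k)
    # dL/dw_k summed over the batch is sum_i (p_ik - 1[y_i = k]) x_i
    attraction = ((p[own, k] - 1.0)[:, None] * X[own]).sum(axis=0)   # from class-k samples
    repulsion = (p[~own, k][:, None] * X[~own]).sum(axis=0)          # from all other samples
    print(f"class {k}: |attraction| = {np.linalg.norm(attraction):.2f}, "
          f"|repulsion| = {np.linalg.norm(repulsion):.2f}")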


Imbalance Trouble: Revisiting Neural-Collapse Geometry

TLDR
This work adopts the unconstrained-features model (UFM), introduces Simplex-Encoded-Labels Interpolation (SELI) as an invariant characterization of the neural-collapse phenomenon, and proves, for the UFM with cross-entropy loss and vanishing regularization, that irrespective of class imbalance the embeddings and classifiers always interpolate a simplex-encoded label matrix.
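
For concreteness, here is a small sketch of the simplex-encoded label (SEL) matrix as I understand it from that paper: the one-hot label matrix with 1/k subtracted from every entry, which the cited result says the logit matrix WH aligns with in direction in the vanishing-regularization limit, balanced or not. The toy labels below are arbitrary.

import numpy as np

k = 3
y = np.array([0, 0, 0, 0, 1, 1, 2])                 # imbalanced toy labels
# SEL matrix: one-hot labels shifted by -1/k, so every column sums to zero.
Z = (np.arange(k)[:, None] == y[None, :]).astype(float) - 1.0 / k
print(Z)
print(Z.sum(axis=0))                                # ~0 for every column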

Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold

TLDR
This work theoretically justifies the neural collapse phenomenon for normalized features and simplifies the empirical loss function of a multi-class classification task into a nonconvex optimization problem over a Riemannian manifold by constraining all features and classifiers to the sphere.
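
A minimal sketch of the kind of sphere-constrained objective that analysis considers, as I read the TLDR; the explicit normalization and the temperature tau are my own illustrative choices, not necessarily the paper's exact setup.

import numpy as np

def sphere_constrained_ce(H, W, y, tau=0.1):
    # Project features and classifier vectors onto the unit sphere, then take
    # the usual softmax cross-entropy on their scaled inner products.
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    logits = Hn @ Wn.T / tau
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(y)), y].mean()

rng = np.random.default_rng(0)
H, W = rng.normal(size=(12, 5)), rng.normal(size=(3, 5))
y = rng.integers(0, 3, size=12)
print(sphere_constrained_ce(H, W, y))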

References

Showing 1-10 of 42 references

Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss

TLDR
A theoretically principled label-distribution-aware margin (LDAM) loss, motivated by minimizing a margin-based generalization bound, is proposed; it replaces the standard cross-entropy objective during training and can be combined with prior strategies for class-imbalanced training such as re-weighting or re-sampling.
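
A short sketch of the LDAM loss as that paper describes it: a per-class margin proportional to n_j^(-1/4) is subtracted from the true-class logit before the softmax cross-entropy; the scale C is a hyperparameter, and the value below is only a placeholder.

import numpy as np

def ldam_loss(logits, y, class_counts, C=0.5):
    # Per-class margins: larger for rarer classes, delta_j = C / n_j**0.25.
    deltas = C / np.asarray(class_counts, dtype=float) ** 0.25
    z = logits.astype(float).copy()
    z[np.arange(len(y)), y] -= deltas[y]            # subtract the margin on the true class only
    z -= z.max(axis=1, keepdims=True)               # numerical stability
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(y)), y].mean()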

Learning Deep Representation for Imbalanced Classification

TLDR
The representation learned by this approach, when combined with a simple k-nearest neighbor (kNN) algorithm, shows significant improvements over existing methods on both high- and low-level vision classification tasks that exhibit imbalanced class distribution.

BBN: Bilateral-Branch Network With Cumulative Learning for Long-Tailed Visual Recognition

TLDR
A unified Bilateral-Branch Network (BBN) is proposed to take care of both representation learning and classifier learning simultaneously, where each branch performs its own duty separately.
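
A rough sketch of BBN's cumulative-learning idea as I recall it (the branches, samplers, and feature mixing are heavily simplified): a weight alpha, here on a parabolic schedule, gradually shifts emphasis from the conventional branch to the re-balancing branch over training.

import numpy as np

def softmax_ce(logits, y):
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(y)), y].mean()

def bbn_mixed_loss(epoch, max_epoch, logits_conv, y_conv, logits_rebal, y_rebal):
    # alpha decays from 1 to 0 over training, shifting weight from the
    # conventional branch (uniform sampling) to the re-balancing branch.
    alpha = 1.0 - (epoch / max_epoch) ** 2
    logits = alpha * logits_conv + (1.0 - alpha) * logits_rebal
    return alpha * softmax_ce(logits, y_conv) + (1.0 - alpha) * softmax_ce(logits, y_rebal)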

Remix: Rebalanced Mixup

TLDR
This work proposes a new regularization technique, Remix, that relaxes Mixup's formulation and enables the mixing factors of features and labels to be disentangled, and it significantly outperforms state-of-the-art regularization techniques in the class-imbalanced regime.
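
A sketch of the Remix label-assignment rule as I recall it: features are mixed with the ordinary mixup factor, but the label factor is pushed entirely toward the minority-class sample when the two mixed classes are heavily imbalanced; kappa and tau are thresholds, and the values below are illustrative.

def remix_label_factor(lam, n_i, n_j, kappa=3.0, tau=0.5):
    # lam mixes the features as usual; this function only decides the label factor.
    if n_i / n_j >= kappa and lam < tau:
        return 0.0          # class of sample i dominates in count: label goes to sample j
    if n_i / n_j <= 1.0 / kappa and (1.0 - lam) < tau:
        return 1.0          # class of sample j dominates in count: label goes to sample i
    return lam              # otherwise fall back to plain mixup labels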

Improving Calibration for Long-Tailed Recognition

TLDR
Motivated by the fact that the predicted probability distributions of classes are highly related to the numbers of class instances, this work proposes label-aware smoothing to deal with the different degrees of over-confidence across classes and to improve classifier learning.
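
One way to realize that idea, as my own simplification rather than the paper's exact formula: assign head classes, which tend to be more over-confident, a larger label-smoothing factor than tail classes. The linear interpolation and the endpoint values below are assumptions.

import numpy as np

def label_aware_smoothing(y, class_counts, num_classes, eps_head=0.4, eps_tail=0.0):
    # Interpolate a per-class smoothing factor between eps_tail and eps_head
    # according to normalized class counts, then build soft targets.
    counts = np.asarray(class_counts, dtype=float)
    t = (counts - counts.min()) / max(counts.max() - counts.min(), 1e-12)
    eps = eps_tail + (eps_head - eps_tail) * t
    targets = np.zeros((len(y), num_classes))
    targets[np.arange(len(y)), y] = 1.0
    e = eps[np.asarray(y)][:, None]
    return (1.0 - e) * targets + e / num_classes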

Cost-Sensitive Learning of Deep Feature Representations From Imbalanced Data

TLDR
This paper proposes a cost-sensitive (CoSen) deep neural network, which can automatically learn robust feature representations for both the majority and minority classes, and shows that the proposed approach significantly outperforms the baseline algorithms.

Decoupling Representation and Classifier for Long-Tailed Recognition

TLDR
It is shown that it is possible to outperform carefully designed losses, sampling strategies, and even complex memory-based modules by using a straightforward approach that decouples representation learning from classification.
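
One of the simple classifier adjustments explored in that paper, as I recall it, is tau-normalization: after standard training, each class weight vector is rescaled by the inverse of its norm raised to a power tau, shrinking the advantage of frequent classes without touching the representation.

import numpy as np

def tau_normalize(W, tau=1.0):
    # tau = 0 leaves the classifier unchanged; tau = 1 fully normalizes each row.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W / (norms ** tau + 1e-12)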

Equalization Loss for Long-Tailed Object Recognition

TLDR
This work proposes a simple but effective loss, named the equalization loss, to tackle the problem of long-tailed rare categories by simply ignoring the gradients for rare categories, and it won 1st place in the LVIS Challenge 2019.
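
A simplified, classification-style sketch of the core idea as I read that TLDR (the original loss is formulated for detection and has additional terms): treat each class as a sigmoid/binary problem and mask out the negative-label term for rare classes, so samples of frequent classes do not suppress them; the frequency threshold below is illustrative.

import numpy as np

def equalization_style_loss(logits, y, class_freq, freq_thresh=0.01):
    n, k = logits.shape
    targets = np.zeros((n, k))
    targets[np.arange(n), y] = 1.0
    rare = (np.asarray(class_freq) < freq_thresh).astype(float)    # 1 for rare classes
    # Keep all positive terms; drop the negative (suppressing) terms of rare classes.
    keep = 1.0 - rare[None, :] * (1.0 - targets)
    p = 1.0 / (1.0 + np.exp(-logits))
    bce = -(targets * np.log(p + 1e-12) + (1.0 - targets) * np.log(1.0 - p + 1e-12))
    return (keep * bce).sum() / n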

A systematic study of the class imbalance problem in convolutional neural networks

mixup: Beyond Empirical Risk Minimization

TLDR
This work proposes mixup, a simple learning principle that trains a neural network on convex combinations of pairs of examples and their labels, which improves the generalization of state-of-the-art neural network architectures.
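
A minimal mixup sketch matching that description: draw lambda from a Beta distribution and take the same convex combination of two inputs and their one-hot labels; alpha is the usual mixup hyperparameter.

import numpy as np

def mixup(x1, y1_onehot, x2, y2_onehot, alpha=0.2):
    # Same convex combination for the inputs and for their one-hot labels.
    lam = np.random.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1_onehot + (1.0 - lam) * y2_onehot
    return x, y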