Distributional Robustness Loss for Long-tail Learning

@inproceedings{Samuel2021DistributionalRL,
  title={Distributional Robustness Loss for Long-tail Learning},
  author={Dvir Samuel and Gal Chechik},
  booktitle={2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021},
  pages={9475-9484}
}
  • Dvir Samuel, Gal Chechik
  • Published 7 April 2021
  • Computer Science
  • 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
Real-world data is often unbalanced and long-tailed, but deep models struggle to recognize rare classes in the presence of frequent classes. To address unbalanced data, most studies try balancing the data, the loss, or the classifier to reduce classification bias towards head classes. Far less attention has been given to the latent representations learned with unbalanced data. We show that the feature extractor part of deep networks suffers greatly from this bias. We propose a new loss based on… 
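The abstract is truncated, but the stated idea is a robustness loss computed on the feature extractor's representations rather than only on the classifier. As a minimal sketch (not the authors' exact formulation), one can pull each sample toward its class centroid while padding the true-class distance with a per-class robustness radius, so rare classes, whose empirical centroids are less reliable, receive a larger safety margin; the `centroids` and `eps` inputs below are assumptions for illustration:

```python
import torch

def robust_centroid_loss(features, labels, centroids, eps):
    """Sketch of a robustness loss over class centroids
    (illustrative, not the exact loss proposed in the paper).

    features:  (B, D) embeddings from the feature extractor
    labels:    (B,)   ground-truth class indices
    centroids: (C, D) running per-class feature means
    eps:       (C,)   per-class robustness radii, e.g. larger for rare classes
    """
    # Squared Euclidean distance from every sample to every centroid.
    d = torch.cdist(features, centroids) ** 2              # (B, C)
    # Worst case inside the uncertainty ball: the true centroid may lie
    # eps[y] further away, so pad the true-class distance.
    d_true = d.gather(1, labels[:, None]).squeeze(1) + eps[labels]
    # Softmax-style objective: minimize the padded true-class distance
    # relative to the distances to all centroids.
    return (d_true + torch.logsumexp(-d, dim=1)).mean()
```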
Long-Tailed Recognition via Weight Balancing
TLDR
An orthogonal direction, weight balancing, is explored, motivated by the empirical observation that a naively trained classifier has “artificially” larger weight norms for common classes (because abundant data exists to train them, unlike the rare classes); the approach achieves state-of-the-art accuracy on five standard benchmarks.
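The observation above suggests a simple remedy: keep every class's weight vector inside a common norm ball so head classes cannot dominate purely through weight magnitude. A minimal sketch of such a MaxNorm-style projection (the exact constraint and schedule in the paper may differ; `max_norm` is an assumed hyperparameter):

```python
import torch

@torch.no_grad()
def project_maxnorm_(classifier_weight, max_norm=1.0):
    """Project each class's weight row back into an L2 ball of radius
    max_norm; rows for common classes tend to grow larger in norm."""
    norms = classifier_weight.norm(dim=1, keepdim=True)    # (C, 1)
    scale = (max_norm / norms).clamp(max=1.0)              # shrink only
    classifier_weight.mul_(scale)
```

Applied after each optimizer step, this caps the per-class norms without touching the learned weight directions.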
Robust Long-Tailed Learning under Label Noise
TLDR
A new prototypical noise detection method is established by designing a distance-based metric that is resistant to label noise, and a robust framework, RoLT, is proposed that realizes noise detection for long-tailed learning, followed by soft pseudo-labeling via both label smoothing and diverse label guessing.
VL-LTR: Learning Class-wise Visual-Linguistic Representation for Long-Tailed Visual Recognition
TLDR
This work presents a visual-linguistic long-tailed recognition framework, termed VL-LTR, conducts empirical studies on the benefits of introducing the text modality for long-tailed recognition (LTR), and sets new state-of-the-art performance on widely used LTR benchmarks.
Deep Long-Tailed Learning: A Survey
TLDR
A comprehensive survey of recent advances in deep long-tailed learning is provided, highlighting important applications of deep long-tailed learning and identifying several promising directions for future research.
Rebalanced Siamese Contrastive Mining for Long-Tailed Recognition
TLDR
It is claimed that supervised contrastive learning suffers a dual class-imbalance problem, at both the original-batch and Siamese-batch levels, which is more serious than in long-tailed classification learning.
A Simple Long-Tailed Recognition Baseline via Vision-Language Model
TLDR
This work proposes BALLAD, a simple and effective approach that leverages contrastive vision-language models for long-tailed recognition, sets new state-of-the-art performance, and outperforms competitive baselines by a large margin.
Discovering Objects that Can Move
TLDR
This paper simplifies recent auto-encoder-based frameworks for unsupervised object discovery and augments the resulting model with a weak learning signal from general motion segmentation algorithms, which is enough to generalize to segmenting both moving and static instances of dynamic objects.
ELM: Embedding and Logit Margins for Long-Tail Learning
TLDR
Embedding and Logit Margins (ELM) is presented, a unified approach that enforces margins in logit space and regularizes the distribution of embeddings, connecting losses for long-tail learning to proposals in the literature on metric embedding and contrastive learning.
Long-tail Recognition via Compositional Knowledge Transfer
TLDR
A novel strategy for long-tail recognition is proposed that addresses the tail classes’ few-shot problem via training-free knowledge transfer; it achieves significant performance boosts on rare classes while maintaining robust common-class performance, outperforming directly comparable state-of-the-art models.

References

Showing 1-10 of 47 references
Decoupling Representation and Classifier for Long-Tailed Recognition
TLDR
It is shown that it is possible to outperform carefully designed losses, sampling strategies, and even complex modules with memory by using a straightforward approach that decouples representation learning and classification.
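A hedged sketch of the decoupled recipe this summary refers to: train the backbone end-to-end with instance-balanced sampling, then freeze it and re-train only the linear classifier under class-balanced sampling. The `backbone` and `classifier` modules here are hypothetical stand-ins:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for a model already trained end-to-end
# with instance-balanced sampling (stage 1).
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
classifier = nn.Linear(64, 10)

# Stage 2: freeze the representation and re-learn only the classifier,
# training it on a class-balanced sampler.
for p in backbone.parameters():
    p.requires_grad_(False)
classifier.reset_parameters()
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1, momentum=0.9)
```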
Feature Space Augmentation for Long-Tailed Data
TLDR
This work presents a novel approach to the long-tailed problem: it augments the under-represented classes in feature space with features learned from the classes with ample samples, decomposing the features of each class into a class-generic component and a class-specific component using class activation maps.
Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss
TLDR
A theoretically principled label-distribution-aware margin (LDAM) loss, motivated by minimizing a margin-based generalization bound, is proposed; it replaces the standard cross-entropy objective during training and can be combined with prior strategies for class-imbalanced training such as re-weighting or re-sampling.
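The LDAM margin has a closed form: class j receives a margin proportional to n_j^{-1/4}, so rarer classes are pushed further from the decision boundary. A minimal sketch following that published formula (the logit scale `s` and cap `max_m` are typical but assumed values):

```python
import torch
import torch.nn.functional as F

def ldam_loss(logits, target, class_counts, max_m=0.5, s=30.0):
    """Sketch of the label-distribution-aware margin (LDAM) loss."""
    # Per-class margins proportional to n_j^{-1/4}, normalized so the
    # largest (rarest-class) margin equals max_m.
    m = class_counts.float() ** -0.25
    m = m * (max_m / m.max())                              # (C,)
    # Subtract each sample's class margin from its true-class logit only.
    logits_adj = logits.clone()
    rows = torch.arange(logits.size(0))
    logits_adj[rows, target] -= m[target]
    return F.cross_entropy(s * logits_adj, target)
```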
BBN: Bilateral-Branch Network With Cumulative Learning for Long-Tailed Visual Recognition
TLDR
A unified Bilateral-Branch Network (BBN) is proposed to take care of both representation learning and classifier learning simultaneously, where each branch performs its own duty separately.
Deep Representation Learning on Long-Tailed Data: A Learnable Embedding Augmentation Perspective
TLDR
This paper proposes augmenting each instance of the tail classes with certain disturbances in the deep feature space, which alleviates the distortion of the learned feature space and improves deep representation learning on long-tailed data.
Learning to Segment the Tail
TLDR
This work proposes a “divide & conquer” strategy for the challenging LVIS task: divide the whole data into balanced parts and then apply incremental learning to conquer each one. This derives a novel learning paradigm, class-incremental few-shot learning, which is especially effective when the challenge evolves over time.
Striking the Right Balance With Uncertainty
TLDR
This paper demonstrates that Bayesian uncertainty estimates directly correlate with the rarity of classes and the difficulty level of individual samples, and presents a novel framework for uncertainty-based class-imbalance learning that efficiently utilizes sample and class uncertainty information to learn robust features and more generalizable classifiers.
Large-Scale Long-Tailed Recognition in an Open World
TLDR
An integrated OLTR algorithm is developed that maps an image to a feature space such that visual concepts can easily relate to each other based on a learned metric that respects the closed-world classification while acknowledging the novelty of the open world.
Class-Balanced Loss Based on Effective Number of Samples
TLDR
This work designs a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, yielding a class-balanced loss, and introduces a novel theoretical framework that measures data overlap by associating each sample with a small neighboring region rather than a single point.
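The effective number has a closed form, E_n = (1 - beta^n) / (1 - beta), and the class weight is its inverse. A minimal sketch of that weighting combined with a plain weighted cross entropy (beta = 0.999 is a commonly reported value, assumed here):

```python
import torch
import torch.nn.functional as F

def class_balanced_weights(class_counts, beta=0.999):
    """Weights from the effective number of samples:
    E_n = (1 - beta^n) / (1 - beta), weight = 1 / E_n,
    normalized to sum to the number of classes."""
    counts = torch.as_tensor(class_counts, dtype=torch.float)
    effective_num = (1.0 - beta ** counts) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights * len(counts) / weights.sum()

# Usage: plug the weights into a standard weighted cross entropy.
# w = class_balanced_weights(train_class_counts)
# loss = F.cross_entropy(logits, targets, weight=w)
```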
Anchor Loss: Modulating Loss Scale Based on Prediction Difficulty
TLDR
A novel loss function is proposed that dynamically re-scales the cross entropy based on a sample’s prediction difficulty, measured by the confidence-score gap between the positive and negative labels.
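Based only on the one-sentence summary above (not the paper’s exact definition), the modulation can be illustrated as scaling the cross entropy by the gap between the hardest negative’s confidence and the true class’s confidence; `gamma` is an assumed hyperparameter:

```python
import torch
import torch.nn.functional as F

def gap_modulated_ce(logits, target, gamma=0.5):
    """Illustrative re-scaling of the cross entropy by the confidence
    gap between the hardest negative class and the true class."""
    probs = logits.softmax(dim=1)
    p_true = probs.gather(1, target[:, None]).squeeze(1)             # (B,)
    p_hardest_neg = probs.scatter(1, target[:, None], 0.0).max(dim=1).values
    modulator = (1.0 + p_hardest_neg - p_true) ** gamma              # >= 0
    ce = F.cross_entropy(logits, target, reduction="none")
    return (modulator * ce).mean()
```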