Corpus ID: 236428622

Parametric Contrastive Learning

@article{Cui2021ParametricCL,
  title={Parametric Contrastive Learning},
  author={Jiequan Cui and Zhisheng Zhong and Shu Liu and Bei Yu and Jiaya Jia},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.12028}
}
In this paper, we propose Parametric Contrastive Learning (PaCo) to tackle long-tailed recognition. Based on theoretical analysis, we observe that the supervised contrastive loss tends to be biased toward high-frequency classes and thus increases the difficulty of imbalanced learning. We introduce a set of parametric, class-wise learnable centers to rebalance from an optimization perspective. Further, we analyze our PaCo loss under a balanced setting. Our analysis demonstrates that PaCo can adaptively enhance the…
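
For a concrete picture of the idea, here is a minimal PyTorch sketch of a supervised-contrastive-style loss whose contrast set is extended with learnable class centers. The class name, the alpha weighting of center positives, and all defaults are illustrative assumptions, not the authors' released implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PaCoLossSketch(nn.Module):
        # Hypothetical sketch: a supervised contrastive loss over keys
        # plus C learnable, class-wise centers (names/defaults assumed).
        def __init__(self, num_classes, dim, temperature=0.07, alpha=0.05):
            super().__init__()
            self.centers = nn.Parameter(torch.randn(num_classes, dim))
            self.t = temperature
            self.alpha = alpha  # assumed weight on the center positives

        def forward(self, query, keys, key_labels, labels):
            # query: (B, d) normalized anchors; keys: (K, d) normalized
            # contrast features (e.g. a momentum queue) with labels (K,)
            centers = F.normalize(self.centers, dim=1)          # (C, d)
            contrast = torch.cat([keys, centers], dim=0)        # (K + C, d)
            logits = query @ contrast.t() / self.t              # (B, K + C)
            # positives: same-class keys, plus the anchor's own class center
            pos_keys = (labels[:, None] == key_labels[None, :]).float()
            pos_ctr = F.one_hot(labels, centers.size(0)).float()
            pos = torch.cat([pos_keys, self.alpha * pos_ctr], dim=1)
            log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
            return -(pos * log_prob).sum(1).div(pos.sum(1).clamp(min=1e-12)).mean()

Because the centers are trained jointly with the encoder, every class, however rare, always contributes at least one positive (its center), which is one plausible reading of rebalancing "from an optimization perspective."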
1 Citation
Deep Long-Tailed Learning: A Survey
  • Yifan Zhang, Bingyi Kang, Bryan Hooi, Shuicheng Yan, Jiashi Feng
  • Computer Science
  • 2021
TLDR
A comprehensive survey of recent advances in deep long-tailed learning, highlighting important applications of deep long-tailed learning and identifying several promising directions for future research.

References

Showing 1-10 of 57 references
Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss
TLDR
A theoretically principled label-distribution-aware margin (LDAM) loss, motivated by minimizing a margin-based generalization bound, is proposed; it replaces the standard cross-entropy objective during training and can be combined with prior strategies for class-imbalanced training such as re-weighting or re-sampling.
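
The core mechanism, subtracting a per-class margin proportional to n_j^(-1/4) from the true-class logit before cross-entropy, can be sketched as follows; the normalization to max_margin and the logit scale are common choices assumed here, not quoted from the paper.

    import torch
    import torch.nn.functional as F

    def ldam_loss(logits, labels, class_counts, max_margin=0.5, scale=30.0):
        # per-class margin proportional to n_j^(-1/4), rescaled so the
        # largest (rarest-class) margin equals max_margin
        margins = 1.0 / class_counts.float() ** 0.25
        margins = margins * (max_margin / margins.max())
        # subtract the margin from the true-class logit only
        one_hot = F.one_hot(labels, logits.size(1)).float()
        adjusted = logits - margins[None, :] * one_hot
        return F.cross_entropy(scale * adjusted, labels)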
ResLT: Residual Learning for Long-tailed Recognition
TLDR
This work designs an effective residual fusion mechanism: one main branch is optimized to recognize images from all classes, while two residual branches are gradually fused and optimized to enhance images from the medium-plus-tail classes and the tail classes, respectively.
Improving Calibration for Long-Tailed Recognition
TLDR
Motivated by the observation that predicted class probability distributions are highly related to the number of instances per class, this work proposes label-aware smoothing to handle the differing degrees of over-confidence across classes and improve classifier learning.
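
One way to realize this is to make the label-smoothing strength a function of class frequency, so over-confident head classes are smoothed more. The linear schedule below is an assumption for illustration, not the paper's exact formulation.

    import torch

    def label_aware_smoothing_targets(labels, class_counts, eps_min=0.0, eps_max=0.1):
        # assumed schedule: smoothing grows linearly with class frequency
        n = class_counts.float()
        eps = eps_min + (eps_max - eps_min) * (n - n.min()) / (n.max() - n.min() + 1e-12)
        eps_y = eps[labels]                                    # (B,)
        num_classes = class_counts.numel()
        targets = torch.zeros(labels.size(0), num_classes)
        targets += (eps_y / (num_classes - 1))[:, None]        # spread mass
        targets[torch.arange(labels.size(0)), labels] = 1.0 - eps_y
        return targets  # train with soft-target cross-entropy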
Equalization Loss for Long-Tailed Object Recognition
TLDR
This work proposes a simple but effective loss, named the equalization loss, to tackle long-tailed rare categories by simply ignoring the suppressing gradients for rare categories; it won 1st place in the LVIS Challenge 2019.
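
In a sigmoid-based form, the idea reduces to masking, for rare classes, the negative-sample terms that would otherwise suppress them. The tail threshold and the binary formulation below are simplifying assumptions.

    import torch
    import torch.nn.functional as F

    def equalization_loss(logits, labels, class_counts, tail_thresh=100):
        # per-class binary cross-entropy; for rare (tail) classes the
        # suppressing terms from negative samples are ignored
        batch, num_classes = logits.shape
        targets = F.one_hot(labels, num_classes).float()
        is_tail = (class_counts < tail_thresh).float()         # (C,)
        weight = 1.0 - is_tail[None, :] * (1.0 - targets)      # 0 where masked
        bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        return (weight * bce).sum() / batch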
Learning Deep Representation for Imbalanced Classification
TLDR
The representation learned by this approach, when combined with a simple k-nearest-neighbor (kNN) algorithm, shows significant improvements over existing methods on both high- and low-level vision classification tasks that exhibit imbalanced class distributions.
BBN: Bilateral-Branch Network With Cumulative Learning for Long-Tailed Visual Recognition
TLDR
A unified Bilateral-Branch Network (BBN) is proposed to take care of both representation learning and classifier learning simultaneously, with each branch performing its own duty separately.
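
The cumulative-learning idea can be sketched as a mixing coefficient that shifts emphasis from the conventional-sampling branch to the reversed-sampling branch over training; the parabolic decay below follows the commonly reported schedule, with function and argument names assumed for illustration.

    def bbn_cumulative_mix(out_conventional, out_reversed, epoch, total_epochs):
        # alpha decays from 1 to 0: early training emphasizes the
        # conventional branch (representation learning), late training
        # the reversed-sampling branch (classifier rebalancing)
        alpha = 1.0 - (epoch / total_epochs) ** 2
        return alpha * out_conventional + (1.0 - alpha) * out_reversed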
Decoupling Representation and Classifier for Long-Tailed Recognition
TLDR
It is shown that it is possible to outperform carefully designed losses, sampling strategies, and even complex modules with memory by using a straightforward approach that decouples representation learning and classification.
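
The decoupled recipe (classifier re-training, cRT) amounts to two stages: learn features with instance-balanced sampling, then freeze the backbone and re-fit only the classifier under class-balanced sampling. A minimal sketch, with all names and hyperparameters assumed:

    import torch
    import torch.nn.functional as F
    from torch.utils.data import DataLoader, WeightedRandomSampler

    def class_balanced_loader(dataset, labels, batch_size=128):
        # weight each example inversely to its class frequency so every
        # class is sampled with (roughly) equal probability
        labels = torch.as_tensor(labels)
        counts = torch.bincount(labels)
        weights = 1.0 / counts[labels].float()
        sampler = WeightedRandomSampler(weights, num_samples=len(labels))
        return DataLoader(dataset, batch_size=batch_size, sampler=sampler)

    def retrain_classifier(backbone, classifier, loader, epochs=10, lr=0.1):
        # stage 2: representations stay fixed; only the linear head moves
        for p in backbone.parameters():
            p.requires_grad_(False)
        opt = torch.optim.SGD(classifier.parameters(), lr=lr, momentum=0.9)
        for _ in range(epochs):
            for x, y in loader:
                loss = F.cross_entropy(classifier(backbone(x)), y)
                opt.zero_grad()
                loss.backward()
                opt.step()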
Deep Imbalanced Learning for Face Recognition and Attribute Prediction
TLDR
Cluster-based Large Margin Local Embedding (CLMLE), when combined with a simple k-nearest-cluster algorithm, shows significant accuracy improvements over existing methods on both face recognition and face attribute prediction tasks that exhibit imbalanced class distributions.
Improved Baselines with Momentum Contrastive Learning
TLDR
With simple modifications to MoCo, this note establishes stronger baselines that outperform SimCLR and do not require large training batches, in the hope of making state-of-the-art unsupervised learning research more accessible.
Piecewise Classifier Mappings: Learning Fine-Grained Learners for Novel Categories With Few Examples
TLDR
An end-to-end trainable deep network, inspired by state-of-the-art fine-grained recognition models and tailored to the few-shot fine-grained (FSFG) task, is proposed; it generates decision boundaries by learning a set of more attainable sub-classifiers in a parameter-economic way.