Adaptive Discriminative Regularization for Visual Classification
@inproceedings{Zhao2022AdaptiveDR,
  title={Adaptive Discriminative Regularization for Visual Classification},
  author={Qingsong Zhao and Yi Wang and Shuguang Dou and Chen Gong and Yin Wang and Cairong Zhao},
  year={2022}
}
How to improve discriminative feature learning is central to classification. Existing works address this problem by explicitly increasing inter-class separability and intra-class similarity, either by constructing positive and negative pairs for contrastive learning or by imposing tighter class-separating margins. These methods do not exploit the similarity between different classes because they adhere to the i.i.d. assumption on data. In this paper, we embrace the real-world data distribution setting that…
References
Showing 1-10 of 67 references
Deep Discriminative CNN with Temporal Ensembling for Ambiguously-Labeled Image Classification
- AAAI, 2020
This paper employs deep convolutional neural networks for ambiguously-labeled image classification, adopting the well-known ResNet as the backbone and designing an entropy-based regularizer to enhance discrimination ability.
A Deep Learning Approach to Clustering Visual Arts
- International Journal of Computer Vision, 2022
DELIUS is a DEep learning approach to cLustering vIsUal artS that uses a pre-trained convolutional network to extract features and then feeds these features into a deep embedded clustering model, where the task of mapping the input data to a latent space is jointly optimized with the task of finding a set of cluster centroids in this latent space.
ArcFace: Additive Angular Margin Loss for Deep Face Recognition
- IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
This paper presents arguably the most extensive experimental evaluation against all recent state-of-the-art face recognition methods on ten face recognition benchmarks, and shows that ArcFace consistently outperforms the state of the art and can be easily implemented with negligible computational overhead.
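For reference, the additive angular margin loss summarized here is commonly written, with features and class weights l2-normalized, feature scale s, and margin m, as

\[ \mathcal{L}_{\mathrm{ArcFace}} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{\,s\cos(\theta_{y_i}+m)}}{e^{\,s\cos(\theta_{y_i}+m)} + \sum_{j\neq y_i} e^{\,s\cos\theta_j}}, \]

where \(\theta_j\) is the angle between the i-th feature and the j-th class weight; the margin m is added directly to the target angle.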
SphereFace: Deep Hypersphere Embedding for Face Recognition
- IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017
This paper proposes the angular softmax (A-Softmax) loss, which enables convolutional neural networks (CNNs) to learn angularly discriminative features for the deep face recognition (FR) problem under an open-set protocol.
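As a sketch, the A-Softmax loss is usually stated (with normalized class weights, zero biases, and an integer margin m) as

\[ \mathcal{L}_{\text{A-Softmax}} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{\lVert x_i\rVert\,\psi(\theta_{y_i})}}{e^{\lVert x_i\rVert\,\psi(\theta_{y_i})} + \sum_{j\neq y_i} e^{\lVert x_i\rVert\cos\theta_j}}, \qquad \psi(\theta) = (-1)^k \cos(m\theta) - 2k,\ \ \theta \in \Big[\tfrac{k\pi}{m}, \tfrac{(k+1)\pi}{m}\Big], \]

which imposes a multiplicative angular margin on the target class, in contrast to the additive margin of ArcFace above.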
Symmetric Cross Entropy for Robust Learning With Noisy Labels
- IEEE/CVF International Conference on Computer Vision (ICCV), 2019
The proposed Symmetric Cross Entropy Learning (SL) approach simultaneously addresses both the underlearning and the overfitting problems of cross entropy (CE) in the presence of noisy labels, and empirical results show that SL outperforms state-of-the-art methods.
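For context, SL combines the usual cross entropy with a reverse cross entropy term:

\[ \mathcal{L}_{\mathrm{SL}} = \alpha\,\mathcal{L}_{\mathrm{CE}} + \beta\,\mathcal{L}_{\mathrm{RCE}}, \qquad \mathcal{L}_{\mathrm{CE}} = -\sum_{k} q(k\mid x)\log p(k\mid x), \qquad \mathcal{L}_{\mathrm{RCE}} = -\sum_{k} p(k\mid x)\log q(k\mid x), \]

where q is the (possibly noisy) label distribution, p is the model prediction, and the log 0 terms arising from one-hot q are clipped to a constant; the weights \(\alpha\) and \(\beta\) trade off underlearning against overfitting.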
Large-Margin Softmax Loss for Convolutional Neural Networks
- ICML, 2016
This paper proposes a generalized large-margin softmax (L-Softmax) loss that explicitly encourages intra-class compactness and inter-class separability of learned features, and that can both adjust the desired margin and avoid overfitting.
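For reference, L-Softmax takes the same multiplicative-margin form as A-Softmax but without weight normalization, roughly

\[ \mathcal{L}_{\text{L-Softmax}} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{\lVert W_{y_i}\rVert\lVert x_i\rVert\,\psi(\theta_{y_i})}}{e^{\lVert W_{y_i}\rVert\lVert x_i\rVert\,\psi(\theta_{y_i})} + \sum_{j\neq y_i} e^{\lVert W_j\rVert\lVert x_i\rVert\cos\theta_j}}, \]

with \(\psi\) the same piecewise monotonic extension of \(\cos(m\theta)\) sketched above.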
Delving Deep Into Label Smoothing
- IEEE Transactions on Image Processing, 2021
An Online Label Smoothing (OLS) strategy is presented that generates soft labels from the statistics of the model's predictions for the target category; it can significantly improve the robustness of DNN models to noisy labels compared to current label smoothing approaches.
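For context, classical label smoothing replaces the one-hot target with

\[ \tilde{y}_k = (1-\varepsilon)\,\delta_{k,y} + \frac{\varepsilon}{K}, \]

for K classes and smoothing factor \(\varepsilon\); as the summary above states, OLS instead builds the soft part of the target from per-class statistics of the model's own predictions accumulated during training (the exact accumulation scheme is given in the paper, not here).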
Probabilistic Face Embeddings
- IEEE/CVF International Conference on Computer Vision (ICCV), 2019
The proposed Probabilistic Face Embeddings (PFEs) represent each face image as a Gaussian distribution in the latent space, and converting existing deterministic embeddings into PFEs can improve their face recognition performance.
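As a sketch (up to an additive constant), two PFEs with per-dimension means \(\mu\) and variances \(\sigma^2\) are compared with a mutual likelihood score of the form

\[ s(x_i, x_j) = -\frac{1}{2}\sum_{l=1}^{D}\left( \frac{\big(\mu_i^{(l)}-\mu_j^{(l)}\big)^2}{\sigma_i^{2(l)}+\sigma_j^{2(l)}} + \log\!\big(\sigma_i^{2(l)}+\sigma_j^{2(l)}\big)\right), \]

i.e., the log-likelihood that the two images share the same underlying identity representation.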
Distribution of Classification Margins: Are All Data Equal?
- arXiv, 2021
It is shown that the training set can be dynamically reduced by more than 99% without significant loss of performance, and that the resulting subset of “high capacity” features is not consistent across different training runs, which is in line with the theoretical claim that all training points should converge to the same asymptotic margin under SGD.
Focal Loss for Dense Object Detection
- IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020
This paper addresses the extreme foreground-background class imbalance encountered during training of dense detectors by reshaping the standard cross entropy loss so that it down-weights the loss assigned to well-classified examples; the resulting Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector.
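For reference, with \(p_t\) denoting the model's estimated probability for the ground-truth class, the focal loss is

\[ \mathrm{FL}(p_t) = -\alpha_t\,(1-p_t)^{\gamma}\,\log(p_t), \]

so the modulating factor \((1-p_t)^{\gamma}\) down-weights well-classified examples (large \(p_t\)) and training effort shifts toward hard ones.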