Corpus ID: 255394078

Adaptive Discriminative Regularization for Visual Classification

@inproceedings{Zhao2022AdaptiveDR,
  title={Adaptive Discriminative Regularization for Visual Classification},
  author={Qingsong Zhao and Yi Wang and Shuguang Dou and Chen Gong and Yin Wang and Cairong Zhao},
  year={2022}
}
How to improve discriminative feature learning is central to classification. Existing works address this problem by explicitly increasing inter-class separability and intra-class similarity, whether by constructing positive and negative pairs for contrastive learning or by imposing tighter class-separating margins. These methods do not exploit the similarity between different classes, as they adhere to the i.i.d. assumption on the data. In this paper, we embrace the real-world data distribution setting that… 
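As a concrete illustration of the pair-based route the abstract mentions, here is a minimal PyTorch sketch of a classic pairwise contrastive loss; the function name, margin value, and interface are illustrative, not this paper's method:

```python
import torch
import torch.nn.functional as F

def contrastive_pair_loss(z1, z2, same_class, margin=1.0):
    """Classic pairwise contrastive loss: pull embeddings of the same
    class together, push different classes at least `margin` apart."""
    d = F.pairwise_distance(z1, z2)                   # Euclidean distance per pair
    pos = same_class.float() * d.pow(2)               # intra-class: shrink distance
    neg = (1 - same_class.float()) * F.relu(margin - d).pow(2)  # inter-class: enforce margin
    return (pos + neg).mean()
```

Here `z1` and `z2` are `(N, D)` embedding batches and `same_class` is a boolean `(N,)` pair label.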

References

Showing 1-10 of 67 references

Deep Discriminative CNN with Temporal Ensembling for Ambiguously-Labeled Image Classification

This paper employs deep convolutional neural networks for ambiguously-labeled image classification, adopting the well-known ResNet as the backbone, and designs an entropy-based regularizer to enhance the discrimination ability.
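The regularizer itself is not spelled out above; a plausible minimal sketch, assuming it penalizes high prediction entropy on top of cross entropy (the function name and weight `lam` are illustrative):

```python
import torch
import torch.nn.functional as F

def entropy_regularized_ce(logits, targets, lam=0.1):
    """Cross entropy plus a prediction-entropy penalty: low entropy means a
    confident softmax output, which sharpens class discrimination."""
    ce = F.cross_entropy(logits, targets)
    p = F.softmax(logits, dim=1)
    entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=1).mean()
    return ce + lam * entropy
```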

A Deep Learning Approach to Clustering Visual Arts

DELIUS is a DEep learning approach to cLustering vIsUal artS that uses a pre-trained convolutional network to extract features and then feeds these features into a deep embedded clustering model, where the task of mapping the input data to a latent space is jointly optimized with the task of finding a set of cluster centroids in this latent space.
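A short sketch of the deep embedded clustering objective being referred to, assuming the standard DEC formulation (Student's t soft assignments plus a sharpened target distribution); names are illustrative:

```python
import torch

def dec_soft_assignment(z, centroids, alpha=1.0):
    """Student's t soft assignment q_ij: similarity between embedding
    z_i and cluster centroid mu_j, normalized over clusters."""
    d2 = torch.cdist(z, centroids).pow(2)            # (n_points, n_clusters)
    q = (1.0 + d2 / alpha).pow(-(alpha + 1.0) / 2.0)
    return q / q.sum(dim=1, keepdim=True)

def dec_target_distribution(q):
    """Sharpened target p_ij that emphasizes high-confidence assignments."""
    weight = q.pow(2) / q.sum(dim=0, keepdim=True)
    return weight / weight.sum(dim=1, keepdim=True)
```

Training then minimizes `F.kl_div(q.log(), p, reduction='batchmean')` jointly over the encoder and the centroids, with `p` recomputed periodically.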

ArcFace: Additive Angular Margin Loss for Deep Face Recognition

This paper presents arguably the most extensive experimental evaluation against all recent state-of-the-art face recognition methods on ten face recognition benchmarks, and shows that ArcFace consistently outperforms the state of the art and can be easily implemented with negligible computational overhead.
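A minimal PyTorch sketch of the additive angular margin, using the commonly cited defaults s=64 and m=0.5; the function name is illustrative, and the numerical edge-case handling of the official release is omitted:

```python
import torch
import torch.nn.functional as F

def arcface_logits(embeddings, weight, labels, s=64.0, m=0.5):
    """Additive angular margin: replace cos(theta_y) with cos(theta_y + m)
    for the target class only, then scale by s before cross entropy."""
    # cosine similarity between L2-normalized features and class weights
    cosine = F.normalize(embeddings) @ F.normalize(weight).t()
    theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
    target = F.one_hot(labels, num_classes=weight.size(0)).bool()
    logits = torch.where(target, torch.cos(theta + m), cosine)
    return s * logits

# usage: loss = F.cross_entropy(arcface_logits(feats, W, y), y)
```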

SphereFace: Deep Hypersphere Embedding for Face Recognition

This paper proposes the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features for the deep face recognition (FR) problem under the open-set protocol.

Symmetric Cross Entropy for Robust Learning With Noisy Labels

The proposed Symmetric cross entropy Learning (SL) approach simultaneously addresses both the under-learning and overfitting problems of CE in the presence of noisy labels, and empirically shows that SL outperforms state-of-the-art methods.
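A short sketch of the SL objective, assuming PyTorch; `alpha` and `beta` are illustrative defaults, and the clip on log(0) follows common open-source implementations rather than anything stated above:

```python
import torch
import torch.nn.functional as F

def symmetric_cross_entropy(logits, targets, alpha=0.1, beta=1.0):
    """SL = alpha * CE + beta * RCE, where RCE swaps the roles of the
    prediction and the one-hot label; log(0) is made finite by clipping."""
    ce = F.cross_entropy(logits, targets)
    p = F.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    rce = -(p * one_hot.clamp_min(1e-4).log()).sum(dim=1).mean()
    return alpha * ce + beta * rce
```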

Large-Margin Softmax Loss for Convolutional Neural Networks

A generalized large-margin softmax (L-Softmax) loss is proposed that explicitly encourages intra-class compactness and inter-class separability between learned features, and that can both adjust the desired margin and avoid overfitting.
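Both L-Softmax and SphereFace's A-Softmax above rest on the same multiplicative angular margin, realized through a piecewise monotonic extension of cos(m·theta). A hedged PyTorch sketch of that function; the name `psi` and the default m=4 are illustrative:

```python
import math
import torch

def psi(theta, m=4):
    """Monotonic extension of cos(m * theta) over [0, pi]:
    psi(theta) = (-1)^k * cos(m * theta) - 2k
    for theta in [k * pi / m, (k + 1) * pi / m]."""
    k = torch.floor(theta * m / math.pi)
    sign = 1.0 - 2.0 * (k % 2)                 # (-1)^k without pow on a negative base
    return sign * torch.cos(m * theta) - 2.0 * k
```

In L-Softmax the target-class logit ||W_y||·||x||·cos(theta_y) is replaced by ||W_y||·||x||·psi(theta_y); A-Softmax does the same after L2-normalizing the class weights.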

Delving Deep Into Label Smoothing

An Online Label Smoothing (OLS) strategy is presented that generates soft labels from the statistics of the model's predictions for the target category; compared to current label smoothing approaches, it significantly improves the robustness of DNN models to noisy labels.
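A rough PyTorch sketch of that strategy as summarized above: a per-class soft-label table accumulated from correctly classified predictions during one epoch and used as the training target in the next. Class and method names are illustrative, tensors stay on CPU for brevity, and details such as mixing with hard-label CE are omitted:

```python
import torch
import torch.nn.functional as F

class OnlineLabelSmoothing:
    """Maintains a (num_classes x num_classes) soft-label table built from
    the softmax outputs of correctly classified samples."""
    def __init__(self, num_classes):
        # start from uniform soft labels, i.e. plain label smoothing
        self.soft = torch.full((num_classes, num_classes), 1.0 / num_classes)
        self.accum = torch.zeros(num_classes, num_classes)
        self.count = torch.zeros(num_classes)

    def loss(self, logits, targets):
        p = F.softmax(logits, dim=1)
        correct = p.argmax(dim=1) == targets
        # accumulate statistics that define next epoch's soft labels
        self.accum.index_add_(0, targets[correct], p[correct].detach())
        self.count.index_add_(0, targets[correct],
                              torch.ones(int(correct.sum())))
        soft_targets = self.soft[targets]
        return -(soft_targets * F.log_softmax(logits, dim=1)).sum(1).mean()

    def end_epoch(self):
        mask = self.count > 0
        self.soft[mask] = self.accum[mask] / self.count[mask].unsqueeze(1)
        self.accum.zero_()
        self.count.zero_()
```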

Probabilistic Face Embeddings

The proposed Probabilistic Face Embeddings (PFEs) represent each face image as a Gaussian distribution in the latent space; converting deterministic embeddings into PFEs improves their face recognition performance.
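The comparison score behind PFEs is the mutual likelihood score between two Gaussian embeddings; a small sketch, with the variance parameterization and names assumed and the additive constant dropped:

```python
import torch

def mutual_likelihood_score(mu1, s1, mu2, s2):
    """MLS between Gaussian embeddings N(mu1, diag(s1)) and N(mu2, diag(s2));
    s1, s2 are per-dimension variances. Higher means more likely the same
    identity. The constant -D/2 * log(2*pi) is omitted."""
    var = s1 + s2
    return -0.5 * ((mu1 - mu2).pow(2) / var + var.log()).sum(dim=-1)
```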

Distribution of Classification Margins: Are All Data Equal?

It is shown that the training set can be dynamically reduced by more than 99% without significant loss of performance, and that the resulting subset of “high capacity” features is not consistent across different training runs, in line with the theoretical claim that all training points should converge to the same asymptotic margin under SGD.
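For reference, the classification margin being analyzed is simply the target logit minus the largest non-target logit; a tiny sketch (names illustrative), after which training-set reduction amounts to keeping the smallest-margin samples:

```python
import torch

def classification_margins(logits, targets):
    """Per-sample margin: target logit minus the largest non-target logit.
    Positive means correctly classified; small values mark hard examples."""
    target_logit = logits.gather(1, targets.unsqueeze(1)).squeeze(1)
    others = logits.scatter(1, targets.unsqueeze(1), float('-inf'))
    return target_logit - others.max(dim=1).values
```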

Focal Loss for Dense Object Detection

This paper addresses the extreme foreground-background class imbalance encountered when training dense detectors by reshaping the standard cross entropy loss so that it down-weights the loss assigned to well-classified examples. The resulting Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector.
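A compact multiclass sketch of that reshaping, assuming PyTorch; gamma=2 and alpha=0.25 are the paper's defaults, though alpha here is a single scalar rather than the per-class weighting used for binary detection:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t): the (1 - p_t)^gamma
    factor shrinks the loss of well-classified examples (p_t near 1),
    so training focuses on hard ones."""
    log_pt = F.log_softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    return (-alpha * (1 - pt).pow(gamma) * log_pt).mean()
```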
...