On Background Bias in Deep Metric Learning

@article{Kobs2022OnBB,
  title={On Background Bias in Deep Metric Learning},
  author={Konstantin Kobs and Andreas Hotho},
  journal={ArXiv},
  year={2022},
  volume={abs/2210.01615}
}
Deep Metric Learning trains a neural network to map input images to a lower-dimensional embedding space such that similar images are closer together than dissimilar images. When used for item retrieval, a query image is embedded using the trained model and the closest items from a database storing their respective embeddings are returned as the most similar items for the query. Especially in product retrieval, where a user searches for a certain product by taking a photo of it, the image… 
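The retrieval pipeline the abstract describes — embed a query, then return the nearest database items in the embedding space — can be sketched in a few lines of NumPy. This is a minimal sketch, not the paper's implementation: the trained model is replaced by random stand-in embeddings, and the names `database`, `query`, and `k` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for embeddings produced by a trained metric-learning model;
# the model itself is outside the scope of this sketch.
database = rng.normal(size=(1000, 128))   # 1000 items, 128-dim embeddings
query = rng.normal(size=(128,))           # embedding of the query image

# L2-normalize so the dot product equals cosine similarity.
database /= np.linalg.norm(database, axis=1, keepdims=True)
query /= np.linalg.norm(query)

# Return the k database items most similar to the query.
k = 5
similarities = database @ query
top_k = np.argsort(-similarities)[:k]
```

In practice the brute-force `argsort` would be replaced by an approximate nearest-neighbor index for large databases, but the retrieval logic is the same.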


References (showing 1-10 of 32)

Classification is a Strong Baseline for Deep Metric Learning

This paper evaluates on several standard retrieval datasets, such as CARS-196, CUB-200-2011, Stanford Online Products, and In-Shop, for image retrieval and clustering, and establishes that the classification-based approach is competitive across different feature dimensions and base feature networks.

Deep Metric Learning via Lifted Structured Feature Embedding

An algorithm that takes full advantage of the training batches by lifting the vector of pairwise distances within the batch to the matrix of pairwise distances, enabling the network to learn state-of-the-art feature embeddings by optimizing a novel structured prediction objective on the lifted problem.
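The "lifting" step above — turning a batch of embeddings into the full matrix of pairwise distances — can be sketched as follows. This is a generic illustration of the lifted distance matrix, not the paper's exact code; it uses the standard expansion of the squared Euclidean distance.

```python
import numpy as np

def pairwise_distances(x):
    """Lift a batch x of shape (n, d) to its (n, n) matrix of pairwise
    Euclidean distances, D[i, j] = ||x_i - x_j||, using the identity
    ||a - b||^2 = ||a||^2 + ||b||^2 - 2 * a.b."""
    sq = np.sum(x ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (x @ x.T)
    # Clamp tiny negative values caused by floating-point error.
    return np.sqrt(np.maximum(d2, 0.0))
```

A structured loss can then weigh every positive and negative pair in the batch at once, instead of sampling individual pairs or triplets.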

Do Different Deep Metric Learning Losses Lead to Similar Learned Features?

A two-step analysis is conducted to extract and compare the learned visual features of the same model architecture trained with different loss functions and shows that some seemingly irrelevant properties can have significant influence on the resulting embedding.

Deep Metric Learning: A Survey

This article is the first comprehensive study in which sampling strategies, distance metrics, and network structures are systematically analyzed and evaluated as a whole, supported by a quantitative comparison of the methods.

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
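The core operation of Batch Normalization can be sketched in a few lines: each feature is normalized over the batch dimension, then rescaled and shifted by learned parameters. This is a minimal inference-style sketch (no running statistics, no gradients), with `gamma` and `beta` standing in for the learned scale and shift.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature of x (shape (batch, features)) over the
    batch dimension, then apply the learned scale gamma and shift beta."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```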

Checking Robustness of Representations Learned by Deep Neural Networks

The method indicates which classes the deep model recognizes using non-robust representations, i.e.…

SphereFace: Deep Hypersphere Embedding for Face Recognition

This paper proposes the angular softmax (A-Softmax) loss, which enables convolutional neural networks (CNNs) to learn angularly discriminative features for the deep face recognition (FR) problem under an open-set protocol.

Multi-Similarity Loss With General Pair Weighting for Deep Metric Learning

A General Pair Weighting framework is established, which casts the sampling problem of deep metric learning into a unified view of pair weighting through gradient analysis, providing a powerful tool for understanding recent pair-based loss functions.
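The Multi-Similarity loss that results from this pair-weighting view can be sketched as below. This is a simplified version without the paper's pair-mining step; the hyperparameter names `alpha`, `beta`, and `lam` follow common usage, and the values are illustrative defaults, not the authors' tuned settings.

```python
import numpy as np

def multi_similarity_loss(sim, labels, alpha=2.0, beta=50.0, lam=0.5):
    """Multi-Similarity loss on a batch cosine-similarity matrix `sim`
    (shape (n, n)) with integer class labels; pair mining omitted."""
    n = len(labels)
    idx = np.arange(n)
    loss = 0.0
    for i in range(n):
        pos = (labels == labels[i]) & (idx != i)   # same class, not self
        neg = labels != labels[i]                  # different class
        # Soft-weighted log-sum-exp over positive and negative pairs.
        pos_term = np.log1p(np.sum(np.exp(-alpha * (sim[i, pos] - lam))))
        neg_term = np.log1p(np.sum(np.exp(beta * (sim[i, neg] - lam))))
        loss += pos_term / alpha + neg_term / beta
    return loss / n
```

The gradient of each log-sum-exp term weights every pair relative to the others, which is exactly the "general pair weighting" perspective the summary describes.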

NormFace: L2 Hypersphere Embedding for Face Verification

This work identifies and studies four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings, and proposes two strategies for training with normalized features.

ArcFace: Additive Angular Margin Loss for Deep Face Recognition

This paper presents arguably the most extensive experimental evaluation against all recent state-of-the-art face recognition methods on ten face recognition benchmarks, and shows that ArcFace consistently outperforms the state of the art and can be easily implemented with negligible computational overhead.
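ArcFace's additive angular margin can be sketched as follows: features and class weights are L2-normalized so logits become cosines of angles, and the margin `m` is added to the angle of the target class before rescaling by `s`. This is a minimal NumPy sketch of the logit computation only (no softmax or training loop); the defaults `s=64.0` and `m=0.5` follow values commonly reported for ArcFace but are illustrative here.

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=64.0, m=0.5):
    """Additive angular margin logits: s * cos(theta + m) for each
    sample's target class, s * cos(theta) for all other classes."""
    # Normalize features and class-center weights so that the dot
    # product is the cosine of the angle between them.
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = np.clip(e @ w.T, -1.0, 1.0)
    theta = np.arccos(cos)
    logits = cos.copy()
    rows = np.arange(len(labels))
    # Penalize only the target-class logit by widening its angle.
    logits[rows, labels] = np.cos(theta[rows, labels] + m)
    return s * logits
```

Because the margin is applied in angle space, it enforces a geodesic gap between classes on the hypersphere, which is what makes the loss cheap to add to an existing softmax classifier.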