Do Different Deep Metric Learning Losses Lead to Similar Learned Features?

@inproceedings{Kobs2021DoDD,
  title={Do Different Deep Metric Learning Losses Lead to Similar Learned Features?},
  author={Konstantin Kobs and Michael Steininger and Andrzej Dulny and Andreas Hotho},
  booktitle={2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021},
  pages={10624--10634}
}
Recent studies have shown that many deep metric learning loss functions perform very similarly under the same experimental conditions. One potential reason for this unexpected result is that all losses let the network focus on similar image regions or properties. In this paper, we investigate this by conducting a two-step analysis to extract and compare the learned visual features of the same model architecture trained with different loss functions: First, we compare the learned features on the… 


On Background Bias in Deep Metric Learning

It is shown that Deep Metric Learning networks are prone to so-called background bias, which can lead to a severe decrease in retrieval performance when changing the image background during inference, and that replacing the background of images during training with random background images alleviates this issue.

InDiReCT: Language-Guided Zero-Shot Deep Metric Learning for Images

An analysis reveals that InDiReCT learns to focus on regions of the image that correlate with the desired similarity notion, which makes it a fast-to-train and easy-to-use method for creating custom embedding spaces using only natural language.

References

Showing 1-10 of 51 references

Sampling Matters in Deep Embedding Learning

This paper proposes distance-weighted sampling, which selects more informative and stable examples than traditional approaches, and shows that a simple margin-based loss is sufficient to outperform all other loss functions.
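The core idea of distance-weighted sampling can be sketched as follows: negatives are drawn with probability inversely proportional to the analytic density of pairwise distances between points uniformly distributed on the unit hypersphere. A minimal NumPy sketch, assuming unit-normalized embeddings; the function name and the clipping constant are illustrative, not from the paper:

```python
import numpy as np

def distance_weighted_probs(dists, dim, cutoff=0.5):
    """Sampling weights for negatives (a sketch of distance-weighted
    sampling): invert the density q(d) of pairwise distances between
    uniformly distributed points on the unit (dim-1)-sphere,
        q(d) ∝ d^(dim-2) * (1 - d^2/4)^((dim-3)/2),
    so negatives are drawn roughly uniformly over distance rather than
    concentrated where most pairs lie."""
    d = np.maximum(dists, cutoff)  # clip small distances to avoid noisy gradients
    log_q = (dim - 2) * np.log(d) + ((dim - 3) / 2.0) * np.log(1.0 - 0.25 * d**2)
    w = np.exp(-log_q)             # inverse density = sampling weight
    return w / w.sum()             # normalize to a probability distribution
```

Because the distance density peaks near sqrt(2) in high dimensions, the inverse weighting upweights closer (harder, but not degenerate) negatives.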

Deep Metric Learning via Lifted Structured Feature Embedding

An algorithm that takes full advantage of the training batches by lifting the vector of pairwise distances within the batch to the matrix of pairwise distances, enabling the network to learn state-of-the-art feature embeddings by optimizing a novel structured prediction objective on the lifted problem.

Visual Explanation for Deep Metric Learning

This work proposes an intuitive idea for showing which regions contribute most to the overall similarity of two input images: the final activation is decomposed into point-to-point activation intensities between the two images, uncovering the relationships between different regions.

Revisiting Training Strategies and Generalization Performance in Deep Metric Learning

A simple yet effective training regularization is proposed to reliably boost the performance of ranking-based DML models on various standard benchmark datasets.

Improved Deep Metric Learning with Multi-class N-pair Loss Objective

This paper proposes a new metric learning objective called multi-class N-pair loss, which generalizes triplet loss by allowing joint comparison among more than one negative example and reduces the computational burden of evaluating deep embedding vectors via an efficient batch construction strategy using only N pairs of examples.
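In an N-pair batch of (anchor, positive) pairs, each anchor treats the positives of the other N-1 classes as its negatives. A minimal NumPy sketch of this loss under that batch construction; the function name is illustrative:

```python
import numpy as np

def n_pair_loss(anchors, positives):
    """Multi-class N-pair loss (a sketch): anchors and positives are
    (N, D) arrays, row i of each belonging to class i. For anchor f_i,
    the positives f_j+ (j != i) act as negatives:
        L = mean_i log(1 + sum_{j != i} exp(f_i . f_j+ - f_i . f_i+))
    """
    sim = anchors @ positives.T        # (N, N) anchor-positive similarities
    pos = np.diag(sim)                 # f_i . f_i+ for each class i
    logits = sim - pos[:, None]        # f_i . f_j+ - f_i . f_i+
    np.fill_diagonal(logits, -np.inf)  # exclude each anchor's own positive
    return np.mean(np.log1p(np.exp(logits).sum(axis=1)))
```

The matrix form shows the efficiency argument: one batch of N pairs yields N-1 negatives per anchor without extra embedding evaluations.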

MIC: Mining Interclass Characteristics for Improved Metric Learning

This work proposes a novel surrogate task to learn visual characteristics shared across classes with a separate encoder, trained jointly with the encoder for class information by reducing their mutual information.

Deep Metric Learning: A Survey

Considered the first comprehensive survey in which sampling strategies, distance metrics, and network structures are systematically analyzed and evaluated as a whole, supported by quantitative comparisons of the covered methods.

SoftTriple Loss: Deep Metric Learning Without Triplet Sampling

The SoftTriple loss is proposed to extend the SoftMax loss, which is shown to be equivalent to a smoothed triplet loss where each class has a single center, to multiple centers per class, removing the need for triplet sampling.

Signal-To-Noise Ratio: A Robust Distance Metric for Deep Metric Learning

This paper proposes a robust SNR distance metric based on Signal-to-Noise Ratio (SNR) for measuring the similarity of image pairs for deep metric learning and proposes Deep SNR-based Metric Learning (DSML) to generate discriminative feature embeddings.
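The SNR distance treats the anchor embedding as signal and the difference between the pair as noise, so the distance is a noise-to-signal variance ratio. A minimal NumPy sketch under that reading; the function name is illustrative:

```python
import numpy as np

def snr_distance(anchor, other):
    """SNR distance between two embedding vectors (a sketch):
    the variance of the difference (noise) divided by the variance
    of the anchor (signal). Larger values indicate less similar pairs;
    identical embeddings give a distance of zero."""
    return np.var(anchor - other) / np.var(anchor)
```

Note that, unlike Euclidean distance, this measure is asymmetric in its arguments, since only the anchor's variance appears in the denominator.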

Classification is a Strong Baseline for Deep Metric Learning

This paper evaluates on several standard retrieval datasets, such as Cars-196, CUB-200-2011, Stanford Online Products, and In-Shop, for image retrieval and clustering, and establishes that the classification-based approach is competitive across different feature dimensions and base feature networks.
...