A Simple and Effective Framework for Pairwise Deep Metric Learning

@article{Qi2020ASA,
  title={A Simple and Effective Framework for Pairwise Deep Metric Learning},
  author={Qi Qi and Yan Yan and Zixuan Wu and Xiaoyu Wang and Tianbao Yang},
  journal={ArXiv},
  year={2020},
  volume={abs/1912.11194}
}
Deep metric learning (DML) has received much attention in deep learning due to its wide applications in computer vision. Previous studies have focused on designing complicated losses and hard example mining methods, which are mostly heuristic and lack theoretical understanding. In this paper, we cast DML as a simple pairwise binary classification problem that classifies a pair of examples as similar or dissimilar. This view identifies the most critical issue in this problem: imbalanced data pairs…
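The abstract is truncated, but the core idea (every pair becomes a binary classification example, and similar pairs are vastly outnumbered by dissimilar ones) can be sketched. A minimal PyTorch sketch, assuming a margin-based per-pair loss and a KL-regularized, DRO-style softmax reweighting of pair losses; the function name, margin, and temperature `lam` are illustrative rather than the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def pairwise_dro_loss(embeddings, labels, margin=0.5, lam=1.0):
    """Sketch: treat every pair in the batch as a binary classification
    example (similar vs. dissimilar) and reweight the per-pair losses
    with a DRO-style softmax, which upweights the rare, hard pairs."""
    emb = F.normalize(embeddings, dim=1)                  # unit-norm embeddings
    sim = emb @ emb.t()                                   # cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    iu, ju = torch.triu_indices(len(labels), len(labels), offset=1,
                                device=labels.device)     # each pair once
    s, y = sim[iu, ju], same[iu, ju].float()
    # Margin-based per-pair loss: pull similar pairs together,
    # push dissimilar pairs below the margin.
    pair_loss = y * (1.0 - s) + (1.0 - y) * F.relu(s - margin)
    # DRO-style weights: softmax over losses, detached so they act as
    # importance weights rather than a gradient path.
    w = torch.softmax(pair_loss.detach() / lam, dim=0)
    return (w * pair_loss).sum()
```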
Towards Visually Explaining Similarity Models
TLDR
This work presents a method to generate gradient-based visual attention for image similarity predictors by relying solely on the learned feature embedding, and shows that the approach can be applied to any kind of CNN-based similarity architecture, an important step towards generic visual explainability.
Attentional Biased Stochastic Gradient for Imbalanced Classification
TLDR
The method is a simple modification of momentum SGD in which an attentional mechanism assigns an individual importance weight to each gradient in the mini-batch; the scaling factor is interpreted as the regularization parameter in the framework of information-regularized distributionally robust optimization.
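A minimal sketch of the modification described above, assuming the importance weights are the softmax of the per-sample losses scaled by the regularization parameter; the function name is illustrative:

```python
import torch

def absgd_weighted_loss(per_sample_losses, lam=1.0):
    """Sketch: attention weights proportional to exp(loss / lam),
    normalized over the mini-batch. A large lam flattens the weights
    toward plain averaging (ordinary momentum SGD); a small lam focuses
    the update on hard examples."""
    w = torch.softmax(per_sample_losses.detach() / lam, dim=0)
    return (w * per_sample_losses).sum()

# Usage with a criterion built with reduction='none':
#   loss = absgd_weighted_loss(criterion(outputs, targets), lam=0.5)
#   loss.backward(); optimizer.step()   # optimizer = momentum SGD
```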
Introspective Deep Metric Learning
TLDR
An introspective deep metric learning (IDML) framework for uncertainty-aware comparisons of images that attains state-of-the-art performance on the widely used CUB-200-2011, Cars196, and Stanford Online Products datasets for image retrieval.
Learning Distributionally Robust Models at Scale via Composite Optimization
TLDR
This paper shows that different variants of DRO are simply instances of finite-sum composite optimization, provides scalable methods for them, and presents empirical results demonstrating the effectiveness of the proposed algorithm relative to the prior art for learning robust models from very large datasets.
An Online Method for A Class of Distributionally Robust Optimization with Non-convex Objectives
TLDR
A class of DRO with a KL divergence regularization on the dual variables is considered; the min-max problem is transformed into a compositional minimization problem, and practical duality-free online stochastic methods that do not require a large mini-batch size are proposed.
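Concretely, with KL divergence regularization the DRO problem reduces to the compositional objective min_w lam * log E[exp(loss(w; z)/lam)], so an online method only needs a running scalar estimate of the inner expectation rather than a large mini-batch. A minimal sketch of one step under that reading; the moving-average estimator and `beta` are illustrative, not the paper's exact update:

```python
import torch

def dro_compositional_step(losses, u, optimizer, lam=1.0, beta=0.9):
    """One online step for  min_w  lam * log E[exp(loss(w; z) / lam)].
    `losses`: per-example losses of the current mini-batch (any size).
    `u`: running scalar estimate of the inner expectation E[exp(loss/lam)]."""
    g = torch.exp(losses / lam).mean()           # mini-batch inner estimate
    u = beta * u + (1 - beta) * g.detach()       # moving average, duality-free
    surrogate = lam * g / u                      # gradient matches lam*log(u(w))
    optimizer.zero_grad()
    surrogate.backward()
    optimizer.step()
    return u
```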
Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels
TLDR
This paper proposes a framework called Class2Simi, which transforms data points with noisy class labels into data pairs with noisy similarity labels, where a similarity label denotes whether a pair shares its class label or not, and changes the loss computation on top of the model prediction into a pairwise form.
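The transformation itself is mechanical: enumerate pairs and mark whether the two (possibly corrupted) class labels agree. A minimal sketch, with an illustrative function name:

```python
import torch

def class_to_simi(labels):
    """Sketch of the Class2Simi transformation: turn (possibly noisy)
    class labels into pairwise similarity labels, 1 if a pair shares
    its class label and 0 otherwise."""
    n = len(labels)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    iu, ju = torch.triu_indices(n, n, offset=1, device=labels.device)
    return same[iu, ju].long()                   # one label per unordered pair
```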
Distributionally Robust Optimization for Deep Kernel Multiple Instance Learning
TLDR
A general GP mixture framework that simultaneously considers multiple instances through a latent mixture model and augments the GP kernel of fixed basis functions with a deep neural network that learns adaptive basis functions, so that the covariance structure of high-dimensional data can be accurately captured.
Function Shaping in Deep Learning: Summary of the Doctoral Thesis
  • Ēvalds Urtāns
  • Computer Science
  • 2021
TLDR
This work describes the importance of loss functions and related methods for deep reinforcement learning and deep metric learning, and presents a novel UNet-RNN-Skip model to improve the performance of the value function for path planning tasks.
An Online Method for Distributionally Deep Robust Optimization
TLDR
This paper transforms the min-max formulation into a minimization formulation and proposes a practical duality-free online stochastic method for solving deep DRO with KL divergence regularization; in several respects the method resembles the practical stochastic Nesterov method widely used for training deep neural networks.

References

Showing 1-10 of 35 references
Improved Deep Metric Learning with Multi-class N-pair Loss Objective
TLDR
This paper proposes a new metric learning objective called multi-class N-pair loss, which generalizes triplet loss by allowing joint comparison among more than one negative example and reduces the computational burden of evaluating deep embedding vectors via an efficient batch construction strategy using only N pairs of examples.
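With the N-pair batch construction, the objective reduces to softmax cross-entropy over the anchor-positive similarity matrix, since log(1 + sum_n exp(f·f_n - f·f_+)) is exactly the cross-entropy on those logits. A minimal sketch (the function name is illustrative):

```python
import torch
import torch.nn.functional as F

def n_pair_loss(anchors, positives):
    """Sketch of the multi-class N-pair loss with N-pair batch
    construction: row i's positive is positives[i]; every other
    positive in the batch serves as one of its N-1 negatives."""
    logits = anchors @ positives.t()             # [N, N] similarity matrix
    targets = torch.arange(len(anchors), device=anchors.device)
    # Cross-entropy with the diagonal as targets implements
    # log(1 + sum_n exp(f.f_n - f.f_+)) for each anchor.
    return F.cross_entropy(logits, targets)
```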
Smart Mining for Deep Metric Learning
TLDR
This paper proposes a novel deep metric learning method that combines the triplet model and the global structure of the embedding space, and relies on a smart mining procedure that produces effective training samples at a low computational cost.
Large-Scale Distance Metric Learning with Uncertainty
TLDR
This work proposes the margin-preserving metric learning framework to learn the distance metric and latent examples simultaneously, and shows that although the metric is learned from latent examples only, it can preserve the large-margin property even for the original data.
Multi-Similarity Loss With General Pair Weighting for Deep Metric Learning
TLDR
A General Pair Weighting framework is established, which casts the sampling problem of deep metric learning into a unified view of pair weighting through gradient analysis, providing a powerful tool for understanding recent pair-based loss functions.
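The best-known instance of this framework is the multi-similarity (MS) loss from the same paper, which weights positive and negative pairs softly around a similarity threshold. A minimal sketch that omits the paper's pair-mining step; the hyperparameter values are illustrative defaults, not the paper's settings:

```python
import torch
import torch.nn.functional as F

def multi_similarity_loss(embeddings, labels, alpha=2.0, beta=50.0, thresh=0.5):
    """Sketch of the multi-similarity (MS) loss: softly weight positive
    and negative pairs around the similarity threshold `thresh`."""
    emb = F.normalize(embeddings, dim=1)
    sim = emb @ emb.t()
    n = len(labels)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(n, dtype=torch.bool, device=labels.device)
    loss = sim.new_zeros(())
    for i in range(n):
        pos = sim[i][same[i] & ~eye[i]]          # positives for anchor i
        neg = sim[i][~same[i]]                   # negatives for anchor i
        if len(pos):
            loss = loss + torch.log1p(torch.exp(-alpha * (pos - thresh)).sum()) / alpha
        if len(neg):
            loss = loss + torch.log1p(torch.exp(beta * (neg - thresh)).sum()) / beta
    return loss / n
```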
Sampling Matters in Deep Embedding Learning
TLDR
This paper proposes distance-weighted sampling, which selects more informative and stable examples than traditional approaches, and shows that a simple margin-based loss is sufficient to outperform all other loss functions.
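The weighting scheme can be sketched from the observation the paper builds on: pairwise distances on the unit (n-1)-sphere concentrate with density q(d) proportional to d^(n-2) * (1 - d^2/4)^((n-3)/2), so sampling negatives with probability proportional to 1/q(d) spreads picks across the whole distance range. A minimal sketch; the clamping constants are illustrative:

```python
import torch

def distance_weighted_indices(dist, dim, num_samples=1, cutoff=0.5):
    """Sketch of distance-weighted sampling: draw negatives with
    probability proportional to 1/q(d), where
    q(d) ~ d^(n-2) * (1 - d^2/4)^((n-3)/2)
    is the density of pairwise distances on the unit (n-1)-sphere."""
    d = dist.clamp(min=cutoff, max=1.99)         # stay inside q's support
    log_q = (dim - 2) * torch.log(d) \
            + 0.5 * (dim - 3) * torch.log(1.0 - 0.25 * d.pow(2))
    w = torch.exp(log_q.min() - log_q)           # ~ 1/q(d), overflow-safe
    return torch.multinomial(w, num_samples)     # indices of chosen negatives
```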
Deep Metric Learning with BIER: Boosting Independent Embeddings Robustly
TLDR
This work divides the last embedding layer of a deep network into an embedding ensemble, formulates the task of training this ensemble as an online gradient boosting problem, and proposes two loss functions that increase the diversity in this ensemble.
Attention-based Ensemble for Deep Metric Learning
TLDR
An attention-based ensemble that uses multiple attention masks so that each learner can attend to different parts of the object; it outperforms the state-of-the-art methods by a significant margin on image retrieval tasks.
Deep Metric Learning with Hierarchical Triplet Loss
TLDR
A novel hierarchical triplet loss capable of automatically collecting informative training samples (triplets) via a defined hierarchical tree that encodes global context information, encouraging the model to learn more discriminative features from visually similar classes and leading to faster convergence and better performance.
Hybrid-Attention Based Decoupled Metric Learning for Zero-Shot Image Retrieval
  • Binghui Chen, Weihong Deng
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR
This paper first emphasizes the importance of learning a visually discriminative metric and of preventing the partial/selective learning behavior of the learner in ZSIR, and then proposes the Decoupled Metric Learning (DeML) framework to address these objectives individually.