Corpus ID: 232404315

Proxy Synthesis: Learning with Synthetic Classes for Deep Metric Learning

@article{Gu2021ProxySL,
  title={Proxy Synthesis: Learning with Synthetic Classes for Deep Metric Learning},
  author={Geonmo Gu and ByungSoo Ko and Han-Gyu Kim},
  journal={ArXiv},
  year={2021},
  volume={abs/2103.15454}
}
One of the main purposes of deep metric learning is to construct an embedding space that generalizes well to both seen (training) classes and unseen (test) classes. Most existing works try to achieve this with different types of metric objectives and hard-sample mining strategies applied to the given training data. However, learning from the training data alone can overfit to the seen classes, leaving little generalization capability on unseen classes. To address this…
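The abstract above is truncated, but the title points to learning with synthetic classes. As a rough, hedged illustration of how class-level synthesis can be combined with a proxy-based softmax loss, the sketch below interpolates embeddings and their class proxies to form synthetic classes that join the softmax; the Beta-sampled mixing ratio, the norm-softmax form, and the hyper-parameters are illustrative assumptions, not necessarily the paper's exact formulation.

```python
# Hedged sketch: synthetic classes inside a proxy-based softmax loss.
# The synthesis rule and hyper-parameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def proxy_softmax_with_synthetic_classes(embeddings, labels, proxies, scale=23.0, alpha=0.4):
    """embeddings: (B, D) L2-normalized; labels: (B,) long; proxies: (C, D) learnable."""
    # Mix pairs of examples (and their class proxies) from a shuffled batch to
    # create synthetic embeddings and synthetic proxies acting as extra classes.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(embeddings.size(0), device=embeddings.device)
    syn_emb = F.normalize(lam * embeddings + (1 - lam) * embeddings[perm], dim=1)
    syn_proxy = F.normalize(lam * proxies[labels] + (1 - lam) * proxies[labels[perm]], dim=1)

    all_emb = torch.cat([embeddings, syn_emb], dim=0)                 # (2B, D)
    all_proxy = torch.cat([F.normalize(proxies, dim=1), syn_proxy])   # (C + B, D)
    # Synthetic embedding i is assigned to synthetic proxy i (class index C + i).
    syn_labels = torch.arange(embeddings.size(0), device=labels.device) + proxies.size(0)
    all_labels = torch.cat([labels, syn_labels])

    logits = scale * all_emb @ all_proxy.t()                          # cosine-similarity logits
    return F.cross_entropy(logits, all_labels)
```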

Citations of this paper

It Takes Two to Tango: Mixup for Deep Metric Learning
TLDR: A generalized formulation encompassing existing metric learning loss functions is developed and modified to accommodate mixup, introducing Metric Mix (Metrix); mixing inputs, intermediate representations, or embeddings along with the target labels significantly improves representations and outperforms state-of-the-art metric learning methods on four benchmark datasets.
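As a rough companion to the TLDR above, this is a minimal sketch of the embedding-level variant of mixup for metric learning: embeddings are mixed and the target becomes a convex combination of the two one-hot labels inside a proxy softmax. The mixing location, the loss, and the hyper-parameters are assumptions, not the paper's exact recipe.

```python
# Hedged sketch of embedding-level mixup with soft targets (illustrative only).
import torch
import torch.nn.functional as F

def mixup_embedding_loss(embeddings, labels, proxies, num_classes, scale=16.0, alpha=1.0):
    """embeddings: (B, D); labels: (B,) long; proxies: (num_classes, D) learnable."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(embeddings.size(0), device=embeddings.device)
    mixed = F.normalize(lam * embeddings + (1 - lam) * embeddings[perm], dim=1)

    logits = scale * mixed @ F.normalize(proxies, dim=1).t()          # (B, num_classes)
    log_probs = F.log_softmax(logits, dim=1)
    # Soft target: convex combination of the two mixed examples' one-hot labels.
    soft_targets = lam * F.one_hot(labels, num_classes).float() \
        + (1 - lam) * F.one_hot(labels[perm], num_classes).float()
    return -(soft_targets * log_probs).sum(dim=1).mean()
```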
Recall@k Surrogate Loss with Large Batches and Similarity Mixup
TLDR: A differentiable surrogate loss for recall is proposed, together with an implementation that sidesteps GPU memory constraints; the method achieves state-of-the-art results on several image retrieval benchmarks.
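One common way to make recall@k differentiable is to replace the hard ranking step with a sigmoid relaxation. The sketch below is such an approximation for a single query and is only an assumption about the general idea, not the paper's loss, large-batch implementation, or similarity mixup.

```python
# Hedged sketch: a sigmoid-relaxed recall@k for one query (illustrative only).
import torch

def soft_recall_at_k(sim_pos, sim_neg, k=1, tau=0.01):
    """sim_pos: (P,) query-to-positive similarities; sim_neg: (N,) query-to-negative."""
    all_sim = torch.cat([sim_pos, sim_neg])                      # (P + N,)
    # Differentiable rank of each positive: 1 + soft count of items scoring higher
    # (the 0.5 removes each positive's comparison with itself).
    diff = (all_sim.unsqueeze(0) - sim_pos.unsqueeze(1)) / tau   # (P, P + N)
    soft_rank = 1.0 + torch.sigmoid(diff).sum(dim=1) - 0.5
    # Soft indicator that each positive lands inside the top-k.
    in_top_k = torch.sigmoid(k + 0.5 - soft_rank)
    return in_top_k.mean()                                       # maximize (or minimize 1 - value)
```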

References

Showing 1-10 of 61 references
Symmetrical Synthesis for Deep Metric Learning
TLDR: The proposed synthesis is hyper-parameter free and plug-and-play for existing metric learning losses without network modification; experiments demonstrate its superiority over existing methods across a variety of loss functions on clustering and image retrieval tasks.
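One reading of the symmetrical-synthesis idea is that a synthetic point is obtained by reflecting one embedding about the axis spanned by another embedding of the same class. The sketch below shows only that reflection step; the generation rule is an assumption here, and the paper's mining and loss details are omitted.

```python
# Hedged sketch: reflect an embedding about the axis of a same-class embedding
# to obtain a synthetic point (mining and loss computation omitted).
import torch
import torch.nn.functional as F

def symmetric_point(x, axis):
    """x, axis: (D,) embeddings of the same class; returns the reflection of x about axis."""
    a = F.normalize(axis, dim=0)
    return 2.0 * (x @ a) * a - x      # Householder-style reflection through the axis direction
```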
Proxy Anchor Loss for Deep Metric Learning
TLDR: This paper presents a new proxy-based loss that takes advantage of both pair- and proxy-based methods, overcomes their limitations, and allows embedding vectors to interact with each other through its gradients so as to exploit data-to-data relations.
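For reference, a compact PyTorch sketch of a Proxy-Anchor-style objective as commonly stated: proxies with in-batch positives pull them closer under a log-sum-exp, all proxies push their negatives away, and data-to-data relations enter through the shared log-sum-exp gradients. The alpha/delta defaults are illustrative.

```python
# Hedged sketch of a Proxy-Anchor-style loss (alpha/delta defaults are illustrative).
import torch
import torch.nn.functional as F

def proxy_anchor_loss(embeddings, labels, proxies, alpha=32.0, delta=0.1):
    """embeddings: (B, D); labels: (B,) long; proxies: (C, D) learnable parameters."""
    cos = F.normalize(embeddings, dim=1) @ F.normalize(proxies, dim=1).t()   # (B, C)
    pos_mask = F.one_hot(labels, proxies.size(0)).bool()                     # (B, C)
    neg_mask = ~pos_mask

    pos_exp = torch.where(pos_mask, torch.exp(-alpha * (cos - delta)), torch.zeros_like(cos))
    neg_exp = torch.where(neg_mask, torch.exp(alpha * (cos + delta)), torch.zeros_like(cos))

    with_pos = pos_mask.any(dim=0)                       # proxies that have positives in the batch
    pos_term = torch.log1p(pos_exp.sum(dim=0))[with_pos].mean()
    neg_term = torch.log1p(neg_exp.sum(dim=0)).mean()    # averaged over all proxies
    return pos_term + neg_term
```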
Embedding Expansion: Augmentation in Embedding Space for Deep Metric Learning (ByungSoo Ko, Geonmo Gu; 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR))
TLDR: The proposed method generates synthetic points carrying augmented information by combining feature points, performs hard negative pair mining to learn from the most informative feature representations, and can be used with existing metric learning losses without affecting model size, training speed, or optimization difficulty.
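A minimal sketch of the generation step described above, assuming synthetic points are obtained by linearly interpolating two same-class embeddings and projecting back to the unit sphere. The number of points and the hard negative pair mining over the synthetic set are left out, and the details are assumptions.

```python
# Hedged sketch: interpolated synthetic points between two same-class embeddings
# (hard negative pair mining over the synthetic set is omitted).
import torch
import torch.nn.functional as F

def synthesize_points(anchor, positive, n_points=2):
    """anchor, positive: (D,) embeddings of the same class; returns (n_points, D)."""
    ratios = torch.linspace(0.0, 1.0, n_points + 2)[1:-1]          # interior interpolation ratios
    points = ratios.unsqueeze(1) * anchor + (1 - ratios).unsqueeze(1) * positive
    return F.normalize(points, dim=1)                              # project back to the unit sphere
```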
SoftTriple Loss: Deep Metric Learning Without Triplet Sampling
TLDR: The SoftTriple loss is proposed to extend the SoftMax loss, which is itself equivalent to a smoothed triplet loss where each class has a single center, by learning multiple centers for each class.
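One compact reading of the multi-center idea: per-class similarity is a softmax-smoothed maximum over K centers, and a small margin is applied to the ground-truth class before a scaled cross-entropy. The center-similarity regularizer from the paper is omitted and the hyper-parameters below are illustrative.

```python
# Hedged sketch of a SoftTriple-style loss with K centers per class
# (the center-similarity regularizer is omitted; hyper-parameters are illustrative).
import torch
import torch.nn.functional as F

def softtriple_loss(embeddings, labels, centers, num_classes, K,
                    la=20.0, gamma=0.1, delta=0.01):
    """embeddings: (B, D) L2-normalized; labels: (B,); centers: (num_classes * K, D) learnable."""
    c = F.normalize(centers, dim=1).view(num_classes, K, -1)       # (C, K, D)
    sim = torch.einsum('bd,ckd->bck', embeddings, c)               # (B, C, K)
    attn = F.softmax(sim / gamma, dim=2)                           # smoothed max over the K centers
    class_sim = (attn * sim).sum(dim=2)                            # (B, C)
    margin = F.one_hot(labels, num_classes).float() * delta        # margin on the ground-truth class
    return F.cross_entropy(la * (class_sim - margin), labels)
```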
Deep Metric Learning with BIER: Boosting Independent Embeddings Robustly
TLDR: This work divides the last embedding layer of a deep network into an ensemble of embeddings, casts training this ensemble as an online gradient boosting problem, and proposes two loss functions that increase the diversity within the ensemble.
Deep Metric Learning via Lifted Structured Feature Embedding
TLDR: An algorithm that takes full advantage of training batches by lifting the vector of pairwise distances within the batch to the matrix of pairwise distances, learning a state-of-the-art feature embedding by optimizing a novel structured prediction objective on the lifted problem.
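The sketch below illustrates the lifted formulation as commonly stated: every positive pair is penalized against a log-sum-exp over the negatives of both of its endpoints, computed from the full in-batch distance matrix. The margin value and the squared-hinge form should be treated as assumptions here.

```python
# Hedged sketch of a lifted-structure-style loss over a batch (margin is illustrative).
import torch

def lifted_structure_loss(embeddings, labels, margin=1.0):
    """embeddings: (B, D); labels: (B,) long."""
    dist = torch.cdist(embeddings, embeddings)                     # (B, B) pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)

    # Per-anchor sum of exp(margin - distance) over that anchor's negatives.
    neg_exp = torch.where(~same, torch.exp(margin - dist), torch.zeros_like(dist))
    neg_sum = neg_exp.sum(dim=1)                                   # (B,)

    losses = []
    pos_pairs = torch.triu(same.int(), diagonal=1).nonzero(as_tuple=False)
    for i, j in pos_pairs.tolist():
        j_ij = torch.log(neg_sum[i] + neg_sum[j] + 1e-12) + dist[i, j]
        losses.append(torch.clamp(j_ij, min=0.0) ** 2)
    if not losses:                                                 # no positive pair in the batch
        return embeddings.sum() * 0.0
    return torch.stack(losses).mean() / 2.0
```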
Improved Deep Metric Learning with Multi-class N-pair Loss Objective
TLDR: This paper proposes a new metric learning objective called multi-class N-pair loss, which generalizes the triplet loss by allowing joint comparison among more than one negative example, and reduces the computational burden of evaluating deep embedding vectors via an efficient batch construction strategy that uses only N pairs of examples.
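Assuming a batch built as N (anchor, positive) pairs from N distinct classes, the multi-class N-pair objective reduces to a cross-entropy over the anchor-positive similarity matrix. The small embedding-norm penalty below is a common companion to this objective; its weight is an assumption.

```python
# Hedged sketch of the multi-class N-pair objective with an N-pair batch.
import torch
import torch.nn.functional as F

def n_pair_loss(anchors, positives, l2_reg=0.002):
    """anchors, positives: (N, D) un-normalized embeddings; row i of each comes from class i."""
    logits = anchors @ positives.t()                               # (N, N): diagonal entries are positives
    targets = torch.arange(anchors.size(0), device=anchors.device)
    loss = F.cross_entropy(logits, targets)                        # mean of log(1 + sum_j exp(s_ij - s_ii))
    reg = (anchors.norm(dim=1) ** 2 + positives.norm(dim=1) ** 2).mean()
    return loss + l2_reg * reg                                     # small penalty on embedding norms
```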
Hardness-Aware Deep Metric Learning
TLDR: This paper performs linear interpolation on embeddings to adaptively manipulate their hardness levels and generate corresponding label-preserving synthetic samples for recycled training, so that the information buried in all samples is fully exploited and the metric is always challenged at an appropriate difficulty.
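A toy sketch of the interpolation step described above: a negative is pulled linearly toward the anchor to raise its hardness. The adaptive hardness schedule and the label-preserving generator from the paper are omitted, and the function below is only an assumption about the manipulation.

```python
# Hedged sketch: raise the hardness of a negative by interpolating it toward the anchor
# (the adaptive schedule and label-preserving generator are omitted).
import torch
import torch.nn.functional as F

def harden_negative(anchor, negative, hardness=0.5):
    """anchor, negative: (D,) embeddings; hardness in [0, 1], where 1 is hardest."""
    synthetic = negative + hardness * (anchor - negative)   # move the negative toward the anchor
    return F.normalize(synthetic, dim=0)
```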
Unbiased Evaluation of Deep Metric Learning Algorithms
TLDR: An unbiased comparison of the most popular DML baseline methods is performed under the same conditions and, more importantly, without obfuscating any hyper-parameter tuning or adjustment needed to favor a particular method.
An Adversarial Approach to Hard Triplet Generation
TLDR: This work proposes an adversarial network for Hard Triplet Generation (HTG) to improve the network's ability to distinguish similar examples of different categories and to group varied examples of the same category.