Symmetrical Synthesis for Deep Metric Learning

@article{Gu2020SymmetricalSF,
  title={Symmetrical Synthesis for Deep Metric Learning},
  author={Geonmo Gu and ByungSoo Ko},
  journal={ArXiv},
  year={2020},
  volume={abs/2001.11658},
  url={https://api.semanticscholar.org/CorpusID:211003890}
}
The proposed method is hyper-parameter free and plugs into existing metric learning losses without network modification; experiments on clustering and image retrieval tasks demonstrate its superiority over existing methods across a variety of loss functions.
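As a sketch of the synthesis step: one reading of the method is that each embedding of a same-class pair is reflected about the axis through the other embedding, yielding label-preserving synthetic hard points. The helper name `symmetric_synthesis` and the reflection formula below are my reading of the paper, not quoted from it:

```python
import numpy as np

def symmetric_synthesis(x1, x2):
    """Reflect x1 about the axis (the line through the origin) of x2.

    Both inputs are assumed to be L2-normalized embeddings of the same
    class; the reflected point serves as a label-preserving synthetic.
    """
    axis = x2 / np.linalg.norm(x2)
    return 2.0 * np.dot(x1, axis) * axis - x1

# Toy positive pair on the unit circle.
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
a_syn = symmetric_synthesis(a, b)  # reflection of a about b's axis
b_syn = symmetric_synthesis(b, a)
```

Reflection preserves the norm, so the synthetic points stay on the same unit hypersphere as the L2-normalized originals.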


Intra-Class Adaptive Augmentation With Neighbor Correction for Deep Metric Learning

A novel intra-class adaptive augmentation (IAA) framework for deep metric learning is proposed; the method significantly outperforms state-of-the-art methods, improving retrieval performance by 3%-6%.

Proxy Synthesis: Learning with Synthetic Classes for Deep Metric Learning

This work proposes a simple regularizer called Proxy Synthesis that exploits synthetic classes for stronger generalization in deep metric learning, deriving an embedding space that reflects class relations and yields smooth decision boundaries for robustness on unseen classes.

Learning to Generate Novel Classes for Deep Metric Learning

This work introduces a new data augmentation approach that synthesizes novel classes and their embedding vectors by learning and exploiting a conditional generative model which, given a class label and a noise vector, produces a random embedding vector of the class.

Self-Supervised Synthesis Ranking for Deep Metric Learning

A novel self-supervised synthesis ranking auxiliary framework that captures intra-class as well as inter-class characteristics for better metric learning, significantly outperforming state-of-the-art methods on both retrieval and ranking.

GradML: A Gradient-based Loss for Deep Metric Learning

This work analyzes the gradients of various ML loss functions and proposes a gradient-based loss for ML (GradML), which has a simple formulation and a lower computational cost than other methods.

It Takes Two to Tango: Mixup for Deep Metric Learning

This work develops a generalized formulation that encompasses existing metric learning loss functions and modifies it to accommodate mixup, introducing Metric Mix, or Metrix; it also introduces a new metric, utilization, to demonstrate that mixing examples during training explores areas of the embedding space beyond the training classes, thereby improving representations.
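A generic embedding-space mixup along these lines can be sketched as follows. This is illustrative only: Metrix's exact recipe (mixing inputs, intermediate features, and labels inside the generalized loss) is more involved, and the function name is mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def embedding_mixup(x_i, x_j, alpha=1.0):
    """Mix two embeddings with a Beta(alpha, alpha) coefficient,
    as in generic mixup applied in embedding space."""
    lam = rng.beta(alpha, alpha)
    return lam * x_i + (1.0 - lam) * x_j, lam

x_i = np.array([1.0, 0.0])
x_j = np.array([0.0, 1.0])
mixed, lam = embedding_mixup(x_i, x_j)  # lies on the segment x_i -> x_j
```

The mixed point interpolates between two training classes, which is what lets training probe embedding-space regions no single class occupies.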

A framework to enhance generalization of deep metric learning methods using general discriminative feature learning and class adversarial neural networks

This work proposes a framework to enhance the generalization power of existing DML methods in a Zero-Shot Learning (ZSL) setting through general yet discriminative representation learning and a class adversarial neural network.

Recall@k Surrogate Loss with Large Batches and Similarity Mixup

This work focuses on learning deep visual representation models for retrieval by proposing a differentiable surrogate loss for recall@k and exploring its interplay with large batch sizes and a new similarity-mixup regularization approach.

Semi-Supervised Metric Learning: A Deep Resurrection

A stochastic, graph-based approach that first propagates affinities between pairs of labeled examples to the unlabeled pairs, and imposes an orthogonality constraint on the metric parameters, which leads to better performance by avoiding model collapse.

Learning with Memory-based Virtual Classes for Deep Metric Learning

This work presents a novel training strategy for DML called MemVir, which embeds the idea of curriculum learning by slowly adding virtual classes to gradually increase learning difficulty, improving learning stability as well as final performance.

Deep Metric Learning via Lifted Structured Feature Embedding

An algorithm that takes full advantage of training batches by lifting the vector of pairwise distances within the batch to the matrix of pairwise distances, enabling state-of-the-art feature embeddings to be learned by optimizing a novel structured prediction objective on the lifted problem.
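The batch-level objective can be sketched as follows: every positive pair pulls in all in-batch negatives of both its members through a log-sum-exp term, a smooth form of hard negative mining. A minimal numpy sketch assuming a precomputed distance matrix (the function name is mine):

```python
import numpy as np

def lifted_structured_loss(D, labels, margin=1.0):
    """Lifted structured loss over a batch's pairwise distance matrix D.

    Every positive pair (i, j) pulls in all in-batch negatives of i and
    of j through a log-sum-exp term (smooth hard negative mining).
    """
    n = len(labels)
    loss, num_pos = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] != labels[j]:
                continue  # only positive pairs anchor a term
            neg_i = sum(np.exp(margin - D[i, k]) for k in range(n) if labels[k] != labels[i])
            neg_j = sum(np.exp(margin - D[j, l]) for l in range(n) if labels[l] != labels[j])
            J_ij = np.log(neg_i + neg_j) + D[i, j]
            loss += max(0.0, J_ij) ** 2
            num_pos += 1
    return loss / (2.0 * num_pos)

# Well-separated batch: positives close, negatives far -> zero loss.
D = np.array([[0.0, 0.1, 3.0],
              [0.1, 0.0, 3.0],
              [3.0, 3.0, 0.0]])
loss = lifted_structured_loss(D, labels=[0, 0, 1])
```

In practice the loss is computed over the whole batch with vectorized operations; the explicit loops here are for readability.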

An Adversarial Approach to Hard Triplet Generation

This work proposes an adversarial network for Hard Triplet Generation (HTG) to optimize the network's ability to distinguish similar examples of different categories and to group varied examples of the same category.

Improved Deep Metric Learning with Multi-class N-pair Loss Objective

This paper proposes a new metric learning objective called multi-class N-pair loss, which generalizes triplet loss by allowing joint comparison among multiple negative examples, and reduces the computational burden of evaluating deep embedding vectors via an efficient batch construction strategy using only N pairs of examples.
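A minimal sketch of the loss, assuming N anchor/positive embedding pairs from N distinct classes stacked row-wise (the function name is mine):

```python
import numpy as np

def n_pair_loss(anchors, positives):
    """Multi-class N-pair loss: row i of `anchors`/`positives` is a pair
    from class i, and the other N-1 positives serve as i's negatives.

    L = mean_i log(1 + sum_{j != i} exp(a_i . p_j - a_i . p_i))
    """
    logits = anchors @ positives.T            # (N, N) inner-product matrix
    pos = np.diag(logits)                     # a_i . p_i
    diff = logits - pos[:, None]              # a_i . p_j - a_i . p_i
    np.fill_diagonal(diff, -np.inf)           # exclude the j == i term
    return np.mean(np.log1p(np.sum(np.exp(diff), axis=1)))

# Perfectly separated toy pairs: the loss is numerically zero.
anchors = 5.0 * np.eye(3)
positives = 5.0 * np.eye(3)
loss = n_pair_loss(anchors, positives)
```

Because each batch is built from exactly N pairs, one forward pass supplies every anchor with N-1 negatives for free, which is the efficiency claim of the paper.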

Deep Adversarial Metric Learning

This paper proposes a deep adversarial metric learning (DAML) framework to generate synthetic hard negatives from the observed negative samples, which is widely applicable to supervised deep metric learning methods.

Hardness-Aware Deep Metric Learning

This paper performs linear interpolation on embeddings to adaptively manipulate their hardness levels and generate corresponding label-preserving synthetics for recycled training, so that information buried in all samples can be fully exploited and the metric is always challenged with proper difficulty.
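The interpolation itself is one line; the paper's full scheme additionally adapts the interpolation strength to the training state and maps synthetics back through a generator, which this sketch omits (the function name is mine):

```python
import numpy as np

def harden_negative(anchor, negative, lam):
    """Move a negative a fraction `lam` of the way toward the anchor,
    raising its hardness while (for small lam) preserving its label."""
    return negative + lam * (anchor - negative)

anchor = np.array([0.0, 0.0])
negative = np.array([4.0, 0.0])
hard_negative = harden_negative(anchor, negative, lam=0.25)
```

Moving the negative closer to the anchor shrinks the anchor-negative distance, so the synthetic violates the margin more strongly than the original and yields a larger training signal.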

Sampling Matters in Deep Embedding Learning

This paper proposes distance weighted sampling, which selects more informative and stable examples than traditional approaches, and shows that a simple margin based loss is sufficient to outperform all other loss functions.
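The sampling weights can be sketched from the known density of pairwise distances between uniformly distributed points on the unit hypersphere, q(d) proportional to d^(n-2) * (1 - d^2/4)^((n-3)/2): negatives are drawn with probability proportional to the clipped inverse density (the function name is mine):

```python
import numpy as np

def distance_weights(d, dim, cutoff=0.5):
    """Inverse-density sampling weights for negatives at distances d.

    q(d) ~ d**(dim-2) * (1 - d*d/4)**((dim-3)/2) is the density of
    pairwise distances between uniform points on the unit hypersphere;
    weighting by 1/q(d) flattens the otherwise peaked distance spectrum.
    Distances are clipped at `cutoff` to keep the weights bounded.
    """
    d = np.maximum(d, cutoff)
    log_q = (dim - 2.0) * np.log(d) + ((dim - 3.0) / 2.0) * np.log(1.0 - 0.25 * d * d)
    w = np.exp(-log_q)            # inverse density, computed in log space
    return w / w.sum()            # normalize to a sampling distribution

w = distance_weights(np.array([0.6, 1.0, 1.4]), dim=128)
```

In high dimensions q(d) concentrates near sqrt(2), so inverse-density weighting is what lets informative nearby negatives be sampled at all.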

Deep Metric Learning with Angular Loss

This paper proposes a novel angular loss that takes the angle relationship into account for learning a better similarity metric, constraining the angle at the negative point of triplet triangles.

Fine-Grained Categorization and Dataset Bootstrapping Using Deep Metric Learning with Humans in the Loop

Experimental evaluations show significant performance gain using dataset bootstrapping and demonstrate state-of-the-art results achieved by the proposed deep metric learning methods.

Hard-Aware Point-to-Set Deep Metric for Person Re-identification

In addition to person re-ID, the proposed Hard-Aware Point-to-Set (HAP2S) loss with a soft hard-mining scheme applies to generic deep metric learning benchmarks including CUB-200-2011 and Cars196, and achieves state-of-the-art results.

FaceNet: A unified embedding for face recognition and clustering

A system that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity, achieving state-of-the-art face recognition performance using only 128 bytes per face.
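The triplet objective at the core of FaceNet can be sketched directly (a per-triplet version; the paper applies it over triplets mined within large batches):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """FaceNet triplet loss: push the anchor-positive squared distance
    below the anchor-negative squared distance by a margin alpha."""
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return max(0.0, d_ap - d_an + alpha)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
easy = triplet_loss(a, p, np.array([1.0, 0.0]))  # negative far: satisfied
hard = triplet_loss(a, p, np.array([0.1, 0.0]))  # negative as close as positive
```

Satisfied triplets contribute zero loss, which is why the paper's semi-hard mining step matters: it keeps the batch supplied with triplets that still violate the margin.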