Hard-Aware Deeply Cascaded Embedding
@article{Yuan2017HardAwareDC,
  title   = {Hard-Aware Deeply Cascaded Embedding},
  author  = {Yuhui Yuan and Kuiyuan Yang and Chao Zhang},
  journal = {2017 IEEE International Conference on Computer Vision (ICCV)},
  year    = {2017},
  pages   = {814-823}
}
Riding on the waves of deep neural networks, deep metric learning has achieved promising results in various tasks by using triplet or Siamese networks. Though the basic goal of making images from the same category closer than images from different categories is intuitive, the objective is hard to optimize directly because the number of candidate pairs or triplets grows quadratically or cubically with the sample size. Hard example mining is widely used to solve the problem, spending the expensive computation on a subset of samples that are…
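For context on what mining a subset looks like in practice, here is a minimal sketch of the common batch-hard variant of triplet mining; this is a standard baseline rather than HDC's cascaded scheme, and the function name and default margin are illustrative assumptions.

```python
import torch

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    """Batch-hard triplet mining: for each anchor, use only the hardest
    positive (farthest same-class sample) and hardest negative (closest
    other-class sample) in the mini-batch, instead of enumerating all
    O(n^3) triplets."""
    dist = torch.cdist(embeddings, embeddings, p=2)        # (n, n) pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)      # (n, n) same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=embeddings.device)

    # Hardest positive: largest distance among same-class pairs (excluding self).
    pos = dist.masked_fill(~same | eye, float('-inf')).max(dim=1).values
    # Hardest negative: smallest distance among different-class pairs.
    neg = dist.masked_fill(same, float('inf')).min(dim=1).values

    # Anchors lacking a valid positive or negative contribute zero via relu.
    return torch.relu(pos - neg + margin).mean()
```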
220 Citations
LoOp: Looking for Optimal Hard Negative Embeddings for Deep Metric Learning
- Computer Science
- 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2021
This work proposes a novel approach that looks for optimal hard negatives (LoOp) in the embedding space, taking full advantage of each tuple by calculating the minimum distance between a pair of positives and a pair of negatives.
Assignment Problem Based Deep Embedding
- Computer Science
- PRCV
- 2019
This paper proposes a novel linear-assignment-based hard sample mining strategy for the contrastive loss to learn feature embeddings, which obtains state-of-the-art performance on the CUB-200-2011, Cars196, and In-shop datasets with the GoogLeNet network.
An Adversarial Approach to Hard Triplet Generation
- Computer Science
- ECCV
- 2018
This work proposes an adversarial network for Hard Triplet Generation (HTG) to optimize the network ability in distinguishing similar examples of different categories as well as grouping varied examples of the same categories.
The Group Loss for Deep Metric Learning
- Computer Science
- ECCV
- 2020
Group Loss is proposed, a loss function based on a differentiable label-propagation method that enforces embedding similarity across all samples of a group while promoting, at the same time, low-density regions amongst data points belonging to different groups.
Deep Metric Learning by Online Soft Mining and Class-Aware Attention
- Computer Science
- AAAI
- 2019
This work proposes a novel sample mining method, called Online Soft Mining (OSM), which assigns one continuous score to each sample to make use of all samples in the mini-batch, and introduces Class-Aware Attention (CAA) that assigns little attention to abnormal data samples.
SoftTriple Loss: Deep Metric Learning Without Triplet Sampling
- Computer Science
- 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2019
The SoftTriple loss is proposed to extend the SoftMax loss with multiple centers for each class, building on the observation that the SoftMax loss itself is equivalent to a smoothed triplet loss in which each class has a single center.
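As a rough illustration of the multi-center idea, the sketch below computes SoftTriple-style class similarities; it is hedged: variable names, the normalization choice, and the gamma default are assumptions, and the full loss's margin and center-regularization terms are omitted.

```python
import torch
import torch.nn.functional as F

def soft_triple_logits(x, centers, n_classes, k, gamma=0.1):
    """Similarity of each sample to each class when a class owns k
    centers: a softmax-weighted average of the sample's similarities
    to that class's centers (a smoothed max over centers), removing
    the need for explicit triplet sampling."""
    # x: (n, d) L2-normalized embeddings; centers: (n_classes * k, d).
    sims = x @ F.normalize(centers, dim=1).t()     # (n, n_classes * k)
    sims = sims.view(-1, n_classes, k)             # (n, C, k)
    weights = F.softmax(sims / gamma, dim=2)       # soft max over centers
    return (weights * sims).sum(dim=2)             # (n, C) class logits
```

The full loss then applies a scaled cross-entropy to these logits, with a margin subtracted from the ground-truth class similarity.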
Learning Intra-Batch Connections for Deep Metric Learning
- Computer Science
- ICML
- 2021
This work proposes an approach based on message passing networks that takes all the relations within a mini-batch of samples into account, refining embedding vectors by exchanging messages among all samples in a given batch so that the training process is aware of the overall structure.
Smart Mining for Deep Metric Learning
- Computer Science
- 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017
This paper proposes a novel deep metric learning method that combines the triplet model and the global structure of the embedding space, relying on a smart mining procedure that produces effective training samples at a low computational cost.
Hard Example Mining with Auxiliary Embeddings
- Computer Science
- 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
- 2018
The experiments on the challenging Disguised Faces in the Wild dataset show that hard example mining with auxiliary embeddings improves the discriminative power of learned representations.
The Group Loss++: A deeper look into group loss for deep metric learning
- Computer Science
- IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2022
This journal extension takes a deeper look at the Group Loss, the differentiable label-propagation loss that enforces embedding similarity across all samples of a group while promoting low-density regions amongst data points belonging to different groups.
References
Showing 1–10 of 53 references
Deep Metric Learning via Lifted Structured Feature Embedding
- Computer Science
- 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
An algorithm for taking full advantage of the training batches in neural network training by lifting the vector of pairwise distances within the batch to the matrix of pairwise distances, which enables learning a state-of-the-art feature embedding by optimizing a novel structured prediction objective on the lifted problem.
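For reference, the smooth lifted-structure objective this summary describes can be written as below; the notation is reconstructed from the paper's standard presentation, with $D_{i,j}$ the distance between embeddings $i$ and $j$, $\alpha$ the margin, and $P$, $N$ the sets of positive and negative pairs.

$$
\tilde{J}_{i,j} = \log\!\Bigg(\sum_{(i,k)\in N} e^{\alpha - D_{i,k}} + \sum_{(j,l)\in N} e^{\alpha - D_{j,l}}\Bigg) + D_{i,j},
\qquad
\tilde{J} = \frac{1}{2|P|}\sum_{(i,j)\in P}\max\big(0,\ \tilde{J}_{i,j}\big)^{2}.
$$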
Improved Deep Metric Learning with Multi-class N-pair Loss Objective
- Computer Science
- NIPS
- 2016
This paper proposes a new metric learning objective called multi-class N-pair loss, which generalizes triplet loss by allowing joint comparison among multiple negative examples and reduces the computational burden of evaluating deep embedding vectors via an efficient batch construction strategy using only N pairs of examples.
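The batch construction mentioned here is compact enough to sketch: with N (anchor, positive) pairs drawn from N distinct classes, every other pair's positive serves as a negative, and the loss reduces to a softmax cross-entropy over the N×N similarity matrix. The function name is an assumption.

```python
import torch
import torch.nn.functional as F

def n_pair_loss(anchors, positives):
    """Multi-class N-pair loss over N (anchor, positive) pairs from N
    distinct classes. Row i of the logits holds anchor i's similarity
    to every positive; cross-entropy with target i is equivalent to
    log(1 + sum_{j != i} exp(f_i . f_j+ - f_i . f_i+))."""
    logits = anchors @ positives.t()                    # (N, N)
    targets = torch.arange(len(anchors), device=anchors.device)
    return F.cross_entropy(logits, targets)
```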
Embedding Label Structures for Fine-Grained Feature Representation
- Computer Science
- 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
To model multi-level relevance, label structures such as hierarchy or shared attributes are seamlessly embedded into the framework by generalizing the triplet loss; the proposed multitask learning framework significantly outperforms previous fine-grained feature representations for image retrieval at different levels of relevance.
Deeply-Supervised Nets
- Computer Science
- AISTATS
- 2015
The proposed deeply-supervised nets (DSN) method simultaneously minimizes classification error while making the learning process of hidden layers direct and transparent, and extends techniques from stochastic gradient methods to analyze the algorithm.
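The companion-objective idea is simple to sketch; this is a hedged simplification in which a single weight alpha stands in for the paper's decaying balance terms.

```python
import torch.nn.functional as F

def deeply_supervised_loss(final_logits, hidden_logits_list, targets, alpha=0.3):
    """Deeply-supervised objective: the usual classification loss on the
    final output plus companion losses from classifiers attached to
    hidden layers, giving those layers a direct supervisory signal."""
    loss = F.cross_entropy(final_logits, targets)
    for logits in hidden_logits_list:
        loss = loss + alpha * F.cross_entropy(logits, targets)
    return loss
```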
BranchyNet: Fast inference via early exiting from deep neural networks
- Computer Science
- 2016 23rd International Conference on Pattern Recognition (ICPR)
- 2016
The BranchyNet architecture is presented, a novel deep network architecture that is augmented with additional side branch classifiers that can both improve accuracy and significantly reduce the inference time of the network.
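A minimal sketch of the early-exit inference pattern, assuming `stages` and `exits` are matching lists of modules and using prediction entropy as the confidence test; the threshold value and names are assumptions, not BranchyNet's exact configuration.

```python
import torch
import torch.nn.functional as F

def branchy_forward(stages, exits, x, entropy_threshold=0.5):
    """Early-exit inference: after each backbone stage, a side-branch
    classifier produces logits; if the batch's prediction entropy is
    low enough, return early instead of running the deeper stages."""
    logits = None
    for stage, exit_head in zip(stages, exits):
        x = stage(x)
        logits = exit_head(x)
        probs = F.softmax(logits, dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
        if entropy.max().item() < entropy_threshold:
            return logits          # confident enough: exit early
    return logits                  # fell through to the final exit
```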
Zero-Shot Learning via Joint Latent Similarity Embedding
- Computer Science
- 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
A joint discriminative learning framework based on dictionary learning is developed to jointly learn the parameters of the model for both domains, which ultimately leads to a class-independent classifier that shows 4.90% improvement over the state-of-the-art in accuracy averaged across four benchmark datasets.
Fine-Grained Categorization and Dataset Bootstrapping Using Deep Metric Learning with Humans in the Loop
- Computer Science
- 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
Experimental evaluations show significant performance gain using dataset bootstrapping and demonstrate state-of-the-art results achieved by the proposed deep metric learning methods.
Learning Deep Embeddings with Histogram Loss
- Computer Science
- NIPS
- 2016
It is shown that estimating the similarity distributions of positive and negative pairs can be performed in a simple and piecewise-differentiable manner using 1D histograms with soft assignment operations, which makes the proposed loss suitable for learning deep embeddings using stochastic optimization.
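The soft-assignment trick is short enough to sketch; this assumes cosine similarities in [-1, 1] and boolean masks selecting positive and negative pairs, with names of my own choosing.

```python
import torch

def histogram_loss(sim, pos_mask, neg_mask, bins=100):
    """Estimate the similarity distributions of positive and negative
    pairs with soft-assigned 1D histograms, then penalize the
    probability that a random negative pair is more similar than a
    random positive pair."""
    nodes = torch.linspace(-1.0, 1.0, bins, device=sim.device)
    delta = 2.0 / (bins - 1)                 # node spacing

    def soft_hist(values):
        # Triangular soft assignment of each value to its two nearest
        # nodes; piecewise-differentiable in the similarities.
        w = torch.relu(1.0 - (values.unsqueeze(1) - nodes).abs() / delta)
        h = w.sum(dim=0)
        return h / h.sum()

    h_pos = soft_hist(sim[pos_mask])
    h_neg = soft_hist(sim[neg_mask])
    cdf_pos = torch.cumsum(h_pos, dim=0)     # P(pos similarity <= node)
    return (h_neg * cdf_pos).sum()           # approx. P(neg >= pos)
```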
Learning Local Image Descriptors with Deep Siamese and Triplet Convolutional Networks by Minimizing Global Loss Functions
- Computer Science
- 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
A combination of the triplet and global losses produces the best embedding in the field using this triplet network, and it is demonstrated that a central-surround Siamese network trained with the global loss produces the best results in the field on the UBC dataset.
Local Similarity-Aware Deep Feature Embedding
- Computer Science
- NIPS
- 2016
This paper introduces a Position-Dependent Deep Metric (PDDM) unit, which is capable of learning a similarity metric adaptive to local feature structure that can be used to select genuinely hard samples in a local neighborhood to guide the deep embedding learning in an online and robust manner.