Improving Generalization via Scalable Neighborhood Component Analysis

@article{Wu2018ImprovingGV,
  title={Improving Generalization via Scalable Neighborhood Component Analysis},
  author={Zhirong Wu and Alexei A. Efros and Stella X. Yu},
  journal={ArXiv},
  year={2018},
  volume={abs/1808.04699}
}
Current major approaches to visual recognition follow an end-to-end formulation that classifies an input image into one of a predetermined set of semantic categories. Parametric softmax classifiers are a common choice for such a closed world with fixed categories, especially when large labeled datasets are available during training. However, this becomes problematic for open-set scenarios where new categories are encountered with very few examples, too few to learn a generalizable parametric classifier…
Making Classification Competitive for Deep Metric Learning
TLDR
It is demonstrated that a standard classification network can be transformed into a variant of proxy-based metric learning that is competitive against non-parametric approaches across a wide variety of image retrieval tasks and can learn high-dimensional binary embeddings that achieve new state-of-the-art performance.
Local Aggregation for Unsupervised Learning of Visual Embeddings
TLDR
This work describes a method that trains an embedding function to maximize a metric of local aggregation, causing similar data instances to move together in the embedding space, while allowing dissimilar instances to separate.
Revisiting Metric Learning for Few-Shot Image Classification
TLDR
This work revisits the classical triplet network from deep metric learning and extends it into a deep K-tuplet network for few-shot learning, utilizing the relationships among input samples to learn a general representation via episode-based training.
Classification is a Strong Baseline for Deep Metric Learning
TLDR
This paper evaluates the classification-based approach on several standard image retrieval and clustering datasets, such as Cars-196, CUB-200-2011, Stanford Online Products, and In-Shop, and establishes that it is competitive across different feature dimensions and base feature networks.
CvS: Classification via Segmentation For Small Datasets
TLDR
CvS, a cost-effective classifier for small datasets that derives classification labels by predicting segmentation maps, is presented; a label propagation method is employed to obtain a fully segmented dataset from only a handful of manually segmented examples.
Deep Metric Learning Based on Scalable Neighborhood Components for Remote Sensing Scene Characterization
TLDR
An extensive experimental comparison validates the effectiveness of the proposed deep metric learning approach, which overcomes limited class discrimination by means of two components: 1) scalable neighborhood component analysis (SNCA), which aims at discovering the neighborhood structure in the metric space, and 2) a cross-entropy loss, which aims at preserving the class-discrimination capability based on the learned class prototypes.
Deep Metric Transfer for Label Propagation with Limited Annotated Data
TLDR
This paper proposes a generic framework that utilizes unlabeled data to aid generalization across all three tasks of object recognition, and shows that such a label propagation scheme can be highly effective when the similarity metric used for propagation is transferred from other related domains.
With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations
TLDR
This paper finds that using the nearest neighbor as the positive in contrastive losses improves performance significantly on ImageNet classification using ResNet-50 under the linear evaluation protocol, and demonstrates empirically that the method is less reliant on complex data augmentations.
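The nearest-neighbor-as-positive idea above can be sketched in a few lines. This is a hypothetical illustration (the function name and the two-dimensional embeddings are invented for clarity): instead of attracting a query to its own augmented view, the query is attracted to the support-set embedding closest to that view.

```python
import numpy as np

def nn_positive(query_aug, support):
    """Return the support embedding nearest (by dot product) to the
    augmented view; this replaces the usual positive in the loss."""
    sims = support @ query_aug
    return support[np.argmax(sims)]

support = np.array([[1.0, 0.0],   # cached embeddings from past batches
                    [0.0, 1.0],
                    [-1.0, 0.0]])
aug = np.array([0.9, 0.1])        # augmented view of the current query
pos = nn_positive(aug, support)   # the first support vector is closest
```

The swapped-in positive comes from a different image, which is what gives the method positives beyond what data augmentation alone can produce.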
Constrained Mean Shift for Representation Learning
TLDR
The main idea is to generalize the mean-shift algorithm by constraining the search space of nearest neighbors, resulting in semantically purer representations; a noisy cross-modal constraint is also shown to be usable for training self-supervised video models.
Unsupervised Learning From Video With Deep Neural Embeddings
TLDR
The Video Instance Embedding framework, which trains deep nonlinear embeddings on video sequence inputs, is presented; a two-pathway model with both static and dynamic processing paths is shown to be optimal, and the results suggest that deep neural embeddings are a promising approach to unsupervised video learning across a wide variety of task domains.

References

Showing 1–10 of 54 references
Unsupervised Feature Learning via Non-parametric Instance Discrimination
TLDR
This work formulates this intuition as a non-parametric classification problem at the instance level and uses noise-contrastive estimation to tackle the computational challenges imposed by the large number of instance classes.
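The noise-contrastive estimation step mentioned above replaces an intractable softmax over all n instances with a binary discrimination between data and uniform noise. A sketch, assuming the standard NCE posterior with uniform noise distribution 1/n (the function name and the example numbers are illustrative):

```python
def nce_posterior(p_model, n_instances, m_noise):
    """P(sample came from the data rather than from noise), given the
    model's probability p_model, n instance classes, and m noise draws
    per data sample; the noise distribution is uniform, 1/n."""
    p_noise = 1.0 / n_instances
    return p_model / (p_model + m_noise * p_noise)

# A confident model score vs a chance-level one, with n=1000 and m=10:
h_high = nce_posterior(0.5, 1000, 10)    # near 1: almost surely data
h_low = nce_posterior(0.001, 1000, 10)   # well below 0.5: likely noise
```

Training on this binary posterior only ever touches m noise samples per instance, so the cost no longer scales with the number of instance classes.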
Matching Networks for One Shot Learning
TLDR
This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
Learning to Compare: Relation Network for Few-Shot Learning
TLDR
A conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples of each, which is easily extended to zero-shot learning.
Sampling Matters in Deep Embedding Learning
TLDR
This paper proposes distance-weighted sampling, which selects more informative and stable examples than traditional approaches, and shows that a simple margin-based loss is sufficient to outperform all other loss functions.
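The distance-weighted sampling summarized above draws negatives with probability inversely proportional to the analytic density of pairwise distances between uniform points on a high-dimensional sphere, with a clip to keep weights bounded. A sketch under that assumption (function name and the clip value are illustrative):

```python
import numpy as np

def distance_weights(d, dim, clip=100.0):
    """Inverse of the pairwise-distance density q(d) on the unit
    (dim)-sphere, q(d) ~ d^(dim-2) * (1 - d^2/4)^((dim-3)/2),
    clipped so that very close pairs do not dominate sampling."""
    log_q = (dim - 2) * np.log(d) + (dim - 3) / 2 * np.log(1 - d ** 2 / 4)
    return np.minimum(np.exp(-log_q), clip)  # weight ~ min(clip, 1/q(d))

d = np.array([0.5, 1.0, 1.414])  # close, medium, near-typical distances
w = distance_weights(d, dim=128)
# weights fall off toward sqrt(2), where random pairs concentrate
```

Because random high-dimensional pairs concentrate near distance sqrt(2), uniform sampling would almost always pick uninformative negatives; the inverse-density weights spread samples over the full range of distances.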
Distance-Based Image Classification: Generalizing to New Classes at Near-Zero Cost
TLDR
Two distance-based classifiers, the k-nearest neighbor (k-NN) and nearest class mean (NCM) classifiers, are considered; a new metric learning approach is introduced for the latter, along with an extension of the NCM classifier that allows richer class representations.
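The NCM classifier in the summary above is simple enough to sketch directly (function names are illustrative): each class is represented by the mean of its training embeddings, so adding a new class at near-zero cost means computing one more mean.

```python
import numpy as np

def ncm_fit(X, y):
    """Represent each class by the mean of its training points."""
    classes = np.unique(y)
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, means

def ncm_predict(X, classes, means):
    """Assign each point to the class with the nearest mean."""
    d = ((X[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    return classes[np.argmin(d, axis=1)]

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
y = np.array([0, 0, 1, 1])
classes, means = ncm_fit(X, y)
pred = ncm_predict(np.array([[0.0, 0.5], [9.0, 10.0]]), classes, means)
```

The metric-learning contribution of the paper then amounts to learning the space in which these Euclidean distances are computed.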
Deep Residual Learning for Image Recognition
TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Learning Multiple Layers of Features from Tiny Images
TLDR
It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.
Prototypical Networks for Few-shot Learning
TLDR
This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
Deep Metric Learning Using Triplet Network
TLDR
This paper proposes the triplet network model, which aims to learn useful representations by distance comparisons, and demonstrates using various datasets that this model learns a better representation than that of its immediate competitor, the Siamese network.
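The distance comparison at the heart of the triplet model above can be written as a standard triplet objective, sketched here with invented example points and a margin of 1.0: pull an anchor toward a same-class positive and push it from a negative until a margin separates the two squared distances.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge on the gap between anchor-positive and anchor-negative
    squared distances; zero once the margin is satisfied."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])            # close same-class example
n = np.array([2.0, 0.0])            # far different-class example
loss = triplet_loss(a, p, n)        # margin already satisfied here
```

Swapping the roles of `p` and `n` produces a positive loss, which is the gradient signal that reshapes the embedding.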
Very Deep Convolutional Networks for Large-Scale Image Recognition
TLDR
This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.