Corpus ID: 126265099

Low-Shot Learning with Imprinted Weights

@article{Qi2017LearningWI,
  title={Low-Shot Learning with Imprinted Weights},
  author={Hang Qi and Matthew A. Brown and David G. Lowe},
  journal={ArXiv},
  year={2017},
  volume={abs/1712.07136}
}
Human vision is able to immediately recognize novel visual categories after seeing just one or a few training examples. We describe how to add a similar capability to ConvNet classifiers by directly setting the final layer weights from novel training examples during low-shot learning. We call this process weight imprinting as it directly sets weights for a new category based on an appropriately scaled copy of the embedding layer activations for that training example. The imprinting process…
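A minimal PyTorch sketch of the imprinting idea as the abstract describes it: the weight vector for a novel class is an appropriately scaled (here, L2-normalized) copy of the embedding of its training example, and classification is scaled cosine similarity. All names, shapes, and the scale value are illustrative assumptions, not the authors' code.

import torch
import torch.nn.functional as F

def imprint_weight(embeddings):
    """Imprint a weight vector for a new class from one or a few
    L2-normalized example embeddings of shape (n_examples, dim)."""
    w = embeddings.mean(dim=0)      # average the example embeddings
    return F.normalize(w, dim=0)    # re-normalize to unit length

# Hypothetical setup: rows of W are unit-norm templates for base classes.
dim, n_classes = 64, 10
W = F.normalize(torch.randn(n_classes, dim), dim=1)

# One-shot novel class: embed the example, normalize, imprint.
novel = F.normalize(torch.randn(1, dim), dim=1)   # stand-in for an embedder output
W = torch.cat([W, imprint_weight(novel).unsqueeze(0)], dim=0)

# Prediction is scaled cosine similarity between embedding and weights.
s = 10.0                                          # scale factor, learned in practice
x = F.normalize(torch.randn(1, dim), dim=1)       # query embedding
probs = (s * x @ W.t()).softmax(dim=1)            # now over 11 classes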
Citations

Dynamic Few-Shot Visual Learning Without Forgetting
This work proposes to extend an object recognition system with an attention-based few-shot classification weight generator, and to redesign the classifier of a ConvNet model as the cosine similarity function between feature representations and classification weight vectors.
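A cosine-similarity classifier of the kind this summary describes can be sketched in a few lines of PyTorch; the learnable scale and all names are assumptions for illustration, and the attention-based weight generator is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Replace the usual dot-product output layer with cosine similarity
    between normalized features and normalized class weight vectors."""
    def __init__(self, dim, n_classes):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, dim))
        self.scale = nn.Parameter(torch.tensor(10.0))  # learnable temperature

    def forward(self, x):
        x = F.normalize(x, dim=1)
        w = F.normalize(self.weight, dim=1)
        return self.scale * x @ w.t()

logits = CosineClassifier(64, 5)(torch.randn(8, 64))   # (8, 5) cosine logits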
Learning to Learn for Small Sample Visual Recognition
This dissertation aims to endow visual recognition systems with low-shot learning ability, so that they learn consistently well on data of different sample sizes, and to progressively grow a convolutional neural network with increased model capacity, which significantly outperforms classic fine-tuning approaches.
Learning Representations by Predicting Bags of Visual Words
This work shows that the process of image discretization into visual words can provide the basis for very powerful self-supervised approaches in the image domain, thus allowing further connections to be made to related methods from the NLP domain that have been extremely successful so far.
Adaptive Masked Weight Imprinting for Few-Shot Segmentation
A novel method is proposed that constructs the weights for a new class from the few labelled samples in the support set without back-propagation, while also adapting the previously learned class weights.
Fix Your Features: Stationary and Maximally Discriminative Embeddings using Regular Polytope (Fixed Classifier) Networks
The approach improves and broadens the concept of a fixed classifier, recently proposed by Hoffer et al. (2018), to a larger class of fixed-classifier models, and shows that the stationarity of the embedding and its maximal discriminative representation can be theoretically justified.
AMP: Adaptive Masked Proxies for Few-Shot Segmentation
A novel adaptive masked proxies method that constructs the final segmentation layer weights from few labelled samples by utilizing multi-resolution average pooling on base embeddings masked with the label to act as a positive proxy for the new class, while fusing it with the previously learned class signatures.
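The core operation, forming a class proxy by average pooling a feature map under the support label's mask, might look roughly like the sketch below; shapes and names are assumed, and the multi-resolution pooling and fusion with previously learned signatures are omitted.

import torch
import torch.nn.functional as F

def masked_average_pooling(feature_map, mask):
    """Average a feature map (C, H, W) over the spatial positions where
    the binary mask (H, W) marks the new class; the normalized result
    can serve as a 1x1-conv classification weight for that class."""
    mask = mask.float().unsqueeze(0)                                   # (1, H, W)
    pooled = (feature_map * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1)
    return F.normalize(pooled, dim=0)

feats = torch.randn(256, 32, 32)             # stand-in base embeddings
mask = torch.rand(32, 32) > 0.8              # stand-in support label mask
proxy = masked_average_pooling(feats, mask)  # 256-dim proxy for the new class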
Exploit Clues From Views: Self-Supervised and Regularized Learning for Multiview Object Recognition
Experiments show that the recognition and retrieval results using VISPE outperform those of other self-supervised learning methods on seen and unseen data.
Layer Importance Estimation with Imprinting for Neural Network Quantization
This work proposes an accuracy-aware criterion to quantify each layer's importance and applies imprinting per layer as an efficient proxy for accuracy estimation, lending better interpretability to the selected bit-width configuration.

References

Showing 1-10 of 29 references
Siamese Neural Networks for One-Shot Image Recognition
A method for learning siamese neural networks which employ a unique structure to naturally rank similarity between inputs, achieving strong results that exceed those of other deep learning models, with near state-of-the-art performance on one-shot classification tasks.
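A minimal sketch of the siamese setup, assuming a shared embedding network and the weighted-L1-plus-sigmoid similarity head used in this line of work; the toy embedder and all names are illustrative.

import torch
import torch.nn as nn

class SiameseHead(nn.Module):
    """Score two inputs embedded by the same shared network: a learned
    per-dimension weighting of the L1 distance, then a sigmoid."""
    def __init__(self, embed, dim):
        super().__init__()
        self.embed = embed            # the shared (twin) embedding network
        self.out = nn.Linear(dim, 1)  # learned weighting of |h1 - h2|

    def forward(self, a, b):
        d = torch.abs(self.embed(a) - self.embed(b))
        return torch.sigmoid(self.out(d))   # probability the pair matches

embed = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())  # toy embedder
p_same = SiameseHead(embed, 64)(torch.randn(4, 1, 28, 28), torch.randn(4, 1, 28, 28))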
Matching Networks for One Shot Learning
This work employs ideas from metric learning based on deep neural features, and from recent advances that augment neural networks with external memories, to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
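The matching idea reduces to attention over the support set: a query's label distribution is a similarity-weighted average of the support labels. A sketch, with names and the cosine attention kernel assumed for illustration:

import torch
import torch.nn.functional as F

def matching_predict(query, support, support_labels, n_classes):
    """Predict a query's label as an attention-weighted sum of support
    labels, with attention from a softmax over cosine similarities."""
    q = F.normalize(query, dim=0)
    s = F.normalize(support, dim=1)
    attention = (s @ q).softmax(dim=0)                   # one weight per support item
    onehot = F.one_hot(support_labels, n_classes).float()
    return attention @ onehot                            # class probability vector

support = torch.randn(10, 64)                  # stand-in support embeddings
labels = torch.randint(0, 5, (10,))            # their labels (5-way task)
probs = matching_predict(torch.randn(64), support, labels, n_classes=5)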
Prototypical Networks for Few-shot Learning
This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
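The prototype idea fits in a few lines: embed the support set, average per class to get prototypes, and classify a query by a softmax over negative squared distances to them. Names and shapes below are illustrative.

import torch

def prototypical_predict(query, support, support_labels, n_classes):
    """Classify a query by softmax over negative squared Euclidean
    distances to per-class means (prototypes) of support embeddings."""
    protos = torch.stack([support[support_labels == c].mean(dim=0)
                          for c in range(n_classes)])
    dists = ((protos - query) ** 2).sum(dim=1)
    return (-dists).softmax(dim=0)

support = torch.randn(25, 64)                    # stand-in 5-way 5-shot embeddings
labels = torch.arange(5).repeat_interleave(5)    # five examples per class
probs = prototypical_predict(torch.randn(64), support, labels, n_classes=5)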
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
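At training time batch normalization standardizes each feature over the mini-batch and applies a learned scale and shift; at inference, running averages of the batch statistics take their place. A sketch of the training-time computation (names assumed):

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Standardize each feature over the batch, then apply the learned
    scale (gamma) and shift (beta)."""
    x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    return gamma * x_hat + beta

x = np.random.randn(32, 8)                     # batch of 32, 8 features
y = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))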
Metric Learning with Adaptive Density Discrimination
This work proposes a novel approach explicitly designed to address a number of subtle yet important issues that have stymied earlier DML algorithms: it maintains an explicit model of the distributions of the different classes in representation space, employs this knowledge to adaptively assess similarity, and achieves local discrimination by penalizing class distribution overlap.
Few-Shot Image Recognition by Predicting Parameters from Activations
A novel method that adapts a pre-trained neural network to novel categories by directly predicting the parameters from the activations, achieving state-of-the-art classification accuracy on novel categories by a significant margin while keeping comparable performance on the large-scale categories.
Human-level concept learning through probabilistic program induction
A computational model is described that learns in a similar fashion and does so better than current deep learning algorithms, and can generate new letters of the alphabet that look “right” as judged by Turing-like tests comparing the model's output to what real humans produce.
Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost…
FaceNet: A unified embedding for face recognition and clustering
A system that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity, achieving state-of-the-art face recognition performance using only 128 bytes per face.
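FaceNet trains this embedding with a triplet loss on L2-normalized vectors: pull an anchor toward a matching face and push it at least a margin farther from a non-matching one. A NumPy sketch with assumed names and margin:

import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge on the gap between anchor-positive and anchor-negative
    squared Euclidean distances."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

unit = lambda v: v / np.linalg.norm(v)         # embeddings live on the unit sphere
a, p, n = (unit(np.random.randn(128)) for _ in range(3))
loss = triplet_loss(a, p, n)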
Understanding the difficulty of training deep feedforward neural networks
The objective here is to understand better why standard gradient descent from random initialization is doing so poorly with deep neural networks, to better understand these recent relative successes, and to help design better algorithms in the future.
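The normalized (“Xavier”/“Glorot”) initialization this analysis motivated draws weights uniformly with a range balancing fan-in and fan-out, so activation and gradient variances stay roughly constant across layers. A sketch:

import numpy as np

def glorot_uniform(fan_in, fan_out):
    """W ~ U(-limit, +limit) with limit = sqrt(6 / (fan_in + fan_out))."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return np.random.uniform(-limit, limit, size=(fan_in, fan_out))

W = glorot_uniform(784, 256)   # e.g. the first layer of an MLP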