Corpus ID: 309759

Prototypical Networks for Few-shot Learning

@inproceedings{Snell2017PrototypicalNF,
  title={Prototypical Networks for Few-shot Learning},
  author={Jake Snell and Kevin Swersky and Richard S. Zemel},
  booktitle={NIPS},
  year={2017}
}
A recent approach to few-shot classification called matching networks has demonstrated the benefits of coupling metric learning with a training procedure that mimics the test scenario. [...] Our method is competitive with state-of-the-art one-shot classification approaches while being much simpler and more scalable with the size of the support set. We empirically demonstrate the performance of our approach on the Omniglot and mini-ImageNet datasets. We further demonstrate that a similar idea can be used for zero…
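The classification rule behind prototypical networks is simple enough to sketch. Below is a minimal, illustrative PyTorch version: class prototypes are the means of the embedded support points, and a query is assigned to the class with the nearest prototype under squared Euclidean distance. The random embeddings and episode shapes are placeholders for a learned embedding network.

```python
import torch

def prototypes(support_emb: torch.Tensor) -> torch.Tensor:
    """(n_classes, n_shot, dim) support embeddings -> (n_classes, dim) class means."""
    return support_emb.mean(dim=1)

def classify(query_emb: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    """Log-probabilities from negative squared Euclidean distances to prototypes."""
    d2 = torch.cdist(query_emb, protos).pow(2)  # (n_query, n_classes)
    return torch.log_softmax(-d2, dim=1)

# Toy 5-way 5-shot episode with random 64-d embeddings standing in for a CNN.
support = torch.randn(5, 5, 64)
queries = torch.randn(10, 64)
print(classify(queries, prototypes(support)).argmax(dim=1))  # predicted classes
```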
Prototypical Siamese Networks for Few-shot Learning
  • Junhua Wang, Yongping Zhai
  • Computer Science
    2020 IEEE 10th International Conference on Electronics Information and Emergency Communication (ICEIEC)
  • 2020
We propose a novel architecture, called Prototypical Siamese Networks, for few-shot learning, where a classifier must generalize to new classes not seen in the training set, given only a few examples.
Transductive Prototypical Network For Few-Shot Classification
TLDR
This paper proposes Transductive Prototypical Network (Td-PN), a universal transductive approach that refines class representations by merging the scarce labeled samples with high-confidence samples from the target set in a classification-friendly embedding space.
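As a rough illustration of the refinement idea described in the TLDR, the sketch below folds high-confidence query points back into the class means; the single refinement pass and the confidence threshold are assumptions for illustration, not Td-PN's exact procedure.

```python
import torch

def refine_prototypes(protos, query_emb, threshold=0.8):
    """One illustrative transductive pass: merge each prototype with the
    query points confidently predicted to belong to its class.
    (threshold and the single pass are assumed, not taken from the paper)"""
    p = torch.softmax(-torch.cdist(query_emb, protos).pow(2), dim=1)
    conf, label = p.max(dim=1)
    refined = []
    for c in range(protos.size(0)):
        extra = query_emb[(label == c) & (conf > threshold)]
        refined.append(torch.cat([protos[c:c + 1], extra]).mean(dim=0))
    return torch.stack(refined)
```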
Learning to Compare: Relation Network for Few-Shot Learning
TLDR
A conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples from each; the framework is easily extended to zero-shot learning.
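The core of the relation-network idea is to replace a fixed distance with a learned similarity module. A minimal sketch, with illustrative layer sizes (the paper uses convolutional embeddings and a convolutional relation head):

```python
import torch
import torch.nn as nn

class RelationModule(nn.Module):
    """Score each (query, class) embedding pair with a learned network
    instead of a fixed metric; outputs a relation score in [0, 1]."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, query_emb, class_emb):
        # query_emb: (q, dim), class_emb: (c, dim) -> (q, c) relation scores
        q, c = query_emb.size(0), class_emb.size(0)
        pairs = torch.cat([query_emb.unsqueeze(1).expand(q, c, -1),
                           class_emb.unsqueeze(0).expand(q, c, -1)], dim=-1)
        return self.net(pairs).squeeze(-1)
```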
Meta-Relation Networks for Few Shot Learning
TLDR
A meta-relation network is proposed to solve the few-shot learning problem, where the classifier must learn to recognize new classes given only a few examples from each, based on relation networks and Model-Agnostic Meta-Learning training methods.
Subspace Networks for Few-shot Classification
We propose subspace networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each…
One-Way Prototypical Networks
TLDR
A new way of training prototypical few-shot models for just a single class is shown, and a novel Gaussian layer for distance calculation in a prototypical network is proposed, which takes into account the support examples' distribution rather than just their centroid.
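One plausible reading of the Gaussian-layer idea, sketched below, is a diagonal-covariance Mahalanobis-style distance fitted to the support set; the paper's exact parameterization may differ.

```python
import torch

def gaussian_distance(query_emb, support_emb, eps=1e-4):
    """Distance to a class modelled as a diagonal Gaussian over the support
    set, so the spread matters, not just the centroid. (eps is an assumed
    regularizer for near-zero variances.)"""
    mu = support_emb.mean(dim=0)
    var = support_emb.var(dim=0, unbiased=False) + eps
    return (((query_emb - mu) ** 2) / var).sum(dim=1)
```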
Prototypical Bregman Networks
In this work, we approach one-shot and few-shot learning as the problem of finding good prototypes for each class, where these prototypes are generalizable to new data samples and classes. We…
Meta-Learning for Semi-Supervised Few-Shot Classification
TLDR
This work proposes novel extensions of Prototypical Networks that are augmented with the ability to use unlabeled examples when producing prototypes, and confirms that these models learn to improve their predictions from unlabeled examples, much as a semi-supervised algorithm would.
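The refinement described here can be sketched as a soft k-means step over the unlabeled points, in the spirit of the paper's basic variant (shapes and the single update step are illustrative):

```python
import torch

def soft_refine(support_emb, protos, unlabeled_emb):
    """One soft k-means step: soft-assign unlabeled points to prototypes,
    then recompute each prototype as a weighted mean of its hard-labeled
    support points and its soft-assigned unlabeled points."""
    w = torch.softmax(-torch.cdist(unlabeled_emb, protos).pow(2), dim=1)  # (u, c)
    num = support_emb.sum(dim=1) + w.t() @ unlabeled_emb  # support_emb: (c, shot, dim)
    den = support_emb.size(1) + w.sum(dim=0, keepdim=True).t()
    return num / den
```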
Simultaneous Perturbation Stochastic Approximation for Few-Shot Learning
TLDR
This paper introduces a new multi-task loss function and proposes an SPSA-like few-shot learning approach based on the prototypical networks method, providing a theoretical justification and an experimental analysis of the approach.
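For context, the generic SPSA estimator the title refers to needs only two loss evaluations per step, regardless of dimensionality; the sketch below shows that estimator, not the paper's specific multi-task loss.

```python
import torch

def spsa_grad(loss_fn, theta, c=0.01):
    """Two-evaluation SPSA gradient estimate: perturb every coordinate at
    once with a random +/-1 (Rademacher) vector. (c is an assumed
    perturbation size; in practice it is annealed over iterations.)"""
    delta = torch.randint(0, 2, theta.shape).float() * 2 - 1
    g_hat = (loss_fn(theta + c * delta) - loss_fn(theta - c * delta)) / (2 * c)
    return g_hat * delta  # elementwise 1/delta equals delta for +/-1 entries
```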
Semi-Supervised and Active Few-Shot Learning with Prototypical Networks
We consider the problem of semi-supervised few-shot classification, where a classifier needs to adapt to new tasks using a few labeled examples and (potentially many) unlabeled examples. We propose a…

References

SHOWING 1-10 OF 43 REFERENCES
Siamese Neural Networks for One-Shot Image Recognition
TLDR
A method for learning siamese neural networks that employ a unique structure to naturally rank similarity between inputs; it achieves strong results that exceed those of other deep learning models, with near state-of-the-art performance on one-shot classification tasks.
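One common reading of this verification architecture is a twin embedding network scored by a learned weighting of the component-wise L1 distance; the sketch below uses illustrative fully connected layers in place of the paper's convolutional ones.

```python
import torch
import torch.nn as nn

class Siamese(nn.Module):
    """Twin network: embed both inputs with shared weights, then score
    whether they share a class from the component-wise L1 distance."""
    def __init__(self, in_dim=784, emb_dim=64):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU())
        self.head = nn.Linear(emb_dim, 1)

    def forward(self, a, b):
        d = (self.embed(a) - self.embed(b)).abs()
        return torch.sigmoid(self.head(d)).squeeze(-1)  # P(same class)
```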
Optimization as a Model for Few-Shot Learning
Matching Networks for One Shot Learning
TLDR
This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
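The attention mechanism this TLDR describes can be sketched in a few lines; this is the simple cosine-attention form, without the paper's full context embeddings.

```python
import torch
import torch.nn.functional as F

def matching_predict(query_emb, support_emb, support_labels, n_classes):
    """Attention over the support set: softmax over cosine similarities,
    with the attention mass summed per label to give class probabilities."""
    q = F.normalize(query_emb, dim=1)
    s = F.normalize(support_emb, dim=1)
    attn = torch.softmax(q @ s.t(), dim=1)                 # (q, n_support)
    onehot = F.one_hot(support_labels, n_classes).float()  # (n_support, c)
    return attn @ onehot                                   # (q, n_classes)
```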
Write a Classifier: Zero-Shot Learning Using Purely Textual Descriptions
TLDR
An approach for zero-shot learning of object categories where the description of unseen categories comes in the form of typical text, such as an encyclopedia entry, without the need for explicitly defined attributes.
Synthesized Classifiers for Zero-Shot Learning
TLDR
This work introduces a set of "phantom" object classes whose coordinates live in both the semantic space and the model space and demonstrates superior accuracy of this approach over the state of the art on four benchmark datasets for zero-shot learning.
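The phantom-class construction can be caricatured as synthesizing each real class's classifier from a small set of learned base classifiers, weighted by semantic similarity; the softmax-over-distances weighting below is an assumed simplification of the paper's learned bilinear similarity.

```python
import torch

def synthesize_classifiers(class_sem, phantom_sem, phantom_w):
    """Each real class's classifier is a similarity-weighted combination of
    phantom base classifiers. class_sem: (n_classes, s) semantic codes,
    phantom_sem: (r, s) phantom coordinates in semantic space,
    phantom_w: (r, feat_dim) phantom coordinates in model space."""
    sim = torch.softmax(-torch.cdist(class_sem, phantom_sem).pow(2), dim=1)
    return sim @ phantom_w  # (n_classes, feat_dim) classifier weights
```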
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems…
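The mechanism is a two-level optimization: adapt on a task's support set with a few gradient steps, then differentiate the post-adaptation query loss with respect to the original parameters. A minimal single-inner-step sketch (the functional loss interface and inner learning rate are illustrative):

```python
import torch

def maml_step(params, loss_fn, support, query, inner_lr=0.01):
    """One meta-step: inner gradient step on the support set, then the
    query loss of the adapted parameters; create_graph=True keeps the
    inner step differentiable for the outer update."""
    grads = torch.autograd.grad(loss_fn(params, *support), params,
                                create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(params, grads)]
    return loss_fn(adapted, *query)

# Toy usage: a 1-parameter regression task.
w = torch.zeros(1, requires_grad=True)
loss = lambda ps, x, y: ((x * ps[0] - y) ** 2).mean()
meta_loss = maml_step([w], loss, (torch.randn(5), torch.randn(5)),
                      (torch.randn(5), torch.randn(5)))
meta_loss.backward()  # gradient w.r.t. the pre-adaptation w
```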
Evaluation of output embeddings for fine-grained image classification
TLDR
This project shows that compelling classification performance can be achieved on fine-grained categories even without labeled training data, and establishes a substantially improved state-of-the-art on the Animals with Attributes and Caltech-UCSD Birds datasets.
Predicting Deep Zero-Shot Convolutional Neural Networks Using Textual Descriptions
TLDR
A new model is presented that can classify unseen categories from their textual description and takes advantage of the architecture of CNNs and learn features at different layers, rather than just learning an embedding space for both modalities, as is common with existing approaches.
Towards a Neural Statistician
TLDR
An extension of the variational autoencoder is demonstrated that can learn, in an unsupervised fashion, a method for computing representations, or statistics, of datasets; the learned statistics can be used for clustering datasets, transferring generative models to new datasets, selecting representative samples, and classifying previously unseen classes.
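The "statistic" at the heart of this model is a permutation-invariant summary of a whole dataset; stripped of the variational machinery, the pooling idea looks roughly like this (layer sizes are illustrative):

```python
import torch
import torch.nn as nn

class StatisticNetwork(nn.Module):
    """Encode each item, pool across the dataset, and map the pooled code
    to a dataset-level representation; mean-pooling makes the summary
    invariant to the order of items. (The full model wraps this in a
    variational autoencoder.)"""
    def __init__(self, in_dim=32, stat_dim=16):
        super().__init__()
        self.item = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.pooled = nn.Linear(64, stat_dim)

    def forward(self, dataset):  # dataset: (n_items, in_dim)
        return self.pooled(self.item(dataset).mean(dim=0))
```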
One shot learning of simple visual concepts
TLDR
A generative model of how characters are composed from strokes is introduced, where knowledge from previous characters helps to infer the latent strokes in novel characters, using a massive new dataset of handwritten characters.