Corpus ID: 19524161

Gaussian Prototypical Networks for Few-Shot Learning on Omniglot

@article{Fort2017GaussianPN,
  title={Gaussian Prototypical Networks for Few-Shot Learning on Omniglot},
  author={Stanislav Fort},
  journal={ArXiv},
  year={2017},
  volume={abs/1708.02735}
}
We propose a novel architecture for k-shot classification on the Omniglot dataset. Building on prototypical networks, we extend their architecture to what we call Gaussian prototypical networks. Prototypical networks learn a map between images and embedding vectors, and use their clustering for classification. In our model, a part of the encoder output is interpreted as a confidence region estimate about the embedding point, and expressed as a Gaussian covariance matrix. Our network then…
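The idea described in the abstract can be sketched in a few lines of NumPy. This is an illustrative approximation only, not the paper's exact formulation: the function names, the confidence-weighted prototype, and the per-dimension inverse variances (i.e. the diagonal-covariance variant) are assumptions.

```python
import numpy as np

def gaussian_prototype(embeddings, inv_var):
    """Confidence-weighted class prototype and pooled inverse variance.

    embeddings: (k, d) support embeddings for one class
    inv_var:    (k, d) per-dimension inverse variances -- the "confidence"
                part of the encoder output in the diagonal-covariance variant
    """
    pooled_inv_var = inv_var.sum(axis=0)                          # combine confidences
    proto = (embeddings * inv_var).sum(axis=0) / pooled_inv_var   # weighted centroid
    return proto, pooled_inv_var

def classify(query, prototypes, inv_vars):
    """Pick the class minimizing the confidence-weighted squared distance."""
    # Mahalanobis-style distance with a diagonal covariance per class
    dists = [((query - p) ** 2 * iv).sum() for p, iv in zip(prototypes, inv_vars)]
    return int(np.argmin(dists))
```

With all inverse variances set to one, the distance reduces to the ordinary squared Euclidean distance of vanilla prototypical networks; in the paper the covariances are predicted per support image by the encoder.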
One-Way Prototypical Networks
A new way of training prototypical few-shot models for just a single class is shown, and a novel Gaussian layer for distance calculation in a prototypical network is proposed, which takes the support examples' distribution rather than just their centroid into account.
Prototypical Siamese Networks for Few-shot Learning
  • Junhua Wang, Yongping Zhai
  • Computer Science
  • 2020 IEEE 10th International Conference on Electronics Information and Emergency Communication (ICEIEC)
  • 2020
We propose a novel architecture, called Prototypical Siamese Networks, for few-shot learning, where a classifier must generalize to new classes not seen in the training set, given only a few examples…
Multiclass triplet metric-learning network combined with feature mixing block for few shot learning
A multiclass triplet metric-learning network, combined with a simple foreground–background feature mixing block, learns a feature embedding that brings similar samples close to each other and keeps samples of different classes far apart, promoting few-shot learning.
L2-norm prototypical networks for tackling the data shift problem in scene classification
Currently, most scene classification algorithms are trained and evaluated on a single dataset. However, practical applications are not usually restricted to a specific satellite…
Comparative Analysis on Classical Meta-Metric Models for Few-Shot Learning
The experimental results show that, for all models evaluated, adding non-pretrained networks worsens classification results, indicating that deep networks easily overfit in few-shot learning.
Class Representation Networks for Few-Shot Learning
  • Yongping Zhai, Junhua Wang
  • Computer Science
  • 2020 IEEE 11th International Conference on Software Engineering and Service Science (ICSESS)
  • 2020
In the proposed CRNs, a high-quality class representation is learned by training a set-based neural network, and a fully connected network is constructed to learn the distance metric instead of using a predefined one.
Stochastic Prototype Embeddings
This work describes an efficient sampler for approximate inference that allows the model to train at roughly the same space and time cost as its deterministic sibling, and aligns class-discriminating features with the axes of the embedding space, yielding an interpretable, disentangled representation.
LGSim: local task-invariant and global task-specific similarity for few-shot classification
This paper develops a neural network to learn the pairwise local relationship between samples in the union of the support and query sets, fully utilizing the supervision, and designs a global similarity function from a manifold perspective.
MxML: Mixture of Meta-Learners for Few-Shot Classification
This paper presents a method for constructing a mixture of meta-learners (MxML), where mixing parameters are determined by the weight prediction network (WPN) optimized to improve the few-shot classification performance.
Multi-domain few-shot image recognition with knowledge transfer
This work proposes a model that can adaptively integrate visual and semantic information to recognize novel categories, and adopts a fine-tuning strategy to adjust the scale and shift parameters of the batch normalization layers to simulate various feature distributions under different domains.

References

Showing 1–10 of 24 references
Prototypical Networks for Few-shot Learning
This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
Siamese Neural Networks for One-Shot Image Recognition
A method for learning siamese neural networks that employ a unique structure to naturally rank similarity between inputs, achieving strong results that exceed those of other deep learning models, with near state-of-the-art performance on one-shot classification tasks.
Matching Networks for One Shot Learning
This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
Optimization as a Model for Few-Shot Learning
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning…
Towards a Neural Statistician
An extension of the variational autoencoder is demonstrated that learns to compute representations, or statistics, of datasets in an unsupervised fashion; the learned statistics can be used for clustering datasets, transferring generative models to new datasets, selecting representative samples of datasets, and classifying previously unseen classes.
One-shot Learning with Memory-Augmented Neural Networks
The ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples, is demonstrated.
One shot learning of simple visual concepts
A generative model of how characters are composed from strokes is introduced, where knowledge from previous characters helps to infer the latent strokes in novel characters, using a massive new dataset of handwritten characters.
Meta-Learning with Temporal Convolutions
This work proposes a class of simple and generic meta-learner architectures, based on temporal convolutions, that is domain-agnostic and has no particular strategy or algorithm encoded into it, and outperforms state-of-the-art methods that are less general and more complex.
Meta Networks
A novel meta learning method, Meta Networks (MetaNet), is introduced that learns meta-level knowledge across tasks and shifts its inductive biases via fast parameterization for rapid generalization.