Corpus ID: 3507990

Meta-Learning for Semi-Supervised Few-Shot Classification

@article{Ren2018MetaLearningFS,
  title={Meta-Learning for Semi-Supervised Few-Shot Classification},
  author={Mengye Ren and Eleni Triantafillou and Sachin Ravi and Jake Snell and Kevin Swersky and Joshua B. Tenenbaum and Hugo Larochelle and Richard S. Zemel},
  journal={ArXiv},
  year={2018},
  volume={abs/1803.00676}
}
In few-shot classification, we are interested in learning algorithms that train a classifier from only a handful of labeled examples. [...] These models are trained in an end-to-end way on episodes, to learn to leverage the unlabeled examples successfully. We evaluate these methods on versions of the Omniglot and miniImageNet benchmarks, adapted to this new framework augmented with unlabeled examples. We also propose a new split of ImageNet, consisting of a large set of classes, with a hierarchical…
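As a concrete illustration of the key method, here is a minimal NumPy sketch of soft k-means prototype refinement, the simplest of the paper's proposed semi-supervised variants. It assumes embeddings have already been produced by an encoder; the function name and shapes are illustrative, not taken from the authors' code.

import numpy as np

def refine_prototypes(support, support_labels, unlabeled, num_classes):
    """One soft k-means refinement of class prototypes using unlabeled embeddings.

    support:        (n_support, d)   embedded labeled examples
    support_labels: (n_support,)     integer class labels
    unlabeled:      (n_unlabeled, d) embedded unlabeled examples
    """
    # Initial prototypes: per-class mean of the labeled support embeddings.
    protos = np.stack([support[support_labels == c].mean(axis=0)
                       for c in range(num_classes)])
    # Soft assignment of each unlabeled point to every prototype:
    # softmax over negative squared Euclidean distances.
    dists = ((unlabeled[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    logits = -dists
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)  # (n_unlabeled, num_classes)
    # Refined prototypes: weighted mean of labeled and unlabeled points.
    counts = np.bincount(support_labels, minlength=num_classes).astype(float)
    num = protos * counts[:, None] + w.T @ unlabeled
    den = counts[:, None] + w.sum(axis=0)[:, None]
    return num / den

The paper's other two variants extend this step to handle distractors, one with an extra distractor cluster and one with soft masking of the unlabeled points.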
Training few-shot classification via the perspective of minibatch and pretraining
TLDR: This work proposes multi-episode and cross-way training techniques, which correspond respectively to the minibatch and pretraining stages of ordinary classification, and demonstrates that both strategies can greatly accelerate training without accuracy loss on a range of few-shot classification problems on Omniglot and miniImageNet.
Task-Adaptive Clustering for Semi-Supervised Few-Shot Classification
TLDR: This work proposes a few-shot learner that can work well under the semi-supervised setting where a large portion of training data is unlabeled, and introduces a concept of controlling the degree of task-conditioning for meta-learning.
Learning to Self-Train for Semi-Supervised Few-Shot Classification
TLDR: A novel semi-supervised meta-learning method called learning to self-train (LST) is proposed, which leverages unlabeled data and specifically meta-learns how to cherry-pick and label such unlabeled data to further improve performance.
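For context, below is a generic self-training loop of the kind LST builds on, where the cherry-picking step is a plain confidence threshold; the paper's contribution is to meta-learn that step instead. The sketch assumes a scikit-learn-style classifier exposing fit and predict_proba; all names are illustrative.

import numpy as np

def self_train(model, x_labeled, y_labeled, x_unlabeled, rounds=3, threshold=0.9):
    x, y = x_labeled, y_labeled
    for _ in range(rounds):
        model.fit(x, y)  # retrain on labeled plus pseudo-labeled data
        if len(x_unlabeled) == 0:
            break
        probs = model.predict_proba(x_unlabeled)
        conf = probs.max(axis=1)
        pick = conf >= threshold  # cherry-pick confident predictions only
        if not pick.any():
            break
        x = np.concatenate([x, x_unlabeled[pick]])
        y = np.concatenate([y, probs[pick].argmax(axis=1)])  # pseudo-labels
        x_unlabeled = x_unlabeled[~pick]  # drop newly pseudo-labeled points
    return model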
Flexible Few-Shot Learning with Contextual Similarity
TLDR: This work proposes to build upon recent contrastive unsupervised learning techniques and use a combination of instance and class invariance learning, aiming to obtain general and flexible features, and finds that this approach performs strongly on new flexible few-shot learning benchmarks.
Self-Supervised Prototypical Transfer Learning for Few-Shot Classification
TLDR: It is demonstrated that the self-supervised prototypical transfer learning approach ProtoTransfer outperforms state-of-the-art unsupervised meta-learning methods on few-shot tasks from the mini-ImageNet dataset and has comparable performance to supervised methods, but requires orders of magnitude fewer labels.
Meta Generalized Network for Few-Shot Classification
TLDR: This paper develops a meta backbone training method that efficiently learns a flexible feature extractor and a classifier initializer, leading to fast adaptation to unseen few-shot tasks without overfitting, and designs a trainable adaptive-interval model that improves the cosine classifier, increasing recognition accuracy on hard examples.
Few-Shot Classification By Few-Iteration Meta-Learning
TLDR: This work is the first to integrate both induction and transduction into the base learner in an optimization-based meta-learning framework, and performs a comprehensive experimental analysis, demonstrating the effectiveness of the approach on four few-shot classification datasets.
Task Cooperation for Semi-Supervised Few-Shot Learning
TLDR: This work couples the labeled support set in a few-shot task with easily collected unlabeled instances; prediction agreement on these encodes the relationship between tasks and is used to learn a smooth meta-model that promotes generalization to unseen supervised few-shot tasks.
Class-Discriminative Feature Embedding For Meta-Learning based Few-Shot Classification
  • Alireza Rahimpour, H. Qi
  • 2020 IEEE Winter Conference on Applications of Computer Vision (WACV)
TLDR: A few-shot learning framework based on a structured margin loss is proposed, which takes into account the global structure of the support set in order to generate a highly discriminative feature space where features from distinct classes are well separated into clusters.
Transductive Propagation Network for Few-shot Learning
TLDR: This paper proposes the Transductive Propagation Network (TPN), a transductive method that classifies the entire test set at once to alleviate the low-data problem and explicitly learns an underlying manifold space appropriate for propagating labels from few-shot examples.
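The transductive step can be sketched as classical graph-based label propagation in NumPy, shown below with a fixed Gaussian affinity; TPN itself learns the graph construction end-to-end, so treat this only as the closed-form propagation it builds on.

import numpy as np

def propagate_labels(embeddings, labels, num_classes, sigma=1.0, alpha=0.99):
    """labels: NumPy int array, class index for labeled points, -1 for unlabeled."""
    d2 = ((embeddings[:, None] - embeddings[None, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))   # Gaussian affinity matrix
    np.fill_diagonal(w, 0.0)             # no self-loops
    deg = w.sum(axis=1)
    s = w / np.sqrt(np.outer(deg, deg))  # symmetric normalization D^-1/2 W D^-1/2
    y = np.zeros((len(labels), num_classes))
    y[labels >= 0, labels[labels >= 0]] = 1.0  # one-hot seed labels
    # Closed-form propagation: F = (I - alpha * S)^-1 Y
    f = np.linalg.solve(np.eye(len(labels)) - alpha * s, y)
    return f.argmax(axis=1)  # predicted class for every point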

References

Showing 1-10 of 28 references
Prototypical Networks for Few-shot Learning
TLDR: This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
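A minimal Prototypical Networks episode, assuming precomputed embeddings, can be sketched in NumPy as follows; the squared-Euclidean distances and softmax cross-entropy follow the paper's formulation, while the function and variable names are illustrative.

import numpy as np

def prototypical_loss(support, support_labels, query, query_labels, num_classes):
    # Prototype for each class: mean embedding of its support examples.
    protos = np.stack([support[support_labels == c].mean(axis=0)
                       for c in range(num_classes)])
    # Logits: negative squared Euclidean distance to each prototype.
    logits = -((query[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    # Numerically stable log-softmax, then mean cross-entropy on true labels.
    m = logits.max(axis=1, keepdims=True)
    log_probs = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))
    return -log_probs[np.arange(len(query_labels)), query_labels].mean()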
Optimization as a Model for Few-Shot Learning
Matching Networks for One Shot Learning
TLDR: This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
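In the same spirit, the attention-based classification rule can be stripped down to a cosine-similarity softmax over the support set that weights one-hot support labels, as in the NumPy sketch below; this omits the paper's full-context embeddings and assumes precomputed features.

import numpy as np

def matching_predict(support, support_labels, query, num_classes):
    # Cosine similarity between each query and each support embedding.
    s = support / np.linalg.norm(support, axis=1, keepdims=True)
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    sims = q @ s.T
    # Attention: softmax over the support set for each query.
    a = np.exp(sims - sims.max(axis=1, keepdims=True))
    a = a / a.sum(axis=1, keepdims=True)
    onehot = np.eye(num_classes)[support_labels]
    return (a @ onehot).argmax(axis=1)  # attention-weighted label vote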
Siamese Neural Networks for One-Shot Image Recognition
TLDR: A method for learning siamese neural networks which employ a unique structure to naturally rank similarity between inputs, able to achieve strong results that exceed those of other deep learning models, with near state-of-the-art performance on one-shot classification tasks.
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems…
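The update structure can be shown with a toy first-order sketch on a one-parameter linear model, where gradients are analytic and NumPy suffices; note the full algorithm differentiates through the inner gradient step (second-order terms), which this illustrative variant drops.

import numpy as np

def grad(w, x, y):
    # d/dw of mean((w*x - y)^2) = mean(2*(w*x - y)*x)
    return np.mean(2.0 * (w * x - y) * x)

def maml_step(w, tasks, inner_lr=0.01, outer_lr=0.001):
    meta_grad = 0.0
    for x_tr, y_tr, x_val, y_val in tasks:
        # Inner loop: adapt to the task with one gradient step on its train split.
        w_task = w - inner_lr * grad(w, x_tr, y_tr)
        # Outer loop (first-order): evaluate the adapted parameter on the
        # task's validation split, ignoring second-order terms.
        meta_grad += grad(w_task, x_val, y_val)
    return w - outer_lr * meta_grad / len(tasks)

# Each task is linear regression with its own slope; meta-training seeks an
# initialization that adapts quickly to any of them.
rng = np.random.default_rng(0)
w = 0.0
for _ in range(500):
    tasks = []
    for _ in range(4):
        slope = rng.uniform(-2.0, 2.0)
        x = rng.normal(size=10)
        tasks.append((x[:5], slope * x[:5], x[5:], slope * x[5:]))
    w = maml_step(w, tasks)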
Semi-Supervised Self-Training of Object Detection Models
TLDR: The key contributions of this empirical study are to demonstrate that a model trained in this manner can achieve results comparable to a model trained in the traditional manner using a much larger set of fully labeled data, and that a training-data selection metric defined independently of the detector greatly outperforms a selection metric based on the detection confidence generated by the detector.
Meta-Learning with Temporal Convolutions
TLDR: This work proposes a class of simple and generic meta-learner architectures, based on temporal convolutions, that is domain-agnostic, has no particular strategy or algorithm encoded into it, and outperforms state-of-the-art methods that are less general and more complex.
One shot learning of simple visual concepts
TLDR: A generative model of how characters are composed from strokes is introduced, where knowledge from previous characters helps to infer the latent strokes in novel characters, using a massive new dataset of handwritten characters.
One-shot Learning with Memory-Augmented Neural Networks
TLDR: The ability of a memory-augmented neural network to rapidly assimilate new data, and to leverage this data to make accurate predictions after only a few samples, is demonstrated.
Adam: A Method for Stochastic Optimization
TLDR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
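The update rule itself is compact enough to sketch directly in NumPy; grad_fn below stands in for any gradient oracle, and the hyperparameter defaults follow the paper.

import numpy as np

def adam(grad_fn, theta, steps=1000, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = np.zeros_like(theta)  # first-moment (mean) estimate
    v = np.zeros_like(theta)  # second-moment (uncentered variance) estimate
    for t in range(1, steps + 1):
        g = grad_fn(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)  # bias-corrected first moment
        v_hat = v / (1 - beta2 ** t)  # bias-corrected second moment
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Example: minimize f(x) = ||x - 3||^2, whose gradient is 2*(x - 3).
print(adam(lambda x: 2.0 * (x - 3.0), np.zeros(2), steps=500, lr=0.1))  # ~[3. 3.]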