• Corpus ID: 3507990

Meta-Learning for Semi-Supervised Few-Shot Classification

@article{Ren2018MetaLearningFS,
  title={Meta-Learning for Semi-Supervised Few-Shot Classification},
  author={Mengye Ren and Eleni Triantafillou and Sachin Ravi and Jake Snell and Kevin Swersky and Joshua B. Tenenbaum and H. Larochelle and Richard S. Zemel},
  journal={ArXiv},
  year={2018},
  volume={abs/1803.00676}
}
In few-shot classification, we are interested in learning algorithms that train a classifier from only a handful of labeled examples. […] These models are trained in an end-to-end way on episodes, to learn to leverage the unlabeled examples successfully. We evaluate these methods on versions of the Omniglot and miniImageNet benchmarks, adapted to this new framework augmented with unlabeled examples. We also propose a new split of ImageNet, consisting of a large set of classes, with a hierarchical…
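The key method elided above refines each class prototype with unlabeled embeddings via soft k-means within an episode. A minimal NumPy sketch of that refinement step (the array shapes and the unit pseudo-count standing in for the labeled contribution are simplifying assumptions, not the paper's exact update):

```python
# Hedged sketch of soft k-means prototype refinement with unlabeled
# embeddings, as in semi-supervised Prototypical Networks. Shapes and the
# unit pseudo-count for the labeled contribution are simplifying assumptions.
import numpy as np

def refine_prototypes(protos, unlabeled, n_steps=1):
    """protos: (C, D) per-class prototypes from the labeled support set;
    unlabeled: (M, D) embeddings of unlabeled examples."""
    for _ in range(n_steps):
        # Squared Euclidean distance from each unlabeled point to each prototype.
        d2 = ((unlabeled[:, None, :] - protos[None, :, :]) ** 2).sum(-1)  # (M, C)
        # Soft class assignments: softmax over negative distances.
        logits = -d2 - (-d2).max(axis=1, keepdims=True)
        w = np.exp(logits)
        w /= w.sum(axis=1, keepdims=True)                                 # (M, C)
        # New prototype = weighted mean of soft members, with the old
        # prototype folded in at unit weight (stand-in for labeled points).
        protos = (protos + w.T @ unlabeled) / (1.0 + w.sum(axis=0)[:, None])
    return protos
```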

Citations

Revisiting Unsupervised Meta-Learning via the Characteristics of Few-Shot Tasks.

This work removes the requirement of base-class labels, learns generalizable embeddings via Unsupervised Meta-Learning (UML), and applies embedding-based classifiers to novel tasks with labeled few-shot examples at meta-test time.

Task-Adaptive Clustering for Semi-Supervised Few-Shot Classification

This work proposes a few-shot learner that can work well under the semi-supervised setting where a large portion of training data is unlabeled, and introduces a concept of controlling the degree of task-conditioning for meta-learning.

Learning to Self-Train for Semi-Supervised Few-Shot Classification

A novel semi-supervised meta-learning method called learning to self-train (LST) is proposed, which leverages unlabeled data and specifically meta-learns how to cherry-pick and pseudo-label such unlabeled data to further improve performance.
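The cherry-picking step amounts to selecting confidently predicted unlabeled examples and pseudo-labeling them. A hedged sketch of that selection (the fixed confidence threshold is an assumption standing in for LST's meta-learned picking strategy):

```python
# Hedged sketch of the pseudo-labeling "cherry-pick" step; a fixed confidence
# threshold is an assumption standing in for LST's meta-learned picking.
import numpy as np

def cherry_pick(probs, threshold=0.9):
    """probs: (M, C) class probabilities predicted on unlabeled examples.
    Returns indices of picked examples and their pseudo-labels."""
    conf = probs.max(axis=1)
    picked = np.where(conf >= threshold)[0]
    return picked, probs[picked].argmax(axis=1)
```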

Flexible Few-Shot Learning with Contextual Similarity

This work proposes to build upon recent contrastive unsupervised learning techniques and use a combination of instance and class invariance learning, aiming to obtain general and flexible features, and finds that this approach performs strongly on new flexible few-shot learning benchmarks.

Self-Adaptive Label Augmentation for Semi-supervised Few-shot Classification

Experiments demonstrate that SALA outperforms several state-of-the-art methods for semi-supervised few-shot classification on benchmark datasets.

Self-Supervised Prototypical Transfer Learning for Few-Shot Classification

It is demonstrated that the self-supervised prototypical transfer learning approach ProtoTransfer outperforms state-of-the-art unsupervised meta-learning methods on few-shot tasks from the mini-ImageNet dataset and has comparable performance to supervised methods, but requires orders of magnitude fewer labels.

Meta Generalized Network for Few-Shot Classification

This paper develops a meta backbone training method that efficiently learns a flexible feature extractor and a classifier initializer, leading to fast adaptation to unseen few-shot tasks without overfitting, and designs a trainable adaptive interval model that improves the cosine classifier, increasing recognition accuracy on hard examples.

Few-Shot Learning with a Strong Teacher

A novel meta-training objective for the few-shot learner is proposed, which encourages the few-shot learner to generate classifiers that perform like strong classifiers; with it, meta-learning based FSL methods consistently outperform non-meta-learning based methods at different numbers of shots.

Task Cooperation for Semi-Supervised Few-Shot Learning

This work couples the labeled support set in a few-shot task with easily collected unlabeled instances; prediction agreement on these instances encodes the relationship between tasks and yields a smooth meta-model that promotes generalization to unseen supervised few-shot tasks.

Semi-Supervised Few-Shot Learning from A Dependency-Discriminant Perspective

  • Zejiang Hou, S. Kung
  • Computer Science
    2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  • 2022
To train a classifier, a Dependency Maximization loss based on the Hilbert-Schmidt norm of the cross-covariance operator is proposed, which maximizes the statistical dependency between the embedded features of the unlabeled data and their label predictions, together with the supervised loss over the support set.
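The Dependency Maximization loss builds on the empirical Hilbert-Schmidt Independence Criterion, HSIC(X, Y) = tr(KHLH)/(n-1)^2, with K and L Gram matrices and H the centering matrix. A minimal sketch of that estimator (linear kernels are an assumption for brevity; the paper's kernel choice may differ):

```python
# Hedged sketch of the (biased) empirical HSIC estimator behind the
# Dependency Maximization loss: HSIC(X, Y) = tr(K H L H) / (n - 1)^2.
# Linear kernels are an assumption for brevity.
import numpy as np

def hsic(X, Y):
    """X: (n, d) embedded features; Y: (n, c) label predictions."""
    n = X.shape[0]
    K = X @ X.T                           # feature Gram matrix
    L = Y @ Y.T                           # prediction Gram matrix
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```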
...

References

Showing 1–10 of 26 references

Prototypical Networks for Few-shot Learning

This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
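For orientation, the Prototypical Networks rule: each class is represented by the mean of its embedded support examples, and queries are classified by a softmax over negative squared distances to the prototypes. A minimal sketch (precomputed embeddings are an assumption; the paper learns them end-to-end):

```python
# Hedged sketch of the Prototypical Networks rule: class prototypes are mean
# support embeddings; queries get a softmax over negative squared distances.
# Precomputed embeddings are an assumption (the paper learns them end-to-end).
import numpy as np

def proto_classify(support, support_y, queries, n_classes):
    """support: (N, D) embeddings; support_y: (N,) int labels; queries: (Q, D)."""
    protos = np.stack([support[support_y == c].mean(axis=0)
                       for c in range(n_classes)])                  # (C, D)
    d2 = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(-1)  # (Q, C)
    logits = -d2 - (-d2).max(axis=1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)                         # (Q, C)
```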

Optimization as a Model for Few-Shot Learning

Matching Networks for One Shot Learning

This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
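The prediction rule described here can be sketched as attention over the support set: a query's label distribution is a similarity-weighted sum of one-hot support labels. A minimal sketch with cosine-similarity attention (a single query and one-hot labels are assumptions for brevity; the paper also uses full-context embeddings):

```python
# Hedged sketch of the Matching Networks prediction rule: attention over the
# support set with cosine similarity; a single query and one-hot labels are
# assumptions for brevity (the paper also uses full-context embeddings).
import numpy as np

def matching_predict(support, support_onehot, query):
    """support: (N, D); support_onehot: (N, C); query: (D,)."""
    sims = (support @ query) / (
        np.linalg.norm(support, axis=1) * np.linalg.norm(query) + 1e-8)
    attn = np.exp(sims - sims.max())
    attn /= attn.sum()                  # softmax attention over support items
    return attn @ support_onehot        # (C,) predicted label distribution
```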

Transductive Multi-View Zero-Shot Learning

A novel heterogeneous multi-view hypergraph label propagation method is formulated for zero-shot learning in the transductive embedding space that rectifies the projection shift between the auxiliary and target domains, exploits the complementarity of multiple semantic representations, and significantly outperforms existing methods for both zero-shot and N-shot recognition.

Siamese Neural Networks for One-Shot Image Recognition

A method for learning siamese neural networks that employ a unique structure to naturally rank similarity between inputs; it achieves strong results that exceed those of other deep learning models, with near state-of-the-art performance on one-shot classification tasks.
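The similarity ranking can be sketched as a sigmoid over a learned, component-weighted L1 distance between twin-network embeddings. A minimal sketch (the embeddings and the weight vector are assumed given; in the paper both are learned end-to-end):

```python
# Hedged sketch of the siamese scoring head: a sigmoid over the weighted L1
# distance between twin-network embeddings. The embeddings and the weight
# vector alpha are assumed given; in the paper both are learned end-to-end.
import numpy as np

def siamese_score(emb_a, emb_b, alpha):
    """emb_a, emb_b: (D,) twin embeddings; alpha: (D,) learned weights."""
    return 1.0 / (1.0 + np.exp(-(alpha @ np.abs(emb_a - emb_b))))
```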

Semi-Supervised Self-Training of Object Detection Models

The key contributions of this empirical study are to demonstrate that a model trained in this manner can achieve results comparable to a model trained in the traditional manner using a much larger set of fully labeled data, and that a training-data selection metric defined independently of the detector greatly outperforms a selection metric based on the detection confidence generated by the detector.

Meta-Learning with Temporal Convolutions

This work proposes a class of simple and generic meta-learner architectures, based on temporal convolutions, that is domain-agnostic, has no particular strategy or algorithm encoded into it, and outperforms state-of-the-art methods that are less general and more complex.

One shot learning of simple visual concepts

A generative model of how characters are composed from strokes is introduced, where knowledge from previous characters helps to infer the latent strokes in novel characters, using a massive new dataset of handwritten characters.

One-shot Learning with Memory-Augmented Neural Networks

The ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples is demonstrated.

Learning Algorithms for Active Learning

A model that learns active learning algorithms via meta-learning jointly learns a data representation, an item-selection heuristic, and a prediction function for a distribution of related tasks.