Corpus ID: 235694435

Few-Shot Learning with a Strong Teacher

Han-Jia Ye, Lu Ming, De-Chuan Zhan, Wei-Lun Chao
Few-shot learning (FSL) aims to train a strong classifier using limited labeled examples. Many existing works take the meta-learning approach, sampling few-shot tasks in turn and optimizing the few-shot learner’s performance on classifying the query examples. In this paper, we point out two potential weaknesses of this approach. First, the sampled query examples may not provide sufficient supervision for the few-shot learner. Second, the effectiveness of meta-learning diminishes sharply with… 
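The meta-learning approach described above can be sketched concretely: sample an N-way K-shot episode from a labeled pool, build per-class prototypes from the support set, and score the learner by how well queries are classified by their nearest prototype (as in Prototypical Networks). The following is a minimal numpy sketch assuming a fixed, precomputed feature extractor; the function names are illustrative, not from the paper.

```python
import numpy as np

def sample_episode(features, labels, n_way, k_shot, q_query, rng):
    """Sample one N-way K-shot episode: a support set and a query set."""
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support_x, support_y, query_x, query_y = [], [], [], []
    for new_label, c in enumerate(classes):
        idx = rng.permutation(np.where(labels == c)[0])
        support_x.append(features[idx[:k_shot]])          # K support examples
        support_y += [new_label] * k_shot
        query_x.append(features[idx[k_shot:k_shot + q_query]])  # Q queries
        query_y += [new_label] * q_query
    return (np.concatenate(support_x), np.array(support_y),
            np.concatenate(query_x), np.array(query_y))

def prototype_accuracy(support_x, support_y, query_x, query_y):
    """Classify each query by its nearest class prototype (support mean)."""
    protos = np.stack([support_x[support_y == c].mean(axis=0)
                       for c in np.unique(support_y)])
    dists = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return (dists.argmin(axis=1) == query_y).mean()
```

In a full meta-learning loop, the query accuracy (or a softmax loss over the same distances) would be backpropagated through the feature extractor, episode after episode; the paper's first criticism is precisely that the handful of query examples per episode gives only weak supervision for this update.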
Towards Enabling Meta-Learning from Target Models
It is found that with a small ratio of tasks armed with target models, classic meta-learning algorithms can be improved substantially without consuming many additional resources.
Meta-Learning for Semi-Supervised Few-Shot Classification
This work proposes novel extensions of Prototypical Networks that are augmented with the ability to use unlabeled examples when producing prototypes, and confirms that these models learn to improve their predictions from unlabeled examples, much as a semi-supervised algorithm would.
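The prototype-refinement idea amounts to one soft k-means step: initialize prototypes as per-class support means, soft-assign each unlabeled example to the prototypes, then recompute each prototype as a weighted mean of its support examples and its soft unlabeled mass. A minimal numpy sketch, assuming precomputed features; the function name and weighting are illustrative, not the paper's exact formulation.

```python
import numpy as np

def refine_prototypes(support_x, support_y, unlabeled_x):
    """One soft k-means refinement step over class prototypes.

    Initial prototypes are the per-class support means; each unlabeled
    point then contributes to every prototype in proportion to its
    soft assignment (softmax over negative squared distance)."""
    classes = np.unique(support_y)
    protos = np.stack([support_x[support_y == c].mean(0) for c in classes])
    d = ((unlabeled_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d)
    w /= w.sum(axis=1, keepdims=True)        # (n_unlabeled, n_classes)
    # Weighted update: hard support counts plus soft unlabeled mass.
    counts = np.array([(support_y == c).sum() for c in classes])
    num = protos * counts[:, None] + w.T @ unlabeled_x
    den = counts + w.sum(axis=0)
    return num / den[:, None]
```

When the unlabeled pool is drawn from the same classes as the support set, this step effectively enlarges each class's sample size, which is why the refined prototypes tend to be more accurate than the few-shot support means alone.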
Finding Task-Relevant Features for Few-Shot Learning by Category Traversal
A Category Traversal Module is introduced that can be inserted as a plug-and-play module into most metric-learning based few-shot learners, identifying task-relevant features based on both intra-class commonality and inter-class uniqueness in the feature space.
A Theoretical Analysis of the Number of Shots in Few-Shot Learning
A theoretical analysis of the impact of the shot number on Prototypical Networks, a state-of-the-art few-shot classification method, is introduced, and a simple method is proposed that is robust to the choice of shot number used during meta-training, a crucial hyperparameter.
Learning to Compare: Relation Network for Few-Shot Learning
A conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples from each; the framework is easily extended to zero-shot learning.
Learning to Self-Train for Semi-Supervised Few-Shot Classification
A novel semi-supervised meta-learning method called learning to self-train (LST) is proposed that leverages unlabeled data, specifically meta-learning how to cherry-pick and label such unlabeled data to further improve performance.
Few-Shot Learning With Global Class Representations
This paper proposes to tackle the challenging few-shot learning (FSL) problem by learning global class representations using both base and novel class training samples, and an effective sample synthesis strategy is developed to avoid overfitting.
Boosting Few-Shot Visual Learning With Self-Supervision
This work uses self-supervision as an auxiliary task in a few-shot learning pipeline, enabling feature extractors to learn richer and more transferable visual representations while still using few annotated samples.
TADAM: Task dependent adaptive metric for improved few-shot learning
This work identifies that metric scaling and metric task conditioning are important for improving the performance of few-shot algorithms, and proposes and empirically tests a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space.
Unsupervised Meta-Learning for Few-Shot Image Classification
UMTRA is proposed, an algorithm that performs unsupervised, model-agnostic meta-learning for classification tasks and trades off some classification accuracy for a reduction of several orders of magnitude in the number of required labels.
A Closer Look at Few-shot Classification
The results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones, and a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.