Adaptive Task Sampling for Meta-Learning
@article{Liu2020AdaptiveTS, title={Adaptive Task Sampling for Meta-Learning}, author={Chenghao Liu and Zhihao Wang and Doyen Sahoo and Yuan Fang and Kun Zhang and Steven C. H. Hoi}, journal={ArXiv}, year={2020}, volume={abs/2007.08735} }
Meta-learning methods have been extensively studied and applied in computer vision, especially for few-shot classification tasks. The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time by randomly sampling classes in meta-training data to construct few-shot tasks for episodic training. While a rich line of work focuses solely on how to extract meta-knowledge across tasks, we exploit the complementary problem on how to generate…
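The episodic construction described above (randomly sampling classes from the meta-training data to build few-shot tasks) can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation; the function name, toy labels, and split sizes are assumptions for the example:

```python
import random
from collections import defaultdict

def sample_episode(labels, n_way=5, k_shot=1, q_queries=2, rng=None):
    """Sample an N-way K-shot episode: uniformly pick n_way classes,
    then split each class's examples into support and query sets."""
    rng = rng or random.Random()
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = rng.sample(sorted(by_class), n_way)
    support, query = [], []
    for episode_label, c in enumerate(classes):
        # Draw k_shot + q_queries distinct examples of this class.
        picked = rng.sample(by_class[c], k_shot + q_queries)
        support += [(i, episode_label) for i in picked[:k_shot]]
        query += [(i, episode_label) for i in picked[k_shot:]]
    return support, query

# Toy meta-training set: 10 classes with 20 examples each.
labels = [c for c in range(10) for _ in range(20)]
support, query = sample_episode(labels, n_way=5, k_shot=1, q_queries=2,
                                rng=random.Random(0))
```

Each episode relabels its sampled classes to 0..n_way-1, mimicking the few-shot situation faced at test time; the paper's contribution is to replace the uniform `rng.sample` over classes with an adaptive sampling scheme.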
24 Citations
Not All Tasks are Equal - Task Attended Meta-learning for Few-shot Learning
- Computer Science
- 2022
A training curriculum called task-attended meta-training is introduced to learn a meta-model from weighted tasks in a batch; performance improvements of the proposed curriculum over state-of-the-art task scheduling algorithms on noisy datasets and in a cross-domain few-shot learning setup validate its effectiveness.
Curriculum Meta Learning: Learning to Learn from Easy to Hard
- Computer Science, Proceedings of the 2021 5th International Conference on Electronic Information Technology and Computer Engineering
- 2021
This paper defines the hardness of subtasks at the class level and guides the model to learn training subtasks from easy to hard and proposes a curriculum learning framework to improve the generalization performance of different meta-learning algorithms.
The Effect of Diversity in Meta-Learning
- Computer Science, ArXiv
- 2022
It is demonstrated that even a handful of tasks, repeated over multiple batches, is sufficient to achieve performance similar to uniform sampling, calling into question the need for additional tasks to create better models.
Coarse-to-fine pseudo supervision guided meta-task optimization for few-shot object classification
- Computer Science, Pattern Recognit.
- 2022
Uniform Sampling over Episode Difficulty
- Computer Science, NeurIPS
- 2021
It is shown that sampling uniformly over episode difficulty outperforms other sampling schemes, including curriculum and easy-/hard-mining; the proposed sampling method is algorithm-agnostic, and its insights can be leveraged to improve few-shot learning accuracy across many episodic training algorithms.
Free-Lunch for Cross-Domain Few-Shot Learning: Style-Aware Episodic Training with Robust Contrastive Learning
- Computer Science, ACM Multimedia
- 2022
Style-aware Episodic Training with Robust Contrastive Learning (SET-RCL) is proposed, which focuses on manipulating the style distribution of training tasks in the source domain, such that the learned model can achieve better adaptation on test tasks with domain-specific styles.
Multidimensional Belief Quantification for Label-Efficient Meta-Learning
- Computer Science, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2022
A novel uncertainty-aware task selection model is proposed for label efficient meta-learning that formulates a multidimensional belief measure, which can quantify the known uncertainty and lower bound the unknown uncertainty of any given task.
Progressive Meta-Learning With Curriculum
- Computer Science, IEEE Transactions on Circuits and Systems for Video Technology
- 2022
This paper develops a Curriculum-Based Meta-learning method based on a predefined curriculum, and proposes an end-to-end Self-Paced Meta-learning (SepMeta) method in which the curriculum is effectively integrated as a regularization term into the objective, so that the meta-learner can adaptively measure the hardness of tasks according to what the model has already learned.
Curriculum-Based Meta-learning
- Computer Science, ACM Multimedia
- 2021
This paper presents a Curriculum-Based Meta-learning (CubMeta) method to train the meta-learner using tasks from easy to hard, and in each step, a module named BrotherNet is designed to establish harder tasks and an effective learning scheme for obtaining an ensemble of stronger meta-learners.
Near-Optimal Task Selection for Meta-Learning with Mutual Information and Online Variational Bayesian Unlearning
- Computer Science, AISTATS
- 2022
This paper exploits the submodularity property of the new criterion for devising the first active task selection algorithm for meta-learning with a near-optimal performance guarantee and proposes an online variant of the Stein variational gradient descent to perform fast belief updates of the meta-parameters.
References
Meta-Transfer Learning for Few-Shot Learning
- Computer Science, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
A novel few-shot learning method called meta-transfer learning (MTL), which learns to adapt a deep NN to few-shot learning tasks, and introduces the hard task (HT) meta-batch scheme as an effective learning curriculum for MTL.
Meta-Learning for Semi-Supervised Few-Shot Classification
- Computer Science, ICLR
- 2018
This work proposes novel extensions of Prototypical Networks that are augmented with the ability to use unlabeled examples when producing prototypes, and confirms that these models can learn to improve their predictions from unlabeled examples, much like a semi-supervised algorithm would.
Meta-SGD: Learning to Learn Quickly for Few Shot Learning
- Computer Science, Education, ArXiv
- 2017
Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.
On First-Order Meta-Learning Algorithms
- Computer Science, ArXiv
- 2018
A family of algorithms for learning a parameter initialization that can be fine-tuned quickly on a new task, using only first-order derivatives for the meta-learning updates, including Reptile, which works by repeatedly sampling a task, training on it, and moving the initialization towards the trained weights on that task.
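The Reptile update described in this entry (sample a task, train on it, move the initialization towards the trained weights) can be sketched on a one-dimensional toy task family. This is a minimal illustration of the update rule under assumed toy losses, not the authors' implementation; the task family and all hyperparameters here are invented for the example:

```python
import random

def sgd(params, grad_fn, lr, steps):
    """Plain SGD on one task's loss, using only first-order gradients."""
    for _ in range(steps):
        g = grad_fn(params)
        params = [p - lr * gi for p, gi in zip(params, g)]
    return params

def reptile(init, sample_task, inner_lr=0.02, outer_lr=0.1,
            inner_steps=5, iterations=200, rng=None):
    """Reptile: repeatedly sample a task, train on it, then move the
    initialization a step towards the task-trained weights."""
    rng = rng or random.Random()
    theta = list(init)
    for _ in range(iterations):
        grad_fn = sample_task(rng)
        phi = sgd(list(theta), grad_fn, inner_lr, inner_steps)
        theta = [t + outer_lr * (p - t) for t, p in zip(theta, phi)]
    return theta

# Toy task family: losses (p - a)^2 with target a drawn from {1, 2, 3},
# so the gradient is 2 * (p - a); Reptile should drive the
# initialization towards the region shared by all tasks.
def sample_task(rng):
    a = rng.choice([1.0, 2.0, 3.0])
    return lambda params: [2.0 * (params[0] - a)]

theta = reptile([0.0], sample_task, rng=random.Random(0))
```

Because the outer step interpolates towards `phi` rather than using second-order derivatives, only first-order gradients are ever computed, which is the point of this family of algorithms.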
Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples
- Computer Science, ICLR
- 2020
This work proposes Meta-Dataset: a new benchmark for training and evaluating models that is large-scale, consists of diverse datasets, and presents more realistic tasks, and proposes a new set of baselines for quantifying the benefit of meta-learning in Meta-Dataset.
Meta-Learning With Differentiable Convex Optimization
- Computer Science, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
The objective is to learn feature embeddings that generalize well under a linear classification rule for novel categories and this work exploits two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem and the dual formulation of the optimization problem.
TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning
- Computer Science, ICML
- 2019
In TapNets, neural networks augmented with task-adaptive projection for improved few-shot learning, a network and a set of per-class reference vectors are learned across widely varying tasks by employing a meta-learning strategy with episode-based training.
Learning to Compare: Relation Network for Few-Shot Learning
- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
A conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples of each, which is easily extended to zero-shot learning.
Learning to Propagate Labels: Transductive Propagation Network for Few-Shot Learning
- Computer Science, ICLR
- 2019
This paper proposes Transductive Propagation Network (TPN), a novel meta-learning framework for transductive inference that classifies the entire test set at once to alleviate the low-data problem.