Corpus ID: 221341010

learn2learn: A Library for Meta-Learning Research

@article{Arnold2020learn2learnAL,
  title={learn2learn: A Library for Meta-Learning Research},
  author={S{\'e}bastien M. R. Arnold and Praateek Mahajan and Debajyoti Datta and Ian Bunner and Konstantinos Saitas Zarkias},
  journal={ArXiv},
  year={2020},
  volume={abs/2008.12284}
}
Meta-learning researchers face two fundamental issues in their empirical work: prototyping and reproducibility. Researchers are prone to make mistakes when prototyping new algorithms and tasks because modern meta-learning methods rely on unconventional functionalities of machine learning frameworks. In turn, reproducing existing results becomes a tedious endeavour -- a situation exacerbated by the lack of standardized implementations and benchmarks. As a result, researchers spend inordinate… 
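
To illustrate the library's stated goal, here is a minimal sketch of its MAML wrapper, following the API documented in the learn2learn repository; the linear model and random tensors are placeholders standing in for a real task distribution:

```python
import torch
import learn2learn as l2l

model = torch.nn.Linear(10, 5)              # placeholder base learner
maml = l2l.algorithms.MAML(model, lr=0.1)   # wraps the model for differentiable adaptation
opt = torch.optim.Adam(maml.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

for step in range(100):
    opt.zero_grad()
    learner = maml.clone()  # clone keeps the graph so outer gradients reach maml
    # Placeholder support/query data; a real loop would sample a task here.
    xs, ys = torch.randn(5, 10), torch.randint(0, 5, (5,))
    xq, yq = torch.randn(5, 10), torch.randint(0, 5, (5,))
    learner.adapt(loss_fn(learner(xs), ys))  # inner-loop gradient step
    loss_fn(learner(xq), yq).backward()      # outer-loop meta-gradient
    opt.step()
```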

MetaFlow: A Meta Learning With Normalizing Flows Approach For Few Shots Learning

An approach to enhance gradient-based algorithms for meta-learning is proposed, achieving a mean accuracy of 62% with only 3 training iterations over tasks sampled from the Omniglot dataset.

A Channel Coding Benchmark for Meta-Learning

This work proposes the channel coding problem as a benchmark for meta-learning, and uses the MetaCC benchmark to study several aspects of meta-learning, including the impact of task distribution breadth and shift, which can be controlled in the coding problem.

Meta-Learning with Self-Improving Momentum Target

This work proposes a simple yet effective method, coined Self-improving Momentum Target (SiMT), which generates the target model by adapting from the temporal ensemble of the meta-learner, i.e., the momentum network, and demonstrates that SiMT brings a significant performance gain when combined with a wide range of meta-learning methods under various applications.
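
The "momentum network" in SiMT is a temporal ensemble, i.e., an exponential moving average (EMA) of the meta-learner's parameters. Below is a minimal sketch of that EMA component alone; the target-generation and distillation steps of SiMT are omitted, and the decay value `tau` is an assumption:

```python
import copy
import torch

def ema_update(momentum_net, meta_learner, tau=0.999):
    """Exponential moving average: the momentum network trails the meta-learner."""
    with torch.no_grad():
        for m_p, p in zip(momentum_net.parameters(), meta_learner.parameters()):
            m_p.mul_(tau).add_(p, alpha=1.0 - tau)

meta_learner = torch.nn.Linear(10, 2)            # toy meta-learner
momentum_net = copy.deepcopy(meta_learner)       # initialized as a copy
# Called once after each meta-update:
ema_update(momentum_net, meta_learner)
```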

Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation

This work proves that, for over-parameterized neural networks with sufficient depth, the learned predictive functions of multi-task learning (MTL) and gradient-based meta-learning (GBML) are close, and corroborates the theoretical findings by showing that, with proper implementation, MTL is competitive against state-of-the-art GBML algorithms on a set of few-shot image classification benchmarks.

When MAML Can Adapt Fast and How to Assist When It Cannot

This work finds that MAML adapts better with a deep architecture even if the tasks need only a shallow one (and thus no representation learning is needed), and that the upper layers enable fast adaptation because they are meta-learned to perform adaptive gradient updates when generalizing to new tasks.

The Curse of Zero Task Diversity: On the Failure of Transfer Learning to Outperform MAML and their Empirical Equivalence

It is concluded that, in the low-diversity regime, MAML and transfer learning have equivalent meta-test performance when both are compared fairly.

protANIL: a Fast and Simple Meta-Learning Algorithm

This work proposes protANIL, an algorithm that combines the complementary strengths of Prototypical Networks and ANIL while significantly lowering the computational cost.

References

Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML

The ANIL (Almost No Inner Loop) algorithm is proposed: a simplification of MAML in which the inner loop is removed for all but the (task-specific) head of a MAML-trained network. Performance on the test tasks is shown to be entirely determined by the quality of the learned features, so that one can remove even the head of the network (the NIL algorithm).
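
A minimal functional sketch of the ANIL-style inner loop described above, assuming a hypothetical body/head split with toy layer sizes; only the head receives inner-loop updates, while gradients still flow back to the body in the outer loop:

```python
import torch
import torch.nn as nn

# Hypothetical body/head split with toy sizes; ANIL adapts only the head.
body = nn.Sequential(nn.Linear(10, 64), nn.ReLU())
head = nn.Linear(64, 5)
loss_fn = nn.CrossEntropyLoss()

def anil_inner_loop(support_x, support_y, lr=0.01, steps=5):
    """Adapt only the head's weights; the body's features are reused as-is."""
    features = body(support_x)            # no inner-loop update to the body
    fast_w, fast_b = head.weight, head.bias
    for _ in range(steps):
        logits = features @ fast_w.t() + fast_b
        grads = torch.autograd.grad(loss_fn(logits, support_y),
                                    (fast_w, fast_b), create_graph=True)
        fast_w = fast_w - lr * grads[0]
        fast_b = fast_b - lr * grads[1]
    return fast_w, fast_b  # still differentiable w.r.t. body and head params
```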

Meta-SGD: Learning to Learn Quickly for Few Shot Learning

Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.
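
A sketch of the Meta-SGD update on a toy model: the elementwise step sizes `alphas` are themselves meta-parameters, learned alongside the initialization by an outer loop that is omitted here:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # toy differentiable learner
# One learnable step size per parameter entry, trained by the outer loop.
alphas = [nn.Parameter(0.01 * torch.ones_like(p)) for p in model.parameters()]

def meta_sgd_adapt(loss):
    """Single adaptation step: theta' = theta - alpha * grad (elementwise)."""
    params = list(model.parameters())
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return [p - a * g for p, a, g in zip(params, alphas, grads)]
```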

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
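
The two levels of the MAML objective, as defined in the paper: a task-specific gradient step produces adapted parameters, and the meta-objective scores those adapted parameters:

```latex
% Inner loop: one gradient step on task T_i with step size alpha
\theta_i' = \theta - \alpha \, \nabla_\theta \mathcal{L}_{\mathcal{T}_i}(f_\theta)

% Outer loop: optimize the shared initialization through the adapted parameters
\min_\theta \sum_{\mathcal{T}_i \sim p(\mathcal{T})} \mathcal{L}_{\mathcal{T}_i}(f_{\theta_i'})
```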

Torchmeta: A Meta-Learning library for PyTorch

The constant introduction of standardized benchmarks in the literature has helped accelerate the recent advances in meta-learning research. They offer a way to get a fair comparison between different algorithms.
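
For comparison with learn2learn, a short sketch of Torchmeta's documented data-loading helpers; the dataset root and batch size here are arbitrary:

```python
from torchmeta.datasets.helpers import omniglot
from torchmeta.utils.data import BatchMetaDataLoader

# 5-way 1-shot Omniglot episodes; "data" is an arbitrary download root.
dataset = omniglot("data", ways=5, shots=1, test_shots=15,
                   meta_train=True, download=True)
loader = BatchMetaDataLoader(dataset, batch_size=16)

for batch in loader:
    support_x, support_y = batch["train"]  # per-task support sets
    query_x, query_y = batch["test"]       # per-task query sets
    break
```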

When MAML Can Adapt Fast and How to Assist When It Cannot

This work finds that MAML adapts better with a deep architecture even if the tasks need only a shallow one (and thus no representation learning is needed), and that the upper layers enable fast adaptation because they are meta-learned to perform adaptive gradient updates when generalizing to new tasks.

Meta-learning with differentiable closed-form solvers

The main idea is to teach a deep network to use standard machine learning tools, such as ridge regression, as part of its own internal model, enabling it to quickly adapt to novel data.
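
A sketch of the idea: a ridge-regression head solved in closed form, so the solution is differentiable and gradients flow back into the embedding network. Shapes are assumed, and the paper's Woodbury-identity variant for the few-shot regime is omitted:

```python
import torch

def ridge_head(support_feats, support_onehot, query_feats, lam=1.0):
    """Closed-form ridge solution W = (X^T X + lam I)^{-1} X^T Y.
    torch.linalg.solve is differentiable, so the embedding network
    producing the features can be trained end-to-end through it."""
    X, Y = support_feats, support_onehot          # (n, d), (n, classes)
    A = X.t() @ X + lam * torch.eye(X.shape[1], device=X.device)
    W = torch.linalg.solve(A, X.t() @ Y)          # (d, classes)
    return query_feats @ W                        # query logits
```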

Meta-Learning With Differentiable Convex Optimization

The objective is to learn feature embeddings that generalize well under a linear classification rule for novel categories and this work exploits two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem and the dual formulation of the optimization problem.

Learning to Adapt in Dynamic, Real-World Environments through Meta-Reinforcement Learning

This work uses meta-learning to train a dynamics model prior such that, when combined with recent data, this prior can be rapidly adapted to the local context and demonstrates the importance of incorporating online adaptation into autonomous agents that operate in the real world.

Benchmarking Deep Reinforcement Learning for Continuous Control

This work presents a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure.

Matching Networks for One Shot Learning

This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
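
A minimal sketch of the attention mechanism this describes: the query's label distribution is a similarity-weighted mixture of support labels. The embedding networks are omitted and shapes are assumed:

```python
import torch
import torch.nn.functional as F

def matching_net_predict(query_emb, support_embs, support_labels, num_classes):
    """Attention over the support set: cosine similarity -> softmax -> label mixture."""
    sims = F.cosine_similarity(query_emb.unsqueeze(0), support_embs, dim=1)  # (n,)
    attn = F.softmax(sims, dim=0)                                            # (n,)
    one_hot = F.one_hot(support_labels, num_classes).float()                 # (n, c)
    return attn @ one_hot                                                    # class probabilities
```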