Corpus ID: 226226433

Combining Domain-Specific Meta-Learners in the Parameter Space for Cross-Domain Few-Shot Classification

Shuman Peng, Weilian Song, Martin Ester
The goal of few-shot classification is to learn a model that can classify novel classes using only a few training examples. Despite the promising results shown by existing meta-learning algorithms on the few-shot classification problem, an important challenge remains: how to generalize to unseen domains while meta-learning on multiple seen domains. In this paper, we propose an optimization-based meta-learning method, called Combining Domain-Specific Meta-Learners (CosML…
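The core idea named in the title, combining domain-specific meta-learners in the parameter space, can be sketched as a weighted average of per-domain parameter vectors used to initialize adaptation on a new task. This is an illustrative sketch of that general idea, not the paper's exact CosML procedure; all names are assumptions.

```python
import numpy as np

def combine_in_parameter_space(domain_params, weights):
    """Combine per-domain meta-learned parameter vectors by a weighted
    average in parameter space (hypothetical sketch, not CosML itself)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()      # normalize mixing weights
    stacked = np.stack(domain_params)      # shape: (num_domains, num_params)
    return np.tensordot(weights, stacked, axes=1)

# Three toy "domain-specific" parameter vectors and task-dependent weights.
theta_a = np.array([1.0, 0.0, 2.0])
theta_b = np.array([0.0, 1.0, 2.0])
theta_c = np.array([1.0, 1.0, 2.0])
theta_init = combine_in_parameter_space([theta_a, theta_b, theta_c],
                                        weights=[0.5, 0.25, 0.25])
print(theta_init)  # combined initialization for fast adaptation
```

The combined vector would then serve as the initialization for a few gradient steps on the novel task's support set.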

MxML: Mixture of Meta-Learners for Few-Shot Classification

This paper presents a method for constructing a mixture of meta-learners (MxML), where mixing parameters are determined by the weight prediction network (WPN) optimized to improve the few-shot classification performance.

Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples

This work proposes Meta-Dataset, a new large-scale benchmark for training and evaluating few-shot models that consists of diverse datasets and presents more realistic tasks, along with a new set of baselines for quantifying the benefit of meta-learning on Meta-Dataset.

Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation

The core idea is to use feature-wise transformation layers to augment image features with affine transforms, simulating the feature distributions of different domains during training; a learning-to-learn approach then searches for the hyper-parameters of the feature-wise transformation layers.
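The affine feature augmentation described above can be sketched as a per-channel scale and shift with randomly sampled coefficients. This is a simplified illustration, assuming Gaussian sampling; the hyper-parameter names `theta_gamma` and `theta_beta` are placeholders for the quantities the paper meta-learns.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_wise_transform(features, theta_gamma=0.3, theta_beta=0.5):
    """Augment image features with a channel-wise affine transform,
    simulating a shifted feature distribution (illustrative sketch of
    feature-wise transformation layers; sampling scheme is assumed)."""
    channels = features.shape[-1]
    # Sample a per-channel scale around 1 and a per-channel shift around 0.
    gamma = 1.0 + theta_gamma * rng.standard_normal(channels)
    beta = theta_beta * rng.standard_normal(channels)
    return gamma * features + beta

feats = np.ones((4, 8))          # batch of 4 feature vectors, 8 channels
augmented = feature_wise_transform(feats)
print(augmented.shape)           # same shape as the input features
```

At training time such layers are inserted after feature-extractor blocks; at test time they are removed, so only the more domain-robust backbone remains.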

Optimization as a Model for Few-Shot Learning

Learning to Compare: Relation Network for Few-Shot Learning

A conceptually simple, flexible, and general framework for few-shot learning in which a classifier must learn to recognise new classes given only a few examples of each; the approach extends easily to zero-shot learning.

A Closer Look at Few-shot Classification

The results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but less critical with deeper backbones; a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.

Multimodal Model-Agnostic Meta-Learning via Task-Aware Modulation

This paper proposes a multimodal MAML (MMAML) framework that modulates its meta-learned prior parameters according to the identified task mode, enabling more efficient fast adaptation; experiments demonstrate the model's effectiveness in modulating the meta-learned prior in response to the characteristics of tasks.
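The modulation step described above can be sketched as a FiLM-style scale-and-shift of the shared initialization, driven by a task embedding. The linear modulation network and all names below are illustrative assumptions, not MMAML's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch: a task embedding (from an assumed task encoder) produces
# per-parameter scale and shift that adapt a shared meta-learned
# initialization to the task's mode before gradient-based fine-tuning.
theta = rng.standard_normal(8)            # meta-learned prior parameters
task_embedding = rng.standard_normal(4)   # output of a task encoder (assumed)

W_gamma = rng.standard_normal((8, 4)) * 0.1  # toy modulation networks
W_beta = rng.standard_normal((8, 4)) * 0.1

gamma = 1.0 + W_gamma @ task_embedding    # multiplicative modulation
beta = W_beta @ task_embedding            # additive modulation
theta_task = gamma * theta + beta         # mode-specific initialization
print(theta_task.shape)
```

The modulated parameters then serve as the starting point for the usual MAML inner-loop updates on the task's support set.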

Meta-Learning with Latent Embedding Optimization

This work shows that latent embedding optimization (LEO) achieves state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks, indicating that LEO captures uncertainty in the data and adapts more effectively by optimizing in a latent space.
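LEO's central move, adapting a low-dimensional latent code that is decoded into high-dimensional model weights instead of adapting the weights directly, can be sketched with a toy linear decoder and squared-error task loss. The decoder, loss, and dimensions here are stand-ins, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy decoder: maps a 2-d latent code to 16-d classifier weights.
decoder = rng.standard_normal((2, 16)) * 0.25

def decode(z):
    return z @ decoder                     # w = g(z)

def loss(w, x, y):
    return 0.5 * np.sum((x @ w - y) ** 2)  # toy squared-error task loss

x = rng.standard_normal((5, 16))           # support-set features (toy)
y = rng.standard_normal(5)                 # support-set targets (toy)

z = np.zeros(2)
initial_loss = loss(decode(z), x, y)
for _ in range(100):                       # inner-loop adaptation in z-space
    w = decode(z)
    grad_w = x.T @ (x @ w - y)             # dL/dw for the toy loss
    grad_z = decoder @ grad_w              # chain rule: dL/dz = G dL/dw
    z -= 1e-3 * grad_z
final_loss = loss(decode(z), x, y)
print(initial_loss, final_loss)            # loss decreases under adaptation
```

Because the gradient steps act on the 2-d code rather than the 16-d weights, adaptation is constrained to the decoder's low-dimensional manifold, which is the intuition behind optimizing in latent space.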

Few-Shot Learning with Metric-Agnostic Conditional Embeddings

This work introduces a novel architecture in which class representations are conditioned on a target image for each few-shot trial; it departs from traditional metric-learning approaches by training a network to perform comparisons between classes rather than relying on a static metric.

Few-Shot Learning Through an Information Retrieval Lens

This work defines a training objective that extracts as much information as possible from each training batch by optimizing over all relative orderings of the batch points simultaneously, and formulates a model within the structured-prediction framework to optimize mean Average Precision over these rankings.