Corpus ID: 239998536

On sensitivity of meta-learning to support data

@inproceedings{Agarwal2021OnSO,
  title={On sensitivity of meta-learning to support data},
  author={Mayank Agarwal and Mikhail Yurochkin and Yuekai Sun},
  booktitle={NeurIPS},
  year={2021}
}
Meta-learning algorithms are widely used for few-shot learning, for example in image recognition systems that readily adapt to unseen classes after seeing only a few labeled examples. Despite their success, we show that modern meta-learning algorithms are extremely sensitive to the data used for adaptation, i.e. the support data. In particular, we demonstrate the existence of (unaltered, in-distribution, natural) images that, when used for adaptation, yield accuracy as low as 4% or as high as 95% on…
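The probe below is a toy illustration of this kind of sensitivity, not the authors' protocol: a 1-shot nearest-centroid classifier on synthetic Gaussian clusters is adapted with many i.i.d. support sets, and query accuracy is recorded for each. The data model and classifier are assumptions made for a self-contained example.

```python
# Toy probe of support-set sensitivity: fix a query set, resample the
# 1-shot support set many times, and look at the spread in accuracy.
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim, n_query = 5, 32, 50

# Synthetic "embedded" data: one Gaussian cluster per class.
means = rng.normal(size=(n_classes, dim))
query_y = rng.integers(n_classes, size=n_query)
query_x = means[query_y] + 0.8 * rng.normal(size=(n_query, dim))

accuracies = []
for _ in range(200):                      # 200 i.i.d. 1-shot support sets
    support = means + 0.8 * rng.normal(size=(n_classes, dim))
    d2 = ((query_x[:, None, :] - support[None]) ** 2).sum(-1)
    accuracies.append((d2.argmin(1) == query_y).mean())

print(f"query accuracy across support sets: "
      f"min={min(accuracies):.2f}, max={max(accuracies):.2f}")
```

Even in this toy setting, the same query set can be classified well or poorly depending entirely on which support examples were drawn.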

Deep Metric Learning for Few-Shot Image Classification: A Review of Recent Developments

TLDR
This paper provides an up-to-date review of deep metric learning methods for few-shot image classification from 2018 to 2022 and categorizes them into three groups according to the three stages of metric learning: learning feature embeddings, learning class representations, and learning distance measures.
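A skeleton of that three-stage decomposition, with illustrative (assumed) choices at each stage: a normalizing embedding, mean class representations, and cosine similarity as the distance measure.

```python
# Three pluggable stages of a metric-learning few-shot classifier.
import numpy as np

def embed(x):                                   # stage 1: feature embedding
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def class_representations(sup_x, sup_y, n_classes):   # stage 2
    z = embed(sup_x)
    return np.stack([z[sup_y == c].mean(0) for c in range(n_classes)])

def classify(query_x, reps):                    # stage 3: distance measure
    zq = embed(query_x)
    sims = zq @ reps.T / (np.linalg.norm(reps, axis=1) + 1e-9)  # cosine
    return sims.argmax(1)
```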

Class-Specific Channel Attention for Few-Shot Learning

TLDR
This paper proposes a Class-Specific Channel Attention (CSCA) module that learns to highlight the discriminative channels in each class by assigning each class its own CSCA weight vector, and aims to learn local, class-specific features with efficient computation.
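A hedged sketch of the idea: each class's weight vector gates feature channels before the query-to-class comparison. The sigmoid gating and function names are assumptions, not the paper's exact module.

```python
# Class-specific channel gating before a query-to-class distance.
import numpy as np

def csca_scores(query_feat, class_feats, class_weights):
    """query_feat: (d,), class_feats: (C, d), class_weights: (C, d) logits."""
    gates = 1.0 / (1.0 + np.exp(-class_weights))          # per-class gates
    diffs = gates * (query_feat[None, :] - class_feats)   # attended channels
    return -(diffs ** 2).sum(axis=1)                      # higher = closer
```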

Channel Importance Matters in Few-Shot Image Classification

TLDR
It is found that the difficulty of representation transfer in FSL stems from the severe channel bias problem of image representations: channels may have different importance in different tasks, while convolutional neural networks are likely to be insensitive to, or respond incorrectly to, such a shift.
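One simple, hedged stand-in for correcting such channel bias (not necessarily the paper's transform): standardize each channel using statistics from the task's support set, so that no channel dominates the distance by magnitude alone.

```python
# Per-task channel standardization from support-set statistics.
import numpy as np

def rescale_channels(support_feats, query_feats, eps=1e-6):
    mu = support_feats.mean(axis=0, keepdims=True)        # per-channel mean
    sigma = support_feats.std(axis=0, keepdims=True) + eps
    return (support_feats - mu) / sigma, (query_feats - mu) / sigma
```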

Log-Euclidean Signatures for Intrinsic Distances Between Unaligned Datasets

TLDR
This work uses manifold learning to compare the intrinsic geometric structures of unaligned datasets by comparing their diffusion operators: symmetric positive-definite (SPD) matrices that relate to approximations of the continuous Laplace-Beltrami operator from discrete samples. On this basis it develops a new, theoretically motivated distance based on a lower bound of the log-Euclidean metric.
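For reference, the log-Euclidean metric between SPD matrices A and B is ||log(A) - log(B)||_F. A minimal example on two random SPD matrices; the paper's contribution, a lower bound of this distance for comparing datasets, is not reproduced here.

```python
# Log-Euclidean distance between two symmetric positive-definite matrices.
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(0)

def random_spd(n):
    m = rng.normal(size=(n, n))
    return m @ m.T + n * np.eye(n)      # SPD by construction

A, B = random_spd(4), random_spd(4)
d = np.linalg.norm(logm(A) - logm(B), ord="fro")
print(f"log-Euclidean distance: {d:.3f}")
```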

References


Data Augmentation for Meta-Learning

TLDR
This work systematically dissects the meta-learning pipeline and investigates the distinct ways in which data augmentation can be integrated at both the image and class levels to improve the performance of meta-learners on few-shot classification benchmarks.
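A sketch contrasting the two integration points named in the TLDR: image-level augmentation transforms individual support images, while class-level augmentation synthesizes new classes. The horizontal flip and mixup-style mixing below are illustrative assumptions, not the paper's specific recipes.

```python
# Image-level vs. class-level augmentation of a support set.
import numpy as np

def augment_image_level(support_images):          # (N, H, W, C) array
    flipped = support_images[:, :, ::-1, :]       # horizontal flips
    return np.concatenate([support_images, flipped], axis=0)

def augment_class_level(class_a, class_b, lam=0.5):
    # Synthesize a new "mixed" class from two existing ones (assumption:
    # mixup-style convex combination of paired images).
    n = min(len(class_a), len(class_b))
    return lam * class_a[:n] + (1 - lam) * class_b[:n]
```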

Prototypical Networks for Few-shot Learning

TLDR
This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
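The classification rule itself is compact. A minimal sketch assuming precomputed embeddings z_support/z_query (the embedding network is omitted): prototypes are per-class mean embeddings, and queries are scored by a softmax over negative squared Euclidean distances.

```python
# Prototypical Networks classification rule on precomputed embeddings.
import numpy as np

def proto_predict(z_support, y_support, z_query, n_classes):
    protos = np.stack([z_support[y_support == c].mean(0)
                       for c in range(n_classes)])            # (C, d)
    d2 = ((z_query[:, None, :] - protos[None]) ** 2).sum(-1)  # (Q, C)
    logits = -d2
    probs = np.exp(logits - logits.max(1, keepdims=True))
    return probs / probs.sum(1, keepdims=True)                # class probs
```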

Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks

TLDR
This work develops a better understanding of the underlying mechanics of meta-learning and proposes a regularizer that boosts the performance of standard training routines for few-shot classification.

Meta-Learning With Differentiable Convex Optimization

TLDR
The objective is to learn feature embeddings that generalize well under a linear classification rule for novel categories. This work exploits two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem and the dual formulation of the optimization problem.
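The paper differentiates through the optimality conditions of an SVM. As a simpler stand-in with the same structure, a ridge-regression head has the closed form W = (Z^T Z + lam*I)^{-1} Z^T Y, which is directly differentiable with respect to the support embeddings; this is an illustrative substitute, not the paper's solver.

```python
# Closed-form ridge-regression head as a differentiable linear classifier.
import numpy as np

def ridge_head(z_support, y_onehot, z_query, lam=1.0):
    d = z_support.shape[1]
    W = np.linalg.solve(z_support.T @ z_support + lam * np.eye(d),
                        z_support.T @ y_onehot)   # (d, C) weights
    return z_query @ W                            # query logits
```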

Is Support Set Diversity Necessary for Meta-Learning?

TLDR
This work proposes a modification to traditional meta-learning approaches in which the support sets are kept fixed across tasks, thus reducing task diversity, and finds that not only does this modification not result in adverse effects, it almost always improves performance for a variety of datasets and meta-learning methods.
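A sketch of the modified episode sampler: the support set is drawn once and reused across all training tasks, while query sets continue to vary. The sampler interface here is an assumption.

```python
# Episode sampler with a fixed support set and varying query sets.
import numpy as np

rng = np.random.default_rng(0)

def make_episodes(x, y, n_classes, k_shot, n_query, n_tasks):
    fixed_support = np.concatenate(
        [rng.choice(np.where(y == c)[0], k_shot, replace=False)
         for c in range(n_classes)])                # drawn once, reused
    for _ in range(n_tasks):
        query = rng.choice(len(x), n_query, replace=False)  # varies per task
        yield fixed_support, query                  # index arrays into x, y
```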

Optimization as a Model for Few-Shot Learning

Siamese Neural Networks for One-Shot Image Recognition

TLDR
A method for learning siamese neural networks that employ a unique structure to naturally rank similarity between inputs, achieving strong results that exceed those of other deep learning models, with near state-of-the-art performance on one-shot classification tasks.
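A skeleton of the similarity rule: a shared encoder embeds both inputs, and similarity is a learned weighting of the componentwise L1 distance between the embeddings (here a fixed weight vector plus sigmoid stands in for the learned layer).

```python
# Siamese similarity: shared encoder, weighted L1 distance, sigmoid.
import numpy as np

def siamese_similarity(encode, x1, x2, alpha):
    """encode: shared embedding fn; alpha: per-dimension weights, shape (d,)."""
    l1 = np.abs(encode(x1) - encode(x2))          # componentwise L1 distance
    return 1.0 / (1.0 + np.exp(-(alpha @ l1)))    # probability "same class"
```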

A Baseline for Few-Shot Image Classification

TLDR
This work performs extensive studies on benchmark datasets to propose a metric that quantifies the "hardness" of a few-shot episode, and finds that using a large number of meta-training classes results in high few-shot accuracies even for a large number of few-shot classes.

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
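A minimal sketch of the two-level update for a linear model with squared loss, where the inner-step Jacobian has a closed form; real implementations rely on automatic differentiation rather than this hand-derived gradient.

```python
# MAML for linear regression: one inner gradient step on the support set,
# then the exact outer gradient of the query loss at the adapted parameters.
import numpy as np

def grad(X, y, theta):                            # d/dtheta of mean sq. error
    return 2 * X.T @ (X @ theta - y) / len(y)

def maml_outer_grad(Xs, ys, Xq, yq, theta, alpha=0.1):
    theta_adapt = theta - alpha * grad(Xs, ys, theta)           # inner step
    J = np.eye(len(theta)) - 2 * alpha * Xs.T @ Xs / len(ys)    # d theta'/d theta
    return J.T @ grad(Xq, yq, theta_adapt)                      # outer gradient
```

The meta-update applies this outer gradient to theta, averaged over tasks, so that a single inner step from the shared initialization adapts well to each task.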

Matching Networks for One Shot Learning

TLDR
This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
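A sketch of the matching rule: a query's label distribution is an attention-weighted sum of support labels, with attention given by a softmax over cosine similarities between embeddings (the paper's full-context embeddings and external memory are omitted here).

```python
# Matching Networks prediction: attention over the labelled support set.
import numpy as np

def matching_predict(z_support, y_onehot, z_query):
    zs = z_support / np.linalg.norm(z_support, axis=1, keepdims=True)
    zq = z_query / np.linalg.norm(z_query, axis=1, keepdims=True)
    sims = zq @ zs.T                                  # cosine similarities
    attn = np.exp(sims)
    attn /= attn.sum(1, keepdims=True)                # softmax attention
    return attn @ y_onehot                            # (Q, C) label probs
```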