Corpus ID: 239998536

On sensitivity of meta-learning to support data

@inproceedings{Agarwal2021OnSO,
  title={On sensitivity of meta-learning to support data},
  author={Mayank Agarwal and Mikhail Yurochkin and Yuekai Sun},
  booktitle={NeurIPS},
  year={2021}
}
Meta-learning algorithms are widely used for few-shot learning, for example in image recognition systems that readily adapt to unseen classes after seeing only a few labeled examples. Despite their success, we show that modern meta-learning algorithms are extremely sensitive to the data used for adaptation, i.e., the support data. In particular, we demonstrate the existence of (unaltered, in-distribution, natural) images that, when used for adaptation, yield accuracy as low as 4% or as high as 95% on…
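The headline claim above, that swapping in different natural, in-distribution support images can swing adaptation accuracy between roughly 4% and 95%, can be probed with a simple resampling experiment. The sketch below (not the authors' code) repeatedly resamples the support set for a fixed query set under a nearest-prototype classifier and reports the accuracy spread; the Encoder, data tensors, and sampling counts are illustrative stand-ins for a trained backbone and a real benchmark.

import torch
import torch.nn as nn

torch.manual_seed(0)

class Encoder(nn.Module):
    # Stand-in embedding network; a real study would use a meta-trained backbone.
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, dim))

    def forward(self, x):
        return self.net(x)

def proto_accuracy(encoder, support_x, support_y, query_x, query_y, n_way):
    # Nearest-prototype classification accuracy for a single episode.
    z_s, z_q = encoder(support_x), encoder(query_x)
    protos = torch.stack([z_s[support_y == c].mean(0) for c in range(n_way)])
    preds = torch.cdist(z_q, protos).argmin(dim=1)
    return (preds == query_y).float().mean().item()

encoder = Encoder().eval()
n_way, k_shot, pool_per_class = 5, 1, 50
pool = torch.randn(n_way, pool_per_class, 3, 32, 32)   # stand-in image pool
query_x = torch.randn(n_way * 15, 3, 32, 32)           # fixed query set
query_y = torch.arange(n_way).repeat_interleave(15)

accs = []
with torch.no_grad():
    for _ in range(200):  # resample the support set many times
        idx = torch.randint(pool_per_class, (n_way, k_shot))
        support_x = torch.cat([pool[c, idx[c]] for c in range(n_way)])
        support_y = torch.arange(n_way).repeat_interleave(k_shot)
        accs.append(proto_accuracy(encoder, support_x, support_y,
                                   query_x, query_y, n_way))

print(f"accuracy spread over support sets: min={min(accs):.2f}, max={max(accs):.2f}")

With a trained encoder and real data, the gap between the best and worst support sets is the quantity the paper studies.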
Citations

Deep Metric Learning for Few-Shot Image Classification: A Review of Recent Developments
TLDR: This paper provides an up-to-date review of deep metric learning methods for few-shot image classification from 2018 to 2022 and categorizes them into three groups according to the three stages of metric learning: learning feature embeddings, learning class representations, and learning distance measures.
Log-Euclidean Signatures for Intrinsic Distances Between Unaligned Datasets
TLDR: This work uses manifold learning to compare the intrinsic geometric structures of unaligned datasets by comparing their diffusion operators: symmetric positive-definite (SPD) matrices that approximate the continuous Laplace-Beltrami operator from discrete samples. It develops a new, theoretically motivated distance based on a lower bound of the log-Euclidean metric.
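For context (standard background, not taken from this abstract): the log-Euclidean distance between SPD matrices A and B is

    d_LE(A, B) = || log(A) - log(B) ||_F,

where log is the matrix logarithm and || . ||_F is the Frobenius norm; the distance proposed here is based on a lower bound of this quantity.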
Channel Importance Matters in Few-Shot Image Classification
TLDR: It is shown that a simple channel-wise feature transformation can greatly improve the generalization ability of learned image representations to novel few-shot tasks, while being agnostic to the choice of datasets and training algorithms.
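As a rough illustration only (the paper's exact transform may differ), a channel-wise feature transformation can be as simple as an element-wise power rescaling applied to pre-extracted features before a few-shot classifier; the exponent here is a placeholder.

import torch

def channel_transform(features, beta=0.5, eps=1e-6):
    # Element-wise power transform; assumes non-negative (e.g. post-ReLU)
    # channel activations. Damps dominant channels and boosts weak ones.
    return (features.clamp_min(0) + eps) ** beta

feats = torch.rand(10, 640)             # illustrative pre-extracted features
print(channel_transform(feats).shape)   # torch.Size([10, 640])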

References

Showing 1-10 of 46 references
Data Augmentation for Meta-Learning
TLDR: This work systematically dissects the meta-learning pipeline and investigates the distinct ways in which data augmentation can be integrated at both the image and class levels to improve the performance of meta-learners on few-shot classification benchmarks.
Prototypical Networks for Few-shot Learning
TLDR: This work proposes Prototypical Networks for few-shot classification and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
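The core of the method fits in a few lines; the following is a minimal sketch of the episode loss, assuming an embedding function embed and episode tensors (it is not the authors' implementation):

import torch
import torch.nn.functional as F

def prototypical_loss(embed, support_x, support_y, query_x, query_y, n_way):
    # Each class prototype is the mean embedding of its support examples;
    # queries are classified by softmax over negative squared distances.
    z_s, z_q = embed(support_x), embed(query_x)
    protos = torch.stack([z_s[support_y == c].mean(0) for c in range(n_way)])
    logits = -torch.cdist(z_q, protos) ** 2
    return F.cross_entropy(logits, query_y)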
Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks
TLDR: A better understanding of the underlying mechanics of meta-learning is developed, along with a regularizer that boosts the performance of standard training routines for few-shot classification.
Meta-Learning With Differentiable Convex Optimization
TLDR: The objective is to learn feature embeddings that generalize well under a linear classification rule for novel categories; this work exploits two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem and the dual formulation of the optimization problem.
Is Support Set Diversity Necessary for Meta-Learning?
TLDR: This work proposes a modification to traditional meta-learning approaches in which the support sets are kept fixed across tasks, thus reducing task diversity, and finds that not only does this modification not cause adverse effects, it almost always improves performance for a variety of datasets and meta-learning methods.
Optimization as a Model for Few-Shot Learning
Siamese Neural Networks for One-Shot Image Recognition
TLDR: A method for learning siamese neural networks that employ a unique structure to naturally rank similarity between inputs, achieving strong results that exceed those of other deep learning models, with near state-of-the-art performance on one-shot classification tasks.
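A minimal sketch of this verification setup, assuming a shared encoder (the head's learned weighting of the component-wise L1 distance follows the general design; details are illustrative):

import torch
import torch.nn as nn

class Siamese(nn.Module):
    def __init__(self, encoder, dim):
        super().__init__()
        self.encoder = encoder          # shared weights for both inputs
        self.head = nn.Linear(dim, 1)   # learned per-dimension distance weights

    def forward(self, x1, x2):
        z1, z2 = self.encoder(x1), self.encoder(x2)
        # Match probability from the weighted L1 distance between embeddings.
        return torch.sigmoid(self.head((z1 - z2).abs())).squeeze(-1)

At test time, a one-shot query is assigned the class of the support example it matches most strongly.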
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
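A minimal sketch of one MAML meta-update (single task, one inner step), using a toy linear model and random task data as stand-ins; the learning rates are placeholders, and torch.func requires PyTorch >= 2.0:

import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(10, 2)                 # stand-in for any gradient-trained model
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 0.01

xs, ys = torch.randn(5, 10), torch.randint(2, (5,))     # task support set
xq, yq = torch.randn(15, 10), torch.randint(2, (15,))   # task query set

# Inner loop: one gradient step on the support set, keeping the graph so the
# outer update can differentiate through the adaptation (second-order MAML).
params = dict(model.named_parameters())
loss_s = F.cross_entropy(torch.func.functional_call(model, params, xs), ys)
grads = torch.autograd.grad(loss_s, params.values(), create_graph=True)
adapted = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}

# Outer loop: evaluate the adapted parameters on the query set and update
# the shared initialization.
loss_q = F.cross_entropy(torch.func.functional_call(model, adapted, xq), yq)
meta_opt.zero_grad()
loss_q.backward()
meta_opt.step()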
Meta-Learning in Neural Networks: A Survey
TLDR: A new taxonomy is proposed that provides a more comprehensive breakdown of the space of meta-learning methods today, and promising applications and successes of meta-learning, such as few-shot learning and reinforcement learning, are surveyed.
Adversarially Robust Few-Shot Learning: A Meta-Learning Approach
TLDR: An algorithm for producing adversarially robust meta-learners is developed; it achieves far better robust performance on few-shot image classification tasks, such as Mini-ImageNet and CIFAR-FS, than robust transfer learning.
...