Corpus ID: 220404368

Predicting the Accuracy of a Few-Shot Classifier

@article{Bontonou2020PredictingTA,
  title={Predicting the Accuracy of a Few-Shot Classifier},
  author={Myriam Bontonou and Louis Béthune and Vincent Gripon},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.04238}
}
In the context of few-shot learning, one cannot measure the generalization ability of a trained classifier using validation sets, due to the small number of labeled samples. In this paper, we are interested in finding alternatives to answer the question: is my classifier generalizing well to previously unseen data? We first analyze the reasons for the variability of generalization performance. We then investigate the case of using transfer-based solutions, and consider three settings: i…
Citations

Graphs as Tools to Improve Deep Learning Methods
This chapter is composed of four main parts: tools for visualizing intermediate layers in a DNN, denoising data representations, optimizing graph objective functions, and regularizing the learning process.
Ranking Deep Learning Generalization using Label Variation in Latent Geometry Graphs
This work proposes exploiting Latent Geometry Graphs (LGGs) to represent the latent spaces of trained DNN architectures by connecting samples that yield similar latent representations at a given layer of the considered DNN.
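The label-variation idea lends itself to a compact sketch. The following is a minimal illustration (not the authors' code) under simple assumptions: features is an array of latent representations extracted at some layer, labels holds the corresponding classes, a k-nearest-neighbour graph stands in for the LGG, and lower variation is read as smoother, better-generalizing representations.

    import numpy as np

    def label_variation(features, labels, k=10):
        # Pairwise Euclidean distances between latent representations.
        d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)  # exclude self-edges
        # Connect each sample to its k nearest neighbours (the graph edges).
        nn = np.argsort(d, axis=1)[:, :k]
        # Fraction of edges whose endpoints carry different labels;
        # lower variation = smoother labels over the latent geometry.
        return (labels[:, None] != labels[nn]).mean()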

References

Showing 1-10 of 42 references.
Meta-Learning for Semi-Supervised Few-Shot Classification
This work proposes novel extensions of Prototypical Networks that are augmented with the ability to use unlabeled examples when producing prototypes, and confirms that these models learn to improve their predictions from unlabeled examples, much as a semi-supervised algorithm would.
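A rough sketch of this prototype-refinement idea, assuming prototypes are means of embedded support examples and unlabeled points are folded in by one soft-assignment step (the exact weighting in the paper may differ; names are illustrative):

    import numpy as np

    def refine_prototypes(protos, unlabeled):
        # Soft-assign each unlabeled embedding to the prototypes via a
        # softmax over negative distances, then recompute each prototype
        # as a weighted mean. Each original prototype is treated as a
        # unit-weight summary of its labeled support examples (a
        # simplification of the paper's soft k-means style update).
        d = np.linalg.norm(unlabeled[:, None, :] - protos[None, :, :], axis=-1)
        w = np.exp(-d)
        w /= w.sum(axis=1, keepdims=True)       # soft assignments, shape (m, C)
        num = protos + w.T @ unlabeled          # weighted sums, shape (C, d)
        den = 1.0 + w.sum(axis=0)[:, None]      # total weights, shape (C, 1)
        return num / den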
A Closer Look at Few-shot Classification
The results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones, and a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.
Exploiting Unsupervised Inputs for Accurate Few-Shot Classification
This paper proposes a method able to exploit three levels of information: (a) feature extractors pretrained on generic datasets, (b) few labelled examples of classes to discriminate, and (c) other available unlabelled inputs.
Few-Shot Learning via Embedding Adaptation With Set-to-Set Functions
This paper proposes a novel approach to adapt the instance embeddings to the target classification task with a set-to-set function, yielding embeddings that are task-specific and discriminative.
Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need?
It is shown that a simple baseline (learning a supervised or self-supervised representation on the meta-training set, followed by training a linear classifier on top of this representation) outperforms state-of-the-art few-shot learning methods.
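That baseline fits in a few lines. A hedged sketch, assuming features have already been extracted by a frozen backbone trained on the meta-training set (function and variable names are illustrative):

    from sklearn.linear_model import LogisticRegression

    def fit_linear_probe(support_features, support_labels):
        # Train only a linear classifier on top of the fixed representation;
        # the feature extractor itself is not updated at few-shot time.
        clf = LogisticRegression(max_iter=1000)
        clf.fit(support_features, support_labels)
        return clf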
Learning to Compare: Relation Network for Few-Shot Learning
A conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples of each; the framework is easily extended to zero-shot learning.
TADAM: Task dependent adaptive metric for improved few-shot learning
This work identifies that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms, and proposes and empirically tests a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space.
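Metric scaling, the first of the two ingredients, amounts to multiplying distance-based logits by a temperature before the softmax. A minimal sketch (alpha=10.0 and the Euclidean metric are illustrative choices here; TADAM learns the scale and additionally conditions the backbone on the task):

    import numpy as np

    def scaled_metric_logits(queries, prototypes, alpha=10.0):
        # Negative distances to class prototypes, scaled by alpha.
        # The scale changes softmax sharpness and hence the gradients.
        d = np.linalg.norm(queries[:, None, :] - prototypes[None, :, :], axis=-1)
        return -alpha * d  # feed into softmax / cross-entropy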
Prototypical Networks for Few-shot Learning
This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
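The core mechanism is easy to summarize: each class is represented by the mean (prototype) of its embedded support examples, and a query is assigned to the nearest prototype. A minimal sketch over pre-embedded feature vectors:

    import numpy as np

    def prototypical_predict(support_x, support_y, query_x):
        classes = np.unique(support_y)
        # One prototype per class: the mean of its support embeddings.
        protos = np.stack([support_x[support_y == c].mean(axis=0)
                           for c in classes])
        # Assign each query to the class of its nearest prototype.
        d = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
        return classes[d.argmin(axis=1)]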
TAFSSL: Task-Adaptive Feature Sub-Space Learning for few-shot classification
It is shown that on the challenging miniImageNet and tieredImageNet benchmarks, TAFSSL can improve the current state of the art in both transductive and semi-supervised FSL settings by more than 5%, while increasing the benefit of using unlabeled data in FSL to above a 10% performance gain.
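The sub-space idea can be sketched as a task-local PCA: project the features of the current task (support, query, and any unlabeled inputs) onto their leading principal directions, discarding dimensions that carry no task-relevant variation. A simplified illustration only (TAFSSL also studies ICA and other variants; k is illustrative):

    import numpy as np

    def task_adaptive_subspace(task_features, k=5):
        # Center the task's features and keep the top-k principal directions.
        x = task_features - task_features.mean(axis=0)
        _, _, vt = np.linalg.svd(x, full_matrices=False)
        return x @ vt[:k].T  # features projected onto the task sub-space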
SimpleShot: Revisiting Nearest-Neighbor Classification for Few-Shot Learning
Surprisingly, simple feature transformations suffice to obtain competitive few-shot learning accuracies; a nearest-neighbor classifier used in combination with mean-subtraction and L2-normalization outperforms prior results in three out of five settings on the miniImageNet dataset.
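Those two transformations plus a nearest-centroid rule fit in a few lines. A sketch, assuming base_mean is the mean feature vector computed on the base (meta-training) classes and all inputs are pre-extracted features:

    import numpy as np

    def simpleshot_predict(support_x, support_y, query_x, base_mean):
        def cl2n(x):
            x = x - base_mean                                     # mean subtraction
            return x / np.linalg.norm(x, axis=-1, keepdims=True)  # L2-normalize
        s, q = cl2n(support_x), cl2n(query_x)
        classes = np.unique(support_y)
        # Nearest centroid (one per class) in the transformed feature space.
        centroids = np.stack([s[support_y == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(q[:, None, :] - centroids[None, :, :], axis=-1)
        return classes[d.argmin(axis=1)]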
…