Corpus ID: 246240444

EASY: Ensemble Augmented-Shot Y-shaped Learning: State-Of-The-Art Few-Shot Classification with Simple Ingredients

@article{Bendou2022EASYEA,
  title={EASY: Ensemble Augmented-Shot Y-shaped Learning: State-Of-The-Art Few-Shot Classification with Simple Ingredients},
  author={Yassir Bendou and Yuqing Hu and Raphael Lafargue and Giulia Lioi and Bastien Pasdeloup and St{\'e}phane Pateux and Vincent Gripon},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.09699}
}
Few-shot learning aims to leverage knowledge learned by one or more deep learning models in order to obtain good classification performance on new problems where only a few labeled samples per class are available. Recent years have seen a fair number of works in the field, introducing methods with numerous ingredients. A frequent problem, though, is the use of suboptimally trained models to extract knowledge, raising questions about whether the proposed approaches actually bring gains compared to…
It's DONE: Direct ONE-shot learning with Hebbian weight imprinting
TLDR
DONE requires just one inference for learning a new concept and its procedure is simple, deterministic, not requiring parameter tuning and hyperparameters, and might be telling us one-shot learning is an easy task that can be achieved by a simple principle not only for humans but also for current well-trained DNN models.
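The Hebbian weight imprinting mentioned above can be sketched in a few lines of numpy: the normalized feature of the single support example directly becomes the classifier weight row for the new class, with no training step. The cosine-classifier setup, dimensions, and toy data below are illustrative assumptions, not details from the DONE paper.

```python
import numpy as np

def imprint_weight(W, feature, class_idx):
    """Hebbian weight imprinting: the L2-normalized feature of the single
    support example directly becomes the classifier row for the new class."""
    W[class_idx] = feature / np.linalg.norm(feature)
    return W

# toy cosine classifier: 3 classes over 4-dim features (sizes are arbitrary)
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # unit-norm rows give cosine scores

new_feat = rng.standard_normal(4)               # feature of the one-shot example
W = imprint_weight(W, new_feat, class_idx=2)

# a query close to the imprinted example is classified by cosine similarity
query = new_feat + 0.01 * rng.standard_normal(4)
scores = W @ (query / np.linalg.norm(query))
pred = int(np.argmax(scores))
```

The procedure is deterministic and needs only one forward pass per new concept, which is what makes it hyperparameter-free.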

References

SHOWING 1-10 OF 54 REFERENCES
Squeezing Backbone Feature Distributions to the Max for Efficient Few-Shot Learning
TLDR
This paper proposes a novel transfer-based method with a double aim: providing state-of-the-art performance, as reported on standardized few-shot learning benchmarks, while not requiring restrictive priors.
Partner-Assisted Learning for Few-Shot Image Classification
TLDR
This paper proposes a two-stage training scheme, Partner-Assisted Learning (PAL), which first trains a Partner Encoder to model pair-wise similarities and extract features serving as soft anchors, and then trains a Main Encoder by aligning its outputs with the soft anchors while attempting to maximize classification performance.
A Baseline for Few-Shot Image Classification
TLDR
This work performs extensive studies on benchmark datasets to propose a metric that quantifies the "hardness" of a few-shot episode and finds that using a large number of meta-training classes results in high few-shot accuracies even for a large number of few-shot classes.
Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning
TLDR
This work proposes a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations and shows that even without knowledge distillation this proposed method can outperform current state-of-the-art FSL methods on five popular benchmark datasets.
Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need?
TLDR
It is shown that a simple baseline: learning a supervised or self-supervised representation on the meta-training set, followed by training a linear classifier on top of this representation, outperforms state-of-the-art few-shot learning methods.
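The baseline in this entry amounts to fitting a linear classifier on frozen features. A minimal sketch: the numpy softmax-regression below stands in for any off-the-shelf linear classifier, and the Gaussian "features" are toy stand-ins, not real backbone outputs.

```python
import numpy as np

def fit_linear_probe(feats, labels, n_classes, lr=0.5, steps=200):
    """Multinomial logistic regression on frozen features, trained by plain
    gradient descent on the softmax cross-entropy loss."""
    n, d = feats.shape
    W = np.zeros((n_classes, d))
    Y = np.eye(n_classes)[labels]                # one-hot targets
    for _ in range(steps):
        logits = feats @ W.T
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)
        W -= lr * (P - Y).T @ feats / n          # cross-entropy gradient
    return W

# toy 2-way 5-shot episode: two well-separated clusters in an 8-dim space
rng = np.random.default_rng(0)
support = np.concatenate([rng.normal(-1, 0.3, (5, 8)),
                          rng.normal(+1, 0.3, (5, 8))])
y = np.array([0] * 5 + [1] * 5)
W = fit_linear_probe(support, y, n_classes=2)

query = rng.normal(+1, 0.3, (1, 8))              # drawn from class 1
pred = int(np.argmax(query @ W.T))
```

In the paper's setting, `support` would be the backbone embeddings of the few labeled shots; everything downstream of the representation stays this simple.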
A Closer Look at Few-shot Classification
TLDR
The results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones, and a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.
Few-Shot Learning via Embedding Adaptation With Set-to-Set Functions
TLDR
This paper proposes a novel approach to adapt the instance embeddings to the target classification task with a set-to-set function, yielding embeddings that are task-specific and discriminative.
Leveraging the Feature Distribution in Transfer-based Few-Shot Learning
TLDR
A novel transfer-based method that builds on two steps: preprocessing the feature vectors so that they become closer to Gaussian-like distributions, and leveraging this preprocessing using an optimal-transport-inspired algorithm.
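The "Gaussianizing" preprocessing described in this entry is, in this line of work, typically an element-wise power transform on the non-negative post-ReLU features followed by unit-normalization. A hedged sketch — the exponent `beta = 0.5` and the exponential toy features are assumptions for illustration, not details taken from the entry:

```python
import numpy as np

def gaussianize(feats, beta=0.5, eps=1e-6):
    """Element-wise power transform (assumes non-negative post-ReLU
    features) followed by projection onto the unit sphere."""
    v = np.power(feats + eps, beta)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# heavy-tailed, right-skewed "features" become much closer to symmetric
rng = np.random.default_rng(0)
raw = rng.exponential(scale=1.0, size=(100, 16))
proc = gaussianize(raw)

# average per-dimension skewness, before and after the transform
skew_before = np.mean(((raw - raw.mean(0)) / raw.std(0)) ** 3)
skew_after = np.mean(((proc - proc.mean(0)) / proc.std(0)) ** 3)
```

Reducing skew this way is what makes a subsequent Gaussian-model or optimal-transport step a reasonable fit to the feature distribution.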
Iterative label cleaning for transductive and semi-supervised few-shot learning
TLDR
This work introduces a new algorithm that leverages the manifold structure of the labeled and unlabeled data distribution to predict pseudo-labels, while balancing over classes and using the loss value distribution of a limited-capacity classifier to select the cleanest labels, iteratively improving the quality of pseudo-labels.
...