Leveraging the Feature Distribution in Transfer-based Few-Shot Learning

@inproceedings{Hu2021LeveragingTF,
  title={Leveraging the Feature Distribution in Transfer-based Few-Shot Learning},
  author={Yuqing Hu and Vincent Gripon and St{\'e}phane Pateux},
  booktitle={ICANN},
  year={2021}
}
Few-shot classification is a challenging problem due to the uncertainty caused by using few labelled samples. In the past few years, transfer-based methods have proved to achieve the best performance, thanks to well-thought-out backbone architectures combined with efficient postprocessing steps. Following this vein, in this paper we propose a novel transfer-based method that builds on two steps: 1) preprocessing the feature vectors so that they become closer to Gaussian-like distributions, and…
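The first of these steps is usually implemented as a power transform applied to the (non-negative, post-ReLU) backbone features, which compresses their heavy-tailed marginals toward Gaussian-like shapes. A minimal NumPy sketch of that idea, assuming the commonly used exponent beta = 0.5; the exact constants in the paper may differ:

import numpy as np

def power_transform(features, beta=0.5, eps=1e-6):
    # features: (n, d) non-negative backbone outputs (e.g., post-ReLU).
    # beta: power-transform exponent; 0.5 is a common choice, not
    # necessarily the authors' exact setting.
    transformed = np.power(features + eps, beta)  # compress heavy right tails
    norms = np.linalg.norm(transformed, axis=1, keepdims=True)
    return transformed / norms                    # project onto the unit sphere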

Active Few-Shot Classification: a New Paradigm for Data-Scarce Learning Settings

We consider a novel formulation of the problem of Active Few-Shot Classification (AFSC), where the objective is to classify a small, initially unlabeled dataset given a very restrained labeling budget.

EASE: Unsupervised Discriminant Subspace Learning for Transductive Few-Shot Learning

  • Hao Zhu, Piotr Koniusz
  • Computer Science
  • 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022
TLDR
An unsupervised discriminant subspace learning method (EASE) is presented that improves transductive few-shot learning performance by learning, at test time, a linear projection onto a subspace built from features of the support set and the unlabeled query set.

Model-Agnostic Few-Shot Open-Set Recognition

TLDR
This work introduces an Open-Set Transductive Information Maximization method (OSTIM), which hallucinates an outlier prototype while maximizing the mutual information between extracted features and assignments, and shows that OSTIM's model-agnostic design allows it to successfully leverage the strong expressive abilities of the latest architectures and training strategies without any hyperparameter modification.
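As a rough sketch of the transductive information-maximization idea OSTIM builds on: maximize the entropy of the marginal predictions while minimizing the per-sample conditional entropy, with an extra logit column standing in for the hallucinated outlier prototype. The function below is illustrative, not the authors' code:

import numpy as np

def mutual_information(query_logits):
    # query_logits: (n, k+1) scores for k classes plus one outlier
    # prototype (the extra column is the open-set 'none of the above').
    # Maximizing the returned value pushes toward confident,
    # class-balanced assignments on the query set.
    probs = np.exp(query_logits - query_logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)  # row-wise softmax
    marginal = probs.mean(axis=0)
    h_marginal = -np.sum(marginal * np.log(marginal + 1e-12))
    h_conditional = -np.mean(np.sum(probs * np.log(probs + 1e-12), axis=1))
    return h_marginal - h_conditional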

Hybrid Consistency Training with Prototype Adaptation for Few-Shot Learning

TLDR
This work uses unlabeled examples to iteratively normalize features and adapt prototypes, as opposed to the commonly used one-time update, for more reliable prototype-based transductive inference in few-shot learning.
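The iterative update can be pictured as a soft k-means loop over the unlabeled queries; the sketch below makes that assumption, and the paper's exact normalization and update schedule differ:

import numpy as np

def adapt_prototypes(prototypes, query, n_iters=10, temp=10.0):
    # prototypes: (k, d) initial class means; query: (n, d) unlabeled
    # features; all rows assumed L2-normalized. temp is illustrative.
    for _ in range(n_iters):
        logits = temp * query @ prototypes.T  # scaled cosine similarities
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        # re-estimate each prototype as the soft mean of query features
        prototypes = probs.T @ query / probs.sum(axis=0)[:, None]
        prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)
    return prototypes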

Realistic Evaluation of Transductive Few-Shot Learning

TLDR
This work introduces and studies the effect of arbitrary class distributions within the query sets of few-shot tasks at inference, removing the class-balance artefact, and proposes a generalization of the mutual-information loss, based on α-divergences, which can effectively handle class-distribution variations.
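The α-generalization can be sketched with a Tsallis-style α-entropy, which recovers the Shannon entropy of the standard mutual-information loss as α approaches 1; treating this as the paper's exact form is an assumption:

import numpy as np

def alpha_entropy(probs, alpha):
    # Tsallis-style alpha-entropy: (1 - sum(p^alpha)) / (alpha - 1).
    # Tuning alpha changes how strongly imbalanced assignments are
    # penalized, the lever used to cope with arbitrary query-class
    # distributions.
    if np.isclose(alpha, 1.0):
        return -np.sum(probs * np.log(probs + 1e-12))  # Shannon limit
    return (1.0 - np.sum(probs ** alpha)) / (alpha - 1.0)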

Bridging Few-Shot Learning and Adaptation: New Challenges of Support-Query Shift

TLDR
This work addresses the new and challenging problem of Few-Shot Learning under Support/Query Shift (FSQS), i.e., when support and query instances are sampled from related but different distributions, and studies the roles of both Batch-Normalization and Optimal Transport in aligning distributions.

Few-Shot Learning by Integrating Spatial and Frequency Representation

TLDR
This paper proposes to integrate frequency information into the learning model to boost the discrimination ability of the system, employing the Discrete Cosine Transform (DCT) to generate the frequency representation and integrating features from both the spatial and frequency domains for classification.
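A minimal sketch of the two-branch idea using SciPy's DCT; the paper applies the transform inside a full training pipeline, so the function below is only illustrative:

import numpy as np
from scipy.fft import dctn

def spatial_and_frequency(image):
    # image: (h, w) grayscale array. Returns a (2, h, w) stack holding
    # the spatial input and its 2-D type-II DCT coefficients.
    freq = dctn(image, norm='ortho')
    return np.stack([image, freq], axis=0)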

Transfer learning based few-shot classification using optimal transport mapping from preprocessed latent space of backbone neural network

TLDR
This paper modifies the distribution of classes in a latent space produced by a backbone network so that each class better follows a Gaussian distribution, and utilizes optimal transport mapping via the Sinkhorn algorithm for this task.
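The Sinkhorn step fits in a few lines of NumPy; the cost matrix, marginals, and regularization strength below are placeholders:

import numpy as np

def sinkhorn(cost, row_marginals, col_marginals, reg=0.1, n_iters=100):
    # cost: (n, k), e.g. distances from n query features to k class
    # centers; the marginals are target distributions summing to 1.
    # Rows of the returned transport plan act as soft class assignments.
    kernel = np.exp(-cost / reg)
    u = np.ones_like(row_marginals)
    for _ in range(n_iters):
        v = col_marginals / (kernel.T @ u)  # rescale columns to marginals
        u = row_marginals / (kernel @ v)    # rescale rows to marginals
    return u[:, None] * kernel * v[None, :]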

Sill-Net: Feature Augmentation with Separated Illumination Representation

TLDR
This paper proposes a novel neural network architecture called the Separating-Illumination Network (Sill-Net), which learns to separate illumination features from images; during training, these separated illumination features are used to augment training samples in the feature space.

Iterative label cleaning for transductive and semi-supervised few-shot learning

TLDR
This work introduces a new algorithm that leverages the manifold structure of the labeled and unlabeled data distribution to predict pseudo-labels while balancing over classes, and uses the loss-value distribution of a limited-capacity classifier to select the cleanest labels, iteratively improving the quality of the pseudo-labels.
...

References

Showing 1-10 of 50 references

Charting the Right Manifold: Manifold Mixup for Few-shot Learning

TLDR
This work observes that regularizing the feature manifold, enriched via self-supervised techniques, with Manifold Mixup significantly improves few-shot learning performance; the proposed method, S2M2, beats the current state-of-the-art accuracy on standard few-shot learning datasets.
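Manifold Mixup itself is a one-line interpolation of intermediate activations and their labels; a sketch, with an illustrative Beta concentration:

import numpy as np

def manifold_mixup(h1, y1, h2, y2, alpha=2.0):
    # h1, h2: (n, d) activations from the same hidden layer;
    # y1, y2: (n, k) one-hot labels.
    lam = np.random.beta(alpha, alpha)  # mixing coefficient
    return lam * h1 + (1 - lam) * h2, lam * y1 + (1 - lam) * y2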

SimpleShot: Revisiting Nearest-Neighbor Classification for Few-Shot Learning

TLDR
Surprisingly, simple feature transformations suffice to obtain competitive few-shot learning accuracies; a nearest-neighbor classifier used in combination with mean-subtraction and L2-normalization outperforms prior results in three out of five settings on the miniImageNet dataset.
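The transform the paper calls CL2N (centering by the base-class mean, then L2-normalization) followed by nearest-centroid classification fits in a short sketch:

import numpy as np

def cl2n_nearest_centroid(support, support_labels, query, base_mean):
    # base_mean: (d,) mean feature over the base (pretraining) classes.
    def transform(x):
        x = x - base_mean                  # center
        return x / np.linalg.norm(x, axis=1, keepdims=True)  # L2-normalize

    support, query = transform(support), transform(query)
    classes = np.unique(support_labels)
    centroids = np.stack([support[support_labels == c].mean(0)
                          for c in classes])
    dists = np.linalg.norm(query[:, None, :] - centroids[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]   # nearest-centroid labels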

A Closer Look at Few-shot Classification

TLDR
The results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones, and a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.

Prototypical Networks for Few-shot Learning

TLDR
This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
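The core inference rule in a few lines: prototypes are per-class means of embedded support examples, and queries are classified by a softmax over negative squared Euclidean distances (the episodic training loop is omitted):

import numpy as np

def prototypical_predict(support, support_labels, query):
    # support: (n_s, d) embedded support set; query: (n_q, d).
    classes = np.unique(support_labels)
    protos = np.stack([support[support_labels == c].mean(0)
                       for c in classes])
    sq_dists = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    logits = -sq_dists
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    return probs / probs.sum(axis=1, keepdims=True)  # (n_q, n_classes)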

Matching Networks for One Shot Learning

TLDR
This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.

TAFSSL: Task-Adaptive Feature Sub-Space Learning for few-shot classification

TLDR
It is shown that on the challenging miniImageNet and tieredImageNet benchmarks, TAFSSL can improve the current state of the art in both transductive and semi-supervised FSL settings by more than 5%, while increasing the benefit of using unlabeled data in FSL to above a 10% performance gain.

Meta-learning with differentiable closed-form solvers

TLDR
The main idea is to teach a deep network to use standard machine learning tools, such as ridge regression, as part of its own internal model, enabling it to quickly adapt to novel data.
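The enabling trick is that ridge regression has a closed-form, differentiable solution, cheap in its dual form when the support set is small; a sketch of that core step only, not the full method:

import numpy as np

def ridge_head(X, Y, lam=1.0):
    # X: (n, d) embedded support examples; Y: (n, k) one-hot labels.
    # Dual form X^T (X X^T + lam I)^-1 Y inverts an (n, n) matrix,
    # cheap when n << d, which is what makes this practical inside a
    # meta-learning inner loop.
    n = X.shape[0]
    W = X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n), Y)  # (d, k)
    return W  # predict with X_query @ W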

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.

Learning to Learn

TLDR
This chapter discusses "Reinforcement Learning with Self-Modifying Policies" (J. Schmidhuber et al.) and "Theoretical Models of Learning to Learn" (J. Baxter), a first step towards continual learning.

The Caltech-UCSD Birds-200-2011 Dataset

CUB-200-2011 is an extended version of CUB-200 [7], a challenging dataset of 200 bird species. The extended version roughly doubles the number of images per category and adds new part localization annotations.