Diagnosing and Remedying Shot Sensitivity with Cosine Few-Shot Learners

@article{Wertheimer2022DiagnosingAR,
  title={Diagnosing and Remedying Shot Sensitivity with Cosine Few-Shot Learners},
  author={Davis Wertheimer and Luming Tang and Bharath Hariharan},
  journal={ArXiv},
  year={2022},
  volume={abs/2207.03398}
}
Few-shot recognition involves training an image classifier to distinguish novel concepts at test time from only a few labeled examples (shots). Existing approaches generally assume that the number of shots at test time is known in advance. This assumption is often unrealistic, and the performance of a popular and foundational method has been shown to suffer when the train and test shot counts do not match. We conduct a systematic empirical study of this phenomenon. In line with prior work, we find that shot sensitivity is broadly present…
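
The "cosine few-shot learners" of the title are nearest-centroid classifiers that score a query by its cosine similarity to per-class prototypes built from the shots. Below is a minimal sketch of that idea in the style of Prototypical Networks with a cosine metric; the function name, temperature value, and tensor shapes are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def cosine_few_shot_logits(support, support_labels, query, n_way, temperature=10.0):
    """Classify query embeddings by cosine similarity to class centroids.

    support:        (n_support, d) embeddings of the labeled shot examples
    support_labels: (n_support,) integer class labels in [0, n_way)
    query:          (n_query, d) embeddings to classify
    Returns:        (n_query, n_way) logits.
    """
    # Build one centroid (prototype) per class by averaging its shots.
    centroids = torch.stack(
        [support[support_labels == c].mean(dim=0) for c in range(n_way)]
    )
    # Cosine similarity = dot product of L2-normalized vectors; normalizing
    # removes the dependence of the score scale on embedding norms.
    query = F.normalize(query, dim=-1)
    centroids = F.normalize(centroids, dim=-1)
    return temperature * query @ centroids.t()

# Example: a hypothetical 5-way, 1-shot episode with random 64-d embeddings.
d, n_way, n_query = 64, 5, 15
support = torch.randn(n_way, d)        # one shot per class
support_labels = torch.arange(n_way)
query = torch.randn(n_query, d)
logits = cosine_few_shot_logits(support, support_labels, query, n_way)
print(logits.shape)  # torch.Size([15, 5])
```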
