Few-Shot Learning With Embedded Class Models and Shot-Free Meta Training

@article{Ravichandran2019FewShotLW,
  title={Few-Shot Learning With Embedded Class Models and Shot-Free Meta Training},
  author={Avinash Ravichandran and Rahul Bhotika and Stefano Soatto},
  journal={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2019},
  pages={331-339}
}
We propose a method for learning embeddings for few-shot learning that is suitable for use with any number of ways and any number of shots (shot-free). [...] Key Method: The class representation function is defined implicitly, which allows us to handle a variable number of shots per class with a simple constant-size architecture. The class embedding encompasses metric learning, which facilitates adding new classes without crowding the class representation space. Despite being general and not tuned to the…
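To make the implicit class model concrete, here is a minimal sketch in PyTorch: the class identities are learned parameters trained jointly with the embedding network, so the architecture stays constant-size regardless of how many shots each class contributes. The toy linear encoder, the dimensions, the Euclidean metric, and the plain cross-entropy step are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ShotFreeEmbedding(nn.Module):
    """Jointly learn a sample embedding and per-class identity vectors."""
    def __init__(self, num_classes: int, in_dim: int = 128, emb_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, emb_dim)
        )
        # Implicit class models: one learned vector per training class,
        # independent of the number of support examples (shots).
        self.class_identities = nn.Parameter(torch.randn(num_classes, emb_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.encoder(x)

    def logits(self, z: torch.Tensor) -> torch.Tensor:
        # Negative squared Euclidean distance to each class identity.
        return -torch.cdist(z, self.class_identities) ** 2

model = ShotFreeEmbedding(num_classes=10)
x = torch.randn(32, 128)            # a batch with any number of shots per class
y = torch.randint(0, 10, (32,))
loss = F.cross_entropy(model.logits(model(x)), y)
loss.backward()

Because the class identities are parameters rather than averages of support embeddings, nothing in the computation depends on the shot count, which is the sense in which training is shot-free.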
Partner-Assisted Learning for Few-Shot Image Classification
TLDR
This paper proposes a two-stage training scheme, Partner-Assisted Learning (PAL), which first trains a Partner Encoder to model pair-wise similarities and extract features serving as soft anchors, and then trains a Main Encoder by aligning its outputs with the soft anchors while maximizing classification performance.
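A hedged sketch of the second stage, assuming a frozen partner encoder and a cosine-similarity alignment term; the alignment form, its weight, and the toy shapes are illustrative guesses rather than the paper's exact objective.

import torch
import torch.nn.functional as F

def pal_stage2_loss(main_feats, partner_feats, logits, labels, align_weight=1.0):
    """Stage-2 objective: classification plus alignment to the partner's soft anchors."""
    # The pretrained partner is frozen; its features act as soft anchors.
    align = 1.0 - F.cosine_similarity(main_feats, partner_feats.detach(), dim=-1).mean()
    return F.cross_entropy(logits, labels) + align_weight * align

main_feats = torch.randn(8, 64, requires_grad=True)
partner_feats = torch.randn(8, 64)
logits = torch.randn(8, 5, requires_grad=True)
labels = torch.randint(0, 5, (8,))
pal_stage2_loss(main_feats, partner_feats, logits, labels).backward()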
Curvature Generation in Curved Spaces for Few-Shot Learning
Few-shot learning describes the challenging problem of recognizing samples from unseen classes given very few labeled examples. In many cases, few-shot learning is cast as learning an embedding space…
Learning Class-level Prototypes for Few-shot Learning
  • Minglei Yuan, Wenhai Wang, Tao Wang, Chunhao Cai, Qian Xu, Tong Lu
  • Computer Science
  • ArXiv
  • 2021
TLDR
This work proposes a simple yet effective framework for few-shot classification, which can learn to generate preferable prototypes from a few support samples with the help of an episodic prototype generator module.
Boosting Few-Shot Learning With Adaptive Margin Loss
TLDR
An adaptive margin principle is proposed to improve the generalization ability of metric-based meta-learning approaches for few-shot learning problems by developing a class-relevant additive margin loss, where semantic similarity between each pair of classes is considered to separate samples in the feature embedding space from similar classes.
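The class-relevant additive margin idea can be sketched as follows: a margin proportional to semantic similarity is added to the logits of negative classes, so semantically similar classes must be separated by a wider gap. The similarity matrix and the scale alpha below are illustrative assumptions, not the paper's values.

import torch
import torch.nn.functional as F

def adaptive_margin_loss(logits, labels, semantic_sim, alpha=0.5):
    """logits: (B, C) similarity scores; semantic_sim: (C, C) class similarity in [0, 1]."""
    # Larger margins are added to negative classes that are semantically close
    # to the true class, forcing a wider separation between confusable classes.
    margins = alpha * semantic_sim[labels]                   # (B, C)
    margins = margins.scatter(1, labels.unsqueeze(1), 0.0)   # no margin on the true class
    return F.cross_entropy(logits + margins, labels)

logits = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
sim = torch.rand(5, 5)   # e.g. derived from word-embedding similarity between class names
print(adaptive_margin_loss(logits, labels, sim))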
Contrastive Prototype Learning with Augmented Embeddings for Few-Shot Learning
TLDR
A novel contrastive prototype learning with augmented embeddings (CPLAE) model is proposed to overcome the lack of training data in the support set, achieving new state-of-the-art results.
Few-shot Learning with Online Self-Distillation
  • Sihan Liu, Yue Wang
  • Computer Science
  • 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
  • 2021
TLDR
This work proposes a model that learns representations through online self-distillation via a continuously updated teacher, and identifies data augmentation as playing an important role in producing robust features.
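The "continuously updated teacher" can be sketched as an exponential-moving-average copy of the student, in the style of mean-teacher self-distillation; the momentum value is an assumed hyperparameter and the distillation loss itself is omitted.

import copy
import torch
import torch.nn as nn

student = nn.Linear(64, 32)
teacher = copy.deepcopy(student)   # the teacher starts as a copy and is never backpropagated

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, momentum: float = 0.999) -> None:
    """Move each teacher parameter a small step toward the student parameter."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)

ema_update(teacher, student)   # called after every optimizer step on the student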
Local descriptor-based multi-prototype network for few-shot Learning
TLDR
A novel Local descriptor-based Multi-Prototype Network (LMPNet) is proposed, a well-designed framework that generates an embedding space with multiple prototypes that can capture more informative and subtler cues of an image than the normally adopted image-level features.
Pseudo Shots: Few-Shot Learning with Auxiliary Data
TLDR
A masking module is proposed that adjusts the features of auxiliary data to be more similar to those of the target classes; it can improve accuracy by up to 18 points, particularly when the auxiliary data is semantically distant from the target task.
Revisiting Contrastive Learning for Few-Shot Classification
TLDR
A novel model selection algorithm is proposed that, used in conjunction with a universal embedding trained using CIDS, outperforms state-of-the-art algorithms on the challenging Meta-Dataset benchmark.
Task-Adaptive Negative Class Envision for Few-Shot Open-Set Recognition
TLDR
This paper proposes a novel task-adaptive negative class envision method (TANE) to model the open world, which uses an external memory to estimate a negative class representation and introduces a novel conjugate episode training strategy that strengthens the learning process.

References

Showing 1-10 of 29 references
Learning to Compare: Relation Network for Few-Shot Learning
TLDR
A conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples from each, and which is easily extended to zero-shot learning.
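The relation-network idea admits a compact sketch: each query embedding is concatenated with each class representation and scored by a small learned comparator, so the similarity metric itself is trained. The MLP comparator below stands in for the paper's convolutional relation module and is an illustrative assumption.

import torch
import torch.nn as nn

def relation_scores(class_feats, query, relation_net):
    """Score every (query, class) pair with a learned comparison network."""
    C, D = class_feats.shape
    Q = query.shape[0]
    # Concatenate each query embedding with each class feature: (Q, C, 2D).
    pairs = torch.cat([
        query.unsqueeze(1).expand(Q, C, D),
        class_feats.unsqueeze(0).expand(Q, C, D),
    ], dim=-1)
    return relation_net(pairs.reshape(Q * C, 2 * D)).reshape(Q, C)

relation_net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
scores = relation_scores(torch.randn(5, 64), torch.randn(15, 64), relation_net)  # (15, 5)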
Meta-Learning for Semi-Supervised Few-Shot Classification
TLDR
This work proposes novel extensions of Prototypical Networks that are augmented with the ability to use unlabeled examples when producing prototypes, and confirms that these models can learn to improve their predictions by exploiting unlabeled examples, much as a semi-supervised algorithm would.
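The prototype-refinement idea can be sketched as one soft k-means step, in which unlabeled embeddings are softly assigned to the current prototypes and folded into the class means. This is a minimal reading of the approach, with toy dimensions assumed.

import torch
import torch.nn.functional as F

def refine_prototypes(prototypes, support, support_labels, unlabeled, num_classes):
    """One soft k-means update of class prototypes using unlabeled embeddings."""
    # Soft-assign each unlabeled point to the current prototypes.
    assign = F.softmax(-torch.cdist(unlabeled, prototypes) ** 2, dim=-1)   # (U, C)
    one_hot = F.one_hot(support_labels, num_classes).float()               # (S, C)
    # Weighted mean over labeled (hard weights) and unlabeled (soft weights) points.
    num = one_hot.T @ support + assign.T @ unlabeled                       # (C, D)
    den = (one_hot.sum(0) + assign.sum(0)).unsqueeze(1)                    # (C, 1)
    return num / den

protos = refine_prototypes(
    torch.randn(5, 64), torch.randn(25, 64),
    torch.arange(5).repeat_interleave(5), torch.randn(40, 64), 5,
)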
Optimization as a Model for Few-Shot Learning
Meta-Transfer Learning for Few-Shot Learning
TLDR
A novel few-shot learning method called meta-transfer learning (MTL) is presented, which learns to adapt a deep neural network to few-shot learning tasks, along with a hard task (HT) meta-batch scheme that serves as an effective learning curriculum for MTL.
Prototypical Networks for Few-shot Learning
TLDR
This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
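The core episode step of Prototypical Networks is simple enough to sketch directly: prototypes are the means of each class's support embeddings, and queries are classified by negative squared Euclidean distance to the prototypes. The encoder is assumed to have already produced the embeddings, and the episode sizes below are toy choices.

import torch
import torch.nn.functional as F

def prototypical_logits(support, support_labels, query, num_classes):
    """Class prototypes are support-embedding means; queries are scored by
    negative squared Euclidean distance to each prototype."""
    prototypes = torch.stack([
        support[support_labels == c].mean(dim=0) for c in range(num_classes)
    ])                                              # (C, D)
    return -torch.cdist(query, prototypes) ** 2     # (Q, C)

# 5-way 5-shot episode with 64-dim embeddings.
support = torch.randn(25, 64)
labels = torch.arange(5).repeat_interleave(5)
query = torch.randn(15, 64)
probs = F.softmax(prototypical_logits(support, labels, query, 5), dim=-1)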
A Closer Look at Few-shot Classification
TLDR
The results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones, and a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.
Meta-Learning with Latent Embedding Optimization
TLDR
This work shows that latent embedding optimization can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks, and indicates that LEO is able to capture uncertainty in the data and can perform adaptation more effectively by optimizing in latent space.
Matching Networks for One Shot Learning
TLDR
This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
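A minimal sketch of the matching-networks classifier: a query's label distribution is an attention-weighted combination of support labels, with attention from cosine similarity. The full-context embeddings and external-memory components of the paper are omitted.

import torch
import torch.nn.functional as F

def matching_predict(support, support_labels, query, num_classes):
    """Label each query as an attention-weighted sum of the support labels."""
    # Cosine-similarity attention over the support set, one row per query.
    attn = F.softmax(
        F.normalize(query, dim=-1) @ F.normalize(support, dim=-1).T, dim=-1
    )                                                          # (Q, S)
    one_hot = F.one_hot(support_labels, num_classes).float()   # (S, C)
    return attn @ one_hot                                      # (Q, C) class probabilities

support = torch.randn(25, 64)                       # 5-way 5-shot embeddings
labels = torch.arange(5).repeat_interleave(5)
probs = matching_predict(support, labels, torch.randn(15, 64), num_classes=5)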
Dynamic Few-Shot Visual Learning Without Forgetting
TLDR
This work proposes to extend an object recognition system with an attention based few-shot classification weight generator, and to redesign the classifier of a ConvNet model as the cosine similarity function between feature representations and classification weight vectors.
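The redesigned classifier head can be sketched as cosine similarity between L2-normalized features and classification weight vectors; the attention-based weight generator for novel classes is omitted, and the scale tau is an assumed hyperparameter.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Classify by scaled cosine similarity to learned per-class weight vectors."""
    def __init__(self, emb_dim: int, num_classes: int, tau: float = 10.0):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(num_classes, emb_dim))
        self.tau = tau

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Normalizing both sides makes the score depend only on direction,
        # which helps keep base and novel class weights on a comparable scale.
        return self.tau * F.normalize(z, dim=-1) @ F.normalize(self.weights, dim=-1).T

head = CosineClassifier(emb_dim=64, num_classes=5)
scores = head(torch.randn(8, 64))   # (8, 5) scaled cosine similarities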
Meta-learning with differentiable closed-form solvers
TLDR
The main idea is to teach a deep network to use standard machine learning tools, such as ridge regression, as part of its own internal model, enabling it to quickly adapt to novel data.
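The closed-form solver idea can be sketched as a differentiable ridge-regression head: weights are fit on the support set in closed form and applied to the queries, with gradients flowing through the solve into the feature extractor. lambda_reg and the toy shapes are assumed values.

import torch

def ridge_head(support, targets, query, lambda_reg=1.0):
    """Fit W = (X^T X + lambda I)^(-1) X^T Y on the support set, then apply it
    to queries; every step is differentiable w.r.t. the input features."""
    d = support.shape[1]
    gram = support.T @ support + lambda_reg * torch.eye(d)
    weights = torch.linalg.solve(gram, support.T @ targets)   # (D, C)
    return query @ weights                                    # (Q, C) scores

support = torch.randn(25, 64)
targets = torch.eye(5).repeat_interleave(5, dim=0)   # one-hot labels, 5-way 5-shot
query = torch.randn(15, 64)
print(ridge_head(support, targets, query).shape)     # torch.Size([15, 5])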