Corpus ID: 219966219

Self-Supervised Prototypical Transfer Learning for Few-Shot Classification

@article{Medina2020SelfSupervisedPT,
  title={Self-Supervised Prototypical Transfer Learning for Few-Shot Classification},
  author={Carlos Medina and Arnout Devos and Matthias Grossglauser},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.11325}
}
Most approaches in few-shot learning rely on costly annotated data related to the goal task domain during (pre-)training. Recently, unsupervised meta-learning methods have exchanged the annotation requirement for a reduction in few-shot classification performance. Simultaneously, in settings with realistic domain shift, common transfer learning has been shown to outperform supervised meta-learning. Building on these insights and on advances in self-supervised learning, we propose a transfer… 
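
The truncated abstract describes pre-training an embedding on unlabeled data and then transferring it to labeled few-shot tasks via class prototypes. As a rough illustration of the prototype-based classification stage, here is a minimal numpy sketch; the function name, array shapes, and the assumption that embeddings come from a self-supervised pre-trained encoder are illustrative, not the authors' implementation.

```python
import numpy as np

def classify_with_prototypes(support_emb, support_y, query_emb):
    """Nearest-prototype classification (illustrative sketch).

    support_emb: (N, D) embeddings of the labeled support set
    support_y:   (N,) integer class labels
    query_emb:   (Q, D) embeddings of the query examples
    Assumes embeddings come from a (self-supervised) pre-trained encoder.
    """
    classes = np.unique(support_y)
    # Prototype = mean embedding of each class's support examples.
    protos = np.stack([support_emb[support_y == c].mean(axis=0) for c in classes])
    # Squared Euclidean distance from every query to every prototype: (Q, K).
    dists = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    # Predict the class of the nearest prototype.
    return classes[dists.argmin(axis=1)]

# Toy usage: a 2-way, 2-shot task with 3-dimensional embeddings.
rng = np.random.default_rng(0)
s_emb = rng.normal(size=(4, 3)); s_y = np.array([0, 0, 1, 1])
q_emb = rng.normal(size=(5, 3))
print(classify_with_prototypes(s_emb, s_y, q_emb))
```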

Revisiting Unsupervised Meta-Learning via the Characteristics of Few-Shot Tasks.

TLDR
This work removes the requirement of base class labels and learns generalizable embeddings via Unsupervised Meta-Learning (UML), and applies embedding-based classifiers to novel tasks with labeled few-shot examples during meta-test.

Revisiting Unsupervised Meta-Learning: Amplifying or Compensating for the Characteristics of Few-Shot Tasks

TLDR
This work finds that base class labels are not necessary and that discriminative embeddings can be meta-learned in an unsupervised manner; two modifications, a semi-normalized distance metric and sufficient sampling, improve unsupervised meta-learning (UML) significantly.

Self-Supervised Class-Cognizant Few-Shot Classification

TLDR
This paper focuses on unsupervised learning from an abundance of unlabeled data followed by few-shot fine-tuning on a downstream classification task. It extends a recent study on adopting contrastive learning for self-supervised pre-training by incorporating class-level cognizance through iterative clustering and re-ranking, and by expanding the contrastive optimization loss to account for it.

How Well Do Self-Supervised Methods Perform in Cross-Domain Few-Shot Learning?

TLDR
It is found that representations extracted by self-supervised methods exhibit stronger robustness than those of the supervised method, and that whether self-supervised representations perform well on the source domain has little correlation with their applicability on the target domain.

ConFeSS: A Framework for Single Source Cross-Domain Few-Shot Learning

TLDR
A framework for few-shot learning coined ConFeSS (Contrastive Learning and Feature Selection System) is proposed; it tackles large domain shift between base and novel categories, outperforms all meta-learning approaches, and produces competitive results against recent cross-domain methods.

Self-Supervision Can Be a Good Few-Shot Learner

TLDR
This work proposes an effective unsupervised FSL method that learns representations with self-supervision following the InfoMax principle, achieving comparable performance on widely used FSL benchmarks without any labels of the base classes.
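
The InfoMax principle mentioned here is typically instantiated with a contrastive objective such as InfoNCE, which maximizes agreement between two augmented views of the same image. Below is a minimal numpy sketch of a one-directional InfoNCE loss; the temperature value and normalization details are illustrative assumptions, not this paper's exact objective.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """One-directional InfoNCE loss (illustrative sketch).

    z1, z2: (N, D) embeddings of two augmented views of the same N images;
    row i of z1 and row i of z2 form the positive pair, while all other
    rows of z2 act as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal; average their negative log-likelihood.
    return -np.diag(log_prob).mean()

# Toy usage: the loss is low when the two views embed almost identically.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
print(info_nce(z, z + 0.01 * rng.normal(size=z.shape)))
```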

Unsupervised Few-Shot Action Recognition via Action-Appearance Aligned Meta-Adaptation

TLDR
This work presents MetaUVFS, the first unsupervised meta-learning algorithm for video few-shot action recognition; its novel Action-Appearance Aligned Meta-adaptation module learns to focus on action-oriented video features, in relation to appearance features, via explicit few-shot episodic meta-learning over unsupervised hard-mined episodes.

Flexible Few-Shot Learning with Contextual Similarity

TLDR
This work proposes to build upon recent contrastive unsupervised learning techniques and use a combination of instance and class invariance learning, aiming to obtain general and flexible features, and finds that this approach performs strongly on new flexible few-shot learning benchmarks.

Spatial Contrastive Learning for Few-Shot Classification

TLDR
This paper presents a novel attention-based spatial contrastive objective to learn locally discriminative and class-agnostic features; the method outperforms state-of-the-art contrastive-learning approaches for few-shot classification.

Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning

TLDR
This work proposes a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations and shows that, even without knowledge distillation, the proposed method can outperform current state-of-the-art FSL methods on five popular benchmark datasets.

References

Showing 1-10 of 44 references

Meta-Learning for Semi-Supervised Few-Shot Classification

TLDR
This work proposes novel extensions of Prototypical Networks that are augmented with the ability to use unlabeled examples when producing prototypes, and confirms that these models can learn to improve their predictions from unlabeled examples, much like a semi-supervised algorithm would.
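
One of the extensions proposed by this reference refines prototypes with unlabeled examples via soft k-means style assignments. A hedged numpy sketch of that refinement step follows; the single-iteration default and the exact weighting scheme are assumptions for illustration.

```python
import numpy as np

def refine_prototypes(support_emb, support_y, unlabeled_emb, n_iters=1):
    """Refine class prototypes with unlabeled data (soft k-means sketch).

    Each unlabeled embedding is softly assigned to every prototype by
    distance, then prototypes are recomputed as the weighted mean of
    labeled and soft-assigned unlabeled points.
    """
    classes = np.unique(support_y)
    sums = np.stack([support_emb[support_y == c].sum(axis=0) for c in classes])
    counts = np.array([(support_y == c).sum() for c in classes], dtype=float)
    protos = sums / counts[:, None]
    for _ in range(n_iters):
        # Soft assignment of each unlabeled point to each prototype: (U, K).
        logits = -((unlabeled_emb[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        w = np.exp(logits); w /= w.sum(axis=1, keepdims=True)
        # Weighted mean over labeled support plus soft-assigned unlabeled data.
        protos = (sums + w.T @ unlabeled_emb) / (counts + w.sum(axis=0))[:, None]
    return protos
```

A single iteration already pulls prototypes toward dense unlabeled clusters; more iterations behave like running soft k-means to convergence.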

Self-Supervised Learning for Few-Shot Image Classification

TLDR
This paper proposes to train a more generalized embedding network with self-supervised learning (SSL), which can provide robust representations for downstream tasks by learning from the data itself.

Boosting Few-Shot Visual Learning With Self-Supervision

TLDR
This work uses self-supervision as an auxiliary task in a few-shot learning pipeline, enabling feature extractors to learn richer and more transferable visual representations while still using few annotated samples.

Unsupervised Few-shot Learning via Self-supervised Training

TLDR
This study develops a method to learn an unsupervised few-shot learner via self-supervised training (UFLST), which can effectively generalize to novel but related classes, and demonstrates the feasibility of the model on a real-world person re-identification application.

Unsupervised Few-shot Learning via Distribution Shift-based Augmentation

TLDR
A novel framework called ULDA is developed, which attends to the distribution diversity inside each constructed pretext few-shot task when using data augmentation; it achieves superior generalization performance and obtains state-of-the-art results on a variety of established few-shot learning tasks.

Unsupervised Meta-Learning for Few-Shot Image Classification

TLDR
UMTRA is proposed, an algorithm that performs unsupervised, model-agnostic meta-learning for classification tasks and trades off some classification accuracy for a several-orders-of-magnitude reduction in the required labels.

Assume, Augment and Learn: Unsupervised Few-Shot Meta-Learning via Random Labels and Data Augmentation

TLDR
A method named Assume, Augment and Learn (AAL) is proposed for generating few-shot tasks from unlabeled data; it achieves good generalization performance on a variety of established few-shot learning tasks on Omniglot and Mini-Imagenet.
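
Both AAL and UMTRA (above) build few-shot episodes from unlabeled data by treating each sampled image as its own pseudo-class and generating queries through augmentation. Here is a minimal sketch of that episode construction, with a toy additive-noise "augmentation" standing in for real image transforms; names and shapes are illustrative assumptions.

```python
import numpy as np

def make_unsupervised_episode(unlabeled, n_way, rng):
    """Build an N-way, 1-shot episode without labels (illustrative sketch).

    unlabeled: (M, D) pool of unlabeled examples (flattened images here).
    Each sampled example defines its own pseudo-class; the query set
    consists of augmented copies that inherit the same pseudo-label.
    """
    idx = rng.choice(len(unlabeled), size=n_way, replace=False)
    support_x = unlabeled[idx]
    support_y = np.arange(n_way)  # arbitrary pseudo-labels, one per example
    # Toy augmentation: additive noise (a real system would use crops, flips, ...).
    query_x = support_x + 0.1 * rng.normal(size=support_x.shape)
    query_y = support_y.copy()
    return support_x, support_y, query_x, query_y

# Toy usage: a 5-way episode from 100 unlabeled 32-dimensional examples.
rng = np.random.default_rng(0)
pool = rng.normal(size=(100, 32))
sx, sy, qx, qy = make_unsupervised_episode(pool, n_way=5, rng=rng)
print(sx.shape, qx.shape)
```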

A New Benchmark for Evaluation of Cross-Domain Few-Shot Learning

TLDR
The CD-FSL benchmark is proposed, consisting of images from diverse domains with varying similarity to ImageNet, ranging from crop disease images to satellite and medical images; its spectrum of diversity and coverage makes it a challenging platform to guide future research on cross-domain few-shot learning.

A Closer Look at Few-shot Classification

TLDR
The results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones; a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.

Prototypical Networks for Few-shot Learning

TLDR
This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
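
For reference, the core of Prototypical Networks can be written in two equations: prototypes are class means in embedding space, and queries are classified by a softmax over negative distances. Sketched in the original paper's notation:

```latex
c_k = \frac{1}{|S_k|} \sum_{(x_i, y_i) \in S_k} f_\phi(x_i)
\qquad
p_\phi(y = k \mid x) =
  \frac{\exp\bigl(-d\bigl(f_\phi(x), c_k\bigr)\bigr)}
       {\sum_{k'} \exp\bigl(-d\bigl(f_\phi(x), c_{k'}\bigr)\bigr)}
```

Here f_\phi is the embedding network, S_k is the labeled support set of class k, and d is a distance function (squared Euclidean in the paper).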