Learning to Compare: Relation Network for Few-Shot Learning

@article{Sung2017LearningTC,
  title={Learning to Compare: Relation Network for Few-Shot Learning},
  author={Flood Sung and Yongxin Yang and Li Zhang and Tao Xiang and Philip H. S. Torr and Timothy M. Hospedales},
  journal={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2018},
  pages={1199-1208}
}
We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples from each. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting.
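The relation module below is a minimal sketch of this "learning to compare" idea: embed query and support images, concatenate the paired features, and score their relation with a small learned network trained against one-hot targets. The layer sizes, the linear stand-ins for the CNN embedding and relation modules, and the episode shapes are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class RelationModule(nn.Module):
    """Learned similarity: scores a (query, class) feature pair in [0, 1]."""
    def __init__(self, feat_dim=64, hidden_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim * 2, hidden_dim),  # concatenated query + support features
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),                          # relation score in [0, 1]
        )

    def forward(self, query_feat, support_feat):
        pair = torch.cat([query_feat, support_feat], dim=-1)
        return self.net(pair)

# One simulated episode: C classes, K support images, Q queries per class.
C, K, Q, D = 5, 1, 3, 64
embed = nn.Linear(128, D)                              # stand-in for the CNN embedding module
support = embed(torch.randn(C, K, 128)).mean(dim=1)    # one pooled feature per class
queries = embed(torch.randn(C * Q, 128))

relation = RelationModule(feat_dim=D)
scores = torch.stack([relation(q.expand(C, -1), support).squeeze(-1) for q in queries])
labels = torch.arange(C).repeat_interleave(Q)
# The relation scores are regressed to one-hot targets (MSE), as in the paper.
loss = nn.functional.mse_loss(scores, nn.functional.one_hot(labels, C).float())
```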


Meta-Relation Networks for Few Shot Learning

A meta-relation network is proposed to solve the few-shot learning problem, where the classifier must learn to recognize new classes given only a few examples from each, building on relation networks and Model-Agnostic Meta-Learning (MAML) training methods.
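For context, the sketch below shows only the MAML-style episode update that this summary refers to; a linear model and a single inner step stand in for the cited paper's actual relation-network architecture and training schedule.

```python
import torch
import torch.nn as nn

# Minimal sketch of one MAML-style episode (illustrative shapes; not the cited
# paper's model, which combines this training scheme with a relation module).
model = nn.Linear(64, 5)
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 0.01

def adapted_logits(x, fast_weights):
    w, b = fast_weights
    return x @ w.t() + b

support_x, support_y = torch.randn(5, 64), torch.arange(5)
query_x, query_y = torch.randn(15, 64), torch.arange(5).repeat(3)

# Inner loop: one gradient step on the support set, keeping the graph so the
# outer update can differentiate through the adaptation.
loss_s = nn.functional.cross_entropy(model(support_x), support_y)
grads = torch.autograd.grad(loss_s, list(model.parameters()), create_graph=True)
fast_weights = [p - inner_lr * g for p, g in zip(model.parameters(), grads)]

# Outer loop: the query loss under the adapted weights updates the initialization.
loss_q = nn.functional.cross_entropy(adapted_logits(query_x, fast_weights), query_y)
meta_opt.zero_grad()
loss_q.backward()
meta_opt.step()
```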

Memory-Augmented Relation Network for Few-Shot Learning

This work investigates a new metric-learning method that explicitly exploits an example's relationships with the others in the working context, formulates the distance metric as a learnable relation module that learns to compare for similarity measurement, and equips the working context with memory slots, both contributing to generality.

Revisiting Metric Learning for Few-Shot Image Classification

PARN: Position-Aware Relation Networks for Few-Shot Learning

This paper proposes a position-aware relation network (PARN) to learn a more flexible and robust metric for few-shot learning, introducing a deformable feature extractor (DFE) to extract more efficient features and designing a dual correlation attention mechanism (DCA) to deal with their inherent local connectivity.

FSIL: Few-shot and Incremental Learning for Image Classification

This paper proposes a novel method that combines both, using a few-shot adaptation stage that learns to assign importance to each feature and embedding incremental learning in this few-shot setup, and introduces a conceptually simple divergence penalty based on cosine similarity that prevents interference between old and new classes.

Compare Learning: Bi-Attention Network for Few-Shot Learning

  • Li Ke, Meng Pan, Weigao Wen, Dong Li
  • Computer Science
    ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2020
A novel approach named Bi-attention Network is proposed to compare instances; it can measure the similarity between instance embeddings precisely, globally, and efficiently, and is verified on two benchmarks.

Task-adaptive Relation Dependent Network for Few-shot Learning

A novel metric-based few-shot algorithm called Task-adaptive Relation Dependent Network is proposed, which reduces distribution bias by shifting the dataset and adopting a more detailed comparison of features to capture their intrinsic correspondence, improving the measurement of similarity between support-set and query-set samples.

Augmented Bi-path Network for Few-shot Learning

The Augmented Bi-path Network (ABNet) is proposed for learning to compare both global and local features at multiple scales, where salient patches are extracted and embedded as the local features for every image.

Meta Generalized Network for Few-Shot Classification

This paper develops a meta backbone training method that learns a flexible feature extractor and a classifier initializer efficiently, leading to fast adaptation to unseen few-shot tasks without overfitting, and designs a trainable adaptive interval model to improve the cosine classifier, which increases the recognition accuracy of hard examples.

Learning a Universal Template for Few-shot Dataset Generalization

This work designs a separate network that produces an initialization of those parameters for each given task, then fine-tunes the proposed initialization via a few steps of gradient descent, and achieves the state of the art on the challenging Meta-Dataset benchmark.
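As a rough illustration of the described procedure, the sketch below uses a hypothetical initializer network that maps a task summary to classifier weights, which are then fine-tuned for a few gradient steps on the support set; all names, shapes, and the omission of the initializer's own meta-training are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Illustrative only: an "initializer" network predicts task-specific classifier
# weights from a crude task summary; those weights are then briefly fine-tuned.
feat_dim, n_classes = 64, 5
initializer = nn.Linear(feat_dim, feat_dim * n_classes)    # predicts classifier weights

support_feats = torch.randn(n_classes, 10, feat_dim)        # pre-extracted support features
support_labels = torch.arange(n_classes).repeat_interleave(10)
task_summary = support_feats.mean(dim=(0, 1))                # simple task embedding

# Predicted initialization, detached here (meta-training of the initializer omitted).
W = initializer(task_summary).view(n_classes, feat_dim).detach().requires_grad_(True)
opt = torch.optim.SGD([W], lr=0.1)
for _ in range(5):                                           # a few fine-tuning steps
    logits = support_feats.reshape(-1, feat_dim) @ W.t()
    loss = nn.functional.cross_entropy(logits, support_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```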
...

References

Showing 1-10 of 47 references

Prototypical Networks for Few-shot Learning

This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
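The classification rule of prototypical networks is compact enough to sketch: each class prototype is the mean of its embedded support examples, and a query is assigned to the nearest prototype. The embedding network is omitted here and the features are placeholders.

```python
import torch

# Minimal sketch of the prototypical-network classification rule.
C, K, Q, D = 5, 5, 15, 64
support = torch.randn(C, K, D)               # embedded support examples
queries = torch.randn(Q, D)                  # embedded query examples

prototypes = support.mean(dim=1)             # one prototype per class: the mean embedding
dists = torch.cdist(queries, prototypes) ** 2     # squared Euclidean distance to each prototype
log_probs = torch.log_softmax(-dists, dim=1)      # nearest prototype = highest probability
predictions = log_probs.argmax(dim=1)
```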

Matching Networks for One Shot Learning

This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
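A minimal sketch of the attention-based classification rule described here: the query's label distribution is an attention-weighted sum of support labels, with attention given by a softmax over similarities. The full model's context embeddings (FCE) are omitted.

```python
import torch
import torch.nn.functional as F

# Sketch of matching-network classification over an embedded support set.
N, D, C = 25, 64, 5
support = torch.randn(N, D)                       # embedded support examples
support_labels = torch.randint(0, C, (N,))
query = torch.randn(D)                            # one embedded query

sims = F.cosine_similarity(query.unsqueeze(0), support, dim=1)   # similarity to each support point
attn = torch.softmax(sims, dim=0)                                 # attention over the support set
label_onehot = F.one_hot(support_labels, C).float()
probs = attn @ label_onehot                                        # weighted sum of support labels
prediction = probs.argmax()
```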

An embarrassingly simple approach to zero-shot learning

This paper describes a zero-shot learning approach that can be implemented in just one line of code, yet is able to outperform state-of-the-art approaches on standard datasets.
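As a hedged illustration of the "one line of code" claim, the snippet below gives the closed-form solution as I recall it from the ESZSL paper; the variable names, regularizer values, and random data are placeholders.

```python
import numpy as np

# Closed-form ESZSL-style learner (as I recall it): X (d x m) training features,
# Y (m x z) labels in {-1, 1}, S (a x z) class-attribute signatures.
d, m, z, a = 64, 200, 10, 20
X = np.random.randn(d, m)
Y = np.where(np.random.rand(m, z) > 0.5, 1.0, -1.0)
S = np.random.randn(a, z)
gamma, lmbda = 1.0, 1.0

# The whole learner: a single closed-form expression for the mapping V (d x a).
V = np.linalg.inv(X @ X.T + gamma * np.eye(d)) @ X @ Y @ S.T @ np.linalg.inv(S @ S.T + lmbda * np.eye(a))

# Inference for an unseen class: score = x^T V s, picking the highest-scoring signature.
x_new = np.random.randn(d)
scores = x_new @ V @ S
prediction = scores.argmax()
```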

Siamese Neural Networks for One-Shot Image Recognition

A method for learning siamese neural networks which employ a unique structure to naturally rank similarity between inputs, achieving strong results that exceed those of other deep learning models, with near state-of-the-art performance on one-shot classification tasks.
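A minimal sketch of the siamese verification idea: a shared encoder embeds both inputs, a component-wise L1 distance feeds a learned similarity score, and one-shot classification compares the query against one exemplar per class. Layer sizes are illustrative, not the paper's convolutional architecture.

```python
import torch
import torch.nn as nn

# Shared encoder and a learned similarity head over the L1 distance of embeddings.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 64))
score_head = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())

def same_class_probability(x1, x2):
    e1, e2 = encoder(x1), encoder(x2)        # same weights for both inputs
    return score_head(torch.abs(e1 - e2))    # component-wise L1 distance, then a learned score

# One-shot classification: compare the query to one exemplar per candidate class.
query = torch.randn(1, 784)
exemplars = torch.randn(5, 784)
probs = same_class_probability(query.expand(5, -1), exemplars)
prediction = probs.argmax()
```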

Synthesized Classifiers for Zero-Shot Learning

This work introduces a set of "phantom" object classes whose coordinates live in both the semantic space and the model space and demonstrates superior accuracy of this approach over the state of the art on four benchmark datasets for zero-shot learning.

Learning feed-forward one-shot learners

This paper constructs the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar, and obtains an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning-to-learn formulation.
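The sketch below illustrates the feed-forward weight-prediction idea in rough form: a hypothetical "learnet" linear layer predicts the weights of one pupil layer from a single exemplar, so adaptation to a new class is a single forward pass. The real model predicts convolutional filter parameters and is trained end-to-end; sizes and names here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical learnet: maps one exemplar to the weights of a pupil layer.
feat_dim = 64
learnet = nn.Linear(feat_dim, feat_dim * feat_dim)

def pupil_forward(x, exemplar):
    W = learnet(exemplar).view(feat_dim, feat_dim)    # exemplar-conditioned weights
    return F.linear(x, W)                             # pupil layer parameterised by the learnet

exemplar = torch.randn(feat_dim)                      # single example of the new class
candidates = torch.randn(10, feat_dim)                # inputs to score against it
outputs = pupil_forward(candidates, exemplar)         # adaptation needed no gradient steps
```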

Semi-supervised Vocabulary-Informed Learning

  • Yanwei Fu, L. Sigal
  • Computer Science
    2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2016
A maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms, ensuring that labeled samples are projected closer to their correct prototypes in the embedding space than to others.

Predicting Deep Zero-Shot Convolutional Neural Networks Using Textual Descriptions

A new model is presented that can classify unseen categories from their textual description and takes advantage of the architecture of CNNs to learn features at different layers, rather than just learning an embedding space for both modalities, as is common in existing approaches.

Zero-Shot Learning Through Cross-Modal Transfer

This work introduces a model that can recognize objects in images even if no training data is available for the object class, and uses novelty detection methods to differentiate unseen classes from seen classes.