Learning to Compare: Relation Network for Few-Shot Learning

@inproceedings{Sung2018LearningTC,
  title={Learning to Compare: Relation Network for Few-Shot Learning},
  author={Flood Sung and Yongxin Yang and Li Zhang and Tao Xiang and Philip H. S. Torr and Timothy M. Hospedales},
  booktitle={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2018},
  pages={1199-1208}
}
We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples from each. [...] Key Method During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting.
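As a rough illustration of the key method, the sketch below pairs an embedding module with a relation module that scores concatenated support/query features; the layer sizes, image resolution, and module names are illustrative PyTorch assumptions, not the authors' exact architecture.

# Minimal sketch of the relation-network idea (illustrative, not the exact published architecture).
import torch
import torch.nn as nn

class EmbeddingModule(nn.Module):
    """Small CNN that maps an image to a feature map."""
    def __init__(self, out_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, out_channels, 3, padding=1), nn.BatchNorm2d(out_channels),
            nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(out_channels, out_channels, 3, padding=1), nn.BatchNorm2d(out_channels),
            nn.ReLU(), nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.net(x)

class RelationModule(nn.Module):
    """Learned metric: scores concatenated (support, query) feature maps."""
    def __init__(self, in_channels=128, hidden=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.BatchNorm2d(64),
            nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Sequential(nn.Linear(64, hidden), nn.ReLU(),
                                nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, support_feat, query_feat):
        pair = torch.cat([support_feat, query_feat], dim=1)   # concatenate along channels
        return self.fc(self.conv(pair).flatten(1))            # relation score in [0, 1]

# One episode: compare a query against each class's support embedding.
embed, relate = EmbeddingModule(), RelationModule()
support = torch.randn(5, 3, 32, 32)                  # 5-way 1-shot support images
query = torch.randn(1, 3, 32, 32)
scores = relate(embed(support), embed(query).expand(5, -1, -1, -1))  # (5, 1); predict the argmax class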
Meta-Relation Networks for Few Shot Learning
TLDR
A meta-relation network is proposed to solve the few-shot learning problem, where the classifier must learn to recognize new classes given only a few examples from each, based on relation networks and Model-Agnostic Meta-Learning training methods.
Memory-Augmented Relation Network for Few-Shot Learning
TLDR
This work investigates a new metric-learning method that explicitly exploits a sample's relationships with the others in the working context, formulates the distance metric as a learnable relation module which learns to compare for similarity measurement, and equips the working context with memory slots, both contributing to generality.
Revisiting Metric Learning for Few-Shot Image Classification
TLDR
This work revisits the classical triplet network from deep metric learning and extends it into a deep K-tuplet network for few-shot learning, utilizing the relationships among the input samples to learn a general representation via episode training.
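The K-tuplet idea can be pictured as a triplet loss with K negatives per anchor; the hinge form and margin below are assumptions for illustration, not necessarily the paper's exact loss.

# Illustrative K-tuplet loss: one anchor, one positive, K negatives (assumed form).
import torch
import torch.nn.functional as F

def k_tuplet_loss(anchor, positive, negatives, margin=1.0):
    # anchor, positive: (D,) embeddings; negatives: (K, D) embeddings from other classes.
    d_pos = F.pairwise_distance(anchor.unsqueeze(0), positive.unsqueeze(0))           # (1,)
    d_neg = F.pairwise_distance(anchor.unsqueeze(0).expand_as(negatives), negatives)  # (K,)
    # Pull the positive closer than every negative by at least the margin.
    return F.relu(d_pos - d_neg + margin).mean()

anchor, positive, negatives = torch.randn(64), torch.randn(64), torch.randn(4, 64)
loss = k_tuplet_loss(anchor, positive, negatives)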
PARN: Position-Aware Relation Networks for Few-Shot Learning
TLDR
This paper proposes a position-aware relation network (PARN) to learn a more flexible and robust metric ability for few-shot learning, and introduces a deformable feature extractor (DFE) to extract more efficient features and design a dual correlation attention mechanism (DCA) to deal with its inherent local connectivity.
FSIL: Few-shot and Incremental Learning for Image Classification
The success of deep learning can be largely attributed to the availability of large datasets. However, creating large annotated datasets is often expensive and sometimes even impossible. Also, in many
Compare Learning: Bi-Attention Network for Few-Shot Learning
  • Li Ke, Meng Pan, Weigao Wen, Dong Li
  • Computer Science
    ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2020
TLDR
A novel approach named the Bi-attention network is proposed to compare instances; it measures the similarity between embeddings of instances precisely, globally, and efficiently, and is verified on two benchmarks.
Meta Generalized Network for Few-Shot Classification
TLDR
This paper develops a meta backbone training method that efficiently learns a flexible feature extractor and a classifier initializer, leading to fast adaptation to unseen few-shot tasks without overfitting, and designs a trainable adaptive interval model to improve the cosine classifier, which increases the recognition accuracy of hard examples.
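The "adaptive interval" is described only at a high level in this summary; the sketch below shows one plausible reading, a cosine classifier with a learnable margin subtracted from the true-class logit, and should not be taken as the paper's exact model.

# Hedged sketch: cosine classifier with a learnable margin ("interval") on the target logit.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    def __init__(self, feat_dim, num_classes, scale=10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = nn.Parameter(torch.zeros(1))    # learned interval (assumption)
        self.scale = scale

    def forward(self, feats, labels=None):
        logits = F.linear(F.normalize(feats), F.normalize(self.weight))   # cosine similarities
        if labels is not None:                                            # training time only
            one_hot = F.one_hot(labels, logits.size(1)).float()
            logits = logits - one_hot * F.softplus(self.margin)           # widen the decision interval
        return self.scale * logits

clf = CosineClassifier(feat_dim=64, num_classes=5)
feats, labels = torch.randn(8, 64), torch.randint(0, 5, (8,))
loss = F.cross_entropy(clf(feats, labels), labels)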
Proxy Network for Few Shot Learning
TLDR
This work proposes a simple but effective end-to-end model that directly learns proxies for class representatives and a distance metric from data simultaneously, and conducts experiments to demonstrate the superiority of the proposed method over state-of-the-art methods.
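A minimal version of the proxy idea, learnable class proxies with a softmax over negative distances, is sketched below; the exact loss and metric used in the paper may differ.

# Illustrative proxy-based classifier: one learnable proxy per class, logits = -distance.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyClassifier(nn.Module):
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.proxies = nn.Parameter(torch.randn(num_classes, feat_dim))  # class representatives

    def forward(self, feats):
        return -torch.cdist(feats, self.proxies)      # closer proxy -> larger logit

model = ProxyClassifier(feat_dim=64, num_classes=5)
feats, labels = torch.randn(8, 64), torch.randint(0, 5, (8,))
loss = F.cross_entropy(model(feats), labels)          # pulls each sample toward its class proxy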
Attention Relational Network for Few-Shot Learning
TLDR
A flexible and efficient framework for few-shot feature fusion, called the Attention Relational Network (ARN): a three-branch structure of embedding, weight, and matching modules that adaptively models the contribution weights of sample features from the embedding module and then generates prototype representations by weighted fusion of those sample features.
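The weighted-fusion step can be sketched as a small weight module that scores each support feature and averages them into a prototype; the layer sizes and softmax weighting below are illustrative assumptions rather than the ARN architecture itself.

# Hedged sketch of weighted prototype fusion for one class.
import torch
import torch.nn as nn

class WeightModule(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(feat_dim, feat_dim // 2), nn.ReLU(),
                                   nn.Linear(feat_dim // 2, 1))

    def forward(self, support_feats):                              # (shots, feat_dim)
        w = torch.softmax(self.score(support_feats), dim=0)        # (shots, 1) contribution weights
        return (w * support_feats).sum(dim=0)                      # fused prototype, (feat_dim,)

prototype = WeightModule(feat_dim=64)(torch.randn(5, 64))          # 5-shot features -> one prototype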
Few-Shot Learning for Crossing-Sentence Relation Classification
TLDR
Both networks aim to learn a transferable deep distance metric that recognizes new relation categories given very few labelled samples, addressing the problem of few-shot relation classification for the cross-sentence task.

References

Showing 1-10 of 53 references.
Prototypical Networks for Few-shot Learning
TLDR
This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
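The core computation is compact: class prototypes are the mean of the embedded support examples, and a query is classified by a softmax over negative distances to the prototypes. A minimal sketch (embedding network omitted, Euclidean distances as in the standard formulation):

# Prototypical-network episode on pre-computed embeddings.
import torch
import torch.nn.functional as F

def prototypical_logits(support, support_labels, queries, num_classes):
    # support: (N, D) embeddings, support_labels: (N,), queries: (M, D)
    prototypes = torch.stack([support[support_labels == c].mean(dim=0)
                              for c in range(num_classes)])        # (num_classes, D)
    return -torch.cdist(queries, prototypes)                       # (M, num_classes) logits

support = torch.randn(25, 64)                                      # 5-way 5-shot support embeddings
support_labels = torch.arange(5).repeat_interleave(5)
queries, query_labels = torch.randn(10, 64), torch.randint(0, 5, (10,))
loss = F.cross_entropy(prototypical_logits(support, support_labels, queries, 5), query_labels)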
Matching Networks for One Shot Learning
TLDR
This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
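The classification rule can be sketched as attention over the support set: a query's label distribution is an attention-weighted mixture of the support labels. The sketch below uses cosine similarity as the attention kernel and omits the paper's full-context embeddings.

# Matching-network style prediction on pre-computed embeddings (simplified).
import torch
import torch.nn.functional as F

def matching_net_predict(support, support_labels, query, num_classes):
    # support: (N, D), support_labels: (N,), query: (D,)
    sims = F.cosine_similarity(query.unsqueeze(0), support)        # (N,) attention logits
    attn = torch.softmax(sims, dim=0)                              # attention over support examples
    one_hot = F.one_hot(support_labels, num_classes).float()       # (N, num_classes)
    return attn @ one_hot                                          # label distribution for the query

support, support_labels = torch.randn(25, 64), torch.arange(5).repeat_interleave(5)
probs = matching_net_predict(support, support_labels, torch.randn(64), num_classes=5)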
An embarrassingly simple approach to zero-shot learning
TLDR
This paper describes a zero-shot learning approach that can be implemented in just one line of code, yet is able to outperform state-of-the-art approaches on standard datasets.
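The "one line" is a closed-form ridge-regression-style solution for a compatibility matrix between features and class attributes; the NumPy sketch below follows the commonly cited form V = (X X^T + gamma I)^{-1} X Y S^T (S S^T + lambda I)^{-1}, with notation and regularizer placement to be checked against the paper.

# ESZSL-style closed form (sketch). X: d x m train features, Y: m x z train labels in {-1, 1},
# S: a x z class-attribute signatures, gamma/lam: regularizers.
import numpy as np

def eszsl_compatibility(X, Y, S, gamma=1.0, lam=1.0):
    d, a = X.shape[0], S.shape[0]
    left = np.linalg.solve(X @ X.T + gamma * np.eye(d), X @ Y @ S.T)   # (d, a)
    return left @ np.linalg.inv(S @ S.T + lam * np.eye(a))             # V: (d, a)

X, Y, S = np.random.randn(64, 100), np.sign(np.random.randn(100, 10)), np.random.randn(5, 10)
V = eszsl_compatibility(X, Y, S)
# Score a sample x against an unseen class with attribute vector s_new: x @ V @ s_new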
Siamese Neural Networks for One-Shot Image Recognition
TLDR
A method for learning siamese neural networks that employ a unique structure to naturally rank similarity between inputs, achieving strong results that exceed those of other deep learning models and approach state-of-the-art performance on one-shot classification tasks.
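The verification pipeline can be sketched as a shared encoder for both images plus a sigmoid over the weighted component-wise L1 distance of their embeddings; the fully connected encoder below is a simplification of the paper's convolutional one.

# Siamese one-shot verification sketch (simplified encoder).
import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    def __init__(self, in_dim=784, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, 1)         # learns per-component weights on |f(a) - f(b)|

    def forward(self, a, b):
        fa, fb = self.encoder(a), self.encoder(b)
        return torch.sigmoid(self.head(torch.abs(fa - fb)))   # probability the pair matches

net = SiameseNet()
p_same = net(torch.randn(4, 784), torch.randn(4, 784))        # (4, 1) match probabilities
# One-shot classification: compare the query with one exemplar per class and take the argmax.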
Synthesized Classifiers for Zero-Shot Learning
TLDR
This work introduces a set of "phantom" object classes whose coordinates live in both the semantic space and the model space and demonstrates superior accuracy of this approach over the state of the art on four benchmark datasets for zero-shot learning.
Learning feed-forward one-shot learners
TLDR
This paper constructs the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar, obtaining an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning-to-learn formulation.
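The feed-forward idea can be pictured as a hypernetwork: the learnet maps a single exemplar to the parameters of a small pupil layer that is then applied to queries with no fine-tuning. The shapes and linear pupil below are illustrative assumptions, not the paper's architecture.

# Hedged sketch of a learnet predicting the weights of a one-layer pupil.
import torch
import torch.nn as nn

class Learnet(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())
        self.predict_w = nn.Linear(128, feat_dim)      # predicted pupil weight vector
        self.predict_b = nn.Linear(128, 1)             # predicted pupil bias

    def forward(self, exemplar, queries):
        h = self.encode(exemplar)                      # (1, 128) from a single exemplar
        w, b = self.predict_w(h), self.predict_b(h)    # pupil parameters, produced feed-forward
        return queries @ w.t() + b                     # pupil applied to query features

learnet = Learnet()
scores = learnet(torch.randn(1, 64), torch.randn(10, 64))   # (10, 1) scores, no per-task training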
Semi-supervised Vocabulary-Informed Learning
  • Yanwei Fu, L. Sigal
  • Computer Science, Mathematics
    2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2016
TLDR
A maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms, ensuring that labeled samples are projected closer, in the embedding space, to their correct prototypes than to others.
Predicting Deep Zero-Shot Convolutional Neural Networks Using Textual Descriptions
TLDR
A new model is presented that can classify unseen categories from their textual descriptions; it takes advantage of the architecture of CNNs and learns features at different layers, rather than just learning an embedding space for both modalities, as is common with existing approaches.
Zero-Shot Learning Through Cross-Modal Transfer
TLDR
This work introduces a model that can recognize objects in images even if no training data is available for the object class, and uses novelty detection methods to differentiate unseen classes from seen classes.