Compare Learning: Bi-Attention Network for Few-Shot Learning

@article{Ke2020CompareLB,
  title={Compare Learning: Bi-Attention Network for Few-Shot Learning},
  author={Li Ke and Meng Pan and Weigao Wen and Dong Li},
  journal={ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2020},
  pages={2233-2237}
}
  • Li Ke, Meng Pan, Weigao Wen, Dong Li
  • Published 1 May 2020
  • Computer Science
  • ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Learning from few labeled data is a key challenge for visual recognition, as deep neural networks tend to overfit when only a few samples are available. Metric learning, one family of few-shot learning methods, addresses this challenge by first learning a deep distance metric that determines whether a pair of images belongs to the same category, and then applying the trained metric to instances from a test set with limited labels. This approach makes the most of the few samples and limits overfitting… 
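A minimal sketch of the pairwise metric-learning idea the abstract describes, not the paper's Bi-Attention Network itself: an encoder embeds each image and a learned metric head scores whether two embeddings share a class. All layer sizes, names, and input shapes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PairwiseMetric(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        # Placeholder encoder; the paper uses a deeper CNN backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Metric head: maps a concatenated pair of embeddings
        # to a same-class score in [0, 1].
        self.metric = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 1), nn.Sigmoid(),
        )

    def forward(self, x_a, x_b):
        z_a, z_b = self.encoder(x_a), self.encoder(x_b)
        return self.metric(torch.cat([z_a, z_b], dim=1))

# Usage: score pairs of 3x84x84 images (a typical miniImageNet size).
model = PairwiseMetric()
score = model(torch.randn(4, 3, 84, 84), torch.randn(4, 3, 84, 84))
print(score.shape)  # torch.Size([4, 1])
```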

Attentive Graph Neural Networks for Few-Shot Learning

This work proposes a novel Attentive GNN (AGNN) to tackle few-shot learning challenges by incorporating a triple-attention mechanism, i.e., node self-attention, neighborhood attention, and layer memory attention.

Epilepsy seizure prediction with few-shot learning method

The proposed few-shot learning method, built on prior knowledge gained from a generalizable method, is tuned with a few new samples from each patient, and results show that its accuracy is higher than that of the generalizable methods.

References

Showing 1-10 of 21 references

Learning to Compare: Relation Network for Few-Shot Learning

A conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples from each, and which is easily extended to zero-shot learning.
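A rough sketch of the relation-score idea behind this reference: query and support embeddings are concatenated and a small "relation module" regresses their similarity. Shapes, layer sizes, and the toy linear encoder are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

feat_dim = 64
embed = nn.Sequential(nn.Linear(3 * 84 * 84, feat_dim), nn.ReLU())  # toy encoder
relation = nn.Sequential(
    nn.Linear(2 * feat_dim, 8), nn.ReLU(),
    nn.Linear(8, 1), nn.Sigmoid(),                 # relation score in [0, 1]
)

support = torch.randn(5, 3 * 84 * 84)              # one example per class (5-way 1-shot)
query = torch.randn(1, 3 * 84 * 84)

q = embed(query)                                   # (1, feat_dim)
s = embed(support)                                 # (5, feat_dim)
pairs = torch.cat([s, q.expand_as(s)], dim=1)      # pair the query with every support class
scores = relation(pairs).squeeze(1)                # (5,) relation scores
print(scores.argmax().item())                      # predicted class index
```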

Siamese Neural Networks for One-Shot Image Recognition

A method for learning siamese neural networks that employ a unique structure to naturally rank similarity between inputs; the approach achieves strong results that exceed those of other deep learning models, with near state-of-the-art performance on one-shot classification tasks.
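A compact sketch of the siamese one-shot idea: a shared encoder embeds both images, and a weighted L1 distance between the embeddings feeds a sigmoid "same class" output. Layer sizes and the flattened linear encoder are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Siamese(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # Shared encoder applied to both inputs (toy stand-in for a CNN).
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(105 * 105, feat_dim), nn.ReLU())
        self.out = nn.Linear(feat_dim, 1)          # weights the per-dimension |difference|

    def forward(self, a, b):
        za, zb = self.encoder(a), self.encoder(b)
        return torch.sigmoid(self.out(torch.abs(za - zb)))

# One-shot classification: compare the query against one exemplar per class
# and pick the most similar.
net = Siamese()
exemplars = torch.randn(20, 1, 105, 105)           # 20-way one-shot (Omniglot-sized)
query = torch.randn(1, 1, 105, 105)
sims = torch.cat([net(query, e.unsqueeze(0)) for e in exemplars])
print(sims.argmax().item())
```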

TADAM: Task dependent adaptive metric for improved few-shot learning

This work identifies that metric scaling and metric task conditioning are important for improving the performance of few-shot algorithms, and proposes and empirically tests a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space.
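A tiny sketch of the metric-scaling ingredient highlighted above: a learnable temperature multiplies the negative squared distance to class prototypes before the softmax. The task-conditioning part of TADAM is omitted, and all tensors here are random placeholders.

```python
import torch

alpha = torch.nn.Parameter(torch.tensor(1.0))      # learnable metric scale
prototypes = torch.randn(5, 64)                    # one prototype per class
query = torch.randn(3, 64)                         # 3 query embeddings

d2 = torch.cdist(query, prototypes) ** 2           # squared Euclidean distances
logits = -alpha * d2                               # scaled logits
probs = torch.softmax(logits, dim=1)
print(probs.shape)                                 # torch.Size([3, 5])
```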

Optimization as a Model for Few-Shot Learning

Matching Networks for One Shot Learning

This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
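A condensed sketch of the matching idea summarized above: the query attends over the labelled support set with a cosine-similarity softmax, and the attention weights mix the support labels into a prediction. The embeddings below are random stand-ins for a learned encoder, and the paper's full-context embedding is omitted.

```python
import torch
import torch.nn.functional as F

support = F.normalize(torch.randn(5, 64), dim=1)   # 5 support embeddings (unit norm)
labels = F.one_hot(torch.arange(5), num_classes=5).float()
query = F.normalize(torch.randn(1, 64), dim=1)

attn = torch.softmax(query @ support.t(), dim=1)   # cosine-similarity attention
pred = attn @ labels                               # weighted vote over support labels
print(pred.argmax(dim=1).item())
```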

MetaGAN: An Adversarial Approach to Few-Shot Learning

This paper proposes a conceptually simple and general framework called MetaGAN for few-shot learning problems, and shows that with this MetaGAN framework, supervised few-shot learning models can be extended to naturally cope with unlabeled data.

Prototypical Networks for Few-shot Learning

This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
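A short sketch of the prototype idea: each class prototype is the mean of its support embeddings, and a query is assigned to the nearest prototype in Euclidean distance. The embeddings are random placeholders for an encoder's output.

```python
import torch

n_way, k_shot, dim = 5, 5, 64
support = torch.randn(n_way, k_shot, dim)          # 5-way 5-shot support embeddings
query = torch.randn(10, dim)                       # 10 query embeddings

prototypes = support.mean(dim=1)                   # (n_way, dim) class means
logits = -torch.cdist(query, prototypes) ** 2      # closer prototype -> higher logit
print(logits.argmax(dim=1))                        # predicted class per query
```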

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
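A minimal residual block in the spirit of the framework summarized above: the stacked layers learn a residual F(x) that is added back to the identity input. Channel counts and layer choices are illustrative.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.body(x))         # identity shortcut + learned residual

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock()(x).shape)                    # torch.Size([1, 64, 32, 32])
```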

Aggregated Residual Transformations for Deep Neural Networks

On the ImageNet-1K dataset, it is empirically shown that, even under the restricted condition of maintained complexity, increasing cardinality improves classification accuracy and is more effective than going deeper or wider when capacity is increased.
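A brief sketch of cardinality realized as grouped convolution: the 3x3 layer is split into 32 parallel groups (the aggregated transformations) inside a bottleneck with an identity shortcut. Channel widths here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    def __init__(self, channels=256, width=128, cardinality=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 1), nn.BatchNorm2d(width), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1, groups=cardinality),  # 32 parallel paths
            nn.BatchNorm2d(width), nn.ReLU(),
            nn.Conv2d(width, channels, 1), nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.body(x))         # aggregated transformations + shortcut

x = torch.randn(1, 256, 14, 14)
print(ResNeXtBlock()(x).shape)                     # torch.Size([1, 256, 14, 14])
```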