Dynamic Few-Shot Visual Learning Without Forgetting

@inproceedings{Gidaris2018DynamicFV,
  title={Dynamic Few-Shot Visual Learning Without Forgetting},
  author={Spyros Gidaris and Nikos Komodakis},
  booktitle={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018},
  pages={4367--4375}
}
  • Published 25 April 2018
The human visual system has the remarkable ability to effortlessly learn novel concepts from only a few examples. [...] Key Method: The latter, apart from unifying the recognition of both novel and base categories, also leads to feature representations that generalize better on "unseen" categories. We extensively evaluate our approach on Mini-ImageNet, where we manage to improve the prior state-of-the-art on few-shot recognition (i.e., we achieve 56.20% and 73.00% on the 1-shot and 5-shot settings [...]).
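A minimal sketch of the kind of unified classifier the abstract alludes to: novel-class weights built from a few support embeddings and scored alongside base weights with cosine similarity. This is an illustration, not the paper's implementation — the function names, the `tau` scaling value, and using plain class-mean features as novel weights are assumptions (the paper's actual weight generator is attention-based):

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize vectors to unit length along the given axis."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def cosine_classifier(features, weights, tau=10.0):
    """Score (n, d) features against (k, d) class weights by scaled cosine
    similarity; returns (n, k) logits. tau is a hypothetical scaling factor."""
    return tau * l2_normalize(features) @ l2_normalize(weights).T

def imprint_novel_weights(support_features, support_labels, num_novel):
    """Build novel-class weights as the mean embedding of each class's few
    support examples (a simple stand-in for a learned weight generator)."""
    return np.stack([
        support_features[support_labels == c].mean(axis=0)
        for c in range(num_novel)
    ])
```

Because base and novel weights are scored by the same cosine rule, the two sets of categories can be recognized by one classifier, which is the unification the abstract describes.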
Improving the Generalised Few-shot Learning by Semantic Information
  • Liang Bai, Haoran Wang, Yanming Guo
  • Computer Science
  • 2020 6th International Conference on Big Data and Information Analytics (BigDIA)
  • 2020
TLDR
This paper proposes a two-head model comprising visual and textual learning branches, combines them with a simple weighted fusion technique, and surpasses the previous state-of-the-art methods in this setting.
SEGA: Semantic Guided Attention on Visual Prototype for Few-Shot Learning
TLDR
The SEmantic Guided Attention (SEGA) mechanism is proposed, in which semantic knowledge guides visual perception in a top-down manner, indicating which visual features should be attended to when distinguishing a category from the others.
Few-Shot Few-Shot Learning and the role of Spatial Attention
TLDR
The representation is obtained from a classifier pre-trained on a large-scale dataset from a different domain, assuming no access to its training process; the base class data are limited to a few examples per class, and their role is to adapt the representation to the domain at hand rather than to learn from scratch.
Few-Shot Class-Incremental Learning
TLDR
This paper proposes the TOpology-Preserving knowledge InCrementer (TOPIC) framework, which mitigates the forgetting of the old classes by stabilizing NG's topology and improves the representation learning for few-shot new classes by growing and adapting NG to new training samples.
Multi-domain few-shot image recognition with knowledge transfer
TLDR
This work proposes a model that can adaptively integrate visual and semantic information to recognize novel categories and adopts a fine-tuning strategy to adjust the scale and shift parameters of the batch normalization layers to simulate various feature distributions under different domains.
Few-Shot Incremental Learning with Continually Evolved Classifiers
TLDR
This paper adopts a simple but effective decoupled learning strategy for representations and classifiers, in which only the classifiers are updated in each incremental session, avoiding knowledge forgetting in the representations, and proposes a Continually Evolved Classifier (CEC) that employs a graph model to propagate context information between classifiers for adaptation.
Few-Shot Image Recognition With Knowledge Transfer
TLDR
A novel Knowledge Transfer Network architecture (KTN) for few-shot image recognition that jointly incorporates visual feature learning, knowledge inference, and classifier learning into one unified framework for their optimal compatibility.
Revisiting Metric Learning for Few-Shot Image Classification
TLDR
This work revisits the classical triplet network from deep metric learning and extends it into a deep K-tuplet network for few-shot learning, utilizing the relationships among the input samples to learn a general representation via episodic training.
Few-shot Learning with Weakly-supervised Object Localization
TLDR
This paper designs a triplet-input module to obtain the initial object seeds and an Image-To-Class-Distance based localizer to activate the deep descriptors of the key objects, thus obtaining the more discriminative representations used to perform few-shot classification.
Adaptive Learning Knowledge Networks for Few-Shot Learning
TLDR
This paper proposes a new framework called Adaptive Learning Knowledge Networks (ALKN) for few-shot learning, which learns the knowledge of different classes from the features of labeled samples and stores it in a memory that is dynamically updated during the learning process.

References

Showing 1–10 of 39 references
Low-Shot Visual Recognition by Shrinking and Hallucinating Features
TLDR
This work presents a low-shot learning benchmark on complex images that mimics challenges faced by recognition systems in the wild, and proposes representation regularization techniques as well as techniques to hallucinate additional training examples for data-starved classes.
Low-Shot Learning from Imaginary Data
TLDR
This work builds on recent progress in meta-learning by combining a meta-learner with a "hallucinator" that produces additional training examples, and optimizing both models jointly, yielding state-of-the-art performance on the challenging ImageNet low-shot classification benchmark.
Low-Shot Learning with Imprinted Weights
TLDR
The process is called weight imprinting because it directly sets the weights for a new category from an appropriately scaled copy of the embedding-layer activations for that training example, providing immediately good classification performance and an initialization for any further fine-tuning.
Matching Networks for One Shot Learning
TLDR
This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
Siamese Neural Networks for One-Shot Image Recognition
TLDR
A method is presented for learning siamese neural networks, which employ a unique structure to naturally rank similarity between inputs; it achieves strong results that exceed those of other deep learning models, with near state-of-the-art performance on one-shot classification tasks.
Prototypical Networks for Few-shot Learning
TLDR
This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
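The TLDR above — class prototypes as mean embeddings, with queries classified by distance to the nearest prototype — can be sketched in a few lines. Assuming embeddings are already computed (the NumPy arrays here are hypothetical stand-ins for a network's features):

```python
import numpy as np

def prototypes(support, labels):
    """Compute one prototype per class as the mean of that class's
    embedded support examples. Returns (classes, (k, d) prototype array)."""
    classes = np.unique(labels)
    protos = np.stack([support[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(queries, protos):
    """Assign each (n, d) query to the index of its nearest prototype
    under squared Euclidean distance, as in Prototypical Networks."""
    dists = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)
```

The simplicity is the point made in the TLDR: a class mean plus a fixed distance metric, with no per-episode fine-tuning, is a strong few-shot baseline.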
Optimization as a Model for Few-Shot Learning
iCaRL: Incremental Classifier and Representation Learning
TLDR
iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail; its ability to learn data representations distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures.
Learning Deep Features for Scene Recognition using Places Database
TLDR
A new scene-centric database called Places with over 7 million labeled pictures of scenes is introduced with new methods to compare the density and diversity of image datasets and it is shown that Places is as dense as other scene datasets and has more diversity.