Corpus ID: 155100134

LGM-Net: Learning to Generate Matching Networks for Few-Shot Learning

@article{Li2019LGMNetLT,
  title={LGM-Net: Learning to Generate Matching Networks for Few-Shot Learning},
  author={Huaiyu Li and Weiming Dong and Xing Mei and Chongyang Ma and Feiyue Huang and Bao-Gang Hu},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.06331}
}
In this work, we propose a novel meta-learning approach for few-shot classification, which learns transferable prior knowledge across tasks and directly produces network parameters for similar unseen tasks with training samples. Our approach, called LGM-Net, includes two key modules, namely, TargetNet and MetaNet. The TargetNet module is a neural network for solving a specific task and the MetaNet module aims at learning to generate functional weights for TargetNet by observing training samples…
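The two-module idea in the abstract can be sketched in plain numpy: a hypothetical `meta_net` maps a task embedding (here simply the mean of the support-set features) to the weights of a small linear `target_net`. All names, shapes, and the fixed random meta-mapping are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT, N_WAY = 16, 5  # illustrative dimensions, not the paper's

# Hypothetical MetaNet: a fixed random linear map from the task
# embedding to a flattened (FEAT x N_WAY) weight matrix for TargetNet.
META_W = rng.normal(scale=0.1, size=(FEAT, FEAT * N_WAY))

def meta_net(support_feats: np.ndarray) -> np.ndarray:
    """Generate TargetNet weights by observing the support set."""
    task_embedding = support_feats.mean(axis=0)          # (FEAT,)
    return (task_embedding @ META_W).reshape(FEAT, N_WAY)

def target_net(x: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Classify queries with the generated (functional) weights."""
    logits = x @ W                                       # (n, N_WAY)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)              # softmax

support = rng.normal(size=(25, FEAT))    # a 5-way 5-shot support set
queries = rng.normal(size=(3, FEAT))
W = meta_net(support)                    # weights produced per task
probs = target_net(queries, W)
print(probs.shape)                       # (3, 5)
```

The point of the sketch is the data flow: no gradient steps are taken on the new task; the classifier's weights are a direct function of the support set.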
Citations

Meta-Transfer Learning through Hard Tasks
TLDR: This work proposes a novel approach called meta-transfer learning (MTL), which learns to transfer the weights of a deep NN for few-shot learning tasks, and introduces the hard task (HT) meta-batch scheme as an effective learning curriculum of few-shot classification tasks.
Revisiting Unsupervised Meta-Learning: Amplifying or Compensating for the Characteristics of Few-Shot Tasks
TLDR: This work finds that base class labels are not necessary and that discriminative embeddings can be meta-learned in an unsupervised manner; two modifications, a semi-normalized distance metric and sufficient sampling, improve unsupervised meta-learning (UML) significantly.
Meta-Generating Deep Attentive Metric for Few-shot Classification
TLDR: This study presents a novel deep metric meta-generation method that takes an orthogonal direction, i.e., learning to adaptively generate a specific metric for a new FSL task based on the task description, producing a discriminative metric for each task with strong generalization performance.
Trainable Class Prototypes for Few-Shot Learning
TLDR: This paper proposes trainable prototypes for the distance measure, instead of artificial ones, within a meta-training and task-training framework, and adopts non-episodic meta-training based on self-supervised learning.
Revisiting Metric Learning for Few-Shot Image Classification
TLDR: This work revisits the classical triplet network from deep metric learning and extends it into a deep K-tuplet network for few-shot learning, exploiting the relationships among input samples to learn a general representation via episodic training.
Prototype Completion for Few-Shot Learning
Baoquan Zhang, Xutao Li, Yunming Ye, Shanshan Feng (ArXiv, 2021)
TLDR: A novel prototype completion based meta-learning framework that introduces primitive knowledge, extracts representative features for seen attributes as priors, and develops a Gaussian-based prototype fusion strategy that fuses mean-based and completed prototypes by exploiting unlabeled samples.
ATRM: Attention-based Task-level Relation Module for GNN-based Few-shot Learning
TLDR: This work proposes a new relation measure, the attention-based task-level relation module (ATRM), to explicitly model the task-level relation of one sample to all the others, and demonstrates that the proposed module is effective for GNN-based few-shot learning.
Zero-shot task adaptation by homoiconic meta-mapping
TLDR: This work draws inspiration from functional programming and recent work in meta-learning to propose a class of homoiconic meta-mapping approaches that represent data points and tasks in a shared latent space and learn to infer transformations of that space.
TAdaNet: Task-Adaptive Network for Graph-Enriched Meta-Learning
TLDR: A task-adaptive network (TAdaNet) that makes use of a domain-knowledge graph to enrich data representations and provide task-specific customization, resulting in a task-adaptive metric space for classification.
Few-Shot Image Classification via Contrastive Self-Supervised Learning
TLDR: This paper solves few-shot tasks in two phases: meta-training a transferable feature extractor via contrastive self-supervised learning, then training a classifier using graph aggregation, self-distillation, and manifold augmentation.
…

References

Showing 1-10 of 41 references
Learning to Compare: Relation Network for Few-Shot Learning
TLDR: A conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples from each; it is easily extended to zero-shot learning.
Matching Networks for One Shot Learning
TLDR: This work employs ideas from metric learning based on deep neural features, and from recent advances that augment neural networks with external memories, to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
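The core matching idea can be illustrated in a few lines: a query's label distribution is an attention-weighted sum of the support labels, with cosine similarity as the attention kernel. This is a simplified sketch assuming embeddings are precomputed feature vectors; the `matching_net_predict` name is ours, not the paper's.

```python
import numpy as np

def matching_net_predict(support_x, support_y, query, n_classes):
    """Predict a label distribution for one query as an
    attention-weighted sum of one-hot support labels."""
    def unit(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    sims = unit(support_x) @ unit(query)           # cosine similarities
    attn = np.exp(sims) / np.exp(sims).sum()       # softmax attention
    one_hot = np.eye(n_classes)[support_y]         # (n_support, n_classes)
    return attn @ one_hot                          # label distribution

support_x = np.array([[1.0, 0.0], [0.0, 1.0]])     # one example per class
support_y = np.array([0, 1])
query = np.array([0.9, 0.1])                       # closest to class 0
probs = matching_net_predict(support_x, support_y, query, n_classes=2)
print(probs.argmax())                              # 0
```

Because prediction is a pure function of the support set, new classes are handled at test time without any fine-tuning, which is the property the summary above highlights.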
Meta-learning autoencoders for few-shot prediction
TLDR: For previously unseen tasks, without additional training, this Meta-Learning Autoencoder (MeLA) framework builds models that closely match the true underlying models, with loss significantly lower than fine-tuned baseline networks and performance that compares favorably with state-of-the-art meta-learning algorithms.
Optimization as a Model for Few-Shot Learning
Meta-SGD: Learning to Learn Quickly for Few-Shot Learning
TLDR: Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems.
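The gradient-based adaptation described above can be sketched on a toy family of 1-D regression tasks (y = a·x with a drawn per task). This uses the first-order approximation of the meta-gradient, not full MAML's second-order term, and the analytic gradient of squared error for a scalar model; learning rates and task distribution are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(theta, x, y):
    """Gradient of mean squared error for the scalar model y_hat = theta * x."""
    return np.mean(2.0 * (theta * x - y) * x)

theta, alpha, beta = 0.0, 0.05, 0.05   # meta-parameter, inner/outer step sizes
for _ in range(2000):
    a = rng.uniform(1.0, 3.0)                  # sample a task: y = a * x
    x_s, x_q = rng.normal(size=10), rng.normal(size=10)
    y_s, y_q = a * x_s, a * x_q
    theta_task = theta - alpha * grad(theta, x_s, y_s)   # inner adaptation
    # First-order meta-update: treat theta_task as if it did not
    # depend on theta, and apply its query-set gradient to theta.
    theta -= beta * grad(theta_task, x_q, y_q)

# theta converges toward an initialization from which one inner step
# adapts well to any task slope in [1, 3], i.e. near their mean of 2.
print(theta)
```

The meta-parameter itself is never a good model for any single task; it is optimized so that one gradient step on a new task's support data yields one, which is the "fast adaptation" property in the title.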
Prototypical Networks for Few-shot Learning
TLDR: This work proposes Prototypical Networks for few-shot classification and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
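The prototypical idea is concrete enough to state in code: each class prototype is the mean of that class's support embeddings, and a query is assigned to the nearest prototype. A minimal sketch, assuming embeddings are already-computed feature vectors:

```python
import numpy as np

def prototypes(support_x, support_y, n_classes):
    """Class prototype = mean of that class's support embeddings."""
    return np.stack([support_x[support_y == c].mean(axis=0)
                     for c in range(n_classes)])

def proto_classify(query, protos):
    """Assign the query to the nearest prototype (Euclidean distance)."""
    dists = np.linalg.norm(protos - query, axis=1)
    return int(dists.argmin())

support_x = np.array([[0.0, 0.0], [0.2, 0.1],    # class 0 cluster
                      [3.0, 3.0], [3.1, 2.9]])   # class 1 cluster
support_y = np.array([0, 0, 1, 1])
protos = prototypes(support_x, support_y, n_classes=2)
print(proto_classify(np.array([0.1, 0.2]), protos))  # 0
print(proto_classify(np.array([2.8, 3.2]), protos))  # 1
```

This is one of the "simple design decisions" the summary refers to: a single mean per class, with no per-task training at test time.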
Few-Shot Image Recognition by Predicting Parameters from Activations
TLDR: A novel method that adapts a pre-trained neural network to novel categories by directly predicting the parameters from the activations, achieving state-of-the-art classification accuracy on novel categories by a significant margin while keeping comparable performance on the large-scale categories.
Siamese Neural Networks for One-Shot Image Recognition
TLDR: A method for learning siamese neural networks, which employ a unique structure to naturally rank similarity between inputs, achieving strong results that exceed those of other deep learning models, with near state-of-the-art performance on one-shot classification tasks.
Dynamic Few-Shot Visual Learning Without Forgetting
TLDR: This work proposes to extend an object recognition system with an attention-based few-shot classification weight generator, and to redesign the classifier of a ConvNet model as the cosine similarity function between feature representations and classification weight vectors.