Corpus ID: 236428320

Transductive Maximum Margin Classifier for Few-Shot Learning

@article{Pan2021TransductiveMM,
  title={Transductive Maximum Margin Classifier for Few-Shot Learning},
  author={Fei Pan and Chunlei Xu and Jie Guo and Yanwen Guo},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.11975}
}
Few-shot learning aims to train a classifier that generalizes well when only a small number of labeled samples per class is given. We introduce the Transductive Maximum Margin Classifier (TMMC) for few-shot learning. The basic idea of the classical maximum margin classifier is to find an optimal prediction function whose corresponding separating hyperplane correctly divides the training data and yields the classifier with the largest geometric margin. In few-shot learning scenarios…
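For context, the classical (hard-margin) maximum margin classifier that the abstract refers to is the standard SVM primal problem. The formulation below is textbook background, not the paper's exact transductive objective, which the truncated abstract does not show:

```latex
\min_{\mathbf{w},\, b} \ \frac{1}{2}\lVert \mathbf{w} \rVert^{2}
\quad \text{s.t.} \quad
y_i \left( \mathbf{w}^{\top} \mathbf{x}_i + b \right) \ge 1,
\qquad i = 1, \dots, n.
```

A transductive variant, in the spirit of transductive SVMs, would additionally treat the labels of the unlabeled query points as optimization variables, so that the margin is maximized jointly over support and query samples.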


References

Showing 1-10 of 62 references
Learning to Propagate Labels: Transductive Propagation Network for Few-Shot Learning
This paper proposes Transductive Propagation Network (TPN), a novel meta-learning framework for transductive inference that classifies the entire test set at once to alleviate the low-data problem.
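For illustration, below is a minimal sketch of the classic label-propagation scheme (Zhou et al., 2004) that TPN builds its transductive inference on. The fixed Gaussian bandwidth sigma is an assumption for simplicity; TPN itself learns an example-wise length-scale with a small network:

```python
import numpy as np

def propagate_labels(X_support, y_support, X_query, n_classes,
                     sigma=1.0, alpha=0.99, n_iter=50):
    """Generic label propagation over the union of support and query points.

    A simplified stand-in for TPN's transductive inference step; the graph
    construction here (fixed-bandwidth Gaussian affinity) is an assumption,
    not TPN's learned graph.
    """
    X = np.vstack([X_support, X_query])
    n = len(X)

    # Gaussian affinity matrix, zeroed on the diagonal.
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq_dists / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)

    # Symmetric normalization: S = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    # One-hot labels for the support set, zeros for the query set.
    Y = np.zeros((n, n_classes))
    Y[np.arange(len(y_support)), y_support] = 1.0

    # Iterate F <- alpha * S F + (1 - alpha) * Y; converges since alpha < 1.
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y

    # Predicted labels for the query points only.
    return F[len(y_support):].argmax(axis=1)
```

Classifying all query points through one shared graph is what lets the method label the entire test set at once, as the summary above describes.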
Cross Attention Network for Few-shot Classification
A novel Cross Attention Network is introduced to deal with the problem of unseen classes, and a transductive inference algorithm is proposed to alleviate the low-data problem; it iteratively utilizes the unlabeled query set to augment the support set, thereby making the class features more representative.
Meta-Learning With Differentiable Convex Optimization
The objective is to learn feature embeddings that generalize well under a linear classification rule for novel categories; this work exploits two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem and the dual formulation of the optimization problem.
Meta-Learning for Semi-Supervised Few-Shot Classification
This work proposes novel extensions of Prototypical Networks that are augmented with the ability to use unlabeled examples when producing prototypes, and confirms that these models can learn to improve their predictions thanks to unlabeled examples, much like a semi-supervised algorithm would.
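As a concrete illustration of that idea, here is a minimal sketch of one soft k-means refinement step over prototypes, assuming embeddings are given as NumPy arrays; the paper's masked and distractor-aware variants are not reproduced:

```python
import numpy as np

def refined_prototypes(X_support, y_support, X_unlabeled, n_classes):
    """Class prototypes as support means, refined with soft assignments
    of unlabeled embeddings (one soft k-means step)."""
    # Initial prototypes: per-class mean of the labeled support embeddings.
    protos = np.stack([X_support[y_support == c].mean(axis=0)
                       for c in range(n_classes)])

    # Soft-assign each unlabeled point to classes via a softmax over
    # negative squared Euclidean distances to the prototypes.
    dists = ((X_unlabeled[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    logits = -dists
    z = np.exp(logits - logits.max(axis=1, keepdims=True))
    soft = z / z.sum(axis=1, keepdims=True)   # shape (n_unlabeled, n_classes)

    # Refined prototype: weighted mean over labeled points (weight 1)
    # and unlabeled points (their soft assignment weights).
    counts = np.bincount(y_support, minlength=n_classes).astype(float)
    sums = protos * counts[:, None] + soft.T @ X_unlabeled
    return sums / (counts + soft.sum(axis=0))[:, None]
```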
A Closer Look at Few-shot Classification
The results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones, and a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.
Boosting Few-Shot Visual Learning With Self-Supervision
This work uses self-supervision as an auxiliary task in a few-shot learning pipeline, enabling feature extractors to learn richer and more transferable visual representations while still using few annotated samples.
Discriminative k-shot learning using probabilistic models
It is shown that even a simple probabilistic model achieves state-of-the-art results on a standard k-shot learning dataset by a large margin, is able to accurately model uncertainty, leading to well-calibrated classifiers, and is easily extensible and flexible, unlike many recent approaches to k-shot learning.
Learning to Compare: Relation Network for Few-Shot Learning
A conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples from each; the framework is easily extended to zero-shot learning.
TADAM: Task dependent adaptive metric for improved few-shot learning
This work identifies that metric scaling and metric task conditioning are important for improving the performance of few-shot algorithms, and proposes and empirically tests a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space.
Transductive Episodic-Wise Adaptive Metric for Few-Shot Learning
A Transductive Episodic-wise Adaptive Metric (TEAM) framework for few-shot learning is proposed by integrating the meta-learning paradigm with both deep metric learning and transductive inference; it leverages an attention-based bi-directional similarity strategy for extracting a more robust relationship between queries and prototypes.