A Closer Look at Few-shot Classification
@article{Chen2019ACL, title={A Closer Look at Few-shot Classification}, author={Wei-Yu Chen and Yen-Cheng Liu and Zsolt Kira and Y. Wang and Jia-Bin Huang}, journal={ArXiv}, year={2019}, volume={abs/1904.04232} }
Few-shot classification aims to learn a classifier that recognizes classes unseen during training from only a limited number of labeled examples. In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the performance differences among methods on datasets with limited domain differences, and 2) a modified baseline method that surprisingly achieves competitive performance when compared…
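The abstract only names the modified baseline, but the general transfer-learning recipe such baselines follow (pre-train a backbone on the base classes, then freeze it and fit a small new classifier head on the few labelled support examples of the novel classes) is easy to illustrate. The PyTorch sketch below is an assumption-laden illustration of that recipe, not the paper's exact method; the function name, shapes, and hyperparameters are made up for exposition.

```python
# Hypothetical sketch of the few-shot "baseline" recipe: freeze a backbone that was
# pre-trained on the base classes and train only a new linear head on the support set.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fit_linear_head(backbone: nn.Module,
                    support_x: torch.Tensor,   # [N*K, C, H, W] novel-class support images
                    support_y: torch.Tensor,   # [N*K] labels in 0..N-1
                    n_way: int,
                    steps: int = 100,
                    lr: float = 1e-2) -> nn.Linear:
    backbone.eval()
    with torch.no_grad():
        feats = backbone(support_x)            # [N*K, D] frozen features
    head = nn.Linear(feats.size(1), n_way)
    opt = torch.optim.SGD(head.parameters(), lr=lr, momentum=0.9)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(head(feats), support_y)
        loss.backward()
        opt.step()
    return head

# Queries would then be scored with: logits = head(backbone(query_x))
```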
1,111 Citations
A Closer Look at Few-Shot Video Classification: A New Baseline and Benchmark
- Computer Science, BMVC
- 2021
This paper proposes a simple classifier-based baseline without any temporal alignment that surprisingly outperforms the state-of-the-art meta-learning based methods and presents a new benchmark with more base data to facilitate future few-shot video classification without pre-training.
A Baseline for Few-Shot Image Classification
- Computer Science, ICLR
- 2020
This work performs extensive studies on benchmark datasets to propose a metric that quantifies the "hardness" of a few-shot episode and finds that using a large number of meta-training classes results in high few-shot accuracies even for a large number of few-shot classes.
A Universal Representation Transformer Layer for Few-Shot Image Classification
- Computer Science, ICLR
- 2021
A Universal Representation Transformer (URT) layer is proposed, that meta-learns to leverage universal features for few-shot classification by dynamically re-weighting and composing the most appropriate domain-specific representations.
Meta Generalized Network for Few-Shot Classification
- Computer Science, 2020 25th International Conference on Pattern Recognition (ICPR)
- 2021
This paper develops a meta backbone training method that efficiently learns a flexible feature extractor and a classifier initializer, leading to fast adaptation to unseen few-shot tasks without overfitting, and designs a trainable adaptive interval model that improves the cosine classifier and increases the recognition accuracy of hard examples.
What Makes for Effective Few-shot Point Cloud Classification?
- Computer Science, 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
- 2022
A novel plug-and-play component called the Cross-Instance Adaptation (CIA) module is proposed to address the issues of high intra-class variance and subtle inter-class differences; it can be easily inserted into current baselines with significant performance improvement.
Region Comparison Network for Interpretable Few-shot Image Classification
- Computer Science, ArXiv
- 2020
A metric learning based method named Region Comparison Network (RCN) is proposed, able to reveal how few-shot learning works in a neural network and to find the specific regions that are related to each other in images from the query and support sets.
Boosting Few-Shot Classification with View-Learnable Contrastive Learning
- Computer Science, 2021 IEEE International Conference on Multimedia and Expo (ICME)
- 2021
This work introduces the contrastive loss into few-shot classification for learning latent fine-grained structure in the embedding space and develops a learning-to-learn algorithm to automatically generate different views of the same image.
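The summary only says that a contrastive loss is applied between learned views of the same image. A standard normalized-temperature contrastive (InfoNCE) loss over two views, which is close in spirit but not necessarily the paper's exact objective, can be written as follows.

```python
# Generic InfoNCE-style contrastive loss over two views of the same batch of images.
# This is a standard formulation used for illustration, not the paper's specific loss.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: [B, D] embeddings of two augmented views of the same B images."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                       # [B, B] cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)     # positives on the diagonal
    return F.cross_entropy(logits, targets)
```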
Revisiting Fine-tuning for Few-shot Learning
- Computer Science, ArXiv
- 2019
This study shows that on the commonly used low-resolution mini-ImageNet dataset, the fine-tuning method achieves higher accuracy than common few-shot learning algorithms in the 1-shot task and nearly the same accuracy as the state-of-the-art algorithm in the 5-shot task.
Looking Wider for Better Adaptive Representation in Few-Shot Learning
- Computer Science, AAAI
- 2021
The Cross Non-Local Neural Network (CNL) is proposed for capturing the long-range dependency between the samples and the current task; it extracts task-specific and context-aware features dynamically by strengthening the features of a sample at each position via aggregating information from all positions of the sample itself and the current task.
Novelty-Prepared Few-Shot Classification
- Computer Science, ArXiv
- 2020
This work proposes a novelty-prepared loss function, called self-compacting softmax loss (SSL), for few-shot classification, and shows that SSL leads to a significant improvement over state-of-the-art performance.
References
SHOWING 1-10 OF 33 REFERENCES
Learning to Compare: Relation Network for Few-Shot Learning
- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
A conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples from each, which is easily extended to zero-shot learning.
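As a rough illustration of the "learning to compare" idea, the sketch below replaces a fixed distance metric with a small learned relation module that scores concatenated query and class features. The original method operates on convolutional feature maps and trains the score with an MSE objective; this fully connected version is a simplified assumption.

```python
# Simplified relation-module sketch: a learned network outputs a similarity score for
# each (query, class) pair instead of using a fixed metric. Layer sizes are illustrative.
import torch
import torch.nn as nn

class RelationModule(nn.Module):
    def __init__(self, feat_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),        # relation score in [0, 1]
        )

    def forward(self, query: torch.Tensor, class_feats: torch.Tensor) -> torch.Tensor:
        """query: [Q, D], class_feats: [N, D] -> relation scores [Q, N]."""
        Q, N = query.size(0), class_feats.size(0)
        pairs = torch.cat([
            query.unsqueeze(1).expand(Q, N, -1),        # repeat each query for every class
            class_feats.unsqueeze(0).expand(Q, N, -1),  # repeat each class for every query
        ], dim=-1)                                      # [Q, N, 2D]
        return self.net(pairs).squeeze(-1)              # [Q, N]
```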
Few-Shot Learning with Metric-Agnostic Conditional Embeddings
- Computer Science, ArXiv
- 2018
This work introduces a novel architecture where class representations are conditioned for each few-shot trial based on a target image, and deviates from traditional metric-learning approaches by training a network to perform comparisons between classes rather than relying on a static metric comparison.
Prototypical Networks for Few-shot Learning
- Computer Science, NIPS
- 2017
This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
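The per-episode logic is compact enough to state directly: class prototypes are the means of the support embeddings, and queries are scored by negative squared Euclidean distance to each prototype. The sketch below follows that standard formulation (the embedding network itself is assumed).

```python
# Prototypical-Networks-style episode scoring: prototypes are support-set means and
# logits are negative squared Euclidean distances from queries to the prototypes.
import torch
import torch.nn.functional as F

def proto_logits(support_z: torch.Tensor,   # [N, K, D] embedded support examples
                 query_z: torch.Tensor      # [Q, D] embedded query examples
                 ) -> torch.Tensor:
    prototypes = support_z.mean(dim=1)               # [N, D] one prototype per class
    dists = torch.cdist(query_z, prototypes) ** 2    # [Q, N] squared distances
    return -dists                                    # larger logit = closer prototype

# Episodic training would then apply, e.g.:
# loss = F.cross_entropy(proto_logits(support_z, query_z), query_labels)
```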
Matching Networks for One Shot Learning
- Computer Science, NIPS
- 2016
This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
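The readout described here can be summarized as attention over the support set: a softmax over query-support similarities is used to mix the support labels into a predicted distribution. The sketch below uses plain cosine similarity and omits the full context-embedding machinery, so it is only an approximation of the method.

```python
# Matching-Networks-style readout: label probabilities for each query are an
# attention-weighted combination of the support labels. Context embeddings omitted.
import torch
import torch.nn.functional as F

def matching_probs(support_z: torch.Tensor,   # [S, D] support embeddings
                   support_y: torch.Tensor,   # [S] support labels in 0..N-1
                   query_z: torch.Tensor,     # [Q, D] query embeddings
                   n_way: int) -> torch.Tensor:
    sims = F.normalize(query_z, dim=1) @ F.normalize(support_z, dim=1).t()  # [Q, S]
    attn = sims.softmax(dim=1)                                              # attention over support
    one_hot = F.one_hot(support_y, n_way).float()                           # [S, N]
    return attn @ one_hot                                                   # [Q, N] probabilities
```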
Dynamic Few-Shot Visual Learning Without Forgetting
- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
This work proposes to extend an object recognition system with an attention based few-shot classification weight generator, and to redesign the classifier of a ConvNet model as the cosine similarity function between feature representations and classification weight vectors.
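The cosine-similarity classifier mentioned here is small enough to sketch: logits are scaled cosine similarities between L2-normalized features and L2-normalized class weight vectors. The scale value and initialization below are illustrative choices, and the attention-based weight generator for novel classes is omitted.

```python
# Cosine-similarity classifier head: logits = scale * cos(feature, class weight vector).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    def __init__(self, feat_dim: int, n_classes: int, scale: float = 10.0):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(n_classes, feat_dim))
        self.scale = scale

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        feats = F.normalize(feats, dim=1)              # [B, D] unit-norm features
        weights = F.normalize(self.weight, dim=1)      # [C, D] unit-norm class vectors
        return self.scale * feats @ weights.t()        # [B, C] scaled cosine logits
```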
Siamese Neural Networks for One-Shot Image Recognition
- Computer Science
- 2015
A method for learning siamese neural networks that employ a unique structure to naturally rank similarity between inputs, achieving strong results that exceed those of other deep learning models and approach state-of-the-art performance on one-shot classification tasks.
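The verification setup can be sketched as a shared-weight embedding network followed by a binary same/different head on the feature difference; the exact architecture and training details are assumptions here, not the original configuration.

```python
# Siamese verification sketch: embed both images with the same backbone and predict
# whether they belong to the same class from the absolute feature difference.
import torch
import torch.nn as nn

class SiameseHead(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone                 # weights shared between both inputs
        self.out = nn.Linear(feat_dim, 1)        # logit for "same class"

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        z1, z2 = self.backbone(x1), self.backbone(x2)
        return self.out(torch.abs(z1 - z2))

# One-shot use: compare the query against one labelled example per class and pick the
# class whose pair receives the highest "same class" score (train with BCEWithLogitsLoss).
```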
Domain Adaption in One-Shot Learning
- Computer Science, ECML/PKDD
- 2018
This paper proposes a domain adaptation framework based on adversarial networks, generalized for situations where the source and target domains have different labels, and uses a policy network, inspired by human learning behaviors, to effectively select samples from the source domain during training.
Few-Shot Adversarial Domain Adaptation
- Computer Science, NIPS
- 2017
This work provides a framework for addressing the problem of supervised domain adaptation with deep models by carefully designing a training scheme whereby the typical binary adversarial discriminator is augmented to distinguish between four different classes.
Deep Transfer Metric Learning
- Computer Science, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2015
This paper proposes a new deep transfer metric learning (DTML) method to learn a set of hierarchical nonlinear transformations for cross-domain visual recognition by transferring discriminative knowledge from the labeled source domain to the unlabeled target domain.