Iterative label cleaning for transductive and semi-supervised few-shot learning

@inproceedings{Lazarou2021IterativeLC,
  title={Iterative label cleaning for transductive and semi-supervised few-shot learning},
  author={Michalis Lazarou and Yannis Avrithis and Tania Stathaki},
  booktitle={2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021},
  pages={8731-8740}
}
Few-shot learning amounts to learning representations and acquiring knowledge such that novel tasks may be solved with both supervision and data being limited. Improved performance is possible by transductive inference, where the entire test set is available concurrently, and semi-supervised learning, where more unlabeled data is available. Focusing on these two settings, we introduce a new algorithm that leverages the manifold structure of the labeled and unlabeled data distribution to predict…
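The abstract describes propagating labels over the manifold structure of labeled and unlabeled features. A minimal sketch of that kind of graph-based label propagation is below; the cosine k-NN graph, the value of alpha, and the closed-form solve are illustrative assumptions, not necessarily the paper's exact choices.

```python
import numpy as np

def propagate_labels(features, labels, n_classes, k=10, alpha=0.9):
    """Graph-based label propagation over a k-NN feature graph.

    features: (n, d) array with the labeled examples in the first rows.
    labels:   (n_labeled,) integer class labels for those rows.
    Returns pseudo-labels and per-class scores for every example.
    """
    n = features.shape[0]
    # Cosine-similarity k-NN affinity matrix (illustrative choice).
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, 0.0)
    W = np.zeros_like(sim)
    rows = np.arange(n)[:, None]
    idx = np.argsort(-sim, axis=1)[:, :k]      # k most similar neighbors
    W[rows, idx] = sim[rows, idx]
    W = np.maximum(W, W.T)                     # symmetrize the graph
    # Symmetrically normalized adjacency S = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # One-hot seeds for labeled rows; zeros for unlabeled rows.
    Y = np.zeros((n, n_classes))
    Y[np.arange(len(labels)), labels] = 1.0
    # Closed-form propagation: F = (I - alpha * S)^{-1} Y.
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)
    return F.argmax(axis=1), F
```

The "iterative label cleaning" of the title then amounts to keeping only the most confidently propagated pseudo-labels and repeating; the sketch stops at the propagation step.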
Tensor feature hallucination for few-shot learning
TLDR
It is shown that: (1) using a simple loss function is more than enough for training a feature generator in the few-shot setting; and (2) learning to generate tensor features instead of vector features is superior.
Few-shot learning via tensor hallucination
TLDR
It is shown that using a simple loss function is more than enough for training a feature generator in the few-shot setting, and learning to generate tensor features instead of vector features is superior.
EASY: Ensemble Augmented-Shot Y-shaped Learning: State-Of-The-Art Few-Shot Classification with Simple Ingredients
TLDR
This work proposes a simple methodology that reaches or even beats state-of-the-art performance on multiple standardized benchmarks of the field, while adding almost no hyperparameters or parameters to those used for training the initial deep learning models on the generic dataset.
Label Hallucination for Few-Shot Classification
TLDR
This paper pseudo-labels the entire large dataset using the linear classifier trained on the novel classes, and finetunes the entire model with a distillation loss on the pseudo-labeled base examples, in addition to the standard cross-entropy loss on the novel dataset.
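A hedged sketch of the two-term objective this summary describes, assuming PyTorch; the function names, the temperature T, and the weight lam are illustrative, not the paper's exact hyperparameters.

```python
import torch
import torch.nn.functional as F

def label_hallucination_loss(novel_logits, novel_targets,
                             base_logits, teacher_logits, T=4.0, lam=0.5):
    """Cross-entropy on novel classes plus distillation on pseudo-labeled base data.

    novel_*:        mini-batch from the few-shot novel classes (hard labels).
    base_logits:    student predictions on pseudo-labeled base examples.
    teacher_logits: stored predictions of the novel-class linear classifier
                    that produced the pseudo-labels.
    """
    # Standard cross-entropy on the labeled novel examples.
    ce = F.cross_entropy(novel_logits, novel_targets)
    # Temperature-scaled distillation toward the soft pseudo-labels.
    log_p = F.log_softmax(base_logits / T, dim=1)
    q = F.softmax(teacher_logits / T, dim=1)
    kd = F.kl_div(log_p, q, reduction="batchmean") * (T * T)
    return ce + lam * kd
```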

References

SHOWING 1-10 OF 82 REFERENCES
Learning to Propagate Labels: Transductive Propagation Network for Few-Shot Learning
TLDR
This paper proposes Transductive Propagation Network (TPN), a novel meta-learning framework for transductive inference that classifies the entire test set at once to alleviate the low-data problem.
Label Propagation for Deep Semi-Supervised Learning
TLDR
This work employs a transductive label propagation method that is based on the manifold assumption to make predictions on the entire dataset and use these predictions to generate pseudo-labels for the unlabeled data and train a deep neural network.
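One plausible way to train on such propagated pseudo-labels is to down-weight uncertain ones; the entropy-based per-example weight below is in the spirit of this line of work, but the exact weighting scheme is an assumption.

```python
import math
import torch
import torch.nn.functional as F

def weighted_pseudo_label_loss(logits, pseudo_labels, scores):
    """Cross-entropy on pseudo-labels, weighted by prediction certainty.

    logits:        (m, c) network outputs on unlabeled examples.
    pseudo_labels: (m,) labels obtained from label propagation.
    scores:        (m, c) nonnegative propagation scores per class.
    """
    # Normalize scores to a distribution and measure its entropy.
    p = scores / scores.sum(dim=1, keepdim=True).clamp(min=1e-12)
    entropy = -(p * p.clamp(min=1e-12).log()).sum(dim=1)
    # Certainty weight in [0, 1]: 1 for peaked scores, 0 for uniform ones.
    w = 1.0 - entropy / math.log(p.shape[1])
    per_example = F.cross_entropy(logits, pseudo_labels, reduction="none")
    return (w * per_example).mean()
```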
Learning to Self-Train for Semi-Supervised Few-Shot Classification
TLDR
A novel semi-supervised meta-learning method called learning to self-train (LST) is proposed that leverages unlabeled data and specifically meta-learns how to cherry-pick and label such unlabeled data to further improve performance.
Charting the Right Manifold: Manifold Mixup for Few-shot Learning
TLDR
This work observes that regularizing the feature manifold, enriched via self-supervised techniques, with Manifold Mixup significantly improves few-shot learning performance; the proposed method, S2M2, beats the current state-of-the-art accuracy on standard few-shot learning datasets.
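For concreteness, a minimal Manifold Mixup training step; splitting the network into encoder_front/encoder_back at a fixed layer and alpha=2.0 are illustrative simplifications (the method normally picks the mixing layer at random).

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def manifold_mixup_step(encoder_front, encoder_back, x, y, alpha=2.0):
    """One training step that mixes hidden representations, not inputs.

    encoder_front: layers up to the chosen hidden layer.
    encoder_back:  remaining layers, ending in class logits.
    """
    lam = Beta(alpha, alpha).sample().item()   # mixing coefficient
    h = encoder_front(x)                       # hidden features
    perm = torch.randperm(x.size(0))           # random pairing of the batch
    h_mix = lam * h + (1.0 - lam) * h[perm]    # mix in feature space
    logits = encoder_back(h_mix)
    # Loss is the matching convex combination of the two targets.
    return lam * F.cross_entropy(logits, y) + \
           (1.0 - lam) * F.cross_entropy(logits, y[perm])
```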
Meta-Learning for Semi-Supervised Few-Shot Classification
TLDR
This work proposes novel extensions of Prototypical Networks that are augmented with the ability to use unlabeled examples when producing prototypes, and confirms that these models can learn to improve their predictions due to unlabeled examples, much like a semi-supervised algorithm would.
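A sketch of the prototype refinement this summary describes, as a soft k-means update over the unlabeled embeddings; treating each initial prototype as carrying unit weight is a simplifying assumption.

```python
import torch

def refine_prototypes(protos, unlabeled, n_iters=1):
    """Refine class prototypes using soft assignments of unlabeled examples.

    protos:    (c, d) prototypes computed from the labeled support set.
    unlabeled: (m, d) embeddings of unlabeled examples.
    """
    for _ in range(n_iters):
        # Soft-assign each unlabeled point to prototypes by distance.
        dists = torch.cdist(unlabeled, protos)      # (m, c)
        assign = torch.softmax(-dists, dim=1)       # (m, c)
        # Weighted mean of the old prototype (weight 1) and assigned points.
        num = protos + assign.t() @ unlabeled       # (c, d)
        den = 1.0 + assign.sum(dim=0).unsqueeze(1)  # (c, 1)
        protos = num / den
    return protos
```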
TransMatch: A Transfer-Learning Scheme for Semi-Supervised Few-Shot Learning
TLDR
A new transfer-learning framework for semi-supervised few-shot learning that fully utilizes the auxiliary information from labeled base-class data and unlabeled novel-class data to significantly improve the accuracy of few-shot learning tasks and achieve new state-of-the-art results.
Cross Attention Network for Few-shot Classification
TLDR
A novel Cross Attention Network is introduced to deal with the problem of unseen classes and a transductive inference algorithm is proposed to alleviate the low-data problem, which iteratively utilizes the unlabeled query set to augment the support set, thereby making the class features more representative.
Instance Credibility Inference for Few-Shot Learning
TLDR
This paper presents a simple statistical approach, dubbed Instance Credibility Inference (ICI), that exploits the distribution support of unlabeled instances for few-shot learning, establishing new state-of-the-art results on four widely used few-shot learning benchmark datasets.
Transductive Episodic-Wise Adaptive Metric for Few-Shot Learning
TLDR
A Transductive Episodic-wise Adaptive Metric (TEAM) framework for few-shot learning is proposed, integrating the meta-learning paradigm with both deep metric learning and transductive inference; it leverages an attention-based bi-directional similarity strategy for extracting a more robust relationship between queries and prototypes.
ReLaB: Reliable Label Bootstrapping for Semi-Supervised Learning
TLDR
Reliable Label Bootstrapping (ReLaB) is proposed, an unsupervised preprocessing algorithm that paves the way for semi-supervised learning solutions, enabling them to work with much lower supervision.