Few-Shot Non-Parametric Learning with Deep Latent Variable Model

@article{Jiang2022FewShotNL,
  title={Few-Shot Non-Parametric Learning with Deep Latent Variable Model},
  author={Zhiying Jiang and Yi-Zhu Dai and Ji Xin and Ming Li and Jimmy Lin},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.11573}
}
Most real-world problems that machine learning algorithms are expected to solve involve 1) an unknown data distribution; 2) little domain-specific knowledge; and 3) datasets with limited annotation. We propose Non-Parametric learning by Compression with Latent Variables (NPC-LV), a learning framework for any dataset with abundant unlabeled data but very few labeled examples. By training only a generative model in an unsupervised way, the framework utilizes the data distribution to build…
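
The truncated abstract points toward a compressor-based classifier: a generative model trained on unlabeled data supplies codelengths, and the few labeled examples enter only through a nearest-neighbour rule. Below is a minimal sketch of such a pipeline, assuming an NCD-style distance computed from codelengths; the codelength stand-in (gzip here, rather than the paper's deep latent variable model) and the kNN wrapper are illustrative, not the paper's exact procedure.

import gzip
import numpy as np

def ncd(cx, cy, cxy):
    # Normalized-compression-distance-style measure built from codelengths (bits).
    return (cxy - min(cx, cy)) / max(cx, cy)

def codelength(*xs):
    # Stand-in compressor: gzip on raw bytes. NPC-LV would instead use the
    # codelength implied by the trained deep latent variable model.
    data = b"".join(np.asarray(x, dtype=np.uint8).tobytes() for x in xs)
    return 8 * len(gzip.compress(data))

def knn_classify(test_x, support_x, support_y, k=3):
    # Label a test point by k-nearest-neighbour voting under the NCD-style distance.
    c_test = codelength(test_x)
    dists = [ncd(c_test, codelength(sx), codelength(test_x, sx)) for sx in support_x]
    nearest = np.argsort(dists)[:k]
    return np.bincount(support_y[nearest]).argmax()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    support_x = rng.integers(0, 256, size=(10, 64))            # 10 labeled examples
    support_y = rng.integers(0, 2, size=10)                    # binary labels
    test_x = np.clip(support_x[0] + rng.integers(0, 3, 64), 0, 255)
    print(knn_classify(test_x, support_x, support_y))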

References

Showing 1-10 of 79 references

Unsupervised Data Augmentation for Consistency Training

TLDR
A new perspective on how to effectively noise unlabeled examples is presented, and it is argued that the quality of noising, specifically the noise produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
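
Consistency training of this kind penalizes the divergence between a model's predictions on an unlabeled example and on an augmented copy of it. A minimal PyTorch-style sketch of that unsupervised loss term, with a toy model and noise-based augmentation standing in for UDA's advanced augmentations (e.g., RandAugment or back-translation) and for the supervised term it is combined with.

import torch
import torch.nn.functional as F

def consistency_loss(model, unlabeled_x, augment, temperature=0.4):
    # KL divergence between predictions on clean and augmented unlabeled inputs.
    # The clean-branch prediction is detached (and sharpened by the temperature),
    # so only the augmented branch receives gradients.
    with torch.no_grad():
        target = F.softmax(model(unlabeled_x) / temperature, dim=-1)
    log_pred = F.log_softmax(model(augment(unlabeled_x)), dim=-1)
    return F.kl_div(log_pred, target, reduction="batchmean")

if __name__ == "__main__":
    # Toy classifier and additive-noise "augmentation", just to exercise the loss.
    model = torch.nn.Linear(32, 10)
    augment = lambda x: x + 0.1 * torch.randn_like(x)
    print(consistency_loss(model, torch.randn(16, 32), augment).item())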

Optimization as a Model for Few-Shot Learning

Low-shot Learning via Covariance-Preserving Adversarial Augmentation Networks

TLDR
A novel Generative Adversarial Network is designed to model the latent distribution of each novel class given its related base counterparts, leading to substantial improvements on the ImageNet benchmark over the state of the art.

Generalizing from a Few Examples: A Survey on Few-Shot Learning

TLDR
A thorough survey to fully understand Few-Shot Learning (FSL), which categorizes FSL methods from three perspectives: data, which uses prior knowledge to augment the supervised experience; model, which uses prior knowledge to reduce the size of the hypothesis space; and algorithm, which uses prior knowledge to alter the search for the best hypothesis in the given hypothesis space.

Siamese Neural Networks for One-Shot Image Recognition

TLDR
A method for learning siamese neural networks that employ a shared structure to rank similarity between inputs, achieving strong results that exceed those of other deep learning models and approach state-of-the-art performance on one-shot classification tasks.
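
A minimal sketch of the verification idea behind this approach: a shared encoder embeds both inputs and a small head scores the absolute difference of the embeddings, so a one-shot query can be labeled by its highest-scoring support example. Layer sizes and the fully connected encoder are illustrative; the original model is convolutional and is trained on same/different pairs.

import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    # Shared encoder plus a similarity head over the absolute embedding difference.
    def __init__(self, in_dim=784, embed_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, embed_dim), nn.ReLU())
        self.head = nn.Linear(embed_dim, 1)          # logit for p(same class)

    def forward(self, x1, x2):
        e1, e2 = self.encoder(x1), self.encoder(x2)
        return self.head(torch.abs(e1 - e2)).squeeze(-1)

def one_shot_predict(net, query, support_x, support_y):
    # Label a query with the class of its highest-scoring support example.
    scores = net(query.expand(support_x.size(0), -1), support_x)
    return support_y[scores.argmax()]

if __name__ == "__main__":
    net = SiameseNet()
    support_x, support_y = torch.randn(5, 784), torch.arange(5)   # 5-way, 1-shot task
    print(one_shot_predict(net, torch.randn(784), support_x, support_y).item())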

Learning to Compare: Relation Network for Few-Shot Learning

TLDR
A conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples from each; the framework is easily extended to zero-shot learning.
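
In contrast to a fixed distance on embeddings, the similarity here is itself learned: query and support embeddings are concatenated and scored by a small relation module. A rough sketch under those assumptions, with illustrative layer sizes; the published model uses convolutional embedding and relation modules trained episode by episode.

import torch
import torch.nn as nn

class RelationNet(nn.Module):
    # Learned similarity: embed, concatenate, and score with a relation module.
    def __init__(self, in_dim=784, embed_dim=64):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, embed_dim), nn.ReLU())
        self.relation = nn.Sequential(nn.Linear(2 * embed_dim, 64), nn.ReLU(),
                                      nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, query, support_x):
        q = self.embed(query).expand(support_x.size(0), -1)
        s = self.embed(support_x)
        return self.relation(torch.cat([q, s], dim=-1)).squeeze(-1)

if __name__ == "__main__":
    net = RelationNet()
    support_x, support_y = torch.randn(5, 784), torch.arange(5)   # 5-way, 1-shot task
    scores = net(torch.randn(784), support_x)                     # relation scores in [0, 1]
    print(support_y[scores.argmax()].item())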

Low-Shot Learning with Large-Scale Diffusion

TLDR
This paper considers the problem of inferring image labels when only a few annotated examples are available at training time, in a semi-supervised setting where a large collection of unlabeled images supports label propagation.
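
The propagation step can be sketched compactly: known labels are diffused over a similarity graph built from all images, labeled and unlabeled alike. A small NumPy illustration, assuming a Gaussian affinity matrix and clamping of the labeled points at each iteration; the paper works at far larger scale with approximate nearest-neighbour graphs.

import numpy as np

def propagate_labels(features, labels, n_iters=50, sigma=1.0):
    # Diffuse the few known labels over a similarity graph; labels == -1 means unlabeled.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    P = W / W.sum(axis=1, keepdims=True)         # row-stochastic transition matrix

    labeled = labels >= 0
    n_classes = labels[labeled].max() + 1
    Y = np.zeros((len(labels), n_classes))
    Y[labeled, labels[labeled]] = 1.0

    F = Y.copy()
    for _ in range(n_iters):
        F = P @ F
        F[labeled] = Y[labeled]                   # clamp the labeled points
    return F.argmax(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = np.concatenate([rng.normal(0, 1, (20, 8)), rng.normal(4, 1, (20, 8))])
    labels = np.full(40, -1)
    labels[0], labels[20] = 0, 1                  # one annotated example per class
    print(propagate_labels(feats, labels))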

Matching Networks for One Shot Learning

TLDR
This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
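
The core mapping can be sketched as attention over the support set: the query's predicted label distribution is a softmax-weighted sum of the support labels, with weights given by embedding similarity. A rough sketch assuming cosine-similarity attention and a shared linear embedding; the full model additionally conditions the embeddings on the whole support set and is trained episodically.

import torch
import torch.nn.functional as F

def matching_predict(embed, query, support_x, support_y, n_classes):
    # Attention over the support set: cosine similarities -> softmax weights ->
    # weighted sum of one-hot support labels gives the query's label distribution.
    q = F.normalize(embed(query), dim=-1)
    s = F.normalize(embed(support_x), dim=-1)
    attn = F.softmax(s @ q, dim=0)                        # one weight per support example
    one_hot = F.one_hot(support_y, n_classes).float()
    return attn @ one_hot                                 # predicted class probabilities

if __name__ == "__main__":
    embed = torch.nn.Linear(784, 64)                      # shared embedding (illustrative)
    support_x, support_y = torch.randn(5, 784), torch.arange(5)   # 5-way, 1-shot task
    probs = matching_predict(embed, torch.randn(784), support_x, support_y, n_classes=5)
    print(probs.argmax().item())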

Variational Lossy Autoencoder

TLDR
This paper presents a simple but principled method to learn global representations by combining the Variational Autoencoder (VAE) with neural autoregressive models such as RNN, MADE, and PixelRNN/CNN, which greatly improves the generative modeling performance of VAEs.

Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results

TLDR
The recently proposed Temporal Ensembling has achieved state-of-the-art results on several semi-supervised learning benchmarks, but it becomes unwieldy when learning from large datasets, so Mean Teacher, a method that averages model weights instead of label predictions, is proposed.
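
The weight-averaging step itself is small: the teacher's parameters are an exponential moving average of the student's. A minimal PyTorch sketch of that update; the decay value and toy model are illustrative, and the full method also adds a consistency loss between student and teacher predictions.

import copy
import torch

@torch.no_grad()
def update_teacher(student, teacher, decay=0.99):
    # Exponential moving average of student weights -> teacher weights.
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1 - decay)

if __name__ == "__main__":
    student = torch.nn.Linear(32, 10)
    teacher = copy.deepcopy(student)              # teacher starts as a copy of the student
    # ... after each optimizer step on the student:
    update_teacher(student, teacher)
    print(torch.allclose(teacher.weight, student.weight))   # True: weights started equal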
...