Demystification of Few-shot and One-shot Learning

@article{Tyukin2021DemystificationOF,
  title={Demystification of Few-shot and One-shot Learning},
  author={Ivan Y. Tyukin and Alexander N. Gorban and Muhammad H. Alkhudaydi and Qinghua Zhou},
  journal={2021 International Joint Conference on Neural Networks (IJCNN)},
  year={2021},
  pages={1-7}
}
Few-shot and one-shot learning have been the subject of active and intensive research in recent years, with mounting evidence pointing to successful implementation and exploitation of few-shot learning algorithms in practice. Classical statistical learning theories do not fully explain why few- or one-shot learning is at all possible since traditional generalisation bounds normally require large training and testing samples to be meaningful. This sharply contrasts with numerous examples of… 
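
The tension can be made concrete with a textbook bound (a standard Hoeffding-plus-union-bound result for a finite hypothesis class, quoted here for illustration rather than taken from the paper):

```latex
% With probability at least 1 - \delta over an i.i.d. sample of size N,
% uniformly over a finite hypothesis class \mathcal{F}:
\[
  R(f) \;\le\; \widehat{R}_N(f)
  \;+\; \sqrt{\frac{\ln|\mathcal{F}| + \ln(1/\delta)}{2N}} .
\]
% For one-shot learning, N = 1 and the square-root term typically
% exceeds 1, so the bound says nothing useful about the true risk R(f).
```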

Citations

Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation

It is shown, using the setup of Mellor et al. [1], that dimensionality and quasi-orthogonality may jointly serve as discriminants of network performance, suggesting important relationships between a network's final performance and two properties of its randomly initialised feature space: data dimension and quasi-orthogonality.
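
The quasi-orthogonality phenomenon itself is easy to demonstrate. The following NumPy sketch (an illustration under a Gaussian assumption, not code from the cited paper) shows pairwise cosine similarities of random vectors concentrating near zero as the dimension grows:

```python
import numpy as np

# Illustrative sketch: random high-dimensional vectors are quasi-orthogonal,
# i.e. their pairwise cosine similarities concentrate near 0.
rng = np.random.default_rng(0)
for dim in (3, 30, 300, 3000):
    x = rng.standard_normal((100, dim))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # unit vectors
    cos = x @ x.T                                   # cosine similarities
    off_diag = cos[~np.eye(len(cos), dtype=bool)]   # drop self-similarities
    print(f"dim={dim:5d}  mean |cos|={np.abs(off_diag).mean():.3f}")
```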

Editorial: Toward and beyond human-level AI, volume II

This editorial introduces the second volume of the Research Topic "Toward and Beyond Human-Level AI".

Learning from few examples with nonlinear feature maps

This work considers the problem of data classification where the training set consists of just a few data points, and reveals key relationships between the geometry of an AI model's feature space, the structure of the underlying data distributions, and the model's generalisation capabilities.

Situation-based memory in spiking neuron-astrocyte network

This work proposes that neuron-astrocyte networks provide a network topology that is effectively adapted to accommodate situation-based memory, and shows that astrocytes are structurally necessary for effective function in such a learning and testing set-up.

Learning from Scarce Information: Using Synthetic Data to Classify Roman Fine Ware Pottery

It is shown that the proposed hybrid approach enables the creation of classifiers whose generalisation performance is significantly better than that of classifiers trained exclusively on the original data, demonstrating the promise of the approach for alleviating the fundamental issue of learning from small datasets.

High-Dimensional Separability for One- and Few-Shot Learning

New multi-correctors of AI systems are presented and illustrated with examples of predicting errors and learning new classes of objects by a deep convolutional neural network.
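
The separability effect underlying such correctors can be illustrated with a toy experiment (a sketch of the general stochastic-separation idea, not the multi-corrector algorithm from the cited paper): in high dimension, a single linear functional built from one new point typically separates it from an entire dataset.

```python
import numpy as np

# Toy illustration of stochastic separation: one new point x* is
# separated from a whole random dataset by the single linear functional
# h(x) = <x, x*> / <x*, x*>, which equals 1 at x* itself.
rng = np.random.default_rng(1)
dim, n = 1000, 5000
data = rng.standard_normal((n, dim)) / np.sqrt(dim)  # points near unit sphere
x_star = rng.standard_normal(dim) / np.sqrt(dim)     # one new point
proj = data @ x_star / (x_star @ x_star)             # h(x) for each data point
print("separated:", (proj < 0.5).all())              # True with high probability
```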

References

Showing 1-10 of 20 references

Prototypical Networks for Few-shot Learning

This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
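
The classification rule at the heart of the method is compact enough to sketch. A minimal NumPy version (with `embed` as a hypothetical stand-in for the learned embedding network) looks roughly as follows:

```python
import numpy as np

# Minimal sketch of the prototypical-network classification rule
# (the learned embedding network is omitted; `embed` is a placeholder).
def prototype_predict(support_x, support_y, query_x, embed=lambda x: x):
    """Classify queries by distance to class prototypes (support means)."""
    z_support = embed(support_x)                   # (n_support, d)
    z_query = embed(query_x)                       # (n_query, d)
    classes = np.unique(support_y)
    prototypes = np.stack([z_support[support_y == c].mean(axis=0)
                           for c in classes])      # (n_classes, d)
    # Squared Euclidean distance from each query to each prototype.
    d2 = ((z_query[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return classes[d2.argmin(axis=1)]
```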

Matching Networks for One Shot Learning

This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
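
A minimal sketch of this matching rule (embedding networks and full-context refinements omitted; `matching_predict` is an illustrative name, not the paper's API) might look like:

```python
import numpy as np

# Sketch of the matching rule: a query's label distribution is an
# attention-weighted sum of one-hot support labels.
def matching_predict(z_support, y_onehot, z_query):
    """z_support: (n, d), y_onehot: (n, k), z_query: (m, d)."""
    # Cosine-similarity attention between queries and support points.
    s = z_support / np.linalg.norm(z_support, axis=1, keepdims=True)
    q = z_query / np.linalg.norm(z_query, axis=1, keepdims=True)
    logits = q @ s.T                                # (m, n)
    attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return (attn @ y_onehot).argmax(axis=1)         # predicted class ids
```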

General stochastic separation theorems with optimal bounds

Quasiorthogonal Dimension

The unreasonable effectiveness of deep learning in artificial intelligence

T. Sejnowski, Proceedings of the National Academy of Sciences, 2020
Deep learning was inspired by the architecture of the cerebral cortex, and insights into autonomy and general intelligence may be found in other brain regions that are essential for planning and survival; major breakthroughs will be needed to achieve these goals.

High-Dimensional Brain in a High-Dimensional World: Blessing of Dimensionality

A brief explanatory review of recent ideas, results and hypotheses about the blessing of dimensionality and related simplifying effects relevant to machine learning and neuroscience is presented.

Stochastic Configuration Networks Based Adaptive Storage Replica Management for Power Big Data Processing

A novel adaptive power storage replica management system, named PARMS, is proposed based on stochastic configuration networks (SCNs), in which network traffic and data-centre (DC) geo-distribution are taken into consideration to improve real-time data processing.

The unreasonable effectiveness of small neural ensembles in high-dimensional brain

High-Dimensional Brain: A Tool for Encoding and Rapid Learning of Memories by Single Neurons

It is shown that single neurons can selectively detect and learn arbitrary information items, given that they operate in high dimensions, and that no a priori assumptions on the structural organization of neuronal ensembles are necessary for explaining basic concepts of static and dynamic memories.