One-trial correction of legacy AI systems and stochastic separation theorems

@article{Gorban2019OnetrialCO,
  title={One-trial correction of legacy AI systems and stochastic separation theorems},
  author={Alexander N. Gorban and Richard Burton and Ilya V. Romanenko and Ivan Y. Tyukin},
  journal={Inf. Sci.},
  year={2019},
  volume={484},
  pages={237--254}
}

Efficiency of Shallow Cascades for Improving Deep Learning AI Systems

TLDR
It is shown that, subject to mild technical assumptions on statistical properties of internal signals in Deep Learning AI, with probability close to one the technology enables instantaneous “learning away” of spurious and systematic errors.
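The "learning away" idea can be illustrated as a non-destructive corrector: a single linear functional, built from one error example and the internal features of correctly handled inputs, that overrides the legacy output only when it fires. The sketch below is a minimal illustration under assumed conditions (synthetic Gaussian features, a centroid-based direction, a midpoint threshold), not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: internal feature vectors of a legacy AI system.
# 'correct' holds features of inputs the system handles well; 'x_err'
# is the single feature vector on which it made a spurious error.
dim = 200
correct = rng.standard_normal((1000, dim))
x_err = rng.standard_normal(dim)

# One-trial corrector (illustrative, Fisher-style): a linear functional
# pointing from the centroid of the correct set towards the error sample.
centroid = correct.mean(axis=0)
w = x_err - centroid
w /= np.linalg.norm(w)

# Threshold halfway between the error's projection and the largest
# projection among the correct samples.
proj_err = w @ (x_err - centroid)
proj_ok = (correct - centroid) @ w
theta = 0.5 * (proj_err + proj_ok.max())

def corrected(x, legacy_output, override_output):
    """Override the legacy answer only when the corrector fires."""
    return override_output if w @ (x - centroid) > theta else legacy_output
```

In high dimension the projections of the correct samples onto `w` concentrate near zero while the error sample projects far out, so a single hyperplane isolates the error without retraining the legacy system.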

Blessing of dimensionality at the edge

In this paper we present theory and algorithms enabling classes of Artificial Intelligence (AI) systems to continuously and incrementally improve with a priori quantifiable guarantees - or more

Knowledge Transfer Between Artificial Intelligence Systems

TLDR
It is shown that if the internal variables of the "student" Artificial Intelligence system have the structure of an n-dimensional topological vector space and n is sufficiently high then, with probability close to one, the required knowledge transfer can be implemented by simple cascades of linear functionals.

Stochastic Separation Theorems

Augmented Artificial Intelligence: a Conceptual Framework

TLDR
The mathematical foundations of AI non-destructive correction are presented and a series of new stochastic separation theorems are proven, demonstrating that in high dimensions, and even for exponentially large samples, linear classifiers in their classical Fisher's form are powerful enough to separate errors from correct responses with high probability and to provide an efficient solution to the non-destructive corrector problem.
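The separability claim behind these theorems can be checked empirically using the standard Fisher-type test from this literature: a point x is separable from a point y when ⟨x, y⟩ < α⟨x, x⟩. The sample sizes, Gaussian data model, and margin α = 0.8 below are illustrative assumptions for a quick numerical sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def fisher_separable_fraction(n_points, dim, alpha=0.8):
    """Fraction of sample points x that a Fisher-form linear functional
    separates from every other point y, i.e. <x, y> < alpha * <x, x>
    for all y != x (alpha is a margin parameter)."""
    X = rng.standard_normal((n_points, dim))
    G = X @ X.T                      # Gram matrix of pairwise inner products
    diag = np.diag(G)                # <x, x> for each sample
    # mask[i, j] is True when x_i passes the test against x_j
    mask = G < alpha * diag[:, None]
    np.fill_diagonal(mask, True)     # a point trivially passes against itself
    return mask.all(axis=1).mean()

# Separability improves sharply with dimension at fixed sample size.
low = fisher_separable_fraction(2000, 10)
high = fisher_separable_fraction(2000, 500)
```

At dimension 10 almost no point is Fisher-separable from the other 1999, while at dimension 500 virtually every point is, consistent with the "with high probability, even for exponentially large samples" statement above.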

How Deep Should be the Depth of Convolutional Neural Networks: a Backyard Dog Case Study

TLDR
This work proposes a simple non-iterative method for shallowing down pre-trained deep convolutional networks; it is generic in the sense that it applies to a broad class of feed-forward networks, and is based on advanced supervised principal component analysis.

The unreasonable effectiveness of small neural ensembles in high-dimensional brain

References

Showing 1-10 of 66 references

Intriguing properties of neural networks

TLDR
It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, and it is suggested that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.

Improving the Robustness of Deep Neural Networks via Stability Training

TLDR
This paper presents a general stability training method to stabilize deep networks against small input distortions that result from various types of common image processing, such as compression, rescaling, and cropping.

ImageNet classification with deep convolutional neural networks

TLDR
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.

Deep neural networks are easily fooled: High confidence predictions for unrecognizable images

TLDR
This work takes convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and, using evolutionary algorithms or gradient ascent, finds images that the DNNs label with high confidence as belonging to each dataset class; these fooling images raise questions about the generality of DNN computer vision.

Randomness in neural networks: an overview

TLDR
An overview of the different ways in which randomization can be applied to the design of neural networks and kernel functions is provided, in order to clarify innovative lines of research, identify open problems, and foster the exchange of well-known results across different communities.

Why Deep Learning Works: A Manifold Disentanglement Perspective

TLDR
This paper provides quantitative evidence to validate the flattening hypothesis, proposes a few quantities for measuring manifold entanglement under certain assumptions, and conducts experiments with both synthetic and real-world data, which validate the proposition and lead to new insights on deep learning.

Deep Residual Learning for Image Recognition

TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.

Understanding the difficulty of training deep feedforward neural networks

TLDR
The objective here is to understand better why standard gradient descent from random initialization is doing so poorly with deep neural networks, to better understand these recent relative successes and help design better algorithms in the future.

Expanding object detector's Horizon: Incremental learning framework for object detection in videos

TLDR
A new scalable and accurate incremental object detection algorithm is proposed, based on several extensions of large-margin embedding (LME); it dynamically adjusts the complexity of the detector over time by instantiating new prototypes to span all domains the model has seen.

Very Deep Convolutional Networks for Large-Scale Image Recognition

TLDR
This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
...