Correction of AI systems by linear discriminants: Probabilistic foundations

@article{Gorban2018CorrectionOA,
  title={Correction of AI systems by linear discriminants: Probabilistic foundations},
  author={Alexander N. Gorban and A. Golubkov and Bogdan Grechuk and Eugenij Moiseevich Mirkes and Ivan Y. Tyukin},
  journal={Inf. Sci.},
  year={2018},
  volume={466},
  pages={303--322}
}

High-Dimensional Separability for One- and Few-Shot Learning
TLDR
New multi-correctors of AI systems are presented and illustrated with examples of predicting errors and learning new classes of objects by a deep convolutional neural network.
Knowledge Transfer Between Artificial Intelligence Systems
TLDR
It is shown that if the internal variables of the “student” Artificial Intelligence system have the structure of an n-dimensional topological vector space and n is sufficiently high, then, with probability close to one, the required knowledge transfer can be implemented by simple cascades of linear functionals.
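
For orientation (a restatement in the notation of the Gorban–Tyukin papers surveyed on this page, not a quotation): after the data are centred and whitened, a point x is called Fisher-separable from a point y when

\langle x, y \rangle \le \alpha \langle x, x \rangle, \qquad 0 \le \alpha < 1,

and Fisher-separable from a finite set Y when the inequality holds for every y in Y. The separating element is then simply the linear functional f(z) = \langle x, z \rangle with threshold \alpha \langle x, x \rangle, which is why simple cascades of linear functionals suffice in the statement above.
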
The unreasonable effectiveness of small neural ensembles in high-dimensional brain
Limit Theorems as Blessing of Dimensionality: Neural-Oriented Overview
TLDR
It is shown that such limit theorems often make the analysis of complex systems easier, i.e., lead to the blessing-of-dimensionality phenomenon, for all aspects of these systems: the corresponding transformation, the system's uncertainty, and the desired result of the system's analysis.
Practical stochastic separation theorems for product distributions
  • Bogdan Grechuk
  • Computer Science, Mathematics
    2019 International Joint Conference on Neural Networks (IJCNN)
  • 2019
TLDR
This work derives much less restrictive estimates of the dataset size in terms of dimension, which are still sufficient to guarantee Fisher separability with high probability, provided that the data follow product distributions in the unit cube.
Linear and Fisher Separability of Random Points in the d-dimensional Spherical Layer
TLDR
Bounds for linear and Fisher separability are proposed for points drawn randomly, independently, and uniformly from a d-dimensional spherical layer, to better outline the applicability limits of the stochastic separation theorems in applications.
Bringing the Blessing of Dimensionality to the Edge
TLDR
A distinctive feature of the approach is that, in the supervised setting, its computational complexity is sub-linear in the number of training samples, which makes it particularly attractive in applications where computational power and memory are limited.
Probabilistic Bounds for Binary Classification of Large Data Sets
TLDR
A probabilistic model for classification of task relevance is investigated, and the Azuma–Hoeffding inequality, which can be applied when the naive Bayes assumption is not satisfied, is exploited.
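
For reference, the Azuma–Hoeffding inequality invoked in this summary is the standard martingale concentration bound: if X_0, X_1, \dots, X_n is a martingale with |X_k - X_{k-1}| \le c_k almost surely, then

\Pr(X_n - X_0 \ge t) \le \exp\left(-\frac{t^2}{2\sum_{k=1}^{n} c_k^2}\right), \qquad t > 0.

How the cited paper instantiates this bound for binary classification is not reproduced here.
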
On the Linear Separability of Random Points in the d-dimensional Spherical Layer and in the d-dimensional Cube
TLDR
The limits of applicability of this method for correcting errors of artificial intelligence systems are specified by estimating the number of points that are linearly separable with probability close to 1 in two particular cases: when the points are drawn randomly, independently, and uniformly from a d-dimensional spherical layer, and from the d-dimensional cube.
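
As a quick empirical companion to these separability estimates, the following sketch (not taken from either paper; the parameter choices alpha, n, trials, and the tested dimensions are illustrative assumptions) estimates how often a point drawn uniformly from the centred d-dimensional cube is Fisher-separable from n other such points:

# Minimal Monte-Carlo sketch: estimate the probability that a random point x
# drawn uniformly from the centred cube [-1/2, 1/2]^d satisfies
# <x, y> <= alpha * <x, x> for all n other points y drawn the same way.
import numpy as np

def fisher_separable_fraction(d, n, alpha=0.8, trials=200, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        pts = rng.uniform(-0.5, 0.5, size=(n + 1, d))
        x, rest = pts[0], pts[1:]
        if np.all(rest @ x <= alpha * (x @ x)):
            hits += 1
    return hits / trials

if __name__ == "__main__":
    for d in (10, 50, 100, 200):
        print(d, fisher_separable_fraction(d, n=1000))

Under these illustrative settings the estimated fraction climbs from near 0 at small d towards 1 as d grows, which is the qualitative behaviour the stochastic separation theorems describe; the spherical-layer case differs only in the sampling line.
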

References

SHOWING 1-10 OF 61 REFERENCES
Augmented Artificial Intelligence: a Conceptual Framework
TLDR
The mathematical foundations of non-destructive AI correction are presented, and a series of new stochastic separation theorems are proven, demonstrating that in high dimensions, even for exponentially large samples, linear classifiers in their classical Fisher form are powerful enough to separate errors from correct responses with high probability and to provide an efficient solution to the non-destructive corrector problem.
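
To make the non-destructive corrector idea concrete, here is a minimal sketch under stated assumptions (it is not the paper's reference implementation; the class name, the whitening recipe, and alpha are illustrative): internal feature vectors of the legacy system are centred and whitened, a single misclassified example defines a Fisher-type linear functional, and inputs exceeding its threshold are flagged so the legacy decision can be overridden without retraining the legacy system.

# Hedged sketch of a one-shot Fisher-type corrector for a legacy classifier.
import numpy as np

class Corrector:
    def __init__(self, features, error_feature, alpha=0.8):
        # Centre and whiten using the regularised empirical covariance.
        self.mean = features.mean(axis=0)
        cov = np.cov(features - self.mean, rowvar=False)
        eigval, eigvec = np.linalg.eigh(cov + 1e-6 * np.eye(cov.shape[0]))
        self.whiten = eigvec / np.sqrt(eigval)   # W = V * Lambda^{-1/2}
        # A single error example defines the linear functional f(z) = <z, x>.
        x = (error_feature - self.mean) @ self.whiten
        self.x = x
        self.threshold = alpha * (x @ x)

    def flags(self, features):
        # True where the corrector should take over from the legacy system.
        z = (features - self.mean) @ self.whiten
        return z @ self.x > self.threshold

A wrapper would then route flagged inputs to a corrected label (or to a human) and leave all other predictions of the legacy system untouched, which is what makes the correction non-destructive.
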
Stochastic Separation Theorems
Knowledge Transfer Between Artificial Intelligence Systems
TLDR
It is shown that if the internal variables of the “student” Artificial Intelligence system have the structure of an n-dimensional topological vector space and n is sufficiently high, then, with probability close to one, the required knowledge transfer can be implemented by simple cascades of linear functionals.
Randomness in neural networks: an overview
TLDR
An overview of the different ways in which randomization can be applied to the design of neural networks and kernel functions is provided, to clarify innovative lines of research and open problems, and to foster the exchange of well-known results across different communities.
On the mathematical foundations of learning
(1) A main theme of this report is the relationship of approximation to learning and the primary role of sampling (inductive inference). We try to emphasize relations of the theory of learning to the mainstream of mathematics.
Adaptive computation and machine learning
TLDR
This book attempts to give an overview of the different recent efforts to deal with covariate shift, a challenging situation where the joint distribution of inputs and outputs differs between the training and test stages.
The More, the Merrier: the Blessing of Dimensionality for Learning Large Gaussian Mixtures
TLDR
This work proves that a Gaussian mixture with known identical covariance matrices is polynomially learnable even when the number of components is a polynomial of any fixed degree in the dimension n, as long as a certain non-degeneracy condition on the means is satisfied.
Blessing of dimensionality: mathematical foundations of the statistical physics of data
  • Alexander N. Gorban, I. Tyukin
  • Mathematics
    Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
  • 2018
TLDR
Stochastic separation theorems provide classifiers, determine a non-iterative (one-shot) procedure for their construction, and allow legacy artificial intelligence systems to be corrected.