# Correction of AI systems by linear discriminants: Probabilistic foundations

@article{Gorban2018CorrectionOA, title={Correction of AI systems by linear discriminants: Probabilistic foundations}, author={Alexander N. Gorban and A. Golubkov and Bogdan Grechuk and Eugenij Moiseevich Mirkes and Ivan Y. Tyukin}, journal={Inf. Sci.}, year={2018}, volume={466}, pages={303--322} }

## 43 Citations

High-Dimensional Separability for One- and Few-Shot Learning

- Computer Science, Entropy
- 2021

New multi-correctors of AI systems are presented and illustrated with examples of predicting errors and learning new classes of objects by a deep convolutional neural network.

Fast Construction of Correcting Ensembles for Legacy Artificial Intelligence Systems: Algorithms and a Case Study

- Computer Science, Inf. Sci.
- 2019

Knowledge Transfer Between Artificial Intelligence Systems

- Computer Science, Front. Neurorobot.
- 2018

It is shown that if the internal variables of the “student” Artificial Intelligence system have the structure of an n-dimensional topological vector space and n is sufficiently high, then, with probability close to one, the required knowledge transfer can be implemented by simple cascades of linear functionals.

The unreasonable effectiveness of small neural ensembles in high-dimensional brain

- Computer Science, Physics of Life Reviews
- 2018

Limit Theorems as Blessing of Dimensionality: Neural-Oriented Overview

- Computer Science, Entropy
- 2021

It is shown that such limit theorems often make analysis of complex systems easier, i.e., lead to the blessing-of-dimensionality phenomenon, for all the aspects of these systems: the corresponding transformation, the system's uncertainty, and the desired result of the system's analysis.

Practical stochastic separation theorems for product distributions

- Computer Science, Mathematics, 2019 International Joint Conference on Neural Networks (IJCNN)
- 2019

This work derives much less restrictive estimates for dataset size in terms of dimension, which are still sufficient to guarantee Fisher separability with high probability, provided that the data follow product distributions in the unit cube.
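The Fisher-separability condition underlying these estimates can be checked empirically. Below is a minimal simulation sketch, not any paper's exact construction: it assumes a centered uniform product distribution in the unit cube (dimension and sample size are illustrative) and uses the criterion that a point x_i is Fisher-separable from x_j when ⟨x_i, x_j⟩ < ⟨x_i, x_i⟩.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 200, 1000
# Product distribution: uniform sample in the centered unit cube [-1/2, 1/2]^d
X = rng.uniform(-0.5, 0.5, size=(n, d))

G = X @ X.T                      # Gram matrix: G[i, j] = <x_i, x_j>
diag = np.diag(G)                # squared norms <x_i, x_i>
# x_i is Fisher-separable from x_j (j != i) when <x_i, x_j> < <x_i, x_i>
sep = G < diag[:, None]
np.fill_diagonal(sep, True)      # a point trivially "separates" from itself
fraction = sep.all(axis=1).mean()
print(fraction)
```

For d = 200 the inner products ⟨x_i, x_j⟩ concentrate near 0 while the squared norms concentrate near d/12, so the fraction of Fisher-separable points comes out at (or extremely close to) 1, consistent with the quoted result.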

Linear and Fisher Separability of Random Points in the d-dimensional Spherical Layer

- Computer Science, Mathematics, 2020 International Joint Conference on Neural Networks (IJCNN)
- 2020

Boundaries for linear and Fisher separability are proposed for points drawn randomly, independently, and uniformly from a d-dimensional spherical layer, to better outline the applicability limits of the stochastic separation theorems in applications.

Bringing the Blessing of Dimensionality to the Edge

- Computer Science, 2019 1st International Conference on Industrial Artificial Intelligence (IAI)
- 2019

A distinctive feature of the approach is that, in the supervised setting, its computational complexity is sub-linear in the number of training samples, which makes it particularly attractive in applications in which computational power and memory are limited.

Probabilistic Bounds for Binary Classification of Large Data Sets

- Computer Science, INNSBDDL
- 2019

A probabilistic model for classification of task relevance is investigated and the Azuma-Hoeffding Inequality is exploited, which can be applied when the naive Bayes assumption is not satisfied.

On the Linear Separability of Random Points in the d-dimensional Spherical Layer and in the d-dimensional Cube

- Mathematics, 2019 International Joint Conference on Neural Networks (IJCNN)
- 2019

The limits of applicability of this method for correcting errors of artificial intelligence systems are specified by estimating the number of points that are linearly separable with probability close to 1 in two particular cases: when the points are drawn randomly, independently, and uniformly from a d-dimensional spherical layer, and from the d-dimensional cube.

## References

Showing 1-10 of 61 references

Augmented Artificial Intelligence: a Conceptual Framework

- Computer Science, ArXiv
- 2018

The mathematical foundations of non-destructive AI correction are presented, and a series of new stochastic separation theorems are proven, demonstrating that in high dimensions, even for exponentially large samples, linear classifiers in their classical Fisher form are powerful enough to separate errors from correct responses with high probability and to provide an efficient solution to the non-destructive corrector problem.

One-trial correction of legacy AI systems and stochastic separation theorems

- Computer Science, Inf. Sci.
- 2019

Knowledge Transfer Between Artificial Intelligence Systems

- Computer Science, Front. Neurorobot.
- 2018

It is shown that if the internal variables of the “student” Artificial Intelligence system have the structure of an n-dimensional topological vector space and n is sufficiently high, then, with probability close to one, the required knowledge transfer can be implemented by simple cascades of linear functionals.

Randomness in neural networks: an overview

- Computer Science, Wiley Interdiscip. Rev. Data Min. Knowl. Discov.
- 2017

An overview of the different ways in which randomization can be applied to the design of neural networks and kernel functions is provided to clarify innovative lines of research, open problems, and foster the exchange of well-known results throughout different communities.

On the mathematical foundations of learning

- Computer Science
- 2001

A main theme of this report is the relationship of approximation to learning and the primary role of sampling (inductive inference). We try to emphasize relations of the theory of learning to the…

Adaptive computation and machine learning

- Computer Science
- 1998

This book attempts to give an overview of the different recent efforts to deal with covariate shift, a challenging situation where the joint distribution of inputs and outputs differs between the training and test stages.

The More, the Merrier: the Blessing of Dimensionality for Learning Large Gaussian Mixtures

- Computer Science, COLT
- 2014

This work proves that a mixture with known identical covariance matrices whose number of components is a polynomial of any fixed degree in the dimension n is polynomially learnable as long as a certain non-degeneracy condition on the means is satisfied.

Blessing of dimensionality: mathematical foundations of the statistical physics of data

- Mathematics, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
- 2018

Stochastic separation theorems provide us with classifiers, determine a non-iterative (one-shot) procedure for their construction, and allow us to correct legacy artificial intelligence systems.
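A one-shot corrector of the kind described here can be sketched as a single linear functional that fires on an erroneous input and, with high probability in high dimension, on almost nothing else. The sketch below is a hypothetical illustration, not the paper's exact construction: it assumes uniform data in a centered cube, and the threshold is an arbitrary half-way value between 0 and the error point's squared norm.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 200, 5000
# Internal representations of responses the legacy system handles correctly
data = rng.uniform(-0.5, 0.5, size=(n, d))
# Representation of one observed error to be "corrected"
x_err = rng.uniform(-0.5, 0.5, size=d)

# One-shot linear corrector: fire when <x, x_err> >= theta
theta = 0.5 * (x_err @ x_err)            # hypothetical threshold choice
fires_on_error = bool(x_err @ x_err >= theta)
false_positives = float(np.mean(data @ x_err >= theta))
print(fires_on_error, false_positives)
```

Because ⟨x, x_err⟩ concentrates near 0 for random x while ⟨x_err, x_err⟩ concentrates near d/12, the corrector fires on the error yet triggers essentially zero false positives on the 5000 correct samples, which is the "one-shot, non-iterative" behavior the theorems guarantee.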

The Blessing of Dimensionality: Separation Theorems in the Thermodynamic Limit

- Mathematics, Computer Science, ArXiv
- 2016