Approximation with random bases: Pro et Contra

@article{Gorban2016ApproximationWR,
  title={Approximation with random bases: Pro et Contra},
  author={Alexander N. Gorban and I. Tyukin and D. Prokhorov and Konstantin I. Sofeikov},
  journal={Inf. Sci.},
  year={2016},
  volume={364-365},
  pages={129-145}
}
In this work we discuss the problem of selecting suitable approximators from families of parameterized elementary functions that are known to be dense in a Hilbert space of functions. We consider and analyze published procedures, both randomized and deterministic, for selecting elements from these families that have been shown to ensure a rate of convergence in the L2 norm of order O(1/N), where N is the number of elements. We show that both randomized and deterministic procedures are successful …
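To make the randomized procedure concrete, here is a minimal sketch in Python/NumPy (my own illustration, not the paper's algorithm; the sigmoidal basis, sampling ranges, and toy target are all assumptions): the inner parameters of N elementary functions are drawn at random and frozen, and only the outer linear coefficients are fitted by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_basis_fit(x, y, n_nodes=100, scale=5.0):
    """Fit f(x) ~ sum_k c_k * sigma(w_k x + b_k) with random (w_k, b_k).

    Inner parameters are sampled once and frozen; only the linear
    readout c is trained, by ordinary least squares.
    """
    w = rng.uniform(-scale, scale, size=n_nodes)      # random input weights
    b = rng.uniform(-scale, scale, size=n_nodes)      # random biases
    H = 1.0 / (1.0 + np.exp(-(np.outer(x, w) + b)))   # hidden-layer matrix
    c, *_ = np.linalg.lstsq(H, y, rcond=None)         # linear readout
    return lambda t: (1.0 / (1.0 + np.exp(-(np.outer(t, w) + b)))) @ c

# Toy target on [0, 1]
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)
f_hat = random_basis_fit(x, y)
print("RMS error:", np.sqrt(np.mean((f_hat(x) - y) ** 2)))
```

The deterministic alternatives analyzed in the paper replace the random draw of (w_k, b_k) with an explicit (typically greedy) selection; the linear readout step stays the same.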
Citations

Stochastic Separation Theorems
TLDR
Stochastic separation theorems provide a new instrument for the development, analysis, and assessment of machine learning methods and algorithms in high dimension.
Classification by Sparse Neural Networks
TLDR
It is shown that when a priori knowledge of the type of classification task is limited, sparsity may be achieved only at the expense of large dictionaries.
The Blessing of Dimensionality: Separation Theorems in the Thermodynamic Limit
TLDR
Stochastic separation theorems reveal an interesting implication for machine learning and data mining applications that deal with large data sets (big data) and high-dimensional data (many attributes): simple linear decision rules and learning machines are surprisingly efficient tools for separating and filtering out arbitrarily assigned points in large dimensions.
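A quick numerical check of the phenomenon (my own illustration, assuming points i.i.d. uniform in the unit ball and the Fisher-type test <x, y> < <x, x>; the theorems cover broader settings): the fraction of sample points linearly separable from all the others by this simple rule should rise toward one as the dimension grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def ball_sample(n, d):
    """n points i.i.d. uniform in the unit ball of R^d."""
    g = rng.standard_normal((n, d))
    g /= np.linalg.norm(g, axis=1, keepdims=True)     # uniform directions
    r = rng.random(n) ** (1.0 / d)                    # radii for a uniform ball
    return g * r[:, None]

def fisher_separable_fraction(n=1000, d=100):
    """Fraction of points x with <x, y> < <x, x> for every other point y."""
    X = ball_sample(n, d)
    G = X @ X.T                                       # all pairwise inner products
    sq = np.diag(G).copy()                            # <x, x> for each point
    np.fill_diagonal(G, -np.inf)                      # skip each point vs itself
    return np.mean((G < sq[:, None]).all(axis=1))

for d in (2, 10, 100):
    print(f"d = {d:3d}: separable fraction = {fisher_separable_fraction(d=d):.3f}")
```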
Sensitivity Analysis of the Neural Networks Randomized Learning
TLDR
The proposed method of generating hidden node parameters ensures better adjustment of the random parameters to the target function and a better distribution of neurons in the input space, compared to previous approaches.
Optimal Stopping via Randomized Neural Networks
TLDR
These approaches are applicable to high-dimensional problems where existing methods become increasingly impractical; because they can be optimized using simple linear regression, they are very easy to implement, and theoretical guarantees can be provided.
On the Approximation Lower Bound for Neural Nets with Random Weights
TLDR
It is shown that, despite the well-known fact that a shallow neural network is a universal approximator, a random net cannot achieve zero approximation error even for smooth functions; in particular, if the proposal distribution is compactly supported, the approximation error admits a positive lower bound.
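One way to read the claim in symbols, for a fixed budget of n random nodes (my paraphrase and notation, not the paper's exact statement):

```latex
% f: a smooth target; \varphi: activation; w_1,\dots,w_n drawn i.i.d.
% from a compactly supported proposal \mu; only c_1,\dots,c_n trained.
\mathbb{E}_{w_1,\dots,w_n \sim \mu}\;
\inf_{c_1,\dots,c_n}
\Bigl\| f - \sum_{k=1}^{n} c_k\, \varphi(\langle w_k, \cdot \rangle) \Bigr\|_{L^2}^{2}
\;\ge\; \varepsilon(f) \;>\; 0 .
```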
Randomized mixture models for probability density approximation and estimation
TLDR
It is shown that RVFL networks can provide functional approximations that converge in Kullback-Leibler divergence when the target function is a probability density function, and a simple randomized mixture model (MM) construction for density estimation from random data is demonstrated.
A Method of Generating Random Weights and Biases in Feedforward Neural Networks with Random Hidden Nodes
TLDR
This work proposes a method of generating the random weights and biases of hidden nodes in such a way that the nonlinear fragments of the activation functions are located in the regions of input space containing data, and can thus be used to construct a surface approximating a nonlinear target function.
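A minimal sketch of the idea under my own simplifying assumptions (sigmoid activation; one transition per node, anchored at a randomly drawn training point; not necessarily the paper's exact recipe): since sigma(w.x + b) is nonlinear only near the hyperplane w.x + b = 0, draw a random slope w and a random data point x*, then set b = -w.x* so the node's nonlinear fragment lands where the data lives.

```python
import numpy as np

rng = np.random.default_rng(2)

def data_aware_hidden_nodes(X, n_nodes=50, slope=10.0):
    """Random hidden nodes whose sigmoid transitions sit on the data.

    For each node, pick a random direction w and a random training
    point x_star, and set the bias b = -w . x_star so the steepest
    (most nonlinear) part of sigma(w.x + b) is centered at x_star.
    """
    n, d = X.shape
    w = rng.uniform(-slope, slope, size=(n_nodes, d))
    anchors = X[rng.integers(0, n, size=n_nodes)]
    b = -np.einsum("kd,kd->k", w, anchors)            # b_k = -w_k . x*_k
    return w, b

X = rng.random((200, 3))                              # toy data in [0, 1]^3
w, b = data_aware_hidden_nodes(X)
H = 1.0 / (1.0 + np.exp(-(X @ w.T + b)))              # hidden-layer outputs
print(H.shape)                                        # (200, 50)
```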
Stochastic Configuration Networks: Fundamentals and Algorithms
TLDR
Simulation results on both regression and classification tasks indicate some remarkable merits of the proposed SCNs in terms of reduced human intervention in setting the network size, scope adaptation of the random parameters, fast learning, and sound generalization.
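A heavily simplified sketch of the constructive loop such networks use (the acceptance test below, based on residual correlation, is my own stand-in; the actual SCN supervisory inequality and scope schedule differ in detail):

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def scn_like_fit(x, y, max_nodes=100, tol=1e-2, scopes=(1.0, 5.0, 25.0)):
    """Incrementally add random hidden nodes, keeping useful candidates.

    A candidate node is accepted when its output is sufficiently
    correlated with the current residual (a simplified stand-in for
    the SCN supervisory inequality); if candidates keep failing, the
    sampling scope for (w, b) is widened.
    """
    H_cols, params = [], []
    residual = y.astype(float).copy()
    while len(params) < max_nodes and np.linalg.norm(residual) > tol:
        for scope in scopes:                          # widen scope on failure
            w, b = rng.uniform(-scope, scope, size=2)
            h = sigmoid(w * x + b)
            corr = abs(h @ residual) / (np.linalg.norm(h) * np.linalg.norm(residual))
            if corr > 0.1:                            # simplified acceptance test
                break
        params.append((w, b))
        H_cols.append(h)
        H = np.column_stack(H_cols)
        c, *_ = np.linalg.lstsq(H, y, rcond=None)     # refit the linear readout
        residual = y - H @ c
    return params, c, residual

x = np.linspace(-1.0, 1.0, 300)
y = np.sin(4 * x) * np.exp(-x ** 2)
params, c, residual = scn_like_fit(x, y)
print(len(params), "nodes, RMSE:", np.sqrt(np.mean(residual ** 2)))
```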
Randomness in neural networks: an overview
TLDR
An overview of the different ways in which randomization can be applied to the design of neural networks and kernel functions is provided, to clarify innovative lines of research and open problems, and to foster the exchange of well-known results across different communities.

References

Showing 1–10 of 59 references
Uniform approximation of functions with random bases
  • A. Rahimi, B. Recht
  • Mathematics
  • 2008 46th Annual Allerton Conference on Communication, Control, and Computing
  • 2008
Random networks of nonlinear functions have a long history of empirical success in function fitting but few theoretical guarantees. In this paper, using techniques from probability on Banach spaces, …
Stochastic choice of basis functions in adaptive function approximation and the functional-link net
TLDR
A theoretical justification for the random vector version of the functional-link (RVFL) net is presented, based on a general approach to adaptive function approximation; the main result is that the RVFL net is a universal approximator for continuous functions on bounded finite-dimensional sets.
Approximation by superpositions of a sigmoidal function
  • G. Cybenko
  • Mathematics, Computer Science
  • Math. Control. Signals Syst.
  • 1989
In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real …
Universal approximation bounds for superpositions of a sigmoidal function
  • A. Barron
  • Mathematics, Computer Science
  • IEEE Trans. Inf. Theory
  • 1993
TLDR
The approximation rate and the parsimony of the parameterization of the networks are shown to be advantageous in high-dimensional settings, and the integrated squared approximation error cannot be made smaller than order 1/n^{2/d} uniformly for functions satisfying the same smoothness assumption.
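For reference, the pair of results the summary alludes to can be written, with constants suppressed, as follows (standard statements of Barron's bounds; C_f is the first moment of the Fourier magnitude distribution of f, and B a bounded domain):

```latex
% Upper bound: n-node sigmoidal networks escape the curse of dimensionality
\inf_{f_n \in \mathrm{span}_n(\sigma)}
  \| f - f_n \|_{L^2(B)}^{2} \;\le\; \frac{(2 C_f)^2}{n},
\qquad
% Lower bound: linear combinations of any n fixed basis functions h_1,\dots,h_n
\sup_{f:\, C_f \le C}\;
\inf_{c_1,\dots,c_n}
  \Bigl\| f - \sum_{k=1}^{n} c_k h_k \Bigr\|_{L^2(B)}^{2}
  \;\gtrsim\; \frac{1}{n^{2/d}} .
```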
Adaptation in the presence of a general nonlinear parameterization: an error model approach
TLDR
An error model approach is introduced to establish algorithms and their global stability and convergence properties; a number of applications of this error model in adaptive estimation and control are included, in which the new algorithm is shown to result in global boundedness.
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
  • E. Candès, T. Tao
  • Mathematics, Computer Science
  • IEEE Transactions on Information Theory
  • 2006
TLDR
If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.
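The "simple linear program" in question is basis pursuit (standard form, my notation): given measurements y = A x_0 of a sparse signal x_0, where A is the random measurement matrix, recover x_0 by solving

```latex
\min_{x \in \mathbb{R}^n} \; \|x\|_{1}
\quad \text{subject to} \quad A x = y ,
```

which becomes a linear program after the usual split x = x^+ - x^- with x^+, x^- \ge 0.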
Feasibility of random basis function approximators for modeling and control
  • I. Tyukin, D. Prokhorov
  • Computer Science
  • 2009 IEEE Control Applications, (CCA) & Intelligent Control, (ISIC)
  • 2009
TLDR
Analysis of the published work on random basis function approximators demonstrates that their favorable error rate of convergence O(1/n) is guaranteed only with very substantial computational resources.
A Simple Lemma on Greedy Approximation in Hilbert Space and Convergence Rates for Projection Pursuit Regression and Neural Network Training
A general convergence criterion for certain iterative sequences in Hilbert space is presented. For an important subclass of these sequences, estimates of the rate of convergence are given. Under very …
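The greedy scheme behind the lemma, in the form usually quoted (the Maurey–Jones–Barron bound; my notation): for a target f in the closed convex hull of a set G in a Hilbert space H with sup_{g in G} ||g|| = b, iterate

```latex
f_n \;=\; \alpha_n f_{n-1} + (1 - \alpha_n)\, g_n,
\qquad g_n \in G,\ \ \alpha_n \in [0, 1]\ \text{chosen (near-)optimally},
```

which yields the O(1/n) rate for the squared error,

```latex
\| f - f_n \|_{H}^{2} \;\le\; \frac{b^{2} - \|f\|^{2}}{n} .
```

This is the deterministic counterpart of the O(1/N) rate discussed in the main paper's abstract.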
Is the k-NN classifier in high dimensions affected by the curse of dimensionality?
  • V. Pestov
  • Mathematics, Computer Science
  • Comput. Math. Appl.
  • 2013
TLDR
It is pointed out that the existing model for statistical learning is oblivious of the dimension of the domain, and so every learning problem admits a universally consistent deterministic reduction to the one-dimensional case by means of a Borel isomorphism.
Adaptation and Parameter Estimation in Systems With Unstable Target Dynamics and Nonlinear Parametrization
TLDR
A solution to the problem of adaptive control and parameter estimation in systems with unstable target dynamics is proposed; the models of uncertainties are allowed to be nonlinearly parameterized, required only to be smooth and monotone functions of linear functionals of the parameters.