Universal approximation bounds for superpositions of a sigmoidal function
- A. Barron
- Computer Science, IEEE Trans. Inf. Theory
- 1 May 1993
The approximation rate and the parsimony of the parameterization of the networks are shown to be advantageous in high-dimensional settings, and the integrated squared approximation error cannot be made smaller than order 1/n^{2/d} uniformly over functions satisfying the same smoothness assumption.
Risk bounds for model selection via penalization
It is shown that the quadratic risk of the minimum penalized empirical contrast estimator is bounded by an index of the accuracy of the sieve, which quantifies, among the candidate models, the trade-off between approximation error and parameter dimension relative to sample size.
The Minimum Description Length Principle in Coding and Modeling
The normalized maximized likelihood, mixture, and predictive codings are each shown to achieve the stochastic complexity to within asymptotically vanishing terms.
Minimum complexity density estimation
An index of resolvability is proved to bound both the rate of convergence of minimum complexity density estimators and the information-theoretic redundancy of the corresponding total description length, demonstrating the statistical effectiveness of the minimum description-length principle as a method of inference.
Information-theoretic determination of minimax rates of convergence
We present some general results determining minimax bounds on statistical risk for density estimation based on certain information-theoretic considerations. These bounds depend only on metric entropy…
The consistency of posterior distributions in nonparametric problems
We give conditions that guarantee that the posterior probability of every Hellinger neighborhood of the true distribution tends to 1 almost surely. The conditions are (1) a requirement that the prior…
ENTROPY AND THE CENTRAL LIMIT THEOREM
- A. Barron
An argument of Brown (1982) is extended to show that the Fisher informations converge to the reciprocal of the variance.
Information-theoretic asymptotics of Bayes methods
The authors examine the relative entropy distance D_n between the true density and the Bayesian density and show that the asymptotic distance is (d/2) log n + c, where d is the dimension of the parameter vector.
APPROXIMATION OF DENSITY FUNCTIONS BY SEQUENCES OF EXPONENTIAL FAMILIES
Analogous convergence results for the relative entropy are shown to hold in general, for any class of log-density functions and sequence of finite-dimensional linear spaces having L2 and L…
Information Theory and Mixing Least-Squares Regressions
An unbiased estimator of the risk of the mixture of general estimators is developed, and the performance of this mixture estimator is shown to be better than that of a related model-selection estimator that picks the model with the highest weight.