Neural Networks for Optimal Approximation of Smooth and Analytic Functions
  • H. Mhaskar
  • Mathematics, Computer Science
  • Neural Computation
  • 1996
We prove that neural networks with a single hidden layer are capable of providing an optimal order of approximation for functions assumed to possess a given number of derivatives, if the activation function evaluated by each principal element satisfies certain technical conditions.
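For orientation, the optimal order in question is a Jackson-type rate. A sketch under the standard reading of this result (notation mine, not the paper's): if f has r continuous derivatives on [-1,1]^d, then single-hidden-layer networks with n units achieve

    \inf_{a_k, w_k, b_k} \Big\| f - \sum_{k=1}^{n} a_k \, \sigma(w_k \cdot x + b_k) \Big\|_{\infty} \le C \, n^{-r/d},

and no better order than n^{-r/d} is achievable for this smoothness class.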
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review
The paper reviews and extends an emerging body of theoretical results on deep learning, including the conditions under which it can be exponentially better than shallow learning.
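As a sketch of the review's headline bounds (my paraphrase, for functions of d variables with smoothness m): a generic shallow network needs N = O(\varepsilon^{-d/m}) units to guarantee uniform error \varepsilon, while a deep network whose graph matches a binary-tree compositional structure needs only

    N = O\big( (d-1) \, \varepsilon^{-2/m} \big),

trading the exponential dependence on d for a linear one.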
Neural networks for localized approximation
We prove that feedforward artificial neural networks with a single hidden layer and an ideal sigmoidal response function cannot provide localized approximation in a Euclidean space of dimension …
Approximation properties of a multilayered feedforward artificial neural network
  • H. Mhaskar
  • Mathematics, Computer Science
  • Adv. Comput. Math.
  • 1 February 1993
We prove that an artificial neural network with multiple hidden layers and a kth-order sigmoidal response function can be used to approximate any continuous function on any compact subset of a Euclidean space so as to achieve the Jackson rate of approximation.
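Here "kth-order sigmoidal" is, under the definition usual in this line of work (an assumption on my part, since the summary does not spell it out), a function \sigma satisfying

    \lim_{x \to \infty} \sigma(x)/x^k = 1, \qquad \lim_{x \to -\infty} \sigma(x)/x^k = 0, \qquad \sigma(x) = O(|x|^k),

so that k = 0 recovers the ordinary sigmoid.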
Introduction to the theory of weighted polynomial approximation
  • H. Mhaskar
  • Computer Science, Mathematics
  • Series in approximations and decompositions
  • 4 January 1997
Topics: polynomial inequalities; degree of approximation; applications of potential theory; Freud-type orthogonal polynomials.
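A centerpiece of this theory, recalled here as a hedged reminder rather than a quotation from the book: for the weight W_\alpha(x) = \exp(-|x|^\alpha), \alpha > 0, every polynomial P of degree at most n satisfies

    \| P \, W_\alpha \|_{L_\infty(\mathbb{R})} = \| P \, W_\alpha \|_{L_\infty([-a_n, a_n])},

where a_n \sim c_\alpha \, n^{1/\alpha} is the Mhaskar-Rakhmanov-Saff number, so the weighted polynomial "lives" on a finite interval.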
Approximation by superposition of sigmoidal and radial basis functions
Let σ : ℝ → ℝ be such that for some polynomial P, σ/P is bounded. We consider the linear span of the functions {σ(λ · (x − t)) : λ, t ∈ ℝ^s}. We prove that unless σ is itself a polynomial, it is …
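The truncated sentence points at the paper's density theorem. As a paraphrase in the notation above (my completion, not the abstract's literal continuation): unless σ is a polynomial, the span of {σ(λ · (x − t)) : λ, t ∈ ℝ^s} is dense in C(ℝ^s) in the topology of uniform convergence on compact sets.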
Extremal problems for polynomials with exponential weights
For the extremal problem

    E_{n,p}(α) := min ‖exp(−|x|^α)(x^n + ···)‖_{L_p}, α > 0,

where L_p (0 < p < ∞) denotes the usual integral norm over ℝ and the minimum is taken over all monic polynomials of degree n, we …
Learning Functions: When Is Deep Better Than Shallow
We prove that deep (hierarchical) networks can approximate the class of compositional functions with the same accuracy as shallow networks, but with exponentially fewer training parameters and exponentially lower VC-dimension.
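To make "compositional" concrete, a stock example from this literature (the constituent names h_{ij} are mine): with d = 8,

    f(x_1, \dots, x_8) = h_3\Big( h_{21}\big( h_{11}(x_1,x_2), \, h_{12}(x_3,x_4) \big), \; h_{22}\big( h_{13}(x_5,x_6), \, h_{14}(x_7,x_8) \big) \Big),

a binary tree of bivariate constituents; a deep network with the same graph only ever has to approximate functions of two variables.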
Spherical Marcinkiewicz-Zygmund inequalities and positive quadrature
We obtain quadrature formulas that are exact for spherical harmonics of a fixed order, have nonnegative weights, and are based on function values at scattered sites.
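In symbols, the kind of formula obtained (notation mine): for scattered sites x_1, \dots, x_M on the sphere \mathbb{S}^q that are dense enough relative to n, there exist weights w_k \ge 0 such that

    \sum_{k=1}^{M} w_k \, P(x_k) = \int_{\mathbb{S}^q} P \, d\mu \qquad \text{for every spherical polynomial } P \text{ of degree at most } n,

with \mu the surface measure.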
Diffusion polynomial frames on metric measure spaces
We construct a multiscale tight frame based on an arbitrary orthonormal basis for the L2 space of an arbitrary sigma-finite measure space. The approximation properties of the resulting multiscale …
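For reference, assuming the standard definition rather than anything specific to this construction: a system \{\psi_j\} is a tight frame for L^2(\mu) when

    \sum_j |\langle f, \psi_j \rangle|^2 = A \, \|f\|_2^2 \qquad \text{for all } f \in L^2(\mu),

which yields the reconstruction f = A^{-1} \sum_j \langle f, \psi_j \rangle \, \psi_j.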