## 119 Citations

### Integral representation of shallow neural network that attains the global minimum.

- Computer Science
- 2018

The modified ridgelet transform has an explicit expression that can be computed by numerical integration, suggesting that the global minimizer of backpropagation (BP) training can be obtained without BP.

### Theory of Deep Convolutional Neural Networks III: Approximating Radial Functions

- Computer Science, Neural Networks
- 2021

### Deep Convolutional Neural Nets

- Computer Science
- 2015

This chapter presents neural nets as a class of predictors that have been shown empirically to achieve very good performance on tasks whose inputs are images, speech, or audio signals, and that often generalize better than one would predict.

### The global optimum of shallow neural network is attained by ridgelet transform

- Computer Science
- 2018

By introducing a continuous model of neural networks, this work reduces the training problem to a convex optimization in an infinite dimensional Hilbert space, and obtains the explicit expression of the global optimizer via the ridgelet transform.
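The convexity claimed in this abstract is easy to see in a discretized form: if the hidden-unit parameters (a, b) are fixed on a grid, the only trainable quantities are the outer coefficients, and fitting them is a linear least-squares (hence convex) problem. The following toy 1-D sketch illustrates that idea only; the grid sizes, tanh activation, and target function are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

# Target function to represent as a superposition of ridge units
f = lambda x: np.sin(2.0 * x)

# Sample points where the fit is evaluated
x = np.linspace(-1.0, 1.0, 200)

# Fixed grid of hidden-unit parameters (a, b): the inner weights
# are never trained, so there is no backpropagation.
a_grid = np.linspace(0.5, 8.0, 15)
b_grid = np.linspace(-8.0, 8.0, 15)
A, B = np.meshgrid(a_grid, b_grid)
A, B = A.ravel(), B.ravel()          # 225 hidden units

# Design matrix: Phi[i, j] = tanh(A[j] * x[i] - B[j])
Phi = np.tanh(np.outer(x, A) - B)

# Convex step: linear least squares for the outer coefficients c
c, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)

err = np.max(np.abs(Phi @ c - f(x)))
print(f"max fit error at the sample points: {err:.2e}")
```

In the continuous model, the grid sum becomes an integral over (a, b), and the ridgelet transform supplies the coefficient function in closed form instead of a least-squares solve.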

### Fast generalization error bound of deep learning without scale invariance of activation functions

- Computer Science, Neural Networks
- 2020

### Nonconvex regularization for sparse neural networks

- Computer Science, Mathematics, Applied and Computational Harmonic Analysis
- 2022

### Tunable Activation Functions for Deep Neural Networks

- Computer Science, Lecture Notes in Computational Intelligence and Decision Making
- 2021

The performance of artificial neural networks depends significantly on the choice of the neuron's nonlinear activation function. Usually this choice comes down to an empirical one from a list of…

### Double Continuum Limit of Deep Neural Networks

- Computer Science
- 2017

This study in progress synthesizes a deep neural network from broken-line approximation and numerical integration of a double continuum model, develops the ridgelet transform for a potential field, and synthesizes an autoencoder, all without backpropagation.

### Numerical Integration Method for Training Neural Network

- Computer Science, ArXiv
- 2019

A generalized kernel quadrature method is developed, with a fast convergence guarantee in a function norm, that is applicable to signed measures, together with a natural choice of kernels.
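The classical kernel quadrature construction that this line of work generalizes fits in a few lines: pick nodes, then choose the weights minimizing the worst-case integration error over the unit ball of an RKHS, which reduces to solving a linear system against the kernel mean embedding. The sketch below uses the Brownian-motion kernel k(s, t) = min(s, t) on [0, 1] with the uniform measure, whose embedding has a closed form; the kernel, node placement, and test function are illustrative assumptions, not the paper's method.

```python
import numpy as np

n = 20
x = np.arange(1, n + 1) / n                 # quadrature nodes in (0, 1]

# Brownian-motion kernel k(s, t) = min(s, t) and its mean embedding
# z_i = integral_0^1 min(x_i, y) dy = x_i - x_i**2 / 2  (uniform measure)
K = np.minimum.outer(x, x)
z = x - x**2 / 2

# Weights minimizing the worst-case RKHS integration error: w = K^{-1} z
w = np.linalg.solve(K, z)

# Apply the rule to f(y) = y^2, whose exact integral on [0, 1] is 1/3
approx = float(w @ x**2)
err = abs(approx - 1 / 3)
print(f"quadrature error: {err:.2e}")
```

Replacing the uniform measure with a signed measure only changes the embedding vector z, which is the generalization the abstract points at.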

## References

Showing 1–10 of 50 references

### Harmonic Analysis of Neural Networks

- Computer Science, Mathematics
- 1999

A special admissibility condition for neural activation functions is introduced, which requires that the activation function be oscillatory, and linear transforms are constructed that represent quite general functions f as superpositions of ridge functions.

### Universal approximation bounds for superpositions of a sigmoidal function

- Computer Science, IEEE Trans. Inf. Theory
- 1993

The approximation rate and the parsimony of the parameterization of the networks are shown to be advantageous in high-dimensional settings, and the integrated squared approximation error cannot be made smaller than order 1/n^{2/d} uniformly for functions satisfying the same smoothness assumption.
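Stated informally (constants suppressed; this paraphrases the well-known result rather than quoting the paper): for functions whose Fourier transform satisfies Barron's moment condition, an n-unit sigmoidal network f_n achieves a dimension-independent rate,

$$
\int_{B} \bigl(f(x) - f_n(x)\bigr)^2 \,\mu(dx) \;\lesssim\; \frac{C_f^2}{n},
\qquad
C_f = \int_{\mathbb{R}^d} \lvert\omega\rvert \,\lvert\hat f(\omega)\rvert \, d\omega,
$$

whereas any method that is linear in n fixed basis functions cannot beat order 1/n^{2/d} uniformly over the same class, which is the contrast the abstract draws.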

### On the approximate realization of continuous mappings by neural networks

- Computer Science, Neural Networks
- 1989

### Multilayer Feedforward Networks with a Non-Polynomial Activation Function Can Approximate Any Function

- Computer Science, Neural Networks
- 1993

### An Integral Representation of Functions Using Three-layered Networks and Their Approximation Bounds

- Computer Science, Mathematics, Neural Networks
- 1996

### Representation of functions by superpositions of a step or sigmoid function and their applications to neural network theory

- Mathematics, Neural Networks
- 1991

### ImageNet classification with deep convolutional neural networks

- Computer Science, Commun. ACM
- 2012

A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into 1000 different classes, employing a recently developed regularization method called "dropout" that proved to be very effective.

### Approximation theory of the MLP model in neural networks

- Computer Science, Mathematics, Acta Numerica
- 1999

In this survey we discuss various approximation-theoretic problems that arise in the multilayer feedforward perceptron (MLP) model in neural networks. The MLP model is one of the more popular and…

### Construction of neural nets using the Radon transform

- Computer Science, International 1989 Joint Conference on Neural Networks
- 1989

The authors present a method for constructing a feedforward neural net implementing an arbitrarily good approximation to any L^2 function over (-1, 1)^n. The net uses n input nodes, a…

### Improving deep neural networks for LVCSR using rectified linear units and dropout

- Computer Science, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing
- 2013

Modelling deep neural networks with rectified linear unit (ReLU) non-linearities, with minimal human hyper-parameter tuning, on a 50-hour English Broadcast News task yields a 4.2% relative improvement over a DNN trained with sigmoid units and a 14.4% relative improvement over a strong GMM/HMM system.