## 2,771 Citations

### A more biologically plausible learning rule than backpropagation applied to a network model of cortical area 7a.

- Computer Science · Cerebral Cortex
- 1991

Two neural networks with architectures similar to Zipser and Andersen's model are developed and trained to perform the same task using a more biologically plausible learning procedure than backpropagation, corroborating the network's computational algorithm as a plausible model of how area 7a may perform coordinate transformations.

### Design of Neural Network Filters

- Computer Science
- 1996

The objective is to clarify a number of phases involved in the design of neural network filter architectures for “black box” modeling tasks such as system identification, inverse modeling, and time-series prediction.

### A new algorithm for training multilayer feedforward neural networks

- Computer Science · 1993 IEEE International Symposium on Circuits and Systems
- 1993

The authors present a new learning and synthesis algorithm for training multilayer feedforward neural networks that can classify both linearly separable and linearly nonseparable families, whereas the backpropagation algorithm sometimes fails.

### Method to design a neural network with minimal number of neurons for approximation problems

- Computer Science · IFAC-PapersOnLine
- 2022

### A robust backpropagation learning algorithm for function approximation

- Computer Science · IEEE Trans. Neural Networks
- 1994

A robust BP learning algorithm is derived that is resistant to noise effects and capable of rejecting gross errors during the approximation process; its rate of convergence is improved because the influence of incorrect samples is gracefully suppressed.
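The suppression mechanism described above can be sketched in miniature. The paper's exact robust criterion is not reproduced here; the Huber-style residual clip below is a stand-in (an assumption) that illustrates the same idea, namely that a gross error contributes only a bounded gradient and so cannot dominate the fit:

```python
# One-parameter model y = w * x fitted by gradient descent,
# comparing plain squared-error fitting with a robust variant.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.0, 6.0, 8.0, 100.0]   # true w = 2; last sample is a gross error

def fit(robust, delta=1.0, lr=0.05, steps=4000):
    w = 0.0
    for _ in range(steps):
        grad = 0.0
        for x, y in zip(xs, ys):
            r = w * x - y                        # residual
            if robust:
                # Clip the residual: outliers get bounded influence.
                r = max(-delta, min(delta, r))
            grad += r * x
        w -= lr * grad / len(xs)
    return w
```

With the squared loss the outlier drags the fit to w ≈ 10.2; with the clipped residual it settles near w ≈ 2.17, close to the true slope.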

### Feed Forward Neural Network Entities

- Computer Science · IWANN
- 1997

Although the entities concept is still developing, some preliminary results indicate superiority over the single FFNN model when applied to problems involving high-dimensional data (e.g. financial/meteorological data analysis).

### Explicit solutions of the optimum weights of layered neural networks

- Computer Science · Proceedings of the 1992 IJCNN International Joint Conference on Neural Networks
- 1992

It is shown that, if the hidden layer units take a sinusoidal activation function, the optimum weights of the three-layer feedforward neural network can be explicitly solved by relating the layered…

### Robust design of multilayer feedforward neural networks: an experimental approach

- Computer Science · Eng. Appl. Artif. Intell.
- 2004

### Rates of Approximation in a Feedforward Network Depend on the Type of Computational Unit

- Computer Science
- 1998

De Vore et al. proved the following result: if one continuously approximates a class of functions of d variables with bounded partial derivatives on a compact set, then to achieve an order of approximation of O(1/n) it is necessary to use at least O(n^d) neurons, regardless of the activation function.
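The cited bound can be restated compactly (a paraphrase of the standard form of the De Vore et al. result; the symbols below are not the paper's):

```latex
\varepsilon(m) \;\ge\; c\, m^{-1/d}
\quad\Longrightarrow\quad
\varepsilon = O(1/n) \text{ requires } m = \Omega(n^{d}),
```

where $\varepsilon(m)$ is the worst-case sup-norm error of any continuous approximation scheme with $m$ parameters (here, neurons) over the class of functions of $d$ variables with bounded partial derivatives on a compact set.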

## References

Showing 1–10 of 76 references

### Backpropagation: past and future

- Mathematics · IEEE 1988 International Conference on Neural Networks
- 1988

The author proposes development of a general theory of intelligence in which backpropagation and comparisons to the brain play a central role, and points to a series of intermediate steps and applications leading up to the construction of such generalized systems.

### On the use of backpropagation in associative reinforcement learning

- Computer Science · IEEE 1988 International Conference on Neural Networks
- 1988

A description is given of several ways that backpropagation can be useful in training networks to perform associative reinforcement learning tasks, and it is observed that such an approach even permits a seamless blend of associative reinforcement learning and supervised learning within the same network.

### Recurrent Backpropagation and the Dynamical Approach to Adaptive Neural Computation

- Computer Science · Neural Computation
- 1989

It is now possible to efficiently compute the error gradients for networks that have temporal dynamics, which opens applications to a host of problems in system identification and control.

### Fast learning in artificial neural systems: multilayer perceptron training using optimal estimation

- Computer Science · IEEE 1988 International Conference on Neural Networks
- 1988

Initial results indicate that optimal estimate training (OET) is a supervised learning technique that is faster and more accurate than backward error propagation, and that the information content loaded into a set of network interconnection weights is well characterized.

### Dynamic node creation in backpropagation networks

- Computer Science · 1989 International Joint Conference on Neural Networks
- 1989

A novel method called dynamic node creation (DNC) is presented that addresses the issues of training large networks and of testing networks with different numbers of hidden-layer units; DNC yielded a solution for every problem tried.
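The control loop behind dynamic node creation can be sketched as follows. The trigger is a simplified version of Ash's slope criterion (an assumption, not the paper's exact rule): grow the hidden layer when the relative error drop over the last `window` checks falls below `trigger`:

```python
def should_add_node(errors, window=2, trigger=0.05):
    """Return True when training error has plateaued (simplified rule)."""
    if len(errors) <= window:
        return False
    old, new = errors[-window - 1], errors[-1]
    return (old - new) / old < trigger

def grow(error_curve):
    """Walk an error history, counting how many hidden units end up added."""
    nodes, history = 1, []
    for e in error_curve:
        history.append(e)
        if should_add_node(history):
            nodes += 1
            history = []        # restart monitoring with the larger net
    return nodes
```

For example, an error curve that falls quickly and then stalls (`[1.0, 0.5, 0.4, 0.39, 0.388]`) triggers one node addition, growing the network from one hidden unit to two.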

### Connectionist nonparametric regression: Multilayer feedforward networks can learn arbitrary mappings

- Computer Science · Neural Networks
- 1990

### Learning representations by back-propagating errors

- Computer Science · Nature
- 1986

Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.
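The rule summarized above can be sketched in a few lines of plain Python: a one-hidden-layer network whose weights are repeatedly adjusted down the gradient of the squared difference between actual and desired outputs. The OR truth table is an assumed toy task, chosen here only for illustration:

```python
import math, random

random.seed(0)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
T = [0, 1, 1, 1]                       # desired outputs (logical OR)

# Tiny 2-2-1 network with random initial weights.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

def forward(x):
    h = [sig(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    return h, sig(W2[0] * h[0] + W2[1] * h[1] + b2)

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(X, T)) / len(X)

before = mse()
lr = 0.5
for _ in range(5000):
    for x, t in zip(X, T):
        h, y = forward(x)
        d_out = (y - t) * y * (1 - y)                     # output-layer delta
        d_hid = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        W2 = [W2[j] - lr * d_out * h[j] for j in range(2)]
        b2 -= lr * d_out
        for j in range(2):
            for i in range(2):
                W1[j][i] -= lr * d_hid[j] * x[i]          # propagate error back
            b1[j] -= lr * d_hid[j]
after = mse()
```

After training, the squared error between actual and desired outputs has dropped close to zero, which is exactly the quantity the repeated weight adjustments minimize.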

### Neocognitron: A hierarchical neural network capable of visual pattern recognition

- Computer Science · Neural Networks
- 1988

### Capabilities of three-layered perceptrons

- Computer Science · IEEE 1988 International Conference on Neural Networks
- 1988

A theorem is proved to the effect that three-layered perceptrons with an infinite number of computing units can represent arbitrary mappings if the desired mapping and the input-output characteristics…