# A direct adaptive method for faster backpropagation learning: the RPROP algorithm

```bibtex
@article{Riedmiller1993ADA,
  title   = {A direct adaptive method for faster backpropagation learning: the RPROP algorithm},
  author  = {Martin A. Riedmiller and Heinrich Braun},
  journal = {IEEE International Conference on Neural Networks},
  year    = {1993},
  pages   = {586-591 vol.1}
}
```

A learning algorithm for multilayer feedforward networks, RPROP (resilient propagation), is proposed. To overcome the inherent disadvantages of pure gradient descent, RPROP performs a local adaptation of the weight updates according to the behavior of the error function. Contrary to other adaptive techniques, the effect of the RPROP adaptation process is not blurred by the unforeseeable influence of the size of the derivative, but depends only on the temporal behavior of its sign. This leads…
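The abstract's sign-based update rule can be sketched as follows. This is an illustrative reconstruction, not code from the paper: it follows the simplified variant that zeroes the gradient after a sign flip rather than backtracking the previous weight change, and the hyperparameter defaults (`eta_plus = 1.2`, `eta_minus = 0.5`, step bounds) are the values commonly cited for RPROP.

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One element-wise RPROP update (illustrative sketch).

    Each weight keeps its own step size `step`, adapted purely from the
    sign of its partial derivative: grow the step while the sign is
    stable, shrink it when the sign flips. The derivative's magnitude
    is never used, which is the point of the method.
    """
    same = grad * prev_grad                     # >0: sign stable, <0: sign flipped
    step = np.where(same > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(same < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(same < 0, 0.0, grad)        # after a flip, skip this update
    w = w - np.sign(grad) * step                # move by the adapted step only
    return w, grad, step                        # returned grad is next prev_grad
```

For example, minimizing f(w) = w² from w = 5 with an initial step of 0.1 drives w toward 0 within a few dozen iterations, independent of the raw gradient magnitude.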

## 4,469 Citations

A Reliable Resilient Backpropagation Method with Gradient Ascent

- Computer Science · ICIC
- 2006

A fast and reliable learning algorithm for multi-layer artificial neural networks is proposed that is capable of escaping local minima and converges faster than backpropagation with momentum and simulated annealing techniques.

Improving the Convergence of the Backpropagation Algorithm Using Local Adaptive Techniques

- Computer Science · International Conference on Computational Intelligence
- 2004

This article focuses on two classes of acceleration techniques: Local Adaptive Techniques, which rely only on weight-specific information such as the temporal behavior of the partial derivative with respect to the current weight, and Dynamic Adaptation Methods, which adapt the momentum factor and learning rate with respect to the iteration number or the gradient.

Improving the Convergence of the Backpropagation Algorithm Using Learning Rate Adaptation Methods

- Computer Science · Neural Computation
- 1999

This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptive learning rate for each weight and apply the…

A fast learning algorithm for training feedforward neural networks

- Computer Science · Int. J. Syst. Sci.
- 2006

From the test results on the examples undertaken, it is concluded that SORRPROP has shorter convergence times and better performance than other first-order learning algorithms.

Two Frameworks for Improving Gradient-Based Learning Algorithms

- Computer Science · Oppositional Concepts in Computational Intelligence
- 2008

This chapter proposes opposite transfer functions as a means to improve the numerical conditioning of neural networks, and extends two backpropagation-based learning algorithms to improve accuracy and generalization ability on common benchmark functions.

Adaptive Hybrid Learning for Neural Networks

- Computer Science · Neural Computation
- 2004

A robust locally adaptive learning algorithm is developed via two enhancements of the Resilient Propagation (RPROP) method, shown to be faster and more accurate than the standard RPROP in solving classification tasks based on natural data sets taken from the UCI repository of machine learning databases.

Using Rprop for on-line learning of inverse dynamics

- Computer Science · 2001 European Control Conference (ECC)
- 2001

The Rprop algorithm is compared with backpropagation for on-line learning of inverse dynamics using Kawato's feedback error learning structure, and the proposed scheme shows improved performance in terms of training time over backpropagation.

A fast learning algorithm with Promising Convergence Capability

- Computer Science · The 2011 International Joint Conference on Neural Networks
- 2011

A new algorithm is proposed that provides a systematic approach to combining the characteristics of different fast learning algorithms, so that a learning process converges reliably at a fast rate.

Advanced supervised learning in multi-layer perceptrons — From backpropagation to adaptive learning algorithms

- Computer Science
- 1994

A new adaptive learning algorithm using magnified gradient function

- Computer Science · IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)
- 2001

An algorithm is proposed to solve the "flat spot" problem in backpropagation networks by varying the gradient of the activation function so as to magnify the backward-propagated error signal.
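The flat-spot problem arises because a sigmoid's derivative vanishes at saturated units, killing the backward error signal. The exact magnification function used by this paper is not given in the summary above; as an illustration of the general idea, the sketch below contrasts the plain sigmoid derivative with the well-known sigmoid-prime offset from Fahlman's empirical study (listed in the references), which keeps the derivative, and hence the propagated error, bounded away from zero:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(x, offset=0.0):
    """Derivative of the sigmoid. A nonzero `offset` (Fahlman's
    sigmoid-prime offset, typically 0.1) prevents the derivative from
    vanishing at saturated units, so the backpropagated error signal
    never dies out there."""
    s = sigmoid(x)
    return s * (1.0 - s) + offset

# At a saturated unit (x = 10) the plain derivative is ~4.5e-5,
# while the offset version stays above 0.1.
```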

## References


An empirical study of learning speed in back-propagation networks

- Computer Science
- 1988

A new learning algorithm is developed that is faster than standard backprop by an order of magnitude or more and that appears to scale up very well as the problem size increases.

SuperSAB: Fast adaptive back propagation with good scaling properties

- Computer Science · Neural Networks
- 1990

Optimization of the Backpropagation Algorithm for Training Multilayer Perceptrons

- Computer Science
- 1994

Covers learning-rate adaptation strategies, including learning rate adaptation for each training pattern, nearly optimal learning rate adjustment using line search (Polak–Ribière method), evolutionarily adapted learning rates, global learning rate adaptation, and fixed calculation of the learning rate.

Increased rates of convergence through learning rate adaptation

- Computer Science · Neural Networks
- 1988

Learning to tell two spirals apart

- Computer Science
- 1988

A network architecture is exhibited that facilitates the learning of the spiral task, and the learning speed of several variants of the back-propagation algorithm is compared.

McClelland, Parallel Distributed Processing

- 1986