A direct adaptive method for faster backpropagation learning: the RPROP algorithm

@inproceedings{Riedmiller1993ADA,
  title={A direct adaptive method for faster backpropagation learning: the RPROP algorithm},
  author={Martin A. Riedmiller and Heinrich Braun},
  booktitle={IEEE International Conference on Neural Networks},
  year={1993},
  pages={586-591 vol.1}
}
A learning algorithm for multilayer feedforward networks, RPROP (resilient propagation), is proposed. To overcome the inherent disadvantages of pure gradient-descent, RPROP performs a local adaptation of the weight-updates according to the behavior of the error function. Contrary to other adaptive techniques, the effect of the RPROP adaptation process is not blurred by the unforeseeable influence of the size of the derivative, but only dependent on the temporal behavior of its sign. This leads… 
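As a rough illustration of the sign-based adaptation described in the abstract, the following minimal sketch (assuming NumPy; parameter names and the defaults η+ = 1.2, η− = 0.5, Δmax = 50 follow values commonly quoted for RPROP) applies one RPROP-style step to a weight vector. The original algorithm additionally backtracks the previous weight change when the gradient sign flips; that detail is omitted here for brevity.

```python
import numpy as np

def rprop_update(w, grad, prev_grad, step,
                 eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    """One RPROP-style step for a weight vector `w` (illustrative sketch).

    Only the sign of the gradient is used; the per-weight step size `step`
    is adapted from whether successive gradients agree in sign.
    """
    sign_change = grad * prev_grad
    # Same sign as last step: the update direction is stable, so accelerate.
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    # Sign flipped: the last step overshot a minimum, so shrink the step.
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # Where the sign flipped, suppress the update for this iteration
    # (the gradient is zeroed, so no adaptation is triggered next time).
    grad = np.where(sign_change < 0, 0.0, grad)
    w = w - np.sign(grad) * step
    return w, grad, step  # returned grad serves as prev_grad next iteration
```

Note that, unlike plain gradient descent, the magnitude of the derivative never enters the update; only its sign and its sign history do.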


A Reliable Resilient Backpropagation Method with Gradient Ascent
TLDR
A fast and reliable learning algorithm for multi-layer artificial neural networks is proposed that is capable of escaping from local minima and converges faster than backpropagation with momentum and simulated annealing techniques.
Improving the Convergence of the Backpropagation Algorithm Using Local Adaptive Techniques
TLDR
This article focuses on two classes of acceleration techniques: Local Adaptive Techniques, which rely only on weight-specific information such as the temporal behavior of the partial derivative of the current weight, and Dynamic Adaptation Methods, which adapt the momentum factor and learning rate with respect to the iteration number or the gradient.
Improving the Convergence of the Backpropagation Algorithm Using Learning Rate Adaptation Methods
This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptive learning rate for each weight and apply the
A fast learning algorithm for training feedforward neural networks
TLDR
Test results on the examples studied indicate that SORRPROP converges in less time and performs better than other first-order learning algorithms.
Two Frameworks for Improving Gradient-Based Learning Algorithms
TLDR
This chapter proposes opposite transfer functions as a means to improve the numerical conditioning of neural networks, and extends two backpropagation-based learning algorithms to improve accuracy and generalization ability on common benchmark problems.
Adaptive Hybrid Learning for Neural Networks
TLDR
A robust locally adaptive learning algorithm is developed via two enhancements of the Resilient Propagation (RPROP) method, shown to be faster and more accurate than the standard RPROP in solving classification tasks based on natural data sets taken from the UCI repository of machine learning databases.
Using Rprop for on-line learning of inverse dynamics
TLDR
The Rprop algorithm is compared with backpropagation for on-line learning of inverse dynamics using Kawato's feedback error learning structure, and the proposed scheme shows improved training time over backpropagation.
A fast learning algorithm with Promising Convergence Capability
TLDR
A new algorithm is proposed that systematically combines the characteristics of different fast learning algorithms so that the learning process converges reliably at a fast rate.
A new adaptive learning algorithm using magnified gradient function
  • S. Ng, C. Cheung, S. Leung, A. Luk
  • Computer Science
    IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)
  • 2001
TLDR
An algorithm is proposed to solve the "flat spot" problem in backpropagation networks by magnifying the gradient of the activation function, thereby amplifying the backward-propagated error signal.
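The "flat spot" problem arises because a sigmoid unit's derivative is nearly zero in its saturated regions, which stalls the backpropagated error. As a hedged illustration of the general idea only, not necessarily the exact formulation of Ng et al., one way to magnify the activation gradient is to raise the usual sigmoid derivative to a fractional power 1/s with s ≥ 1; the hypothetical helper below assumes that form.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def magnified_sigmoid_grad(x, s=2.0):
    """Illustrative 'magnified' activation gradient (assumption: raising the
    standard sigmoid derivative y*(1-y) to the power 1/s, s >= 1, keeps its
    sign but lifts it away from zero in saturated regions, easing flat spots).
    """
    y = sigmoid(x)
    return (y * (1.0 - y)) ** (1.0 / s)
```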
...

References

An empirical study of learning speed in back-propagation networks
TLDR
A new learning algorithm is developed that is faster than standard backprop by an order of magnitude or more and that appears to scale up very well as the problem size increases.
Optimization of the Backpropagation Algorithm for Training Multilayer Perceptrons
TLDR
Surveys learning-rate strategies for training multilayer perceptrons, including a fixed learning rate, global learning-rate adaptation, learning-rate adaptation for each training pattern, nearly optimal learning rates via line search, the Polak–Ribière method with line search, and evolutionarily adapted learning rates.
Increased rates of convergence through learning rate adaptation
Learning to tell two spirals apart
TLDR
A network architecture is exhibited that facilitates the learning of the spiral task, and the learning speed of several variants of the back-propagation algorithm is compared.
Parallel Distributed Processing
  • J. L. McClelland, D. E. Rumelhart
  • 1986