Semantic Scholar
Rprop
Known as: Resilient backpropagation, Resilient propagation
Rprop, short for resilient backpropagation, is a learning heuristic for supervised learning in feedforward artificial neural networks. (Description from Wikipedia.)
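The core idea of Rprop is that each weight gets its own step size, adapted from the sign of its partial derivative rather than its magnitude: the step grows when the gradient keeps the same sign and shrinks when it flips. Below is a minimal NumPy sketch of the Rprop- variant (the simple form without weight backtracking); `rprop_step` is an illustrative helper, not an API from any library, though the constants 1.2, 0.5, and the step cap of 50 are the commonly cited defaults.

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One Rprop- update: adapt per-weight step sizes from gradient signs."""
    sign_change = grad * prev_grad
    # Same sign as last time: grow the step; sign flipped: shrink it.
    step = np.where(sign_change > 0,
                    np.minimum(step * eta_plus, step_max),
                    np.where(sign_change < 0,
                             np.maximum(step * eta_minus, step_min),
                             step))
    # After a sign flip, zero the stored gradient so no move is made
    # this iteration and the step is not shrunk twice in a row.
    grad = np.where(sign_change < 0, 0.0, grad)
    w = w - np.sign(grad) * step
    return w, grad, step

# Toy usage: minimize f(w) = sum(w**2), whose gradient is 2*w.
w = np.array([3.0, -2.0])
step = np.full_like(w, 0.1)
prev_grad = np.zeros_like(w)
for _ in range(200):
    grad = 2 * w
    w, prev_grad, step = rprop_step(w, grad, prev_grad, step)
print(w)  # both coordinates end up near zero
```

Because only the sign of the gradient is used, Rprop is insensitive to gradient scaling, which is one reason it was popular for batch training of feedforward networks.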
Related topics
11 relations: Algorithm, Artificial neural network, Backpropagation, Deep learning, …
Papers overview
Semantic Scholar uses AI to extract papers important to this topic.
2017 · Image Compression Using Neural Network for Biomedical Applications
G. Rao, G. Kumari, B. Rao. International Conference on Soft Computing for… Corpus ID: 70348973.
As images are of large size and require huge bandwidth and large storage space, an effective compression algorithm is essential…
2011 · Effectiveness of PSO Based Neural Network for Seasonal Time Series Forecasting
Ratnadip Adhikari, R. Agrawal. Indian International Conference on Artificial… Corpus ID: 17527004.
Recently, the Particle Swarm Optimization (PSO) technique has gained much attention in the field of time series forecasting…
2008 · Parallelizing neural network training for cluster systems
G. Dahl, Alan McAvinney, T. Newhall. Corpus ID: 8980368.
We present a technique for parallelizing the training of neural networks. Our technique is designed for parallelization on a…
2008 · Estimating landmark locations from geo-referenced photographs
Henrik Kretzschmar, C. Stachniss, Christian Plagemann, Wolfram Burgard. IEEE/RSJ International Conference on Intelligent… Corpus ID: 2725916.
The problem of estimating the positions of landmarks using a mobile robot equipped with a camera has been studied intensively in…
2006 · Hybrid Training of Feed-Forward Neural Networks with Particle Swarm Optimization
Marcio Carvalho, Teresa B Ludermir. International Conference on Neural Information… Corpus ID: 12841023.
Training neural networks is a complex task of great importance in problems of supervised learning. The Particle Swarm…
2005 · Hourly Forecasting of SO2 Pollutant Concentration Using an Elman Neural Network
U. Brunelli, V. Piazza, L. Pignato, F. Sorbello, S. Vitabile. WIRN/NAIS. Corpus ID: 531410.
In this paper the first results produced by an Elman neural network for hourly SO2 ground concentration forecasting are presented…
1998 · Speeding Up Backpropagation Algorithms by Using Cross-Entropy Combined with Pattern Normalization
M. Joost, W. Schiffmann. Int. J. Uncertain. Fuzziness Knowl. Based Syst. Corpus ID: 39578731.
This paper demonstrates how the backpropagation algorithm (BP) and its variants can be accelerated significantly while the…
1997 · A Cascade Network Algorithm Employing Progressive RPROP
N. Treadgold, Tom Gedeon. International Work-Conference on Artificial and… Corpus ID: 11722095.
Cascade Correlation (Cascor) has proved to be a powerful method for training neural networks. Cascor, however, has been shown not…
1997 · Computational neural networks for mapping calorimetric data: Application of feed-forward neural networks to kinetic parameters determination and signals filtering
N. Sbirrazzuoli, D. Brunnel. Neural Computing & Applications. Corpus ID: 24157712.
Feedforward neural networks have been used for kinetic parameters determination and signal filtering in differential scanning…
1997 · Adaptation of Learning Rule Parameters Using a Meta Neural Network
C. McCormack. Connection Science. Corpus ID: 9798679.
This paper proposes an application-independent method of automating learning rule parameter selection using a form of supervisor…