BEYOND BACKPROPAGATION : USING SIMULATED ANNEALING FOR TRAINING NEURAL NETWORKS
```bibtex
@inproceedings{Sexton1999BEYONDB,
  title  = {BEYOND BACKPROPAGATION: USING SIMULATED ANNEALING FOR TRAINING NEURAL NETWORKS},
  author = {Randall S. Sexton and Robert E. Dorsey},
  year   = {1999}
}
```
The vast majority of neural network research relies on a gradient algorithm, typically a variation of backpropagation, to obtain the weights of the model. Because of the enigmatic nature of complex nonlinear optimization problems, such as training artificial neural networks, this technique has often produced inconsistent and unpredictable results. To go beyond backpropagation’s typical selection of local solutions, simulated annealing is suggested as an alternative training technique that will…
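The suggestion above lends itself to a short sketch. The following Python fragment shows a generic simulated-annealing search over a weight vector; the Gaussian proposal, the geometric cooling schedule, and every name in it are illustrative assumptions, since the truncated abstract does not specify the authors' exact procedure.

```python
import math
import random

def sa_train(loss, init_weights, t0=1.0, t_min=1e-4, cooling=0.95, steps_per_temp=100):
    """Minimize a training-error function over a weight vector by simulated annealing.

    `loss` maps a list of weights to a scalar error. The Gaussian proposal,
    geometric cooling, and all parameter values are illustrative assumptions,
    not the schedule from the paper.
    """
    w = list(init_weights)
    e = loss(w)
    best_w, best_e = list(w), e
    t = t0
    while t > t_min:
        for _ in range(steps_per_temp):
            # Propose a random perturbation of one randomly chosen weight.
            cand = list(w)
            i = random.randrange(len(cand))
            cand[i] += random.gauss(0.0, t)  # step size shrinks as T falls
            e_cand = loss(cand)
            # Metropolis criterion: always accept improvements; accept
            # uphill moves with probability exp(-delta / T), which is what
            # lets the search escape the local solutions that trap
            # gradient methods.
            delta = e_cand - e
            if delta <= 0 or random.random() < math.exp(-delta / t):
                w, e = cand, e_cand
                if e < best_e:
                    best_w, best_e = list(w), e
        t *= cooling  # geometric cooling schedule
    return best_w, best_e

# Example: treat a simple quadratic as the network's training error.
weights, err = sa_train(lambda w: sum(x * x for x in w), [5.0, -3.0])
```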
22 Citations
Global Optimisation of Neural Networks Using a Deterministic Hybrid Approach
- Computer Science · HIS
- 2001
Preliminary experimental results show that the proposed deterministic approach can provide near-optimal results much faster than the evolutionary approach.
Global optimization of neural network weights
- Computer Science · Proceedings of the 2002 International Joint Conference on Neural Networks (IJCNN'02)
- 2002
This study examines, through Monte Carlo simulations, the efficiency of a local search algorithm relative to eight stochastic global algorithms, showing that, even ignoring the computational requirements of the global algorithms, there is little evidence to support their use for training neural networks.
Training neural networks using Metropolis Monte Carlo and an adaptive variant
- Computer Science · arXiv
- 2022
It is suggested that, as for molecular simulation, Monte Carlo methods should be a complement to gradient-based methods for training neural networks, allowing access to a distinct set of network architectures and principles.
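To make the Monte Carlo idea concrete, here is a minimal sketch of a single fixed-temperature Metropolis update on a weight vector. Held at constant temperature, the chain samples weight configurations rather than converging to one point, which is what distinguishes it from annealing; the function and its parameters are illustrative assumptions, not taken from the cited paper.

```python
import math
import random

def metropolis_step(weights, current_loss, loss, temperature, scale=0.01):
    """One fixed-temperature Metropolis update on a weight vector.

    With the temperature held constant (no annealing), repeated steps sample
    weight configurations rather than converging to a single optimum.
    Names, the Gaussian proposal, and `scale` are illustrative assumptions.
    """
    cand = [w + random.gauss(0.0, scale) for w in weights]
    cand_loss = loss(cand)
    delta = cand_loss - current_loss
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        return cand, cand_loss      # accept the proposed move
    return weights, current_loss    # reject it and keep the current state
```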
Comparative Analysis of Genetic Algorithm, Simulated Annealing and Cutting Angle Method for Artificial Neural Networks
- Computer Science · MLDM
- 2005
A comparative study of probabilistic and deterministic global search methods for training artificial neural networks, using a fully connected feedforward multilayer perceptron architecture and a deterministic cutting angle method to find the network weights.
Comparison of Stochastic Global Optimization Methods to Estimate Neural Network Weights
- Computer Science · Neural Processing Letters
- 2007
There is little evidence to show that a global algorithm should be used over a more traditional local optimization routine for training neural networks, and neural networks should not be estimated from a single set of starting values, whether a global or local optimization method is used.
Embedding Simulated Annealing within Stochastic Gradient Descent
- Computer Science · OLA
- 2021
We propose a new metaheuristic training scheme for Machine Learning that combines Stochastic Gradient Descent (SGD) and Discrete Optimization in an unconventional way. Our idea is to define a…
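The abstract is cut off above, so the authors' actual scheme is not reproduced here. Purely as a generic illustration, one way to combine the two ingredients is to interleave gradient steps with annealed Metropolis perturbations:

```python
import math
import random

def hybrid_sgd_sa(w, grad, loss, lr=0.01, t0=1.0, cooling=0.99, epochs=200):
    """Interleave plain gradient steps with annealed Metropolis perturbations.

    `grad(w)` returns the gradient of `loss` at `w`. This interleaving is a
    generic illustration of an SGD/SA hybrid, not the scheme proposed in the
    paper, whose abstract is truncated above.
    """
    t = t0
    e = loss(w)
    for _ in range(epochs):
        # Gradient descent step (the SGD ingredient).
        g = grad(w)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
        e = loss(w)
        # Annealed random perturbation (the SA ingredient).
        cand = [wi + random.gauss(0.0, 0.1 * t) for wi in w]
        e_cand = loss(cand)
        if e_cand <= e or random.random() < math.exp(-(e_cand - e) / t):
            w, e = cand, e_cand
        t *= cooling
    return w, e
```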
Global Optimization of Neural Network Weights – A Simulation Study
- Computer Science
- 2006
The results show that even ignoring the computational requirements of the global algorithms, there is little evidence to support the use of the global algorithms examined in this paper for training neural networks.
ANN Training: A Survey of Classical & Soft Computing Approaches
- Computer Science
- 2016
This survey presents the classical and soft-computing-based search and optimization algorithms in the existing literature for training neural networks; the qualitative comparison among them should enable researchers to develop new algorithms, either hybrid or stand-alone, for ANN model identification.
OMBP: Optic Modified BackPropagation training algorithm for fast convergence of Feedforward Neural Network
- Computer Science
- 2011
The proposed OMBP algorithm provides fast training and accurate prediction for feedforward neural networks, outperforming three other algorithms in number of iterations, time to convergence, prediction error, and percentage of trials that failed to converge.
A COMPARATIVE STUDY ON NEURAL NET CLASSIFIER OPTIMIZATIONS
- Computer Science
- 2012
It is suggested that simulated annealing might be a reasonable starting choice for NNC optimization when both accuracy and convergence speed are considered.
References (showing 1-10 of 28)
Global optimization for artificial neural networks: A tabu search application
- Computer Science · Eur. J. Oper. Res.
- 1998
The projection neural network
- Computer Science · [Proceedings 1992] IJCNN International Joint Conference on Neural Networks
- 1992
A novel neural network model, the projection neural network, is developed to overcome three key drawbacks of backpropagation-trained neural networks (BPNN), i.e., long training times, the large…
On learning the derivatives of an unknown mapping with multilayer feedforward networks
- Computer Science · Neural Networks
- 1992
Learning representations by back-propagating errors
- Computer Science · Nature
- 1986
Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result of the weight adjustments, internal hidden units come to represent important features of the task domain.
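The weight-update idea reduces to gradient descent on a squared-error measure. Below is a one-unit sketch, far smaller than the multilayer nets the paper treats, with all names chosen for illustration:

```python
import math
import random

def train_sigmoid_unit(samples, lr=0.5, epochs=1000):
    """Gradient-descent training of one sigmoid unit on (inputs, target) pairs.

    Minimizes E = 1/2 * (y - t)^2 per sample; the chain rule gives
    dE/dw_i = (y - t) * y * (1 - y) * x_i. A one-unit illustration of the
    weight-update idea, not the multilayer procedure from the paper.
    """
    n = len(samples[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        for x, t in samples:
            # Forward pass: weighted sum through the logistic function.
            y = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            # Backward pass: dE/dnet via the chain rule.
            err = (y - t) * y * (1.0 - y)
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Usage: learn logical OR.
w, b = train_sigmoid_unit([([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)])
```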
Efficacy of modified backpropagation and optimisation methods on a real-world medical problem
- Computer Science · Neural Networks
- 1995
The Roots of Backpropagation: From Ordered Derivatives to Neural Networks and Political Forecasting
- Computer Science
- 1994
This book discusses forms of Backpropagation for Sensitivity Analysis, Optimization, and Neural Networks, and the importance of the Multivariate ARMA(1,1) Model in this regard.
Application of the Back Propagation Neural Network Algorithm with Monotonicity Constraints for Two‐Group Classification Problems*
- Computer Science
- 1993
This research suggests applying monotonicity constraints to the backpropagation learning algorithm to improve neural network performance and efficiency in classification applications where the feature vector is monotonically related to the pattern vector.
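One simple way to impose such a constraint, shown here as a hypothetical stand-in rather than the paper's actual method, is to project the weights onto the nonnegative orthant after each update; with monotonically increasing activations, nonnegative weights make the output nondecreasing in each input:

```python
def project_nonnegative(weights):
    """Clip every weight to be >= 0 after a gradient update.

    With monotonically increasing activations, nonnegative weights make the
    network output nondecreasing in each input feature. This projection is an
    illustrative stand-in, not necessarily the paper's constraint scheme.
    """
    return [max(0.0, w) for w in weights]

# Inside a training loop, after each backpropagation update:
# w = project_nonnegative(w)
```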
SIMANN: FORTRAN module to perform Global Optimization of Statistical Functions with Simulated Annealing
- Economics
- 1992
The Use of Parsimonious Neural Networks for Forecasting Financial Time Series
- Business
- 1998
When attempting to determine the optimal complexity of a neural network, the correct decision depends upon obtaining the global solution at each stage of the decision process. Failure to ensure…
On the approximate realization of continuous mappings by neural networks
- Computer Science · Neural Networks
- 1989