Corpus ID: 10609304

BEYOND BACKPROPAGATION: USING SIMULATED ANNEALING FOR TRAINING NEURAL NETWORKS

@inproceedings{Sexton1999BEYONDB,
  title={BEYOND BACKPROPAGATION : USING SIMULATED ANNEALING FOR TRAINING NEURAL NETWORKS},
  author={Randall S. Sexton and Robert E. Dorsey},
  year={1999}
}
The vast majority of neural network research relies on a gradient algorithm, typically a variation of backpropagation, to obtain the weights of the model. Because of the enigmatic nature of complex nonlinear optimization problems, such as training artificial neural networks, this technique has often produced inconsistent and unpredictable results. To go beyond backpropagation’s typical selection of local solutions, simulated annealing is suggested as an alternative training technique that will… 
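The technique the paper proposes can be illustrated compactly. The block below is a minimal simulated-annealing sketch for neural-network weight training, not the Sexton and Dorsey implementation: it assumes a small one-hidden-layer tanh network trained on mean squared error, and every name and hyperparameter (anneal_weights, the geometric cooling rate, the perturbation scale) is illustrative only.

import numpy as np

def mse(w, X, y, n_hidden=4):
    """Unpack a flat weight vector into a one-hidden-layer tanh net and return its MSE."""
    y = np.asarray(y).reshape(-1, 1)              # expects X of shape (N, n_in), y of length N
    n_in = X.shape[1]
    w1 = w[:n_in * n_hidden].reshape(n_in, n_hidden)
    w2 = w[n_in * n_hidden:].reshape(n_hidden, 1)
    hidden = np.tanh(X @ w1)
    return float(np.mean((hidden @ w2 - y) ** 2))

def anneal_weights(X, y, n_hidden=4, t0=1.0, cooling=0.95, steps_per_temp=200,
                   n_temps=60, step_size=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n_w = X.shape[1] * n_hidden + n_hidden        # total number of weights
    w = rng.normal(scale=0.5, size=n_w)           # random starting point
    energy = mse(w, X, y, n_hidden)
    best_w, best_e = w.copy(), energy
    temp = t0
    for _ in range(n_temps):
        for _ in range(steps_per_temp):
            cand = w + rng.normal(scale=step_size, size=n_w)   # random perturbation
            cand_e = mse(cand, X, y, n_hidden)
            # Metropolis rule: always accept improvements, sometimes accept uphill moves.
            if cand_e < energy or rng.random() < np.exp((energy - cand_e) / temp):
                w, energy = cand, cand_e
                if energy < best_e:
                    best_w, best_e = w.copy(), energy
        temp *= cooling                            # geometric cooling schedule
    return best_w, best_e

Calling anneal_weights(X, y) returns the best weight vector found and its error. The key design choice is that uphill moves are accepted with probability exp((energy - cand_e) / temp) while the temperature is high, which is what allows the search to escape the local solutions the abstract attributes to backpropagation.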

Citations

Global Optimisation of Neural Networks Using a Deterministic Hybrid Approach
TLDR
Preliminary experimental results show that the proposed deterministic approach can provide near-optimal results much faster than the evolutionary approach.
Global optimization of neural network weights
  • L. Hamm, B. Wade Brorsen, M. Hagan
  • Computer Science
    Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No.02CH37290)
  • 2002
TLDR
This study examines, through Monte Carlo simulations, the relative efficiency of a local search algorithm compared with eight stochastic global algorithms, and shows that even ignoring the computational requirements of the global algorithms, there is little evidence to support the use of the global algorithms examined for training neural networks.
Training neural networks using Metropolis Monte Carlo and an adaptive variant
TLDR
It is suggested that, as for molecular simulation, Monte Carlo methods should be a complement to gradient-based methods for training neural networks, allowing access to a distinct set of network architectures and principles.
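For reference, the Metropolis step that simulated annealing and the adaptive variant above build on reduces to a single acceptance test. The sketch below reuses the hypothetical mse helper from the previous block and samples at a fixed temperature rather than following a cooling schedule; it is illustrative and does not reproduce the cited paper's adaptive method.

def metropolis_step(w, energy, temp, X, y, step_size, rng):
    cand = w + rng.normal(scale=step_size, size=w.shape)   # propose a random perturbation
    cand_e = mse(cand, X, y)
    # Accept improvements always; accept uphill moves with Boltzmann probability.
    if cand_e < energy or rng.random() < np.exp((energy - cand_e) / temp):
        return cand, cand_e            # accept: move to the candidate weights
    return w, energy                   # reject: keep the current weights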
Comparative Analysis of Genetic Algorithm, Simulated Annealing and Cutting Angle Method for Artificial Neural Networks
TLDR
A comparative study of probabilistic and deterministic global search methods for artificial neural networks, using a fully connected feedforward multi-layer perceptron architecture and a deterministic cutting-angle method to find the network weights.
Comparison of Stochastic Global Optimization Methods to Estimate Neural Network Weights
TLDR
There is little evidence to show that a global algorithm should be used over a more traditional local optimization routine for training neural networks, and neural networks should not be estimated from a single set of starting values whether a global or local optimization method is used.
Embedding Simulated Annealing within Stochastic Gradient Descent
We propose a new metaheuristic training scheme for Machine Learning that combines Stochastic Gradient Descent (SGD) and Discrete Optimization in an unconventional way. Our idea is to define a…
Global Optimization of Neural Network Weights – A Simulation Study
TLDR
The results show that even ignoring the computational requirements of the global algorithms, there is little evidence to support the use of the global algorithms examined in this paper for training neural networks.
ANN Training: A Survey of Classical & Soft Computing Approaches
TLDR
The classical as well as soft-computing-based search and optimization algorithms in the existing literature for training neural networks are presented; the qualitative comparison among them should enable researchers to develop new algorithms, either hybrid or stand-alone, for ANN model identification.
OMBP: Optic Modified BackPropagation training algorithm for fast convergence of Feedforward Neural Network
TLDR
The proposed OMBP algorithm provides fast training and accurate prediction for feedforward neural networks and has shown an upper hand over three different algorithms in terms of number of iterations, time needed to reach convergence, prediction error, and percentage of trials that failed to converge.
A COMPARATIVE STUDY ON NEURAL NET CLASSIFIER OPTIMIZATIONS
TLDR
It is suggested that Simulated Annealing might be a reasonable choice for NNC optimization to start with, when both accuracy and convergence speed are considered.
...

References

SHOWING 1-10 OF 28 REFERENCES
The projection neural network
  • G. Wilensky, N. Manukian
  • Computer Science
    [Proceedings 1992] IJCNN International Joint Conference on Neural Networks
  • 1992
A novel neural network model, the projection neural network, is developed to overcome three key drawbacks of backpropagation-trained neural networks (BPNN), i.e., long training times, the large…
On learning the derivatives of an unknown mapping with multilayer feedforward networks
Learning representations by back-propagating errors
TLDR
Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.
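The weight-adjustment rule described in that summary can be shown in a few lines. The sketch below is an illustrative single gradient-descent update for a one-hidden-layer tanh network under a mean-squared-error loss, not the original 1986 code; all names are hypothetical.

import numpy as np

def backprop_step(w1, w2, X, y, lr=0.1):
    """One update of a one-hidden-layer tanh net; X: (N, n_in), y: (N, 1)."""
    h = np.tanh(X @ w1)                      # forward pass: hidden activations
    out = h @ w2                             # forward pass: network output
    err = out - y                            # actual output minus desired output
    grad_w2 = h.T @ err / len(X)             # output-layer gradient of the MSE
    delta_h = (err @ w2.T) * (1 - h ** 2)    # error back-propagated through tanh
    grad_w1 = X.T @ delta_h / len(X)         # hidden-layer gradient
    return w1 - lr * grad_w1, w2 - lr * grad_w2   # adjust weights to reduce the error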
The Roots of Backpropagation: From Ordered Derivatives to Neural Networks and Political Forecasting
TLDR
This book discusses forms of Backpropagation for Sensitivity Analysis, Optimization, and Neural Networks, and the importance of the Multivariate ARMA(1,1) Model in this regard.
Application of the Back Propagation Neural Network Algorithm with Monotonicity Constraints for Two‐Group Classification Problems*
TLDR
This research suggests the application of monotonicity constraints to the back propagation learning algorithm to improve neural network performance and efficiency in classification applications where the feature vector is related monotonically to the pattern vector.
The Use of Parsimonious Neural Networks for Forecasting Financial Time Series
When attempting to determine the optimal complexity of a neural network, the correct decision depends upon obtaining the global solution at each stage of the decision process. Failure to ensure…
On the approximate realization of continuous mappings by neural networks
...