Global Optimization for Neural Network Training

@article{Shang1996GlobalOF,
  title={Global Optimization for Neural Network Training},
  author={Yi Shang and Benjamin W. Wah},
  journal={Computer},
  year={1996},
  volume={29},
  pages={45-54}
}
We propose a novel global minimization method, called NOVEL (Nonlinear Optimization via External Lead), and demonstrate its superior performance on neural network learning problems. The goal is improved learning on application problems, achieving either smaller networks or less error-prone networks of the same size. The training method combines global and local searches to find a good local minimum. In benchmark comparisons against the best global optimization algorithms, it demonstrates…
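
The paper's trace-based external force is not reproduced here; as a rough illustration of the general idea of pairing a global exploration stage with a local search, the sketch below uses SciPy's basinhopping on a made-up two-parameter error surface (the toy function, names, and settings are illustrative assumptions, not taken from the paper).

# Illustration only: a generic global-plus-local hybrid via SciPy's
# basinhopping, NOT the NOVEL trace-based search described in the paper.
# The two-weight "network" error surface is a made-up stand-in.
import numpy as np
from scipy.optimize import basinhopping

def training_error(w):
    # Multimodal toy error surface standing in for a network's loss.
    return np.sin(3 * w[0]) ** 2 + (w[0] - 0.5) ** 2 + (w[1] + 0.2) ** 2

w0 = np.zeros(2)
# basinhopping perturbs the current point (global step) and then runs a
# local minimizer from the perturbed point, keeping the best minimum found.
result = basinhopping(training_error, w0,
                      minimizer_kwargs={"method": "L-BFGS-B"},
                      niter=50)
print(result.x, result.fun)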

Deterministic global optimization for FNN training

  • K. Toh
  • Computer Science
    IEEE Trans. Syst. Man Cybern. Part B
  • 2003
Numerical comparison with benchmark problems from the neural network literature shows the superiority of the proposed algorithm over some local methods, in terms of the percentage of trials attaining the desired solutions.

Alternatives to Gradient-Based Neural Training

Several new global optimization methods suitable for architecture optimization and neural training are described here, including multistart initialization methods offered as an alternative to global minimization.
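
As a minimal sketch of the multistart idea mentioned above, assuming a generic differentiable cost function, the following runs a local optimizer from several random initializations and keeps the best minimum found; the loss and parameter names are placeholders, not taken from the surveyed methods.

# Minimal multistart sketch: run a local optimizer from several random
# initializations and keep the best local minimum. Names are illustrative.
import numpy as np
from scipy.optimize import minimize

def loss(w):
    # Placeholder multimodal loss standing in for a network cost function.
    return np.sum(np.sin(2 * w) ** 2 + 0.1 * w ** 2)

rng = np.random.default_rng(0)
best = None
for _ in range(20):
    w0 = rng.uniform(-3, 3, size=4)          # random initialization
    res = minimize(loss, w0, method="BFGS")  # local search from w0
    if best is None or res.fun < best.fun:
        best = res
print(best.x, best.fun)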

Global Optimisation of Neural Networks Using a Deterministic Hybrid Approach

Preliminary experimental results show that the proposed deterministic approach can provide near-optimal results much faster than the evolutionary approach.

Optimization and global minimization methods suitable for neural networks

A survey of global minimization methods used for optimization of neural structures and network cost functions, including some aspects of genetic algorithms, is provided.

Constrained Formulations for Neural Network Training and Their Applications to Solve the Two-Spiral Problem

It is shown that constraints violated during a search provide additional force to help escape from local minima using the newly developed constrained simulated annealing (CSA) algorithm.
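
In broad terms, and only as a generic illustration rather than the paper's exact formulation, a constrained training problem can be written as

\min_{w} E(w) \quad \text{subject to} \quad h_j(w) = 0, \quad j = 1, \dots, m,

with a penalty-style Lagrangian such as

L(w, \lambda) = E(w) + \sum_{j=1}^{m} \lambda_j \, \lvert h_j(w) \rvert .

A constrained annealing search then performs probabilistic descent in w and ascent in \lambda, so constraints that remain violated keep raising the penalty and supply the extra force that pushes the trajectory out of infeasible local minima.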

Global optimization issues in deep network regression: an overview

An overview of global issues in optimization methods for training feedforward neural networks (FNN) in a regression setting is presented and some recent results on the existence of non-global stationary points of the unconstrained nonlinear problem are reviewed.

First-Order Optimization Method for Single and Multiple-Layer Feedforward Artificial Neural Networks

The first-order optimization method is applied to single- and multiple-layer feedforward artificial neural network problems, and some useful results are obtained.

Training Recurrent Neural Networks as a Constraint Satisfaction Problem

This study converts the training set of a neural network into a constraint satisfaction problem (CSP), uses the quotient gradient system to find its solutions, and compares the approach with a genetic algorithm and error backpropagation.
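
Loosely, and as an illustration rather than the study's exact construction, casting training as constraint satisfaction means seeking weights that satisfy every training pair instead of minimizing an aggregate loss:

\text{find } w \ \text{such that} \quad \lVert N(x_i; w) - y_i \rVert \le \epsilon, \quad i = 1, \dots, m,

where (x_i, y_i) are the training examples, N(\,\cdot\,; w) is the network map, and \epsilon is a tolerance; the quotient gradient system is then used to locate feasible points of this system of constraints.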
...

References

Neural Networks and Unconstrained Optimization

When performing the unconstrained optimisation of a complicated industrial problem, most of the computational time is usually spent evaluating the objective function and its derivatives. If a parallel processing machine is to be used, a number of function evaluations must therefore be calculated in parallel.
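
As a small sketch of that point, assuming an expensive black-box objective (the placeholder below is cheap, and all names are illustrative), several trial points can be evaluated in parallel before the optimizer decides on its next step.

# Sketch: when each objective evaluation is expensive, several trial points
# can be evaluated in parallel. The objective here is a cheap placeholder.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def objective(w):
    # Stand-in for an expensive simulation-based cost function.
    return float(np.sum((np.asarray(w) - 1.0) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    candidates = [rng.normal(size=3).tolist() for _ in range(8)]
    with ProcessPoolExecutor() as pool:
        values = list(pool.map(objective, candidates))   # parallel evaluations
    best = candidates[int(np.argmin(values))]
    print(best, min(values))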

The Cascade-Correlation Learning Architecture

The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
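
Below is a highly simplified sketch of the constructive idea, assuming toy data and substituting least-squares output weights and a random candidate search for the paper's correlation-maximizing candidate training; it is an illustration of growing the network one frozen hidden unit at a time, not the actual algorithm.

# Highly simplified sketch of the constructive idea behind Cascade-Correlation:
# add hidden units one at a time, each fit against the current residual error.
# Output weights are solved by least squares and candidate units are chosen by
# random search; the real algorithm trains candidates to maximize the
# correlation between their output and the residual error.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))            # toy inputs (assumption)
y = np.sign(X[:, 0] * X[:, 1])                   # toy XOR-like targets

def fit_outputs(F, y):
    # Least-squares output weights from features F (plus bias) to targets y.
    Fb = np.hstack([F, np.ones((F.shape[0], 1))])
    w, *_ = np.linalg.lstsq(Fb, y, rcond=None)
    return w, Fb @ w

features = X.copy()
w, pred = fit_outputs(features, y)
for _ in range(5):                               # add up to 5 hidden units
    residual = y - pred
    # Candidate search: keep the random unit whose output correlates most
    # strongly (in absolute value) with the residual error.
    best_unit, best_score = None, -np.inf
    for _ in range(200):
        v = rng.normal(size=features.shape[1] + 1)
        out = np.tanh(features @ v[:-1] + v[-1])
        score = abs(np.corrcoef(out, residual)[0, 1])
        if score > best_score:
            best_unit, best_score = out, score
    # Freeze the new unit's output as an extra feature and retrain the outputs.
    features = np.hstack([features, best_unit[:, None]])
    w, pred = fit_outputs(features, y)
print("final mean squared error:", float(np.mean((y - pred) ** 2)))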

Topics in global optimization

A summary of the research done in global optimization at the Numerical Analysis Department of IIMAS-UNAM is given, showing the robustness of the Tunnelling Algorithm.

First- and Second-Order Methods for Learning: Between Steepest Descent and Newton's Method

First- and second-order optimization methods for learning in feedforward neural networks are reviewed to illustrate the main characteristics of the different methods and their mutual relations.
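
The two ends of that spectrum can be written compactly. For an error function E(w), the standard textbook updates (stated here for illustration, not quoted from the review) are

w_{k+1} = w_k - \eta \, \nabla E(w_k) \qquad \text{and} \qquad w_{k+1} = w_k - \big(\nabla^2 E(w_k)\big)^{-1} \nabla E(w_k),

and intermediate schemes such as Levenberg-Marquardt add a damping term, w_{k+1} = w_k - (H_k + \mu I)^{-1} \nabla E(w_k), where H_k is a Hessian approximation (for instance a Gauss-Newton one) and \mu moves the step between the two extremes.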

Algorithms for continuous optimization : the state of the art

Contents include "General Optimality Conditions via a Separation Scheme" (F. Giannessi), with further chapters by G. Di Pillo, V.G. Evtushenko, M. Potapov, and M.W. Dixon.

Minimizing multimodal functions of continuous variables with the “simulated annealing” algorithm

A new global optimization algorithm for functions of continuous variables is presented, derived from the “Simulated Annealing” algorithm recently introduced in combinatorial optimization. The method is quite costly in terms of function evaluations, but its cost can be predicted in advance and depends only slightly on the starting point.
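
As a minimal sketch of simulated annealing on a continuous variable (the test function, move size, and cooling constants below are arbitrary illustrative choices, not the schedule analyzed in the article):

# Sketch of simulated annealing on a continuous variable: Gaussian moves,
# Metropolis acceptance, geometric cooling. All constants are illustrative.
import numpy as np

def f(x):
    # Multimodal one-dimensional test function (assumption).
    return np.sin(5 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
x, fx = 3.0, f(3.0)
T = 1.0
for _ in range(5000):
    x_new = x + rng.normal(scale=0.5)            # random perturbation
    f_new = f(x_new)
    # Accept downhill moves always, uphill moves with Boltzmann probability.
    if f_new < fx or rng.random() < np.exp((fx - f_new) / T):
        x, fx = x_new, f_new
    T *= 0.999                                   # geometric cooling
print(x, fx)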

Genetic Algorithms + Data Structures = Evolution Programs

  • Z. Michalewicz
  • Computer Science, Economics
    Springer Berlin Heidelberg
  • 1996
GAs and Evolution Programs for Various Discrete Problems, a Hierarchy of Evolution Programs and Heuristics, and Conclusions.

Parallel Networks that Learn to Pronounce English Text

Hierarchical clustering techniques applied to NETtalk reveal that these different networks have similar internal representations of letter-to-sound correspondences within groups of processing units, which suggests that invariant internal representations may be found in assemblies of neurons intermediate in size between highly localized and completely distributed representations.

Global Optimization