A complete proof of global exponential convergence of a neural network for quadratic optimization with bound constraints

Abstract

Sudharsanan and Sundareshan (1991) developed a neural-network model for bound-constrained quadratic minimization and proved the global exponential convergence of their proposed neural network. Global exponential convergence is a critical property of the synthesized neural network for solving the optimization problem successfully. However, Davis and Pattison (1992) presented a counterexample showing that the proof given by Sudharsanan and Sundareshan for the global exponential convergence of the neural network is not correct. Bouzerdoum and Pattison (IEEE Trans. Neural Netw., vol. 4, no. 2, pp. 293-303, 1993) then generalized the neural-network model of Sudharsanan and Sundareshan and derived the global exponential convergence of the network under an appropriate condition. In this letter, we demonstrate through an example that the global exponential convergence condition given by Bouzerdoum and Pattison is not always satisfied by the quadratic minimization problem, and we show that, under this condition, the neural-network model is essentially restricted to contractive networks. Subsequently, a complete proof of the global exponential convergence of the neural-network models proposed by Sudharsanan and Sundareshan and by Bouzerdoum and Pattison is given for the general case, without resorting to the convergence condition of Bouzerdoum and Pattison. An illustrative simulation example is also presented.
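The letter itself does not include code, but the class of networks it analyzes can be illustrated with a minimal numerical sketch. The snippet below Euler-integrates projected dynamics of the general form dx/dt = -x + P(x - alpha*(Qx + c)), where P clips each component onto the box [lo, hi]; this is an assumed generic form of a bound-constrained quadratic-minimization network in the spirit of the models discussed above, not a reproduction of the authors' exact equations. The function names, the step sizes `alpha` and `dt`, and the test problem are all illustrative choices.

```python
import numpy as np

def project(x, lo, hi):
    """Clip x componentwise onto the box [lo, hi]."""
    return np.minimum(np.maximum(x, lo), hi)

def simulate(Q, c, lo, hi, x0, alpha=0.1, dt=0.01, steps=20000):
    """Euler-integrate the projected dynamics
        dx/dt = -x + P(x - alpha*(Q x + c)),
    whose equilibria satisfy the optimality conditions of
        min 0.5 x^T Q x + c^T x   s.t.   lo <= x <= hi.
    (Illustrative sketch; not the paper's exact model.)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + project(x - alpha * (Q @ x + c), lo, hi))
    return x

# Example: minimize ||x - (1, 1)||^2 subject to 0 <= x <= 0.5.
# The unconstrained minimizer (1, 1) lies outside the box, so the
# trajectory should settle at the upper bound (0.5, 0.5).
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -2.0])
x_star = simulate(Q, c, lo=0.0, hi=0.5, x0=np.zeros(2))
```

For this convex test problem the trajectory converges to the boundary solution; whether convergence is exponential for general positive-definite Q is precisely the question the letter settles.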

DOI: 10.1109/72.925567


Cite this paper

@article{Liang2001ACP, title={A complete proof of global exponential convergence of a neural network for quadratic optimization with bound constraints}, author={Xue-Bin Liang}, journal={IEEE Transactions on Neural Networks}, year={2001}, volume={12}, number={3}, pages={636-639} }