Paul Kang-Hoh Phua

Computational experience with several limited-memory quasi-Newton and truncated Newton methods for unconstrained nonlinear optimization is described. Comparative tests were conducted on a well-known test library [J. …], on several synthetic problems allowing control of the clustering of eigenvalues in the Hessian spectrum, and on some large-scale problems in …
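The limited-memory quasi-Newton methods compared above keep only a few recent correction pairs rather than a dense inverse-Hessian approximation. A minimal sketch of the standard L-BFGS two-loop recursion follows; it is an illustration under textbook assumptions, not the paper's implementation, and the function names and test problem are invented for the example:

```python
import numpy as np
from collections import deque

def lbfgs_direction(g, memory):
    """Two-loop recursion: apply the implicit inverse-Hessian
    approximation to gradient g using only the stored (s, y) pairs."""
    q = g.copy()
    alphas = []
    for s, y in reversed(memory):          # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if memory:                              # initial scaling H0 = gamma * I
        s, y = memory[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(memory, reversed(alphas)):  # oldest pair first
        rho = 1.0 / (y @ s)
        b = rho * (y @ q)
        q += (a - b) * s
    return -q                               # search direction

def lbfgs(f, grad, x0, m=5, iters=200, tol=1e-8):
    x = x0.astype(float)
    g = grad(x)
    memory = deque(maxlen=m)                # m most recent (s, y) pairs
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = lbfgs_direction(g, memory)
        t = 1.0                             # backtracking Armijo line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12:                   # keep only curvature-positive pairs
            memory.append((s, y))
        x, g = x_new, g_new
    return x

# Usage: minimize a separable strongly convex quartic
f = lambda x: np.sum(x**2) + 0.25 * np.sum(x**4)
grad = lambda x: 2 * x + x**3
x_star = lbfgs(f, grad, np.full(20, 2.0))
```

With memory `m = 5`, storage is O(m·n) instead of the O(n²) a full quasi-Newton matrix would need, which is what makes these methods attractive at large scale.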
In this paper, we propose the use of parallel quasi-Newton (QN) optimization techniques to improve the rate of convergence of the training process for neural networks. The parallel algorithms are developed using self-scaling quasi-Newton (SSQN) methods. At the beginning of each iteration, a set of parallel search directions is generated. Each of …
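A self-scaling quasi-Newton step rescales the inverse-Hessian approximation before each update so that its spectrum better tracks the local curvature. The following is a minimal single-direction sketch using the classical Oren-Luenberger scaling factor; it does not reproduce the paper's parallel multi-direction scheme, and all names and the test problem are illustrative:

```python
import numpy as np

def self_scaling_bfgs(f, grad, x0, iters=100, tol=1e-8):
    """Sketch of self-scaling BFGS: H approximates the inverse Hessian,
    and gamma rescales H before each BFGS update."""
    x = x0.astype(float)
    n = x.size
    H = np.eye(n)
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                          # quasi-Newton search direction
        t = 1.0                             # backtracking Armijo line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                      # curvature condition holds
            gamma = sy / (y @ H @ y)        # Oren-Luenberger scaling factor
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = gamma * (V @ H @ V.T) + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Usage: minimize an ill-conditioned convex quadratic
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_star = self_scaling_bfgs(f, grad, np.array([3.0, -2.0]))
```

In a parallel scheme of the kind the abstract describes, several candidate directions of this form (e.g. with different scaling factors) could be evaluated concurrently at each iteration; the sketch above shows only one.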
Vectorization techniques are applied here to the non-linear conjugate-gradient method for large-scale unconstrained minimization. Until now, the main thrust of such techniques has been directed towards vectorizing linear conjugate-gradient methods designed to solve symmetric systems of linear algebraic equations. …
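The non-linear conjugate-gradient method builds each search direction from gradients alone, so its inner loop reduces to inner products and vector updates that vectorize naturally. A minimal sketch of a Polak-Ribiere (PR+) variant follows, with all vector work written as whole-array NumPy operations; it is an illustration of the method, not the paper's vectorized code, and the test function is invented:

```python
import numpy as np

def nonlinear_cg(f, grad, x0, iters=200, tol=1e-8):
    """Sketch of PR+ nonlinear conjugate gradient with restarts.
    All vector work is expressed as whole-array operations."""
    x = x0.astype(float)
    g = grad(x)
    d = -g                                  # initial steepest-descent direction
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        t = 1.0                             # backtracking Armijo line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ formula
        d = -g_new + beta * d
        g = g_new
        if g @ d >= 0:                      # restart if not a descent direction
            d = -g
    return x

# Usage: minimize a separable strongly convex quartic
f = lambda x: x @ x + 0.5 * np.sum(x**4)
grad = lambda x: 2 * x + 2 * x**3
x_star = nonlinear_cg(f, grad, np.array([3.0, -2.0, 1.5]))
```

Because the method needs no matrix storage at all, only a handful of length-n vectors, the same array-at-a-time structure maps directly onto vector hardware.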