Paul Kang-Hoh Phua

In this paper, we propose the use of parallel quasi-Newton (QN) optimization techniques to improve the rate of convergence of the training process for neural networks. The parallel algorithms are developed using the self-scaling quasi-Newton (SSQN) methods. At the beginning of each iteration, a set of parallel search directions is generated. Each of …
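The abstract names self-scaling quasi-Newton methods without detail; the core idea is to rescale the inverse-Hessian approximation by the Oren–Luenberger factor before each BFGS update. The sketch below is an illustration of that idea only, not the paper's parallel algorithm; the helper names and the quadratic test driver are hypothetical.

```python
import numpy as np

def ssbfgs_step(H, s, y):
    """One self-scaling BFGS update of the inverse-Hessian approximation H.

    H is first rescaled by gamma = (s.y)/(y.H y) -- the self-scaling
    factor -- and then updated with the standard BFGS formula.
    """
    sy = s @ y
    H = (sy / (y @ (H @ y))) * H         # self-scaling before the update
    rho = 1.0 / sy
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def minimize_quadratic(A, b, iters=20):
    """Minimize 0.5 x^T A x - b^T x (A SPD) with SSQN steps and exact line search."""
    x, H = np.zeros(len(b)), np.eye(len(b))
    for _ in range(iters):
        g = A @ x - b
        if np.linalg.norm(g) < 1e-10:
            break
        d = -H @ g
        alpha = -(g @ d) / (d @ A @ d)   # exact minimizing step for a quadratic
        s = alpha * d
        y = A @ s                        # gradient difference on a quadratic
        H = ssbfgs_step(H, s, y)
        x = x + s
    return x
```

On a strictly convex quadratic with exact line searches this variant, like plain BFGS, terminates in at most n iterations.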
Computational experience with several limited-memory quasi-Newton and truncated Newton methods for unconstrained nonlinear optimization is described. Comparative tests were conducted on a well-known test library [J. J. Moré, B. S. Garbow, and K. E. Hillstrom, ACM Trans. Math. Software, 7 (1981), pp. 17–41], and on several synthetic problems allowing control of …
Vectorization techniques are applied here to the non-linear conjugate-gradient method for large-scale unconstrained minimization. Until now, the main thrust of vectorization techniques has been directed towards linear conjugate-gradient methods designed to solve symmetric linear systems of algebraic equations. …
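For context on the method being vectorized: nonlinear CG replaces the linear-CG residual update with a gradient evaluation and a beta formula such as Polak–Ribière. A minimal generic sketch (not the paper's vectorized implementation; the restart rule and line search are common textbook choices) is:

```python
import numpy as np

def nonlinear_cg(f, grad, x0, iters=500, tol=1e-8):
    """Polak-Ribiere+ nonlinear CG with backtracking line search and restarts."""
    x = x0.astype(float)
    g = grad(x)
    d = -g                                   # start along steepest descent
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        t, fx = 1.0, f(x)
        while t > 1e-12 and f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5                         # Armijo backtracking
        x = x + t * d
        g_new = grad(x)
        # PR+ formula: clip beta at zero, which implies an automatic restart
        beta = max(0.0, (g_new @ (g_new - g)) / (g @ g))
        d = -g_new + beta * d
        g = g_new
        if g @ d >= 0:                       # restart if not a descent direction
            d = -g
    return x
```

All the work per iteration is vector operations (axpys, dot products) plus one gradient evaluation, which is precisely what makes the method a natural target for vectorization.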