We present a new matrix-free method for the large-scale trust-region subproblem, assuming that the approximate Hessian is updated by the L-BFGS formula with m = 1 or 2. We determine via simple formulas the eigenvalues of these matrices and, at each iteration, we construct a positive definite matrix whose inverse can be expressed analytically, without using …
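The abstract is truncated, but the m = 1 case is easy to illustrate: a memoryless BFGS matrix B = θI − θss′/s′s + yy′/s′y has the eigenvalue θ with multiplicity n − 2, and its two remaining eigenvalues are the roots of a quadratic in the scalars s′s, s′y, y′y. A minimal NumPy sketch, assuming this standard form of the update (the paper's notation may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta = 6, 2.0
s = rng.standard_normal(n)
y = rng.standard_normal(n)
if s @ y < 0:          # keep s'y > 0 so the BFGS update stays positive definite
    y = -y

# Memoryless BFGS matrix: B = theta*I - theta*ss'/(s's) + yy'/(s'y)
B = (theta * np.eye(n)
     - theta * np.outer(s, s) / (s @ s)
     + np.outer(y, y) / (s @ y))

# "Simple formulas": n-2 eigenvalues equal theta; the other two are roots of
#   lambda^2 - (theta + y'y/s'y) * lambda + theta * s'y/s's = 0,
# obtained from the trace and determinant of the rank-two update.
tr = theta + (y @ y) / (s @ y)
det = theta * (s @ y) / (s @ s)
disc = np.sqrt(tr**2 - 4.0 * det)
lam_min, lam_max = (tr - disc) / 2.0, (tr + disc) / 2.0

print("closed form:", lam_min, lam_max)
print("numerical  :", np.linalg.eigvalsh(B)[[0, -1]])
```

By the Cauchy–Schwarz inequality θ always lies between the two roots, so the closed forms deliver the extreme eigenvalues of B directly.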
We present a nearly-exact method for the large-scale trust-region subproblem (TRS) based on the properties of the minimal-memory BFGS method. Our study concentrates on the case in which the initial BFGS matrix can be any scaled identity matrix. The proposed method is a variant of the Moré–Sorensen method that exploits the eigenstructure of the approximate …
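A Moré–Sorensen iteration reduces the TRS to a scalar secular equation in the multiplier σ; once the eigenpairs of B are available (analytically, in the minimal-memory case), each trial σ costs only O(n). A sketch under those assumptions, using a numerical eigendecomposition as a stand-in for the closed forms and omitting the hard case:

```python
import numpy as np

def trs_nearly_exact(B, g, delta, tol=1e-10, max_iter=100):
    """Nearly-exact trust-region subproblem:
         min  g'p + 0.5 p'Bp   s.t.  ||p|| <= delta.
    Works in the eigenbasis of B; for minimal-memory BFGS matrices the
    eigenpairs are known analytically, so no factorizations are needed.
    (The More-Sorensen "hard case" is omitted for brevity.)
    """
    lam, Q = np.linalg.eigh(B)      # stand-in for the closed-form eigenpairs
    gt = Q.T @ g                    # gradient in the eigenbasis

    # interior (Newton) solution when B is positive definite and the step fits
    if lam[0] > 0 and np.linalg.norm(gt / lam) <= delta:
        return Q @ (-gt / lam)

    # boundary solution: Newton on phi(sigma) = 1/||p(sigma)|| - 1/delta,
    # which is nearly linear in sigma (the classical More-Sorensen iteration)
    sigma = max(0.0, -lam[0]) + 1e-8
    for _ in range(max_iter):
        w = gt / (lam + sigma)      # p(sigma) = -w in the eigenbasis
        nw = np.linalg.norm(w)
        phi = 1.0 / nw - 1.0 / delta
        if abs(phi) < tol:
            break
        dphi = (w @ (w / (lam + sigma))) / nw**3
        sigma = max(sigma - phi / dphi, -lam[0] + 1e-12)
    return Q @ (-gt / (lam + sigma))
```

With the closed-form eigenpairs from the previous sketch, products with Q reduce to a few inner products with s and y, which is what keeps such a method matrix-free.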
We present an interval branch-and-prune algorithm for computing verified enclosures for the global minimum and all global minimizers of univariate functions subject to bound constraints. The algorithm works within the branch-and-bound framework and uses first-order information of the objective function. In this context, we investigate valuable properties of …
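As a rough illustration of branch-and-prune with first-order information (not the paper's algorithm), the sketch below bounds a polynomial with naive interval arithmetic, discards boxes whose lower bound exceeds the incumbent, and deletes interior boxes on which the derivative's interval extension has constant sign. A truly verified implementation would add outward rounding, skipped here:

```python
from heapq import heappush, heappop

class Interval:
    """Minimal interval arithmetic, enough for polynomial objectives."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        o = as_interval(o)
        return Interval(self.lo + o.lo, self.hi + o.hi)
    __radd__ = __add__
    def __sub__(self, o):
        o = as_interval(o)
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        o = as_interval(o)
        p = (self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi)
        return Interval(min(p), max(p))
    __rmul__ = __mul__

def as_interval(x):
    return x if isinstance(x, Interval) else Interval(x, x)

# natural interval extensions of the objective and its derivative
f  = lambda x: x*x*x*x - 4*(x*x) + x      # f(x)  = x^4 - 4x^2 + x
df = lambda x: 4*(x*x*x) - 8*x + 1        # f'(x) = 4x^3 - 8x + 1

def branch_and_prune(lo, hi, tol=1e-6):
    best = f(as_interval(lo)).hi          # incumbent upper bound on the minimum
    heap, enclosures = [(f(Interval(lo, hi)).lo, lo, hi)], []
    while heap:
        lb, a, b = heappop(heap)
        if lb > best:                     # bound test: box cannot hold the min
            continue
        d = df(Interval(a, b))            # first-order (monotonicity) test
        if (d.lo > 0 or d.hi < 0) and lo < a and b < hi:
            continue                      # f strictly monotone on interior box
        if b - a < tol:
            enclosures.append((a, b))
            continue
        m = 0.5 * (a + b)
        best = min(best, f(as_interval(m)).hi)   # update incumbent at midpoint
        for u, v in ((a, m), (m, b)):
            F = f(Interval(u, v))
            if F.lo <= best:
                heappush(heap, (F.lo, u, v))
    return best, enclosures

print(branch_and_prune(-3.0, 3.0))
```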
Artificial neural networks have been widely used for knowledge extraction from biomedical datasets and play an important role in bio-data exploration and analysis. In this work, we propose a new curvilinear algorithm for training large neural networks, based on the analysis of the eigenstructure of the memoryless BFGS matrices. The proposed …
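The truncated abstract does not give the update rule, but curvilinear algorithms typically combine a descent direction p with a negative curvature direction d along the path x(α) = x + α²p + αd. A generic backtracking step in that style with a classical Armijo-type acceptance test, shown only as an illustration and not as the authors' exact rule:

```python
import numpy as np

def curvilinear_step(f, x, g, p, d, dBd, c=1e-4, shrink=0.5, alpha=1.0):
    """Backtracking on the curvilinear path x(a) = x + a^2 * p + a * d.

    Assumes g@p < 0 (p is a quasi-Newton descent direction), g@d <= 0, and
    dBd = d'Bd <= 0 (d is a negative curvature direction, or zero if none).
    """
    fx = f(x)
    slope = g @ p + 0.5 * dBd             # model decrease coefficient
    while alpha > 1e-12:
        x_new = x + alpha**2 * p + alpha * d
        if f(x_new) <= fx + c * alpha**2 * slope:
            return x_new, alpha
        alpha *= shrink
    return x, 0.0

# toy usage on an indefinite quadratic f(x) = x0^2 - x1^2
f = lambda x: x[0]**2 - x[1]**2
x = np.array([1.0, 0.5])
g = np.array([2.0, -1.0])                 # gradient of f at x
p = -g                                    # simple descent direction
d = np.array([0.0, 1.0])                  # d'Hd = -2 < 0 and g'd <= 0
print(curvilinear_step(f, x, g, p, d, dBd=-2.0))
```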
We present a new matrix-free method for the computation of negative curvature directions based on the eigenstructure of minimal-memory BFGS matrices. We determine via simple formulas the eigenvalues of these matrices and compute the desired eigenvectors in explicit form. Consequently, a negative curvature direction is computed in such a way that …
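Continuing the closed forms above, each eigenvalue λ ∉ {θ, 0} of the memoryless BFGS matrix has an eigenvector in span{s, y}, namely v = −θ(s′y)/(λ s′s) s + y; when s′y < 0 the smallest root of the quadratic is negative and v is a negative curvature direction. A sketch under the same assumptions as before:

```python
import numpy as np

def memoryless_bfgs_eigpair(theta, s, y):
    """Smallest eigenpair of B = theta*I - theta*ss'/s's + yy'/s'y.

    The two eigenvalues outside theta are the roots of
      l^2 - (theta + y'y/s'y) * l + theta * s'y/s's = 0,
    and the eigenvector of a root l lies in span{s, y}:
      v = -theta*(s'y)/(l * s's) * s + y.
    """
    ss, sy, yy = s @ s, s @ y, y @ y
    tr, det = theta + yy / sy, theta * sy / ss
    l_min = (tr - np.sqrt(tr**2 - 4.0 * det)) / 2.0
    v = -theta * sy / (l_min * ss) * s + y
    return l_min, v / np.linalg.norm(v)

rng = np.random.default_rng(1)
n, theta = 8, 1.0
s, y = rng.standard_normal(n), rng.standard_normal(n)
if s @ y > 0:          # force s'y < 0 so a negative eigenvalue exists
    y = -y

l_min, v = memoryless_bfgs_eigpair(theta, s, y)
B = theta*np.eye(n) - theta*np.outer(s, s)/(s@s) + np.outer(y, y)/(s@y)
print("lambda_min:", l_min, "v'Bv:", v @ B @ v)   # v'Bv equals lambda_min < 0

g = rng.standard_normal(n)                 # e.g. the current gradient
d = -v if g @ v > 0 else v                 # orient so that g'd <= 0
```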
In this paper, we evaluate the performance of descent conjugate gradient methods and propose a new algorithm for training recurrent neural networks. The presented algorithm preserves the advantages of classical conjugate gradient methods while avoiding the usually inefficient restarts. Simulation results are also presented using three …
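The abstract does not name the update, so as a representative descent conjugate gradient rule here is the Hager–Zhang direction, which guarantees sufficient descent under any reasonable line search and therefore needs no periodic restarts (the authors' rule may differ):

```python
import numpy as np

def cg_descent_direction(g_new, g_old, d_old):
    """Hager-Zhang CG direction: satisfies g'd <= -(7/8)*||g||^2 for any
    inexact line search, so no periodic restarts are required.
    Shown as a representative descent CG rule, not the paper's update.
    """
    y = g_new - g_old
    dy = d_old @ y
    if abs(dy) < 1e-30:                    # first step / degenerate case
        return -g_new
    beta = ((y - 2.0 * d_old * (y @ y) / dy) @ g_new) / dy
    # lower bound on beta keeps the method globally convergent (HZ safeguard)
    eta = -1.0 / (np.linalg.norm(d_old) * min(1e-2, np.linalg.norm(g_old)))
    return -g_new + max(beta, eta) * d_old
```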