
ALGORITHM XXX: SC-SR1: MATLAB software for solving shape-changing L-SR1 trust-region subproblems

@article{Brust2016ALGORITHMXS,
  title={ALGORITHM XXX: SC-SR1: MATLAB software for solving shape-changing L-SR1 trust-region subproblems},
  author={Johannes J. Brust and Oleg P. Burdakov and Jennifer B. Erway and Roummel F. Marcia and Ya-Xiang Yuan},
  journal={arXiv: Optimization and Control},
  year={2016}
}
We present a MATLAB implementation of the shape-changing symmetric rank-one (SC-SR1) method that solves trust-region subproblems when a limited-memory symmetric rank-one (L-SR1) matrix is used in place of the true Hessian matrix. The method takes advantage of two shape-changing norms [4, 3] to decompose the trust-region subproblem into two separate problems. Using one of the proposed shape-changing norms, the resulting subproblems then have closed-form solutions. In the other proposed norm, one…
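
For orientation, the following is a minimal sketch of the decomposition the abstract refers to, assuming the usual compact L-SR1 representation and shape-changing norms in the spirit of [4, 3]; the symbols $B$, $\Psi$, $M$, $\gamma$, $P_\parallel$, $P_\perp$ are introduced here for illustration and are not necessarily the exact formulation used by the software. The trust-region subproblem is
\[
\min_{p \in \mathbb{R}^n} \; q(p) = g^{T} p + \tfrac{1}{2}\, p^{T} B p
\quad \text{subject to} \quad \|p\| \le \delta,
\qquad B = \gamma I + \Psi M \Psi^{T}.
\]
With the partial eigendecomposition $B = P \Lambda P^{T}$, $P = [\,P_\parallel \;\; P_\perp\,]$, and the change of variables $v_\parallel = P_\parallel^{T} p$, $v_\perp = P_\perp^{T} p$, a shape-changing norm such as
\[
\|p\|_{P,\infty} = \max\bigl\{ \|P_\parallel^{T} p\|_\infty,\; \|P_\perp^{T} p\|_2 \bigr\}
\]
splits the constraint, so the subproblem decouples into
\[
\min_{\|v_\parallel\|_\infty \le \delta} \; g_\parallel^{T} v_\parallel + \tfrac{1}{2}\, v_\parallel^{T} \Lambda_\parallel v_\parallel
\qquad \text{and} \qquad
\min_{\|v_\perp\|_2 \le \delta} \; g_\perp^{T} v_\perp + \tfrac{1}{2}\, \gamma\, \|v_\perp\|_2^{2},
\]
with $g_\parallel = P_\parallel^{T} g$ and $g_\perp = P_\perp^{T} g$. In this sketch the first piece separates into independent one-dimensional problems and the second is a trust-region problem with a multiple-of-identity Hessian, so both admit closed-form solutions.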

Citations

A New Multipoint Symmetric Secant Method with a Dense Initial Matrix
Numerical results on the CUTEst test problems suggest that the MSS method using a dense initialization outperforms the standard initialization and is competitive with a basic L-SR1 trust-region method.
On efficiently combining limited-memory and trust-region techniques
This work shows how to efficiently combine limited-memory and trust-region techniques based on the eigenvalue decomposition of the limited-memory quasi-Newton approximation of the Hessian matrix, and proposes improved versions of the existing limited-memory trust-region algorithms.
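
As background for the eigenvalue-decomposition idea mentioned in the entry above, here is a minimal sketch under the common compact-representation assumption; the symbols $B$, $\Psi$, $M$, $\gamma$, $Q$, $R$, $U$ are introduced here for illustration only. Given
\[
B = \gamma I + \Psi M \Psi^{T}, \qquad \Psi \in \mathbb{R}^{n \times k}, \quad k \ll n,
\]
a thin QR factorization $\Psi = Q R$ and the small $k \times k$ spectral decomposition $R M R^{T} = U \hat{\Lambda} U^{T}$ give
\[
B = \gamma I + (QU)\, \hat{\Lambda}\, (QU)^{T} = P \Lambda P^{T},
\qquad \Lambda = \operatorname{diag}\bigl(\hat{\Lambda} + \gamma I_{k},\; \gamma I_{n-k}\bigr),
\]
so only the $k$ orthonormal columns $QU$ and $k$ small eigenvalues ever need to be formed explicitly; the remaining $n-k$ eigenvalues all equal $\gamma$.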

References

Showing 1–10 of 35 references
A Subspace Minimization Method for the Trust-Region Step
A method is proposed that allows the trust-region norm to be defined independently of the preconditioner over a sequence of evolving low-dimensional subspaces; numerical results show that the method can require significantly fewer function evaluations than other methods.
On solving trust-region and other regularised subproblems in optimization
Methods that obtain the solution of a sequence of parametrized linear systems by factorization are used, and enhancements using high-order polynomial approximation and inverse iteration ensure that the resulting method is both globally and asymptotically at least superlinearly convergent in all cases.
Iterative Methods for Finding a Trust-region Step
This work proposes an extension of the Steihaug-Toint method that allows a solution to be calculated to any prescribed accuracy and includes a parameter that allows the user to take advantage of the tradeoff between the overall number of function evaluations and matrix-vector products associated with the underlying trust-region method.
Algorithm 943: MSS: MATLAB Software for L-BFGS Trust-Region Subproblems
A MATLAB implementation of the Moré-Sorensen sequential (MSS) method that computes the minimizer of a quadratic function defined by a limited-memory BFGS matrix subject to a two-norm trust-region constraint is presented.
The Conjugate Gradient Method and Trust Regions in Large Scale Optimization
Algorithms based on trust regions have been shown to be robust methods for unconstrained optimization problems. All existing methods, either based on the dogleg strategy or Hebden-Moré iterations, …
On efficiently combining limited-memory and trust-region techniques
This work shows how to efficiently combine limited-memory and trust-region techniques based on the eigenvalue decomposition of the limited-memory quasi-Newton approximation of the Hessian matrix, and proposes improved versions of the existing limited-memory trust-region algorithms.
A Theoretical and Experimental Study of the Symmetric Rank-One Update
A new analysis is presented that shows that the SR1 method with a line search is $(n+1)$-step q-superlinearly convergent without the assumption of linearly independent iterates.
On solving L-SR1 trust-region subproblems
In this article, we consider solvers for large-scale trust-region subproblems when the quadratic model is defined by a limited-memory symmetric rank-one (L-SR1) quasi-Newton matrix. We propose a so...
Measures for Symmetric Rank-One Updates
H. Wolkowicz, Math. Oper. Res., 1994
It is shown that the σ-optimal updates, as well as the Oren-Luenberger self-scaling updates, are all optimal updates for the κ measure, the $\ell_2$ condition number, which provides a natural Broyden class replacement for the SR1 when it is not positive definite.
Updating Quasi-Newton Matrices With Limited Storage
We study how to use the BFGS quasi-Newton matrices to precondition minimization methods for problems where the storage is critical. We give an update formula which generates matrices using …