Algorithm 583: LSQR: Sparse Linear Equations and Least Squares Problems

@article{Paige1982Algorithm5L,
  title={Algorithm 583: LSQR: Sparse Linear Equations and Least Squares Problems},
  author={C. Paige and M. Saunders},
  journal={ACM Trans. Math. Softw.},
  year={1982},
  volume={8},
  pages={195-209}
}
Received 4 June 1980; revised 23 September 1981; accepted 28 February 1982. This work was supported by Natural Sciences and Engineering Research Council of Canada Grant A8652; by the New Zealand Department of Scientific and Industrial Research; and by U.S. National Science Foundation Grants MCS-7926009 and ECS-8012974, the Department of Energy under Contract AM03-76SF00326, PA No. DE-AT03-76ER72018, the Office of Naval Research under Contract N00014-75-C-0267, and the Army Research Office under…

Citations

Numerical linear algebra and some problems in computational statistics
for any value of x in the interval (a, b). Chebyshev motivated his problem by an investigation of limit theorems in probability theory, with some related work done even earlier by Heine. It was…
Bibliography of the Book Matrix Computations
This bibliography is from the book Matrix Computations, Second Edition, by Gene H. Golub and Charles F. Van Loan, The Johns Hopkins University Press, Baltimore, Maryland 21218, 1989. The original…
Residual and Backward Error Bounds in Minimum Residual Krylov Subspace Methods
TLDR
Upper and lower bounds on the residual norm are derived in terms of the total least squares (TLS) correction of the corresponding scaled TLS problem.
Numerical Equivalences among Krylov Subspace Algorithms for Skew-Symmetric Matrices
TLDR
The numerical equivalence of Lanczos tridiagonalization and Golub–Kahan bidiagonalization for any real skew-symmetric matrix $A$ is shown, and these last two numerical equivalences add to the theoretical equivalences in the work by Eisenstat.
Large-Scale Numerical Optimization (Instructor: Michael Saunders, Spring 2019), Notes 10: MINOS Part 1: The Reduced-Gradient Method, 1 Origins
  • 2019
where φ(x) is a smooth function (ideally involving only some of the variables), and A is a sparse m×n matrix as in a typical LO problem. The gradients of φ(x) were assumed to be available, but no use…
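For orientation, the linearly constrained problem being referred to is commonly stated in roughly the following form (a sketch of the standard reduced-gradient setting, not a quote from the notes):

    \min_{x \in \mathbb{R}^n} \; \varphi(x)
    \quad \text{subject to} \quad
    Ax = b, \qquad \ell \le x \le u,

with A sparse and m×n; the reduced-gradient method then optimizes over a small set of superbasic variables while the basic variables keep Ax = b satisfied.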
Simultaneous Analysis and Design in PDE-Constrained Optimization (Ph.D. dissertation, Institute for Computational and Mathematical Engineering, Stanford University)
TLDR
New methods for solving certain types of PDE-constrained optimization problems are presented, to augment state-of-the-art PDE methods, aid stability, and facilitate computing the solution of the linear systems that arise within the algorithm.
Estimation of resolution and covariance for large matrix inversions
Key advantages of conjugate gradient (CG) methods are that they require far less computer memory than full singular value decomposition (SVD), and that iteration may be stopped at any time to…
Implementing Cholesky factorization for interior point methods of linear programming
Every iteration of an interior point method of large-scale linear programming requires computing at least one orthogonal projection of the objective function gradient onto the null space of a linear…
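As a rough illustration of that projection step (not the paper's implementation; the dense Cholesky below stands in for the sparse factorization an interior-point code would actually use), for a full-row-rank A the projection of a gradient g onto null(A) is g − Aᵀ(AAᵀ)⁻¹Ag:

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def project_onto_nullspace(A, g):
        """Orthogonal projection of g onto null(A): Pg = g - A.T (A A.T)^{-1} A g."""
        AAt = A @ A.T                     # normal-equations matrix, assumed SPD
        c, lower = cho_factor(AAt)        # Cholesky factorization of A A^T
        y = cho_solve((c, lower), A @ g)  # solve (A A^T) y = A g
        return g - A.T @ y

    # tiny usage example with a 2x3 full-row-rank A
    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 1.0]])
    g = np.array([1.0, 1.0, 1.0])
    p = project_onto_nullspace(A, g)
    print(np.allclose(A @ p, 0.0))        # projected vector lies in null(A)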
Numerical aspects of solving linear least squares problems (Chapter 9)
  • J. Barlow
  • Mathematics, Computer Science
  • Computational Statistics
  • 1993
TLDR
This chapter explains some matrix computations that are common in statistics, including the solution of partial differential equations and networking problems, and two methods used for analyzing rounding errors.
On Updating Preconditioners for the Iterative Solution of Linear Systems
TLDR
The main contribution of this thesis is the development of a technique for updating preconditioners by bordering, which consists of computing an approximate decomposition for an equivalent augmented linear system that is used as a preconditioner for the original problem.

References

Showing 1–10 of 11 references
Algorithm 539: Basic Linear Algebra Subprograms for Fortran Usage [F1]
TLDR
This work was supported by the National Aeronautics and Space Administration under Contract NAS 7-100 and by the Office of Naval Research under Contract NR 044-457.
LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
TLDR
Numerical tests are described comparing LSQR with several other conjugate-gradient algorithms, indicating that LSQR is the most reliable algorithm when A is ill-conditioned.
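For readers who want to experiment with LSQR without the original Fortran, SciPy ships an implementation; a minimal usage sketch with random sparse data (names and tolerances here are SciPy's defaults, not the paper's):

    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(0)
    A = sparse_random(100, 30, density=0.05, random_state=0, format="csr")  # sparse 100x30
    b = rng.standard_normal(100)

    # Solve the least-squares problem min ||Ax - b||_2 with LSQR.
    x, istop, itn, r1norm = lsqr(A, b)[:4]
    print(istop, itn, r1norm)   # stopping reason, iteration count, final residual norm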
Methods of conjugate gradients for solving linear systems
An iterative algorithm is given for solving a system Ax = k of n linear equations in n unknowns. The solution is given in n steps. It is shown that this method is a special case of a very general…
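A minimal NumPy sketch of the Hestenes–Stiefel iteration for a symmetric positive definite system Ax = b (the tolerance and the small test system are purely illustrative):

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, maxiter=None):
        """Basic CG for symmetric positive definite A (illustrative sketch)."""
        n = len(b)
        maxiter = maxiter or n            # exact arithmetic terminates in at most n steps
        x = np.zeros(n)
        r = b - A @ x                     # residual
        p = r.copy()                      # search direction
        rs = r @ r
        for _ in range(maxiter):
            Ap = A @ p
            alpha = rs / (p @ Ap)         # step length
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p     # conjugate direction update
            rs = rs_new
        return x

    # usage: small SPD system with known answer [1/11, 7/11]
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))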
Algorithms for the regularization of ill-conditioned least squares problems
Two regularization methods for ill-conditioned least squares problems are studied from the point of view of numerical efficiency. The regularization methods are formulated as quadratically…
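A concrete special case of such regularization is the damped least-squares problem min ‖Ax − b‖² + λ²‖x‖², which can be solved either by stacking [A; λI] or, conveniently given this page's topic, through LSQR's damp parameter; a sketch with an arbitrarily chosen λ:

    import numpy as np
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 20))
    b = rng.standard_normal(50)
    lam = 0.1                                    # regularization parameter (illustrative)

    # Explicitly stacked formulation: min || [A; lam*I] x - [b; 0] ||_2
    A_aug = np.vstack([A, lam * np.eye(20)])
    b_aug = np.concatenate([b, np.zeros(20)])
    x_stacked, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

    # Equivalent damped problem via LSQR's built-in damping term.
    x_damped = lsqr(A, b, damp=lam, atol=1e-12, btol=1e-12)[0]

    print(np.max(np.abs(x_stacked - x_damped)))  # the two solutions should agree closely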
Basic Linear Algebra Subprograms for Fortran Usage
TLDR
A package of 38 low-level subprograms for many of the basic operations of numerical linear algebra is presented, intended to be used with FORTRAN.
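These Level-1 routines remain accessible from Python through SciPy's thin BLAS wrappers; a small sketch using a few of them (the wrapper names follow the Fortran originals, though keyword handling may vary by SciPy version):

    import numpy as np
    from scipy.linalg.blas import daxpy, ddot, dnrm2

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([4.0, 5.0, 6.0])

    y = daxpy(x, y, a=2.0)      # y <- 2*x + y, the classic AXPY update -> [6, 9, 12]
    print(ddot(x, y))           # inner product x^T y
    print(dnrm2(x))             # Euclidean norm of x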
A Critique of Some Ridge Regression Methods
Ridge estimates seem motivated by a belief that least squares estimates tend to be too large, particularly when there is multicollinearity. The ridge solution is to supplement the data by…
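For reference, the ridge estimator under discussion, together with the data-augmentation reading of "supplement the data by" extra rows, can be written as

    \hat{\beta}_{\text{ridge}}(k)
      = (X^T X + k I)^{-1} X^T y
      = \arg\min_{\beta}
        \left\| \begin{pmatrix} y \\ 0 \end{pmatrix}
              - \begin{pmatrix} X \\ \sqrt{k}\, I \end{pmatrix} \beta \right\|_2^2,

i.e. k^{1/2} I rows of pseudo-observations with zero responses appended to (X, y).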
The Collinearity Problem in Linear Regression. The Partial Least Squares (PLS) Approach to Generalized Inverses
The use of partial least squares (PLS) for handling collinearities among the independent variables X in multiple regression is discussed. Consecutive estimates $(\text{rank } 1, 2, \cdots)$ are…
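A rough NIPALS-style sketch of those consecutive rank-1, 2, … estimates for a single centered response (illustrative only; the paper's exact algorithm and normalizations may differ):

    import numpy as np

    def pls1_coefficients(X, y, n_components=2):
        """Regression coefficients of the rank-n_components PLS1 estimate (sketch).

        Assumes X and y are already centered; deflation follows the usual
        NIPALS recursion for a single response.
        """
        X, y = X.copy(), y.copy()
        W, P, q = [], [], []
        for _ in range(n_components):
            w = X.T @ y
            w /= np.linalg.norm(w)         # weight vector
            t = X @ w                      # score vector
            tt = t @ t
            p = X.T @ t / tt               # X loading
            c = (y @ t) / tt               # y loading
            X -= np.outer(t, p)            # deflate X and y
            y -= c * t
            W.append(w); P.append(p); q.append(c)
        W, P, q = np.array(W).T, np.array(P).T, np.array(q)
        return W @ np.linalg.solve(P.T @ W, q)

    # usage: rank-2 estimate on synthetic data
    rng = np.random.default_rng(2)
    X = rng.standard_normal((30, 5))
    y = X @ np.array([1.0, 0.0, 2.0, 0.0, 0.0]) + 0.01 * rng.standard_normal(30)
    print(pls1_coefficients(X - X.mean(0), y - y.mean(), n_components=2))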
A Practical Examination of Some Numerical Methods for Linear Discrete Ill-Posed Problems
Four well-known methods for the numerical solution of linear discrete ill-posed problems are investigated from a common point of view: namely, the type of algebraic expansion generated for the solu...
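One of the expansions typically compared in such studies is the truncated-SVD solution; a plain NumPy sketch (the truncation level k is chosen by hand here, purely for illustration):

    import numpy as np

    def tsvd_solution(A, b, k):
        """Truncated-SVD solution x_k = sum_{i<=k} (u_i^T b / s_i) v_i (illustrative)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        coeffs = (U[:, :k].T @ b) / s[:k]
        return Vt[:k].T @ coeffs

    # usage on a mildly ill-conditioned Vandermonde system
    A = np.vander(np.linspace(0.0, 1.0, 8), 6, increasing=True)
    x_true = np.ones(6)
    b = A @ x_true
    print(tsvd_solution(A, b, k=4))   # rank-4 regularized approximation to x_true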
A bidiagonalization algorithm for solving ill-posed systems of linear equations, Rep. LITH-MAT-R-80-33
  • 1980
Solving Least Squares Problems
  • N.J.
  • 1974