Singular value decomposition and least squares solutions

@article{Golub2007SingularVD,
  title={Singular value decomposition and least squares solutions},
  author={Gene H. Golub and Christian H. Reinsch},
  journal={Numerische Mathematik},
  year={1970},
  volume={14},
  pages={403--420}
}
Let A be a real m×n matrix with $m \geq n$. It is well known (cf. [4]) that
$$A = U \Sigma V^T \qquad (1)$$
where
$$U^T U = V^T V = V V^T = I_n \quad \text{and} \quad \Sigma = \operatorname{diag}(\sigma_1, \ldots, \sigma_n).$$
The matrix U consists of n orthonormalized eigenvectors associated with the n largest eigenvalues of $AA^T$, and the matrix V consists of the orthonormalized eigenvectors of $A^T A$. The diagonal elements of $\Sigma$ are the non-negative square roots of the eigenvalues of $A^T A$; they are called singular values.
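The factorization (1) is easy to exercise numerically. A minimal sketch, assuming NumPy (the paper itself supplies ALGOL procedures, not this code):

```python
# Verify decomposition (1) and the stated properties of U, V, and Sigma.
import numpy as np

m, n = 6, 4                      # m >= n, as the paper assumes
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))

# Economy-size SVD: U is m-by-n, V is n-by-n, sigma holds the n singular values.
U, sigma, Vt = np.linalg.svd(A, full_matrices=False)

# A = U diag(sigma) V^T, up to rounding error.
assert np.allclose(A, U @ np.diag(sigma) @ Vt)

# U^T U = V^T V = V V^T = I_n.
assert np.allclose(U.T @ U, np.eye(n))
assert np.allclose(Vt @ Vt.T, np.eye(n))

# Singular values = non-negative square roots of the eigenvalues of A^T A.
eigs = np.linalg.eigvalsh(A.T @ A)[::-1]   # descending order
assert np.allclose(sigma, np.sqrt(eigs))
```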
A simultaneous decomposition of four real quaternion matrices encompassing $\eta$-Hermicity and its applications
Let $\mathbb{H}$ be the real quaternion algebra and $\mathbb{H}^{m\times n}$ denote the set of all $m\times n$ matrices over $\mathbb{H}$. Let $\mathbf{i},\mathbf{j},\mathbf{k}$ be the imaginary units of $\mathbb{H}$…
An iterative method for computing a symplectic SVD-like decomposition
In this paper, we present an effective iterative method for computing a symplectic SVD-like decomposition of a 2n-by-m rectangular real matrix A. The main purpose here is a block-power iterative method…
Provable Approximations for Constrained $\ell_p$ Regression
The first provable constant-factor approximation algorithm that solves constrained $\ell_p$ regression directly, for every constant $p, d \geq 1$, using core-sets; its running time is $O(n \log n)$, including extensions for streaming and distributed (big) data.
Numerical methods for computing angles between linear subspaces
Experimental results are given which indicate that MGS gives $\theta_k$ with equal precision and fewer arithmetic operations than HT; however, HT gives principal vectors that are orthogonal to working accuracy, which is not in general true for MGS.
Fast and Accurate Least-Mean-Squares Solvers
An algorithm is suggested that takes a finite set of $n$ $d$-dimensional real vectors and returns a weighted subset of $d+1$ vectors whose sum is \emph{exactly} the same, based on a novel paradigm of fusion between different data-summarization techniques known as sketches and coresets.
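The exact-sum guarantee is Caratheodory's theorem made algorithmic. A sketch of the classical (slow) reduction that the paper accelerates, assuming NumPy; `caratheodory` is an illustrative name, not the paper's fast sketch/coreset-based solver:

```python
import numpy as np

def caratheodory(P, w):
    """P: (n, d) points, w: (n,) positive weights. Returns (P', w') with at
    most d+1 points and the same weighted sum (and the same total weight)."""
    P, w = P.copy(), w.copy()
    d = P.shape[1]
    while len(w) > d + 1:
        # Find v != 0 with sum_i v_i = 0 and sum_i v_i P_i = 0: a null vector
        # of the (d+1)-by-n matrix stacking the points and a row of ones.
        M = np.vstack([P.T, np.ones(len(w))])
        _, _, Vt = np.linalg.svd(M)
        v = Vt[-1]                       # null-space direction (n > d+1 guarantees one)
        # Subtract the largest multiple of v that keeps all weights non-negative;
        # the weighted sum is unchanged and at least one weight hits zero.
        pos = v > 1e-12
        alpha = np.min(w[pos] / v[pos])
        w = w - alpha * v
        keep = w > 1e-12
        P, w = P[keep], w[keep]
    return P, w

rng = np.random.default_rng(1)
P, w = rng.standard_normal((100, 3)), np.full(100, 1.0 / 100)
P2, w2 = caratheodory(P, w)
assert len(w2) <= 4 and np.allclose(w2 @ P2, w @ P)
```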
Product Eigenvalue Problems
The intent of this paper is to demonstrate that the product eigenvalue problem is a powerful unifying concept, and that the standard algorithms for solving such problems are instances of a generic $GR$ algorithm applied to a related cyclic matrix.
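A small numerical illustration of the cyclic-matrix link, assuming NumPy: the eigenvalues of a product $BA$ appear, squared, in the spectrum of the cyclic matrix $C = \begin{pmatrix} 0 & A \\ B & 0 \end{pmatrix}$, since $C^2 = \operatorname{diag}(AB, BA)$. (This only shows the connection; the paper's point is that $GR$-type algorithms can work on $C$ without ever forming the product.)

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))

C = np.block([[np.zeros((n, n)), A],
              [B, np.zeros((n, n))]])

print(np.sort_complex(np.linalg.eigvals(B @ A)))
print(np.sort_complex(np.linalg.eigvals(C) ** 2))  # each value appears twice
```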
Truncated Cauchy Non-Negative Matrix Factorization
A Truncated Cauchy NMF loss is proposed that handles outliers by truncating large errors, and is developed to robustly learn the subspace on noisy datasets contaminated by outliers.
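As a rough illustration only (assuming NumPy; `gamma` and the truncation point are illustrative parameters, and the paper's exact loss may differ in form), a truncated Cauchy loss caps the penalty assigned to large residuals:

```python
import numpy as np

def truncated_cauchy(residual, gamma=1.0, cutoff=3.0):
    # Cauchy loss grows only logarithmically, so single outliers cannot
    # dominate; truncation caps the loss beyond |residual| = cutoff * gamma.
    loss = np.log1p((residual / gamma) ** 2)
    return np.minimum(loss, np.log1p(cutoff ** 2))

print(truncated_cauchy(np.array([0.1, 1.0, 100.0])))  # the outlier is capped
```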
Computation of the Singular Value Decomposition
If $Av = \sigma u$ and $A^{*}u = \sigma v$ for unit vectors $u$ and $v$, then σ is a singular value of A, and u and v are corresponding left and right singular vectors, respectively. (For generality it is assumed that the matrices here are complex, although given these…
DEVELOPMENT OF ORTHOGONALITY OF SINGULAR VECTORS COMPUTED BY I-SVD ALGORITHM
Recently an $O(m^2)$ algorithm named Integrable Singular Value Decomposition (I-SVD) for bidiagonal singular value decomposition was developed, where m is the dimension size. The modified…
Faster Projective Clustering Approximation of Big Data
This work reduces the size of existing coresets by giving the first $O(\log(m))$ approximation for the case of clustering lines in $O(ndm)$ time, and proves that for sufficiently large $m$ the authors obtain a coreset for projective clustering.

References

Linear least squares solutions by Householder transformations
Let A be a given m×n real matrix with $m \geq n$ and of rank n, and let b be a given vector. We wish to determine a vector $\hat x$ such that $$\| b - A\hat x \| = \min,$$ where $\| \cdot \|$ indicates the Euclidean norm.
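A minimal modern sketch of the same computation, assuming NumPy (np.linalg.qr applies Householder reflections; the reference publishes the ALGOL procedures):

```python
import numpy as np

def lstsq_householder(A, b):
    # A = Q R with Q^T Q = I; then ||b - A x|| is minimized by R x = Q^T b.
    Q, R = np.linalg.qr(A)          # economy size: Q is m-by-n, R is n-by-n
    return np.linalg.solve(R, Q.T @ b)

rng = np.random.default_rng(3)
A, b = rng.standard_normal((10, 4)), rng.standard_normal(10)
x = lstsq_householder(A, b)
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```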
Calculating the singular values and pseudo-inverse of a matrix
  • G. Golub, W. Kahan
  • Milestones in Matrix Computation
  • 2007
The use of the pseudo-inverse $A^I = V\Sigma^I U^*$ to solve least squares problems in a way which dampens spurious oscillation and cancellation is mentioned.
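A sketch of that damped solve in NumPy terms (the relative cutoff `tol` is an assumption of this sketch, not a value from the reference):

```python
import numpy as np

def pinv_solve(A, b, tol=1e-10):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Invert only singular values above the cutoff; zeroing the rest dampens
    # the spurious oscillation and cancellation that tiny sigma_i amplify.
    s_inv = np.where(s > tol * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))
```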
Solution of linear equations by diagonalization of coefficients matrix
This form is very convenient, since the multiplication of matrices is performed by an electronic computer in almost no time. The unitary matrices U and T are known to exist (for example, modal matrices…
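A sketch of the idea in modern terms, assuming NumPy: once $A = U\Sigma T^H$ with U and T unitary, the system $Ax = b$ collapses to a diagonal one:

```python
import numpy as np

rng = np.random.default_rng(4)
A, b = rng.standard_normal((5, 5)), rng.standard_normal(5)

U, s, Th = np.linalg.svd(A)               # A = U diag(s) Th, with Th = T^H
x = Th.conj().T @ ((U.conj().T @ b) / s)  # solve via the diagonal system
assert np.allclose(A @ x, b)
```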
The cyclic Jacobi method for computing the principal values of a complex matrix
is diagonal ($T$ denotes the transpose), then the main diagonal of A is made up of the numbers $\lambda_i$ in some order. If it is desired to compute the $\lambda_i$ numerically, this result is of no immediate use…
On some algorithms for the solution of the complete eigenvalue problem
In this article we give the basis and details of algorithms used in the solution of the complete eigenvalue problem of a matrix, already briefly explained in a note by the author [1], and…
On the Stationary Values of a Second-Degree Polynomial on the Unit Sphere
($H$ denotes complex conjugate transpose) is a real number. C. R. Rao of the Indian Statistical Institute, Calcutta, suggested to us the problem of maximizing (or minimizing) $\phi(x)$ for complex x on the unit sphere…
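In the special case of a pure quadratic, the stationary values of $x^H A x$ on the unit sphere are just the eigenvalues of the Hermitian A, attained at its eigenvectors; a small NumPy check of that special case (an illustration, not the reference's general algorithm):

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2                      # Hermitian, so x^H A x is real

eigvals, eigvecs = np.linalg.eigh(A)
x = eigvecs[:, -1]                            # unit eigenvector, largest eigenvalue
print((x.conj() @ A @ x).real, eigvals[-1])   # the maximum equals that eigenvalue
```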
Householder's tridiagonalization of a symmetric matrix
In an early paper in this series [4] Householder's algorithm for the tridiagonalization of a real symmetric matrix was discussed. In the light of experience gained since its publication, and in view…