The problem of nonlinear dimensionality reduction is considered. We focus on problems where prior information is available, namely, semi-supervised dimensionality reduction. It is shown that basic nonlinear dimensionality reduction algorithms, such as Locally Linear Embedding (LLE), Isometric feature mapping (ISOMAP), and Local Tangent Space Alignment …
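A minimal sketch of one common way prior information can enter an LLE-style embedding: a few points are given target low-dimensional coordinates, and the remaining coordinates are obtained from the usual LLE alignment matrix by a constrained linear solve. The function name, the partitioned-system formulation, and the assumption that the alignment matrix M = (I - W)^T (I - W) is already available are illustrative, not necessarily the paper's exact algorithm.

```python
import numpy as np

def semi_supervised_lle(M, known_idx, Y_known):
    """Semi-supervised LLE-style embedding (illustrative sketch).

    M         : (n x n) LLE alignment matrix (I - W)^T (I - W)
    known_idx : indices of points with prescribed coordinates
    Y_known   : (k x d) prescribed low-dimensional coordinates

    The free coordinates minimize the LLE objective subject to the
    constraints, which reduces to one linear solve per dimension.
    """
    n = M.shape[0]
    unknown_idx = np.setdiff1d(np.arange(n), known_idx)
    M_uu = M[np.ix_(unknown_idx, unknown_idx)]   # assumed nonsingular
    M_uk = M[np.ix_(unknown_idx, known_idx)]
    Y = np.empty((n, Y_known.shape[1]))
    Y[known_idx] = Y_known
    Y[unknown_idx] = np.linalg.solve(M_uu, -M_uk @ Y_known)
    return Y
```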
This paper studies the solution of the linear least squares problem for a large and sparse m by n matrix A with m ≥ n by QR factorization of A and transformation of the right-hand side vector b to Q^T b. A multifrontal-based method for computing Q^T b using Householder factorization is presented. A theoretical operation count for the K by K unbordered grid …
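A dense analogue of the computation described above, as a sketch: factor A = QR by Householder QR, transform the right-hand side to Q^T b, and back-substitute. The multifrontal sparse machinery of the paper is not shown; the numpy/scipy calls are used purely for illustration.

```python
import numpy as np
from scipy.linalg import solve_triangular

def lstsq_via_qr(A, b):
    """Solve min ||Ax - b||_2 for A (m x n, m >= n) via QR:
    factor A = QR, form Q^T b, then back-substitute R x = Q^T b."""
    Q, R = np.linalg.qr(A, mode='reduced')
    qtb = Q.T @ b                      # transform the right-hand side
    return solve_triangular(R, qtb)    # R is upper triangular

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
b = rng.standard_normal(100)
x = lstsq_via_qr(A, b)
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```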
Two new algorithms for one-sided bidiagonalization are presented. The first is a block version that reduces execution time by improving cache utilization through the use of BLAS 2.5 operations and more BLAS 3 operations. The second is adapted to parallel computation. When incorporated into singular value decomposition software, the second algorithm is faster …
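As a rough illustration of bidiagonalization that touches A from one side only (through products A v and A^T u), here is a Golub–Kahan–Lanczos-style recurrence producing A V ≈ U B with B upper bidiagonal. This is a plain serial sketch; the blocked BLAS 2.5 formulation and the parallel variant described above are not reproduced.

```python
import numpy as np

def lanczos_bidiag(A, k):
    """Golub-Kahan-Lanczos style recurrence: orthonormal U, V and upper
    bidiagonal B with A V ~= U B, using only products with A and A^T."""
    m, n = A.shape
    U, V = np.zeros((m, k)), np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(max(k - 1, 0))
    v = np.random.default_rng(0).standard_normal(n)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        u = A @ V[:, j]
        if j > 0:
            u -= beta[j - 1] * U[:, j - 1]
        alpha[j] = np.linalg.norm(u)
        U[:, j] = u / alpha[j]
        if j < k - 1:
            w = A.T @ U[:, j] - alpha[j] * V[:, j]
            w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # reorthogonalize v's
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    B = np.diag(alpha) + np.diag(beta, 1)
    return U, B, V

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 12))
U, B, V = lanczos_bidiag(A, 12)
print(np.linalg.norm(A @ V - U @ B))   # small: A V = U B up to rounding
```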
An error analysis result is given for classical Gram–Schmidt factorization of a full rank matrix A into A = QR, where Q is left orthogonal (has orthonormal columns) and R is upper triangular. The work presented here shows that the computed R satisfies R^T R = A^T A + E, where E is an appropriately small backward error, but only if the diagonals of R are …
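For reference, a plain classical Gram–Schmidt sketch of the factorization being analyzed, with a check on the quantity R^T R - A^T A. Note that the abstract's qualification concerns how the diagonal entries of R are computed; this unmodified CGS (which takes r_jj as the norm of the orthogonalized column) is only an illustration of the setting, not the variant the error bound is stated for.

```python
import numpy as np

def classical_gram_schmidt(A):
    """Classical Gram-Schmidt QR: A = QR with Q left orthogonal (in exact
    arithmetic) and R upper triangular."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        R[:j, j] = Q[:, :j].T @ A[:, j]       # coefficients against previous q's
        w = A[:, j] - Q[:, :j] @ R[:j, j]     # orthogonalize column j
        R[j, j] = np.linalg.norm(w)
        Q[:, j] = w / R[j, j]
    return Q, R

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
Q, R = classical_gram_schmidt(A)
print(np.linalg.norm(Q.T @ Q - np.eye(10)))   # orthogonality of computed Q
print(np.linalg.norm(R.T @ R - A.T @ A))      # the backward error studied above
```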
Bidiagonal reduction is the preliminary stage for the fastest stable algorithms for computing the singular value decomposition. However, the best error bounds on bidiagonal reduction methods are of the form A + δA = U B V^T, ||δA||_2 ≤ ε_M f(n) ||A||_2, where B is bidiagonal, U and V are orthogonal, ε_M is machine precision, and f(n) is a modestly growing function of …
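A small numerical illustration of the backward-error form quoted above: reduce A to bidiagonal form with alternating left and right Householder reflections and measure ||A - U B V^T||_2 relative to ||A||_2. This is the textbook two-sided reduction written for clarity, not an optimized or paper-specific routine.

```python
import numpy as np

def householder_vec(x):
    """Return v, beta with (I - beta v v^T) x = -s e_1, s = sign(x[0]) ||x||."""
    s = np.copysign(np.linalg.norm(x), x[0] if x[0] != 0 else 1.0)
    v = x.astype(float).copy()
    v[0] += s
    beta = 0.0 if s == 0 else 2.0 / (v @ v)
    return v, beta

def bidiagonalize(A):
    """Reduce A (m x n, m >= n) to upper bidiagonal B with A = U B V^T by
    alternating left/right Householder reflections (textbook version)."""
    B = A.astype(float).copy()
    m, n = B.shape
    U, V = np.eye(m), np.eye(n)
    for k in range(n):
        # left reflection: zero column k below the diagonal
        v, beta = householder_vec(B[k:, k])
        H = np.eye(m - k) - beta * np.outer(v, v)
        B[k:, :] = H @ B[k:, :]
        U[:, k:] = U[:, k:] @ H
        if k < n - 2:
            # right reflection: zero row k beyond the superdiagonal
            v, beta = householder_vec(B[k, k + 1:])
            G = np.eye(n - k - 1) - beta * np.outer(v, v)
            B[:, k + 1:] = B[:, k + 1:] @ G
            V[:, k + 1:] = V[:, k + 1:] @ G
    return U, B, V

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 30))
U, B, V = bidiagonalize(A)
# backward error of the quoted form: a modest multiple of machine precision
print(np.linalg.norm(A - U @ B @ V.T, 2) / np.linalg.norm(A, 2))
```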