# Randomized Approximation of the Gram Matrix: Exact Computation and Probabilistic Bounds

@article{Holodnak2015RandomizedAO, title={Randomized Approximation of the Gram Matrix: Exact Computation and Probabilistic Bounds}, author={John T. Holodnak and Ilse C. F. Ipsen}, journal={SIAM J. Matrix Anal. Appl.}, year={2015}, volume={36}, pages={110-137} }

Given a real matrix A with n columns, the problem is to approximate the Gram product AA^T by c ≪ n weighted outer products of columns of A. Necessary and sufficient conditions for the exact computation of AA^T from c ≥ rank(A) columns depend on the right singular vector matrix of A. For a Monte Carlo matrix multiplication algorithm by Drineas et al. that samples outer products, we present probabilistic bounds for the two-norm relative error due to randomization. The bounds depend on the stable rank or the rank of A, but not on the matrix dimensions. Numerical experiments illustrate that the bounds are informative…
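The sampling scheme described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function name `sample_gram` is hypothetical, and the choice of column probabilities proportional to squared norms (the "optimal" probabilities in Drineas et al.) is an assumption made here for concreteness.

```python
import numpy as np

def sample_gram(A, c, seed=None):
    """Approximate A @ A.T by a sum of c weighted outer products of columns of A.

    Columns are drawn i.i.d. with replacement, with probabilities proportional
    to their squared Euclidean norms, and rescaled so the estimator is unbiased.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    p = np.sum(A * A, axis=0)
    p = p / p.sum()                      # p_j = ||a_j||^2 / ||A||_F^2
    idx = rng.choice(n, size=c, p=p)     # sample c column indices
    S = A[:, idx] / np.sqrt(c * p[idx])  # scale each sampled column
    return S @ S.T                       # = sum_t a_{j_t} a_{j_t}^T / (c p_{j_t})

# Relative two-norm error due to randomization, as studied in the paper:
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 2000))
X = sample_gram(A, c=500, seed=1)
err = np.linalg.norm(X - A @ A.T, 2) / np.linalg.norm(A @ A.T, 2)
```

Because the sampled columns are rescaled by 1/sqrt(c p_j), the estimator's expectation equals AA^T exactly; the paper's bounds control how far a single realization deviates in the two-norm, in terms of the stable rank of A rather than its dimensions.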

## 38 Citations

### A Probabilistic Subspace Bound with Application to Active Subspaces

- Computer Science, Mathematics · SIAM J. Matrix Anal. Appl.
- 2018

This work presents a bound on the number of samples so that with high probability the angle between the dominant subspaces of E and S is less than a user-specified tolerance, and suggests that Monte Carlo sampling can be efficient in the presence of many parameters, as long as the underlying function f is sufficiently smooth.

### Low-Rank Matrix Approximations Do Not Need a Singular Value Gap

- Mathematics
- 2019

Low-rank approximations to a real matrix A can be computed from ZZ^T A, where Z is a matrix with orthonormal columns, and the accuracy of the approximation can be estimated from some norm of…

### Low-Rank Matrix Approximations Do Not Need a Singular Value Gap

- Computer Science · SIAM J. Matrix Anal. Appl.
- 2019

It is shown that the low-rank approximation errors, in the two-norm, Frobenius norm, and more generally any Schatten p-norm, are insensitive to additive rank-preserving perturbations in the projector basis, and to matrix perturbations that are additive or change the number of columns.

### A Note on Random Sampling for Matrix Multiplication

- Computer Science, Mathematics · ArXiv
- 2018

This paper extends the framework of randomised matrix multiplication to a coarser partition and proposes an algorithm as a complement to the classical algorithm, especially when the optimal…

### Input Sparsity Time Low-rank Approximation via Ridge Leverage Score Sampling

- Computer Science · SODA
- 2017

We present a new algorithm for finding a near optimal low-rank approximation of a matrix $A$ in $O(nnz(A))$ time. Our method is based on a recursive sampling scheme for computing a representative…

### Sampling for Approximate Maximum Search in Factorized Tensor

- Computer Science · IJCAI
- 2017

A sampling-based approach for finding the top entries of a tensor which is decomposed by the CANDECOMP/PARAFAC model is proposed and an algorithm to sample the entries with probabilities proportional to their values is developed.

### Sub-sampled Newton methods

- Computer Science, Mathematics · Math. Program.
- 2019

For large-scale finite-sum minimization problems, we study non-asymptotic and high-probability global as well as local convergence properties of variants of Newton’s method where the Hessian and/or…

### Accuracy of Response Surfaces over Active Subspaces Computed with Random Sampling

- Computer Science
- 2015

A randomized algorithm for determining k and computing an orthonormal basis for the active subspace is presented, and a tighter probabilistic bound on the number of samples required for approximating the active subspace to a user-specified accuracy is derived.

### Approximating matrices and convex bodies through Kadison-Singer

- Mathematics
- 2016

We show that any $n\times m$ matrix $A$ can be approximated in operator norm by a submatrix whose number of columns is of the order of the stable rank of $A$. This improves on existing results by removing an…

### Coreset Construction via Randomized Matrix Multiplication

- Computer Science · ArXiv
- 2017

This structural result implies a simple, randomized algorithm that constructs coresets whose sizes are independent of the number and dimensionality of the input points, and yields an improvement over the state-of-the-art deterministic approach.

## References

Showing 1–10 of 52 references

### Fast Monte-Carlo algorithms for approximate matrix multiplication

- Computer Science · Proceedings 2001 IEEE International Conference on Cluster Computing
- 2001

Given an m × n matrix A and an n × p matrix B, we present two simple and intuitive algorithms to compute an approximation P to the product AB, with provable bounds for the norm of the "error matrix"…

### Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

- Computer Science · SIAM Rev.
- 2011

This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation, and presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions.

### On sparse representations of linear operators and the approximation of matrix products

- Computer Science · 2008 42nd Annual Conference on Information Sciences and Systems
- 2008

This paper represents a linear operator by a sum of rank-one operators, and shows how a sparse representation that guarantees a low approximation error for the product can be obtained from analyzing an induced quadratic form.

### Tail inequalities for sums of random matrices that depend on the intrinsic dimension

- Mathematics
- 2012

This work provides exponential tail inequalities for sums of random matrices that depend only on intrinsic dimensions rather than explicit matrix dimensions. These tail inequalities are similar to…

### Exact Matrix Completion via Convex Optimization

- Computer Science, Mathematics · Found. Comput. Math.
- 2009

It is proved that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries, and that objects other than signals and images can be perfectly reconstructed from very limited information.

### Topics in Matrix Sampling Algorithms

- Computer Science · ArXiv
- 2011

Improved algorithms for Low-rank Matrix Approximation and Regression and algorithms for a new problem domain ( K-means Clustering) are presented.

### User-Friendly Tail Bounds for Sums of Random Matrices

- Mathematics · Found. Comput. Math.
- 2012

This paper presents new probability inequalities for sums of independent, random, self-adjoint matrices and provides noncommutative generalizations of the classical bounds associated with the names Azuma, Bennett, Bernstein, Chernoff, Hoeffding, and McDiarmid.
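As context for this entry, a representative bound from this family is the matrix Bernstein inequality; the statement below is a sketch from the literature (constants as commonly quoted, not copied from this paper). For independent self-adjoint $d \times d$ random matrices $X_k$ with $\mathbb{E}X_k = 0$, $\lambda_{\max}(X_k) \le L$ almost surely, and $\sigma^2 = \big\| \sum_k \mathbb{E}X_k^2 \big\|$:

```latex
\Pr\!\left[\lambda_{\max}\!\Big(\sum_k X_k\Big) \ge t\right]
\;\le\; d \cdot \exp\!\left(\frac{-t^2/2}{\sigma^2 + Lt/3}\right)
```

Bounds of this shape are what make the randomized Gram-matrix error analysis above dimension-light: the failure probability enters only through a logarithm of $d$, and in intrinsic-dimension variants that factor is replaced by a stable-rank-type quantity.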

### Relative-Error CUR Matrix Decompositions

- Computer Science, Mathematics · SIAM J. Matrix Anal. Appl.
- 2008

These two algorithms are the first polynomial time algorithms for such low-rank matrix approximations that come with relative-error guarantees; previously, in some cases, it was not even known whether such matrix decompositions exist.

### The Effect of Coherence on Sampling from Matrices with Orthonormal Columns, and Preconditioned Least Squares Problems

- Computer Science, Mathematics · SIAM J. Matrix Anal. Appl.
- 2014

A bound on the condition number of the sampled matrices in terms of the coherence $\mu$ of $Q$ is derived, which implies a, not necessarily tight, lower bound of $\mathcal{O}(m\mu\ln{n})$ for the number of sampled rows.

### The spectral norm error of the naive Nystrom extension

- Computer Science, Mathematics · ArXiv
- 2011

This paper provides the first relative-error bound on the spectral norm error incurred in this process, which follows from a natural connection between the Nystrom extension and the column subset selection problem.