# GP-HMAT: Scalable, $O(n\log(n))$ Gaussian Process Regression with Hierarchical Low-Rank Matrices

    @inproceedings{Keshavarzzadeh2021GPHMATS,
      title  = {{GP-HMAT}: Scalable, ${O}(n\log(n))$ Gaussian Process Regression with Hierarchical Low-Rank Matrices},
      author = {Vahid Keshavarzzadeh and Shandian Zhe and Robert M. Kirby and Akil C. Narayan},
      year   = {2021}
    }

A Gaussian process (GP) is a powerful and widely used regression technique. The main building block of GP regression is the covariance kernel, which characterizes the relationship between pairs of points in the random field. Finding the optimal kernel, however, requires several large-scale and often unstructured matrix inversions. We tackle this challenge by introducing a hierarchical matrix approach, named HMAT, which effectively decomposes the matrix structure in a recursive manner…
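To make the bottleneck concrete, the sketch below shows standard GP regression with a dense kernel solve; the `np.linalg.solve` call costs $O(n^3)$, which is exactly the step that hierarchical-matrix methods such as the paper's approach accelerate. This is a minimal illustration with assumed kernel hyperparameters, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel; parameter names here are illustrative.
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

noise = 1e-2
K = rbf_kernel(X, X) + noise * np.eye(len(X))  # dense n x n covariance matrix
alpha = np.linalg.solve(K, y)                  # O(n^3) solve: the scaling bottleneck

Xstar = np.array([[0.5]])
mean = rbf_kernel(Xstar, X) @ alpha            # posterior predictive mean at x* = 0.5
```

Every evaluation of the marginal likelihood during kernel-hyperparameter optimization repeats a solve of this form, which is why reducing it to quasilinear cost matters.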


## References

Showing 1-10 of 54 references

### Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

- Computer Science, SIAM Rev.
- 2011

This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation, and presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions.
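The modular framework described above pairs a randomized range finder with a deterministic post-processing step. Below is a minimal sketch of that idea, assuming a synthetic matrix with fast singular-value decay and a hypothetical sketch size `k`:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic symmetric matrix with rapidly decaying spectrum (numerically low rank).
U = np.linalg.qr(rng.standard_normal((500, 500)))[0]
s = np.exp(-np.arange(500) / 3.0)
A = (U * s) @ U.T

k = 30                                   # target rank plus oversampling
Omega = rng.standard_normal((500, k))    # random test matrix
Q, _ = np.linalg.qr(A @ Omega)           # orthonormal basis for the (approximate) range of A
B = Q.T @ A                              # small k x n factor
A_approx = Q @ B                         # rank-k approximation A ≈ Q Q^T A

err = np.linalg.norm(A - A_approx) / np.linalg.norm(A)
```

The two-stage structure (randomized range finding, then small dense factorization) is the pattern this line of work makes precise with probabilistic error bounds.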

### Scalable Gaussian Process Computations Using Hierarchical Matrices

- Computer Science, Mathematics, Journal of Computational and Graphical Statistics
- 2019

A kernel-independent method that applies hierarchical matrices to maximum likelihood estimation for Gaussian processes; it provides natural, scalable stochastic estimators of the gradient, the Hessian, and the expected Fisher information matrix, all computable in quasilinear complexity for a wide range of models.
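Stochastic estimators of this kind are commonly built on Hutchinson-style trace estimation, which replaces an explicit inverse with a handful of linear solves. A minimal sketch (an assumed setup, not this paper's implementation) estimating $\operatorname{tr}(K^{-1})$:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)          # SPD stand-in for a covariance matrix

# Hutchinson estimator: tr(K^{-1}) ≈ (1/p) Σ_i z_i^T K^{-1} z_i,
# requiring only linear solves with K, never an explicit inverse.
p = 2000
Z = rng.choice([-1.0, 1.0], size=(n, p))   # Rademacher probe vectors
est = np.mean(np.sum(Z * np.linalg.solve(K, Z), axis=0))

exact = np.trace(np.linalg.inv(K))         # direct computation, for comparison only
```

When the solves with $K$ are carried out by a hierarchical-matrix factorization, each probe costs quasilinear time, which is what makes gradient and Fisher-information estimates scalable.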

### Sparse Gaussian Processes using Pseudo-inputs

- Computer Science, NIPS
- 2005

It is shown that this new Gaussian process (GP) regression model can match full GP performance with a small number of pseudo-inputs $M$, i.e., very sparse solutions, and that it significantly outperforms other approaches in this regime.

### On the Nyström Method for Approximating a Gram Matrix for Improved Kernel-Based Learning

- Computer Science, Mathematics, J. Mach. Learn. Res.
- 2005

Presents an algorithm to compute an easily interpretable low-rank approximation to an $n \times n$ Gram matrix $G$, so that computations of interest can be performed more rapidly.
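The Nyström method builds such an approximation from a subset of the Gram matrix's columns. A minimal sketch, assuming an RBF kernel on synthetic data and a hypothetical landmark count of 60 (the full matrix is formed here only to measure the error):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 2))

def rbf(X1, X2):
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-0.5 * d2)

G = rbf(X, X)                                  # full n x n Gram matrix (error check only)
idx = rng.choice(300, size=60, replace=False)  # sampled landmark indices
C = G[:, idx]                                  # n x m column block
W = G[np.ix_(idx, idx)]                        # m x m intersection block
G_nys = C @ np.linalg.pinv(W) @ C.T            # Nyström approximation G ≈ C W^+ C^T

err = np.linalg.norm(G - G_nys) / np.linalg.norm(G)
```

In practice only `C` and `W` are ever formed, so the cost is $O(nm^2)$ instead of $O(n^2)$ storage and $O(n^3)$ algebra.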

### An O(N log N) Fast Direct Solver for Partial Hierarchically Semi-Separable Matrices - With Application to Radial Basis Function Interpolation

- Computer Science, J. Sci. Comput.
- 2013

Describes a fast direct solver for partial hierarchically semiseparable systems, combining recursion, efficient low-rank factorization via Chebyshev interpolation, and the Sherman-Morrison-Woodbury formula.
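The Sherman-Morrison-Woodbury formula is the workhorse that lets such solvers invert a low-rank correction cheaply: for $(D + UV)x = b$ it reduces the work to solves with $D$ plus a small $k \times k$ system. A minimal sketch with an assumed diagonal $D$ and a low-rank update, checked against a direct dense solve:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 400, 5
d = rng.uniform(1.0, 2.0, n)               # diagonal of the easily invertible part D
U = 0.1 * rng.standard_normal((n, k))      # low-rank factors (scaled to keep the
V = 0.1 * rng.standard_normal((k, n))      # system comfortably well conditioned)
b = rng.standard_normal(n)

# Woodbury: (D + U V)^{-1} b = D^{-1} b - D^{-1} U (I + V D^{-1} U)^{-1} V D^{-1} b
Dinv_b = b / d
Dinv_U = U / d[:, None]
S = np.eye(k) + V @ Dinv_U                 # small k x k capacitance matrix
x = Dinv_b - Dinv_U @ np.linalg.solve(S, V @ Dinv_b)

x_direct = np.linalg.solve(np.diag(d) + U @ V, b)  # dense reference solve
```

Only the $k \times k$ capacitance system is solved densely; everything else is elementwise or tall-skinny, which is what keeps hierarchical solvers near linear in $N$.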

### Robust Approximate Cholesky Factorization of Rank-Structured Symmetric Positive Definite Matrices

- Computer Science, SIAM J. Matrix Anal. Appl.
- 2010

The hierarchical compression scheme in this work is useful in developing further HSS algorithms and, combined with sparse matrix techniques, can provide effective structured preconditioners for large sparse problems.

### Linear-Cost Covariance Functions for Gaussian Random Fields

- Computer Science
- 2017

Proposes a construction of covariance functions that yield matrices with hierarchical structure, enabling matrix algorithms that scale linearly in the matrix dimension and that prove efficient for a variety of random field computations, including sampling, kriging, and likelihood evaluation.

### Rates of Convergence for Sparse Variational Gaussian Process Regression

- Computer Science, ICML
- 2019

The results show that as datasets grow, Gaussian process posteriors can truly be approximated cheaply, and provide a concrete rule for how to increase the number of inducing variables $M$ in continual learning scenarios.

### Fast algorithms for hierarchically semiseparable matrices

- Computer Science, Mathematics, Numer. Linear Algebra Appl.
- 2010

This paper generalizes the hierarchically semiseparable (HSS) matrix representations and proposes some fast algorithms for HSS matrices that are useful in developing fast structured numerical methods for large discretized PDEs, integral equations, eigenvalue problems, etc.

### A fast randomized algorithm for the approximation of matrices

- Mathematics, Computer Science
- 2007