Fast Direct Methods for Gaussian Processes

@article{Ambikasaran2016FastDM,
  title={Fast Direct Methods for Gaussian Processes},
  author={Sivaram Ambikasaran and Daniel Foreman-Mackey and Leslie Greengard and David W. Hogg and Michael O'Neil},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2016},
  volume={38},
  pages={252--265}
}
A number of problems in probability and statistics can be addressed using the multivariate normal (Gaussian) distribution. In the one-dimensional case, computing the probability for a given mean and variance simply requires the evaluation of the corresponding Gaussian density. In the n-dimensional setting, however, it requires the inversion of an $n \times n$ covariance matrix, $C$, as well as the evaluation of its determinant, $\det(C)$. In many cases, such as regression using Gaussian processes, the…
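
As a concrete baseline, here is a minimal sketch of the dense computation that fast direct methods set out to accelerate: a single Cholesky factorization of $C$ yields both the linear solve and the log-determinant. The squared-exponential kernel and all names are illustrative, not the paper's code.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gp_log_likelihood(x, y, variance=1.0, lengthscale=1.0, noise=1e-2):
    """Dense O(n^3) Gaussian log-likelihood; illustrative kernel only."""
    d = x[:, None] - x[None, :]                    # pairwise differences
    C = variance * np.exp(-0.5 * (d / lengthscale) ** 2)
    C += noise * np.eye(len(x))                    # nugget for conditioning
    L = cho_factor(C, lower=True)                  # the O(n^3) step
    alpha = cho_solve(L, y)                        # C^{-1} y
    log_det = 2.0 * np.sum(np.log(np.diag(L[0])))  # log det(C) from Cholesky
    return -0.5 * (y @ alpha + log_det + len(x) * np.log(2 * np.pi))
```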

Gaussian Process Regression

This work provides formulae for the error of discretizations, the condition number of the Gaussian process covariance matrix, convergence rates of iterative methods, and a general formula for the accuracy of the posterior mean at the data points for approximate methods.

Scalable Gaussian Process Computations Using Hierarchical Matrices

A kernel-independent method that applies hierarchical matrices to maximum likelihood estimation for Gaussian processes, providing natural and scalable stochastic estimators of the gradient, the Hessian, and the expected Fisher information matrix, all computable in quasilinear complexity for a large range of models.
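
A standard ingredient behind such estimators is Hutchinson trace estimation, which needs only matrix-vector products with the covariance and its derivative. The sketch below is a generic illustration under our own naming; the matvec callables, the CG solver, and the probe count are assumptions, not this paper's interface.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def stochastic_score(matvec_C, matvec_dC, y, num_probes=20, seed=0):
    """Unbiased estimate of d(log-likelihood)/d(theta): the trace term
    tr(C^{-1} dC) is estimated by Hutchinson probing, E[z^T C^{-1} dC z]
    with Rademacher probes z, using only matvecs and iterative solves."""
    rng = np.random.default_rng(seed)
    n = len(y)
    C = LinearOperator((n, n), matvec=matvec_C, dtype=float)
    solve = lambda b: cg(C, b)[0]                # iterative C^{-1} b
    alpha = solve(y)
    data_term = 0.5 * alpha @ matvec_dC(alpha)   # 0.5 y^T C^{-1} dC C^{-1} y
    trace = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)      # Rademacher probe
        trace += z @ solve(matvec_dC(z))
    return data_term - 0.5 * trace / num_probes
```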

Efficient reduced-rank methods for Gaussian processes with eigenfunction expansions

This work introduces a reduced-rank algorithm for Gaussian process regression, along with a class of fast algorithms for Bayesian fitting of hyperparameters, that does not require translation invariance of the covariance kernel.
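
The eigenfunction construction itself is not reproduced here; as a generic illustration of the reduced-rank mechanics, the sketch below uses a Nystrom-style approximation C ~= Knm Kmm^{-1} Kmn, where the Woodbury identity turns the n x n solve into an m x m one. The kernel and all names are our choices.

```python
import numpy as np

def se_kernel(a, b, variance=1.0, lengthscale=1.0):
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def reduced_rank_mean(x, y, x_new, inducing, noise=1e-2):
    """Rank-m approximation C ~= Knm Kmm^{-1} Kmn; the Woodbury identity
    replaces the n x n solve by an m x m one, for O(n m^2) total cost."""
    Knm = se_kernel(x, inducing)                          # n x m
    Kmm = se_kernel(inducing, inducing) + 1e-10 * np.eye(len(inducing))
    B = Kmm + Knm.T @ Knm / noise                         # m x m core
    alpha = (y - Knm @ np.linalg.solve(B, Knm.T @ y) / noise) / noise
    return se_kernel(x_new, x) @ alpha                    # posterior mean
```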

Likelihood approximation with hierarchical matrices for large spatial datasets

Linear-Cost Covariance Functions for Gaussian Random Fields

This work proposes a construction of covariance functions that result in matrices with a hierarchical structure that is powered by matrix algorithms that scale linearly with the matrix dimension, and proved to be efficient for a variety of random field computations, including sampling, kriging, and likelihood evaluation.

Equispaced Fourier representations enable fast iterative Gaussian process regression

A class of Fourier-based fast algorithms for computing with Gaussian processes via complex exponentials with equispaced frequencies; the resulting weight-space linear system has a matrix that can be applied in $O(m \log m)$ operations, where $m$ is the number of frequencies, and can be solved efficiently with iterative methods.
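
The key structural fact is easy to exhibit: with frequencies xi_j = j*h, the weight-space Gram matrix is Toeplitz, so it applies via an FFT of a circulant embedding. The sketch below builds the Toeplitz generator and right-hand side densely (the actual method uses nonuniform FFTs for those steps) and assumes an illustrative squared-exponential spectral density; all names are ours.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def efgp_fit(x, y, m=64, h=0.02, noise=1e-2, lengthscale=1.0):
    """Weight-space GP regression with equispaced frequencies xi_j = j*h.
    Since xi_j - xi_k = (j - k)*h, the Gram matrix F^H F is Toeplitz and
    applies in O(m log m) via a circulant embedding and the FFT; the
    regularized system is then solved with conjugate gradients."""
    xi = np.arange(-m // 2, m // 2) * h
    # Prior weight variances from an (illustrative) SE spectral density.
    s = h * np.sqrt(2 * np.pi) * lengthscale * np.exp(
        -2 * (np.pi * lengthscale * xi) ** 2) + 1e-12
    # Toeplitz generator t_p = sum_n exp(-2*pi*i*p*h*x_n); built densely
    # here, whereas the actual method uses a nonuniform FFT for this.
    p = np.arange(-(m - 1), m)
    t = np.exp(-2j * np.pi * h * np.outer(p, x)).sum(axis=1)
    fc = np.fft.fft(np.concatenate([t[m - 1:], [0], t[:m - 1]]))
    rhs = np.exp(-2j * np.pi * np.outer(xi, x)) @ y          # F^H y

    def apply_A(beta):   # (F^H F + noise * D^{-1}) beta in O(m log m)
        Tb = np.fft.ifft(fc * np.fft.fft(beta, 2 * m))[:m]
        return Tb + noise * beta / s

    A = LinearOperator((m, m), matvec=apply_A, dtype=complex)
    beta = cg(A, rhs)[0]
    # Predict with f(x_new) = np.exp(2j*np.pi*np.outer(x_new, xi)) @ beta,
    # taking the real part.
    return xi, beta
```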

A general linear-time inference method for Gaussian Processes on one dimension

A first general proof is provided that any stationary GP on one dimension with vector-valued observations governed by a Lebesgue-integrable continuous kernel can be approximated to any desired precision using a specifically chosen state-space model: the Latent Exponentially Generated (LEG) family.
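
The LEG construction is not reproduced here; as a minimal illustration of why a state-space form gives linear-time inference, the Matérn-1/2 (Ornstein-Uhlenbeck) kernel has an exact scalar state-space representation, and a Kalman filter then evaluates the GP likelihood in O(n). All names are illustrative.

```python
import numpy as np

def ou_gp_loglik(x, y, variance=1.0, lengthscale=1.0, noise=1e-2):
    """O(n) GP log-likelihood for k(r) = variance * exp(-|r|/lengthscale)
    via its exact scalar state-space form and a Kalman filter."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    mean, var = 0.0, variance        # prior state at the first point
    ll, prev = 0.0, x[0]
    for xi, yi in zip(x, y):
        a = np.exp(-(xi - prev) / lengthscale)      # transition
        mean = a * mean
        var = a * a * var + variance * (1 - a * a)  # keeps the stationary
        # marginal variance equal to `variance`
        s = var + noise                             # innovation variance
        r = yi - mean                               # innovation
        ll += -0.5 * (np.log(2 * np.pi * s) + r * r / s)
        k = var / s                                 # Kalman gain
        mean += k * r
        var *= (1 - k)
        prev = xi
    return ll
```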

Sequential randomized matrix factorization for Gaussian processes

This paper addresses the scalability aspect of Gaussian processes in sequential settings using recent advances in randomized matrix computations.

A Scalable Method to Exploit Screening in Gaussian Process Models with Noise

While this work focuses on the application to Vecchia's approximation, a particularly popular and powerful framework in which it demonstrates true second-order optimization, the method can also be applied using entirely matrix-vector products, making it applicable to a very wide class of precision-matrix-based approximation methods.
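
For context, Vecchia's approximation itself is simple to state: order the observations, condition each on a few nearest predecessors, and sum the univariate conditional log-densities. A minimal one-dimensional sketch follows; it shows only the baseline likelihood, not this paper's optimization method, and the names are ours.

```python
import numpy as np

def vecchia_loglik(x, y, kernel, noise=1e-2, num_neighbors=10):
    """Vecchia approximation in 1-D: condition each observation on its
    num_neighbors nearest predecessors in coordinate order, giving an
    O(n k^3) likelihood in place of the exact O(n^3) one."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    ll = 0.0
    for i in range(len(x)):
        c = np.arange(max(0, i - num_neighbors), i)   # conditioning set
        K_ii = kernel(x[i:i+1], x[i:i+1])[0, 0] + noise
        if len(c) == 0:
            m, v = 0.0, K_ii
        else:
            K_cc = kernel(x[c], x[c]) + noise * np.eye(len(c))
            K_ic = kernel(x[i:i+1], x[c])[0]
            w = np.linalg.solve(K_cc, K_ic)           # regression weights
            m, v = w @ y[c], K_ii - w @ K_ic          # conditional moments
        ll += -0.5 * (np.log(2 * np.pi * v) + (y[i] - m) ** 2 / v)
    return ll

# e.g. vecchia_loglik(x, y, lambda a, b: np.exp(-np.abs(a[:, None] - b[None, :])))
```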

Fast Spatial Gaussian Process Maximum Likelihood Estimation via Skeletonization Factorizations

A framework for unstructured observations in two spatial dimensions that allows for evaluation of the log-likelihood and its gradient in $\tilde O(n^{3/2})$ time under certain assumptions, where $n$ is the number of observations.
...

References

Showing 1–10 of 96 references

A Matrix-free Approach for Solving the Parametric Gaussian Process Maximum Likelihood Problem

This work presents a matrix-free approach for computing the solution of the maximum likelihood problem involving Gaussian processes based on a stochastic programming reformulation followed by sample average approximation applied to either the maximization problem or its optimality conditions.

Large-scale stochastic linear inversion using hierarchical matrices

A method is developed to solve large-scale stochastic linear inverse problems based on the hierarchical matrix (ℋ²-matrix) approach, which makes it easier, among other things, to optimize the locations of sources and receivers by minimizing the mean square error of the estimation.

Local and global sparse Gaussian process approximations

This paper develops a new sparse GP approximation that combines the global and local approaches, and shows that it arises as a natural extension of the framework developed by Quiñonero-Candela and Rasmussen for sparse GP approximations.

Monte Carlo estimates of the log determinant of large sparse matrices

A framework for evaluating approximation methods for Gaussian process regression

Four different approximation algorithms are empirically investigated on four different prediction problems, assessing the quality of the predictions obtained as a function of the compute time taken.

Fast Monte-Carlo algorithms for finding low-rank approximations

  • A. Frieze, R. Kannan, S. Vempala
  • Proceedings 39th Annual Symposium on Foundations of Computer Science, 1998
This paper develops an algorithm that is qualitatively faster, provided the entries of the matrix are sampled according to a natural probability distribution; the algorithm takes time polynomial in k, 1/ε, and log(1/δ) only, independent of m and n.
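
The sampling step is short enough to sketch. Below is a minimal version in the spirit of the algorithm, under our own naming, with the sketch size s and target rank k as free parameters.

```python
import numpy as np

def fkv_low_rank(A, k, s, seed=0):
    """Monte-Carlo low-rank approximation in the spirit of
    Frieze-Kannan-Vempala: sample s rows with probability proportional
    to their squared norms, rescale, and project A onto the top-k right
    singular vectors of the small sketch."""
    rng = np.random.default_rng(seed)
    row_norms2 = np.einsum('ij,ij->i', A, A)
    p = row_norms2 / row_norms2.sum()               # sampling distribution
    idx = rng.choice(A.shape[0], size=s, replace=True, p=p)
    S = A[idx] / np.sqrt(s * p[idx])[:, None]       # rescaled sketch, s x n
    _, _, Vt = np.linalg.svd(S, full_matrices=False)
    V = Vt[:k].T                                    # top-k right sing. vecs
    return A @ V @ V.T                              # rank-k approximation
```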

A Fast Summation Tree Code for Matérn Kernel

This paper designs a fast summation algorithm for the Matérn kernel in order to efficiently perform matrix-vector multiplications, and addresses several practical issues: the anisotropy of the kernel, the nonuniform distribution of the point set, and a tight error estimate of the approximation.

Gaussian Processes for Machine Learning

The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics, and deals with the supervised learning problem for both regression and classification.

Fast and Exact Simulation of Stationary Gaussian Processes through Circulant Embedding of the Covariance Matrix

This paper shows that for many important correlation functions in geostatistics, realizations of the associated process over $m+1$ equispaced points on a line can be produced at the cost of an initial FFT of length $2m$, with each new realization requiring an additional FFT of the same length.
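
The recipe is compact enough to sketch. A minimal version under the stated assumptions follows (our naming; cov(r) is the stationary covariance at integer lags, and the embedding must have a nonnegative spectrum, which the paper establishes for many geostatistical correlation functions).

```python
import numpy as np

def circulant_gp_samples(cov, m, seed=0):
    """Exact stationary-GP realizations at the m+1 equispaced points
    0..m via circulant embedding: one FFT of length 2m yields two
    independent realizations (real and imaginary parts)."""
    rng = np.random.default_rng(seed)
    r = np.arange(m + 1)
    row = np.concatenate([cov(r), cov(r[m - 1:0:-1])])  # circulant row
    lam = np.fft.fft(row).real                          # its eigenvalues
    if lam.min() < -1e-10 * lam.max():
        raise ValueError("embedding not nonnegative definite; increase m")
    lam = np.clip(lam, 0.0, None)
    z = rng.standard_normal(2 * m) + 1j * rng.standard_normal(2 * m)
    e = np.fft.fft(np.sqrt(lam / (2 * m)) * z)
    return e.real[: m + 1], e.imag[: m + 1]  # two independent samples

# e.g. circulant_gp_samples(lambda r: np.exp(-r / 50.0), 1000)
```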

A fast randomized algorithm for the approximation of matrices

...