Corpus ID: 232076304

Hierarchical Inducing Point Gaussian Process for Inter-domain Observations

@inproceedings{Wu2021HierarchicalIP,
  title={Hierarchical Inducing Point Gaussian Process for Inter-domain Observations},
  author={Luhuan Wu and Andrew Miller and Lauren Anderson and Geoff Pleiss and David M. Blei and John P. Cunningham},
  booktitle={AISTATS},
  year={2021}
}
We examine the general problem of inter-domain Gaussian Processes (GPs): problems where the GP realization and the noisy observations of that realization lie in different domains. When the mapping between those domains is linear, such as integration or differentiation, inference is still closed form. However, many of the scaling and approximation techniques that our community has developed do not apply to this setting. In this work, we introduce the hierarchical inducing point GP (HIP-GP), a… 
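To make the closed-form claim concrete, here is a minimal NumPy sketch (illustrative only, not the HIP-GP algorithm): the latent function f has an RBF-kernel GP prior, the observations are noisy derivatives of f (a linear mapping), and the posterior over f at new inputs follows from standard Gaussian conditioning with the appropriate cross-covariances. Kernel, lengthscale, and data are toy choices.

import numpy as np

def k(x, xp, ell=0.5):
    # RBF kernel k(x, x')
    d = x[:, None] - xp[None, :]
    return np.exp(-0.5 * d**2 / ell**2)

def k_f_df(x, xp, ell=0.5):
    # Cross-covariance Cov(f(x), f'(x')) = d k(x, x') / d x'
    d = x[:, None] - xp[None, :]
    return k(x, xp, ell) * d / ell**2

def k_df_df(x, xp, ell=0.5):
    # Cov(f'(x), f'(x')) = d^2 k(x, x') / (d x d x')
    d = x[:, None] - xp[None, :]
    return k(x, xp, ell) * (1.0 / ell**2 - d**2 / ell**4)

rng = np.random.default_rng(0)
s = np.linspace(0.0, 1.0, 20)                                # derivative-observation locations
y = 2 * np.pi * np.cos(2 * np.pi * s) + 0.1 * rng.standard_normal(s.size)   # noisy f'(s)

x_star = np.linspace(0.0, 1.0, 100)                          # where we want the latent f
K_dd = k_df_df(s, s) + 0.1**2 * np.eye(s.size)               # covariance of the observations
K_fd = k_f_df(x_star, s)                                     # Cov(f(x*), f'(s))

# Gaussian conditioning: the posterior over f given derivative data is still
# Gaussian because differentiation is a linear operator.
post_mean = K_fd @ np.linalg.solve(K_dd, y)
post_cov = k(x_star, x_star) - K_fd @ np.linalg.solve(K_dd, K_fd.T)

The same conditioning applies to integral observations; only the cross-covariance terms change.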

Citations

Variational Nearest Neighbor Gaussian Processes
TLDR
This work proposes the variational nearest neighbor Gaussian process (VNNGP), which introduces a prior that retains correlations only within the K nearest-neighboring observations, thereby inducing a sparse precision structure and enabling stochastic optimization with a time complexity of O(K); VNNGP is compared against other scalable GPs.
Spatial Multivariate Trees for Big Data Bayesian Regression
TLDR
This work proposes Bayesian multivariate regression models based on spatial multivariate trees (SpamTrees) which achieve scalability via conditional independence assumptions on latent random effects following a treed directed acyclic graph.
Vector-valued Gaussian Processes on Riemannian Manifolds via Gauge Independent Projected Kernels
TLDR
This work presents a general recipe for constructing gauge independent kernels, which induce Gaussian vector fields, i.e., vector-valued Gaussian processes coherent with geometry, from scalar-valued Riemannian kernels, and extends standard Gaussian process training methods, such as variational inference, to this setting.

References

SHOWING 1-10 OF 44 REFERENCES
Inter-domain Gaussian Processes for Sparse Inference using Inducing Features
TLDR
A general inference framework for inter-domain Gaussian Processes (GPs) is presented; it is shown how previously existing models fit into this framework, which is then used to develop two new sparse GP models.
Scalable Gaussian Processes with Grid-Structured Eigenfunctions (GP-GRIEF)
TLDR
A kernel approximation strategy is introduced that enables computation of the Gaussian process log marginal likelihood and all hyperparameter derivatives in O(p) time and enables type-I or type-II Bayesian inference on large-scale datasets.
A Framework for Interdomain and Multioutput Gaussian Processes
TLDR
This work presents a mathematical and software framework for scalable approximate inference in GPs, which combines interdomain approximations and multiple outputs, and provides a unified interface for many existing multioutput models, as well as more recent convolutional structures.
Sparse Orthogonal Variational Inference for Gaussian Processes
TLDR
A new interpretation of sparse variational approximations for Gaussian processes using inducing points is introduced; it can lead to more scalable algorithms than previous methods, and state-of-the-art results on CIFAR-10 among purely GP-based models are reported.
Scalable Gaussian Processes with Billions of Inducing Inputs via Tensor Train Decomposition
TLDR
The key idea of TT-GP is to use the Tensor Train decomposition for variational parameters, which allows training GPs with billions of inducing inputs and achieving state-of-the-art results on several benchmarks.
Kernel Interpolation for Scalable Structured Gaussian Processes (KISS-GP)
TLDR
A new structured kernel interpolation (SKI) framework is introduced, which generalises and unifies inducing point methods for scalable Gaussian processes (GPs) and naturally enables Kronecker and Toeplitz algebra for substantial additional gains in scalability.
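For intuition, the SKI idea can be sketched in a few lines of NumPy: approximate K_XX by W K_UU W^T, where U is a regular inducing grid and W holds sparse (here, linear) interpolation weights from each input onto its two grid neighbours. Grid size, kernel, and data below are toy choices, not the paper's.

import numpy as np

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0.0, 1.0, 50))        # training inputs
U = np.linspace(0.0, 1.0, 30)                 # regular inducing grid
h = U[1] - U[0]

# Linear interpolation weights: each row of W has at most two non-zeros.
W = np.zeros((X.size, U.size))
left = np.clip(np.searchsorted(U, X) - 1, 0, U.size - 2)
frac = (X - U[left]) / h
W[np.arange(X.size), left] = 1.0 - frac
W[np.arange(X.size), left + 1] = frac

K_exact = rbf(X, X)
K_ski = W @ rbf(U, U) @ W.T                   # structured kernel interpolation
print(np.max(np.abs(K_exact - K_ski)))        # small for a dense enough grid

In KISS-GP the regular grid additionally gives K_UU Toeplitz (or Kronecker, in higher dimensions) structure, so the approximation is never formed densely as in this small check.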
Exact Gaussian Processes on a Million Data Points
TLDR
A scalable approach for exact GPs is developed that leverages multi-GPU parallelization and methods like linear conjugate gradients, accessing the kernel matrix only through matrix multiplication, and is generally applicable, without constraints to grid data or specific kernel classes.
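The "kernel matrix only through matrix multiplication" point can be illustrated with a plain conjugate-gradients solve of (K + sigma^2 I) alpha = y. This toy NumPy version materializes K for simplicity; the large-scale setting would instead supply a matrix-free mat-vec routine (and multi-GPU parallelism). Data and kernel settings are illustrative.

import numpy as np

def rbf(a, b, ell=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def conjugate_gradients(matvec, b, tol=1e-6, max_iter=1000):
    # Solve A x = b using only mat-vec products with A.
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        step = rs / (p @ Ap)
        x += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(3)
X = np.sort(rng.uniform(0.0, 1.0, 2000))
y = np.sin(8 * X) + 0.1 * rng.standard_normal(X.size)
noise = 0.1**2

K = rbf(X, X)                                  # in practice, never materialized
matvec = lambda v: K @ v + noise * v

alpha = conjugate_gradients(matvec, y)         # (K + sigma^2 I)^{-1} y
x_test = np.linspace(0.0, 1.0, 5)
pred_mean = rbf(x_test, X) @ alpha             # posterior mean at a few test points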
MCMC for Variationally Sparse Gaussian Processes
TLDR
A Hybrid Monte Carlo sampling scheme is proposed which allows for a non-Gaussian approximation over the function values and covariance parameters simultaneously, with efficient computations based on inducing-point sparse GPs.
Thoughts on Massively Scalable Gaussian Processes
TLDR
The MSGP framework enables the use of Gaussian processes on billions of data points, without requiring distributed inference or severe assumptions, and reduces the standard GP learning and inference complexity to O(n) and the standard test point prediction complexity to O(1).
Gaussian Processes for Big Data
TLDR
Stochastic variational inference for Gaussian process models is introduced, and it is shown how GPs can be variationally decomposed to depend on a set of globally relevant inducing variables that factorize the model in the manner necessary to perform variational inference.
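As a usage-level illustration (not code from this reference), the stochastic variational scheme with global inducing variables is what GPflow's SVGP model implements; a minimal minibatch training loop on toy data might look like the following, with data, kernel, and optimizer settings as placeholders.

import numpy as np
import tensorflow as tf
import gpflow

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, (10_000, 1))
Y = np.sin(6 * X) + 0.1 * rng.standard_normal(X.shape)
Z = np.linspace(0.0, 1.0, 50)[:, None]          # inducing inputs (global variables)

model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=Z,
    num_data=X.shape[0],                        # rescales the minibatch ELBO
)
opt = tf.keras.optimizers.Adam(0.01)

dataset = tf.data.Dataset.from_tensor_slices((X, Y)).repeat().shuffle(10_000).batch(256)
for batch in dataset.take(500):
    with tf.GradientTape() as tape:
        loss = model.training_loss(batch)       # negative minibatch ELBO
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))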
...