Corpus ID: 235422445

SKIing on Simplices: Kernel Interpolation on the Permutohedral Lattice for Scalable Gaussian Processes

@article{Kapoor2021SKIingOS,
  title={SKIing on Simplices: Kernel Interpolation on the Permutohedral Lattice for Scalable Gaussian Processes},
  author={Sanyam Kapoor and Marc Finzi and Ke Alexander Wang and Andrew Gordon Wilson},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.06695}
}
State-of-the-art methods for scalable Gaussian processes use iterative algorithms, requiring fast matrix-vector multiplies (MVMs) with the covariance kernel. The Structured Kernel Interpolation (SKI) framework accelerates these MVMs by performing efficient MVMs on a grid and interpolating back to the original space. In this work, we develop a connection between SKI and the permutohedral lattice used for high-dimensional fast bilateral filtering. Using a sparse simplicial grid instead of a dense…
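To make the SKI mechanism concrete, here is a minimal sketch of the interpolated MVM, K v ≈ W K_grid (Wᵀ v), assuming a sparse interpolation matrix W and, purely for illustration, a dense grid kernel; the real method applies K_grid through its grid structure (here, the permutohedral lattice). The function name and signature are illustrative, not from the authors' code.

```python
import numpy as np
from scipy.sparse import csr_matrix

def ski_mvm(W: csr_matrix, K_grid: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Approximate K @ v as W @ (K_grid @ (W.T @ v)) without forming K.

    W      : (n, m) sparse interpolation weights mapping the n inputs
             onto m grid points (only a few nonzeros per row).
    K_grid : (m, m) kernel on the grid; dense here for illustration only.
             SKI instead applies it via Toeplitz/Kronecker (or, in this
             paper, simplicial-lattice) structure.
    v      : (n,) vector to multiply.
    """
    u = W.T @ v        # project onto the grid: cost ~ nnz(W)
    u = K_grid @ u     # structured MVM on the grid
    return W @ u       # interpolate back to the original inputs
```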
When are Iterative Gaussian Processes Reliably Accurate?
While recent work on conjugate gradient methods and Lanczos decompositions has achieved scalable Gaussian process inference with highly accurate point predictions, in several implementations these…

References

Showing 1-10 of 44 references
Fast High-Dimensional Filtering Using the Permutohedral Lattice
This algorithm is the first implementation of a high-dimensional Gaussian filter that is both linear in input size and polynomial in dimensionality, and it achieves consistently high accuracy relative to ground truth (> 45 dB).
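For intuition, a hedged 1-D analogue of the splat/blur/slice pipeline used by such lattice filters; the permutohedral version replaces this regular grid with a simplicial lattice in d dimensions and splats with barycentric rather than nearest-cell weights. All names are illustrative.

```python
import numpy as np

def splat_blur_slice(positions: np.ndarray, values: np.ndarray,
                     cell: float = 1.0) -> np.ndarray:
    """Toy 1-D lattice filter: accumulate values onto a grid (splat),
    blur on the grid, then read back at the input positions (slice)."""
    idx = np.floor(positions / cell).astype(int)
    idx -= idx.min()                          # shift to nonnegative bins
    grid = np.zeros(idx.max() + 1)
    np.add.at(grid, idx, values)              # splat
    kernel = np.array([0.25, 0.5, 0.25])      # small Gaussian blur
    grid = np.convolve(grid, kernel, mode="same")
    return grid[idx]                          # slice
```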
Kernel Interpolation for Scalable Structured Gaussian Processes (KISS-GP)
A new structured kernel interpolation (SKI) framework is introduced, which generalises and unifies inducing point methods for scalable Gaussian processes (GPs) and naturally enables Kronecker and Toeplitz algebra for substantial additional gains in scalability.
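As a sketch of the Toeplitz algebra KISS-GP exploits on a regular 1-D grid: a symmetric Toeplitz kernel matrix embeds in a circulant matrix and can be applied with the FFT in O(m log m) rather than O(m²). This is the standard circulant-embedding construction, not code from the paper.

```python
import numpy as np

def toeplitz_mvm(first_col: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Multiply a symmetric Toeplitz matrix (given by its first column)
    with a vector via circulant embedding and the FFT."""
    m = len(first_col)
    # First column of the (2m - 2)-sized circulant embedding.
    c = np.concatenate([first_col, first_col[-2:0:-1]])
    # Circulant MVM = circular convolution = pointwise product in Fourier space.
    w = np.fft.ifft(np.fft.fft(c) * np.fft.fft(v, len(c)))
    return w[:m].real
```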
Product Kernel Interpolation for Scalable Gaussian Processes
A new technique for MVM-based learning that exploits product kernel structure is developed, resulting in linear rather than exponential runtime with dimension for SKI, as well as state-of-the-art asymptotic complexity for multi-task GPs.
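The product-structure speedup rests on the Kronecker identity kron(A, B) · vec(X) = vec(A X Bᵀ) (row-major convention), which applies a Kronecker-structured kernel without ever materializing it. A two-factor sketch with illustrative names:

```python
import numpy as np

def kron_mvm(A: np.ndarray, B: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Compute np.kron(A, B) @ v without forming the Kronecker product.

    Uses the row-major identity kron(A, B) @ vec(X) = vec(A @ X @ B.T),
    turning one huge MVM into two small matrix products.
    """
    X = v.reshape(A.shape[1], B.shape[1])
    return (A @ X @ B.T).reshape(-1)
```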
Exact Gaussian Processes on a Million Data Points
A scalable approach for exact GPs is developed that leverages multi-GPU parallelization and methods like linear conjugate gradients, accessing the kernel matrix only through matrix multiplication; it is generally applicable, without constraints to grid data or specific kernel classes.
GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration
This work presents an efficient and general approach to GP inference based on Blackbox Matrix-Matrix multiplication (BBMM), a modified batched version of the conjugate gradients algorithm that derives all terms for training and inference in a single call.
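A minimal single-vector sketch of the conjugate gradients loop at the heart of MVM-based inference, assuming only a callable `mvm` for products with the symmetric positive-definite kernel matrix (e.g. K + σ²I); BBMM batches this over many right-hand sides on the GPU. Names here are illustrative.

```python
import numpy as np

def conjugate_gradients(mvm, y, tol=1e-6, max_iters=1000):
    """Solve A x = y using only matrix-vector products with A."""
    x = np.zeros_like(y)
    r = y - mvm(x)          # residual
    p = r.copy()            # search direction
    rs = r @ r
    for _ in range(max_iters):
        Ap = mvm(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```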
Lattice-Based High-Dimensional Gaussian Filtering and the Permutohedral Lattice
This paper analyzes lattice-based high-dimensional Gaussian filtering algorithms, gives a rigorous exposition of the properties of the permutohedral lattice, and argues that it is the optimal lattice for Gaussian filtering.
Thoughts on Massively Scalable Gaussian Processes
The MSGP framework enables the use of Gaussian processes on billions of data points, without requiring distributed inference or severe assumptions, and reduces the standard GP learning and inference complexity to O(n) and the standard test point prediction complexity to O(1).
SPLATNet: Sparse Lattice Networks for Point Cloud Processing
Hang Su, V. Jampani, et al., and J. Kautz. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018.
A network architecture for processing point clouds that operates directly on a collection of points represented as a sparse set of samples in a high-dimensional lattice, and that outperforms existing state-of-the-art techniques on 3D segmentation tasks.
Gaussian KD-Trees for Fast High-Dimensional Filtering
A Monte Carlo kd-tree sampling algorithm that efficiently computes any filter expressible as a Gaussian-weighted average over points in a high-dimensional feature space, along with a GPU implementation of this technique and a fast adaptation of non-local means to geometry.
MCMC for Variationally Sparse Gaussian Processes
A Hybrid Monte Carlo sampling scheme which allows for a non-Gaussian approximation over the function values and covariance parameters simultaneously, with efficient computations based on inducing-point sparse GPs.