Corpus ID: 236154832

Adaptive Inducing Points Selection For Gaussian Processes

@article{GalyFajou2021AdaptiveIP,
  title={Adaptive Inducing Points Selection For Gaussian Processes},
  author={Th{\'e}o Galy-Fajou and Manfred Opper},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.10066}
}
Gaussian Processes (GPs) are flexible nonparametric models with a strong probabilistic interpretation. Although they are a standard choice for performing inference on time series, few techniques exist for applying GPs in a streaming setting. Bui et al. (2017) developed an efficient variational approach to training online GPs using sparsity techniques: the whole set of observations is approximated by a smaller set of inducing points (IPs), which are relocated as new data arrive. Both the number and the locations…
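
To make the inducing-point idea concrete, the following is a minimal NumPy sketch of sparse GP regression over a data stream. It is a simplification rather than the paper's method: the inducing inputs Z stay fixed and the hyperparameters are not adapted, whereas the paper's contribution is precisely the adaptive selection and relocation of the IPs. All names (rbf, StreamingSparseGP, update, predict_mean) are illustrative.

import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between two sets of 1-D inputs.
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

class StreamingSparseGP:
    # Sparse GP regression with a FIXED set of inducing inputs Z.
    # With a Gaussian likelihood, the optimal variational posterior over
    # the inducing outputs (Titsias, 2009) depends on the data only through
    # the sums A and b below, so mini-batches can be absorbed one at a time.
    def __init__(self, Z, noise_var=0.1):
        self.Z, self.noise_var = Z, noise_var
        M = len(Z)
        self.Kzz = rbf(Z, Z) + 1e-6 * np.eye(M)  # jitter for stability
        self.A = np.zeros((M, M))  # running sum of Kzx Kxz / sigma^2
        self.b = np.zeros(M)       # running sum of Kzx y   / sigma^2

    def update(self, X_batch, y_batch):
        # Absorb one mini-batch into the sufficient statistics.
        Kzx = rbf(self.Z, X_batch)
        self.A += Kzx @ Kzx.T / self.noise_var
        self.b += Kzx @ y_batch / self.noise_var

    def predict_mean(self, X_star):
        # q(u) = N(m, S) with m = Kzz (Kzz + A)^{-1} b; the predictive
        # mean at X_star is K_*z Kzz^{-1} m.
        m = self.Kzz @ np.linalg.solve(self.Kzz + self.A, self.b)
        return rbf(X_star, self.Z) @ np.linalg.solve(self.Kzz, m)

# Simulated stream: 20 mini-batches of 50 noisy sine observations.
rng = np.random.default_rng(0)
gp = StreamingSparseGP(Z=np.linspace(-3.0, 3.0, 10))
for _ in range(20):
    X = rng.uniform(-3.0, 3.0, size=50)
    gp.update(X, np.sin(X) + 0.1 * rng.normal(size=50))
print(gp.predict_mean(np.array([0.0, 1.5])))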

Citations

Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees

This work studies the numerical stability of scalable sparse approximations based on inducing points and proposes an automated method for computing inducing points that satisfy the resulting stability conditions, showing that, in geospatial settings, sparse approximations with guaranteed numerical stability often perform comparably to those without.

Fully-probabilistic Terrain Modelling with Stochastic Variational Gaussian Process Maps

This letter proposes a framework for building large-scale GP maps with uncertain inputs (UIs), based on stochastic variational GPs (SVGPs) and Monte Carlo sampling of the UI distributions, and shows how UI SVGP maps yield more accurate particle-localization results than DI SVGP maps on a real AUV mission over an entirely predicted area.

Efficient and Adaptive Decentralized Sparse Gaussian Process Regression for Environmental Sampling Using Autonomous Vehicles

Master's thesis by Tanner A. Norton, Department of Computer Science, Brigham Young University.

Robust Learning of Physics Informed Neural Networks

Gaussian process (GP) based smoothing that recovers the performance of a physics-informed neural network (PINN) and yields an architecture robust to noise and errors in measurements is introduced, and an inexpensive method for quantifying the evolution of uncertainty, based on GP variance estimates on boundary data, is illustrated.

Latent Graph Inference using Product Manifolds

This work proposes a computationally tractable approach that produces product manifolds of constant-curvature model spaces to encode latent features of varying structure; these representations yield richer similarity measures, which the latent graph learning model leverages to obtain optimized latent graphs.

References


Streaming Sparse Gaussian Process Approximations

A new principled framework for deploying Gaussian process probabilistic models in the streaming setting is developed, providing methods for learning hyperparameters and optimising pseudo-input locations.

Sparse On-Line Gaussian Processes

An approach to sparse representations of Gaussian process (GP) models (a Bayesian type of kernel machine) is developed to overcome their limitations on large data sets; it combines a Bayesian on-line algorithm with the sequential construction of a relevant subsample of the data that fully specifies the prediction of the GP model.

Multi-Class Gaussian Process Classification Made Conjugate: Efficient Inference via Data Augmentation

A new scalable multi-class Gaussian process classification approach built on a novel modified softmax likelihood function that leads to well-calibrated uncertainty estimates and competitive predictive performance while being up to two orders of magnitude faster than the state of the art.

Efficient Gaussian Process Classification Using Polya-Gamma Data Augmentation

We propose a scalable stochastic variational approach to GP classification building on Pólya-Gamma data augmentation and inducing points. Unlike former approaches, we obtain closed-form updates based…
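
For context, the augmentation rests on the Pólya-Gamma integral identity of Polson, Scott, and Windle (2013): for $\omega \sim \mathrm{PG}(b, 0)$,

  $\frac{(e^{\psi})^{a}}{(1+e^{\psi})^{b}} = 2^{-b}\, e^{\kappa\psi} \int_0^{\infty} e^{-\omega\psi^{2}/2}\, p(\omega)\, \mathrm{d}\omega, \qquad \kappa = a - b/2.$

Conditioned on $\omega$, a logistic likelihood in $\psi = f(x)$ becomes Gaussian in $f$, which is what yields the closed-form conjugate updates mentioned above.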

Variational Fourier Features for Gaussian Processes

This work hinges on a key result: there exist spectral features, tied to a finite domain of the Gaussian process, whose covariances are almost independent. These expressions are derived for Matérn kernels in one dimension and generalized to more dimensions using kernels with specific structures.

Variational Learning of Inducing Variables in Sparse Gaussian Processes

A variational formulation for sparse approximations that jointly infers the inducing inputs and the kernel hyperparameters by maximizing a lower bound of the true log marginal likelihood.
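
For reference, the collapsed form of this lower bound (Titsias, 2009) for regression with noise variance $\sigma^2$ is

  $\mathcal{L} = \log \mathcal{N}\big(\mathbf{y} \mid \mathbf{0},\, \mathbf{Q}_{nn} + \sigma^{2}\mathbf{I}\big) - \frac{1}{2\sigma^{2}}\,\mathrm{tr}\big(\mathbf{K}_{nn} - \mathbf{Q}_{nn}\big), \qquad \mathbf{Q}_{nn} = \mathbf{K}_{nm}\mathbf{K}_{mm}^{-1}\mathbf{K}_{mn},$

where the trace term penalizes inducing inputs that summarize the training inputs poorly; maximizing $\mathcal{L}$ jointly over the inducing inputs and the kernel hyperparameters is what the summary above describes.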

A Unifying View of Sparse Approximate Gaussian Process Regression

A new unifying view of all existing proper probabilistic sparse approximations for Gaussian process regression is presented; it relies on expressing the effective prior that each method uses and highlights the relationships between existing methods.
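
Concretely, in that view each sparse method amounts to replacing the exact joint prior over training and test function values with an effective prior that is conditionally independent given the inducing variables $\mathbf{u}$:

  $p(\mathbf{f}, \mathbf{f}_*) \approx q(\mathbf{f}, \mathbf{f}_*) = \int q(\mathbf{f} \mid \mathbf{u})\, q(\mathbf{f}_* \mid \mathbf{u})\, p(\mathbf{u})\, \mathrm{d}\mathbf{u},$

with methods such as SoR, DTC, and FITC distinguished only by their choices of the training and test conditionals.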

Understanding Probabilistic Sparse Gaussian Process Approximations

This work thoroughly investigates the FITC (fully independent training conditional) and VFE (variational free energy) approximations for regression, both analytically and through illustrative examples, and draws conclusions to guide practical application.

Rates of Convergence for Sparse Variational Gaussian Process Regression

The results show that, as datasets grow, Gaussian process posteriors can truly be approximated cheaply, and provide a concrete rule for how the number of inducing points M should increase in continual learning scenarios.
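
As a representative instance of these rates (for the squared-exponential kernel with Gaussian-distributed inputs in D dimensions), it suffices to grow the number of inducing points as

  $M = \mathcal{O}\big((\log N)^{D}\big)$

for the KL divergence between the variational and the exact posterior to vanish as the dataset size N grows, at a cost far below the cubic cost of exact GP regression.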

Spectral methods in machine learning and new strategies for very large datasets

Two new algorithms for approximating positive-semidefinite kernels via the Nyström method are presented, each demonstrating improved performance relative to existing methods.
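
To ground the Nyström idea, here is a minimal NumPy sketch of the generic low-rank approximation (illustrative only, not the paper's two algorithms; the helper names are hypothetical):

import numpy as np

def nystrom(K_nm, K_mm, jitter=1e-8):
    # Rank-m Nystroem approximation K ≈ K_nm K_mm^{-1} K_nm^T, built from
    # the kernel between all n points and m landmarks (K_nm, n x m) and
    # the kernel among the landmarks themselves (K_mm, m x m).
    M = K_mm + jitter * np.eye(K_mm.shape[0])  # regularize before solving
    return K_nm @ np.linalg.solve(M, K_nm.T)   # avoid an explicit inverse

# Usage with an RBF kernel and m = 50 randomly chosen landmarks.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
L = X[rng.choice(500, size=50, replace=False)]
sq = lambda A, B: ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
k = lambda A, B: np.exp(-0.5 * sq(A, B))
K_approx = nystrom(k(X, L), k(L, L))  # 500 x 500 matrix of rank <= 50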