Batch Selection for Parallelisation of Bayesian Quadrature

@article{Wagstaff2018BatchSF,
  title={Batch Selection for Parallelisation of Bayesian Quadrature},
  author={Edward Wagstaff and Saad Hamid and Michael A. Osborne},
  journal={ArXiv},
  year={2018},
  volume={abs/1812.01553}
}
Integration over non-negative integrands is a central problem in machine learning (e.g. for model averaging, (hyper-)parameter marginalisation, and computing posterior predictive distributions). Bayesian Quadrature is a probabilistic numerical integration technique that performs promisingly when compared to traditional Markov Chain Monte Carlo methods. However, in contrast to easily-parallelised MCMC methods, Bayesian Quadrature methods have, thus far, been essentially serial in nature… 
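
To make the serial baseline concrete, here is a minimal sketch of vanilla one-dimensional Bayesian Quadrature under the standard closed-form choices: a GP prior on the integrand with an RBF kernel of unit output scale, and a Gaussian prior measure. The function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def bq_integral(x, f, lengthscale=1.0, mu=0.0, sigma=1.0, jitter=1e-10):
    """Posterior mean and variance of int f(x) dpi(x), pi = N(mu, sigma^2),
    under a GP prior on f with an RBF kernel of unit output scale."""
    x = np.asarray(x, dtype=float)
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / lengthscale**2)
    K += jitter * np.eye(len(x))              # numerical stabiliser
    s2 = lengthscale**2 + sigma**2
    # Kernel mean embedding of the Gaussian measure (closed form for RBF).
    z = np.sqrt(lengthscale**2 / s2) * np.exp(-0.5 * (x - mu) ** 2 / s2)
    w = np.linalg.solve(K, z)                 # Bayesian quadrature weights
    # Prior double integral of the kernel against the measure.
    kk = np.sqrt(lengthscale**2 / (lengthscale**2 + 2 * sigma**2))
    return w @ f, kk - z @ w

# Example: E[cos(X)] for X ~ N(0, 1); the true value is exp(-1/2) ~ 0.6065.
x = np.linspace(-3.0, 3.0, 15)
mean, var = bq_integral(x, np.cos(x))
print(mean, var)
```

Because each new sample location in active BQ is chosen using the posterior conditioned on all previous evaluations, the procedure is inherently serial; this is the bottleneck the paper's batch selection addresses.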

Citations

Fast Bayesian Inference with Batch Bayesian Quadrature via Kernel Recombination

This work proposes a parallelised (batch) BQ method, employing techniques from kernel quadrature, that possesses an empirically exponential convergence rate and permits simultaneous inference of both posteriors and model evidence.

Bayesian Quadrature on Riemannian Data Manifolds

This work focuses on Bayesian quadrature (BQ) to numerically compute integrals over normal laws on Riemannian manifolds learned from data, and shows that by leveraging both prior knowledge and an active exploration scheme, BQ outperforms Monte Carlo methods on a wide range of integration problems.

SOBER: Scalable Batch Bayesian Optimization and Quadrature using Recombination Constraints

Batch Bayesian optimisation (BO) has been shown to be a sample-efficient method for performing optimisation where expensive-to-evaluate objective functions can be queried in parallel. However, current…

References


Active Learning of Model Evidence Using Bayesian Quadrature

This work proposes a novel Bayesian Quadrature approach for numerical integration when the integrand is non-negative, as in computing the marginal likelihood, predictive distribution, or normalising constant of a probabilistic model.

Sampling for Inference in Probabilistic Models with Fast Bayesian Quadrature

A warped model for probabilistic integrands (likelihoods) that are known to be non-negative is introduced, permitting a cheap active learning scheme to optimally select sample locations.
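
This reference describes the WSABI family of methods. As an illustration, here is a minimal sketch of the square-root warp, in the spirit of WSABI-L: fit a GP to g = sqrt(2(l - alpha)) and linearise, so the implied model of the likelihood is alpha + m_g(x)^2 / 2, non-negative by construction. The offset heuristic alpha = 0.8 min(l) follows that work; the lengthscale and helper names are our illustrative choices.

```python
import numpy as np

def rbf(a, b, ls):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def warped_posterior_mean(x, lik, x_star, ls=0.5, jitter=1e-8):
    """Square-root warped model of a non-negative likelihood: fit a GP to
    g = sqrt(2 (lik - alpha)), then linearise so the implied model of the
    likelihood is alpha + 0.5 * m_g(x)^2, non-negative by construction."""
    alpha = 0.8 * lik.min()                   # offset heuristic from WSABI
    g = np.sqrt(2.0 * (lik - alpha))
    K = rbf(x, x, ls) + jitter * np.eye(len(x))
    m_g = rbf(x_star, x, ls) @ np.linalg.solve(K, g)   # GP posterior mean of g
    return alpha + 0.5 * m_g**2

x = np.linspace(-2.0, 2.0, 9)
lik = np.exp(-x**2)                           # toy non-negative "likelihood"
x_star = np.linspace(-2.0, 2.0, 101)
print(warped_posterior_mean(x, lik, x_star).min() >= 0)   # True
```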

Batch Bayesian Optimization via Local Penalization

A simple heuristic based on an estimate of the Lipschitz constant is investigated that captures the most important aspect of the interaction between points in a batch at negligible computational overhead, and compares well, in running time, with much more elaborate alternatives.
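
As a sketch of how a Lipschitz-based penalizer parallelises selection, the following implements greedy batch choice by local penalization (after Gonzalez et al.). The erfc-shaped penalizer is one standard form, derived from the probability that a candidate lies outside the ball around an already-chosen point in which the Lipschitz bound rules out an improvement; the acquisition, GP summaries, and candidate set below are toy stand-ins.

```python
import numpy as np
from scipy.special import erfc

def local_penalization_batch(acq, mu, sigma, X_cand, L, M, q=3):
    """Greedy batch selection via local penalization (maximisation).

    acq, mu, sigma : acquisition value, GP posterior mean and std at candidates.
    X_cand         : candidate locations, shape (n, d).
    L              : estimate of the objective's Lipschitz constant.
    M              : best objective value observed so far.
    """
    pen = np.ones(len(X_cand))
    batch = []
    for _ in range(q):
        j = int(np.argmax(acq * pen))         # best point under current penalty
        batch.append(X_cand[j])
        r = np.linalg.norm(X_cand - X_cand[j], axis=1)
        # Probability that x lies outside the exclusion ball around x_j.
        z = (L * r - M + mu[j]) / (np.sqrt(2.0) * max(sigma[j], 1e-9))
        pen *= 0.5 * erfc(-z)
    return np.array(batch)

# Toy usage with made-up GP summaries over random candidates.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
mu = -np.sum(X**2, axis=1)                    # pretend posterior mean
sigma = 0.1 + 0.05 * rng.random(200)
acq = mu + 1.96 * sigma                       # simple UCB acquisition
print(local_penalization_batch(acq, mu, sigma, X, L=2.0, M=mu.max(), q=3))
```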

An Improved Bayesian Framework for Quadrature

It is demonstrated that this framework significantly improves upon the performance of previous Bayesian quadrature methods in terms of wall-clock time on both a synthetic and a real-world example.

Monte Carlo Sampling Methods Using Markov Chains and Their Applications

A generalization of the sampling method introduced by Metropolis et al. (1953) is presented, along with an exposition of the relevant theory, techniques of application, and methods and difficulties of assessing the error in Monte Carlo estimates.
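
For concreteness, here is a minimal random-walk Metropolis-Hastings sampler, the symmetric-proposal special case of this generalisation; the function names and the toy Gaussian target are ours.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps=5000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: returns a Markov chain whose
    stationary distribution is the (unnormalised) target."""
    rng = np.random.default_rng(seed)
    chain = [x0]
    lp = log_target(x0)
    for _ in range(n_steps):
        x_new = chain[-1] + step * rng.standard_normal()
        lp_new = log_target(x_new)
        # Accept with probability min(1, target(x_new) / target(x_old)).
        if np.log(rng.random()) < lp_new - lp:
            chain.append(x_new)
            lp = lp_new
        else:
            chain.append(chain[-1])
    return np.array(chain)

samples = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0)  # N(0, 1) target
print(samples.mean(), samples.var())
```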

Budgeted Batch Bayesian Optimization With Unknown Batch Sizes

Budgeted Batch Bayesian Optimization (B3O) is presented; it is shown empirically that B3O outperforms existing fixed-batch BO approaches in finding the optimum whilst requiring fewer evaluations, thus saving cost and time.

Monte Carlo is fundamentally unsound

This work presents some fundamental objections to the Monte Carlo method of numerical integration, which has long been known to numerical analysts and was brought to the attention of the Bayesian statistics community by Kloek & van Dijk (1978).

Probabilistic Integration: A Role in Statistical Computation?

These show that probabilistic integrators can in principle enjoy the "best of both worlds", leveraging the sampling efficiency of Monte Carlo methods whilst providing a principled route to assess the impact of numerical error on scientific conclusions.

Classical quadrature rules via Gaussian processes (T. Karvonen & S. Särkkä, MLSP 2017)

All classical polynomial-based quadrature rules can be interpreted as Bayesian quadrature rules if the covariance kernel is selected suitably, and the resulting Bayesian quadrature rules have zero posterior integral variance.
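
One concrete, easily verified instance of this kernel-to-rule correspondence (our illustrative example, not necessarily the construction used in the paper): with the Brownian-motion kernel k(x, x') = min(x, x') on [0, 1], which pins f(0) = 0, the Bayesian quadrature weights coincide with those of a composite trapezoidal rule extended constantly beyond the last node.

```python
import numpy as np

# Nodes in (0, 1]; the Brownian-motion kernel pins f(0) = 0.
x = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
K = np.minimum(x[:, None], x[None, :])        # Brownian-motion covariance
z = x - 0.5 * x**2                            # kernel mean: int_0^1 min(t, x_i) dt
w_bq = np.linalg.solve(K, z)                  # Bayesian quadrature weights

# Weights of the composite trapezoidal rule on [0, 1] with f(0) = 0,
# extended constantly beyond the last node.
w_trap = np.empty_like(x)
w_trap[0] = x[1] / 2
w_trap[1:-1] = (x[2:] - x[:-2]) / 2
w_trap[-1] = (x[-1] - x[-2]) / 2 + (1 - x[-1])
print(np.allclose(w_bq, w_trap))              # True
```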

Kriging is well-suited to parallelize optimization

This work investigates a multi-points optimization criterion, the multi-points expected improvement (q-EI), aimed at choosing several points at the same time; it proposes two classes of heuristic strategies meant to approximately optimize the q-EI, and applies them to the classical Branin-Hoo test-case function.
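
In practice, q-EI is often estimated by Monte Carlo over the joint GP posterior at a candidate batch; a minimal sketch under the minimisation convention follows, with toy inputs standing in for the posterior mean, covariance, and incumbent.

```python
import numpy as np

def qei_mc(mu, cov, best, n_samples=10000, seed=0):
    """Monte Carlo estimate of the multi-points expected improvement (q-EI)
    for minimisation: E[max(0, best - min_j Y_j)] with Y ~ N(mu, cov),
    the joint GP posterior at a candidate batch of q points."""
    rng = np.random.default_rng(seed)
    Y = rng.multivariate_normal(mu, cov, size=n_samples)
    return np.maximum(0.0, best - Y.min(axis=1)).mean()

# Toy batch of q = 2 correlated candidates under a current best of 0.5.
mu = np.array([0.3, 0.4])
cov = np.array([[0.04, 0.02],
                [0.02, 0.09]])
print(qei_mc(mu, cov, best=0.5))
```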