Corpus ID: 18652680

Bayesian active learning for posterior estimation

@inproceedings{Kandasamy2015BayesianAL,
  title={Bayesian active learning for posterior estimation},
  author={Kirthevasan Kandasamy and Jeff G. Schneider and Barnab{\'a}s P{\'o}czos},
  year={2015}
}
This paper studies active posterior estimation in a Bayesian setting when the likelihood is expensive to evaluate. Existing techniques for posterior estimation are based on generating samples representative of the posterior; such methods do not consider efficiency in terms of likelihood evaluations. In order to be query efficient, we treat posterior estimation in an active regression framework. We propose two myopic query strategies to choose where to evaluate the likelihood and implement them…
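The abstract describes fitting a regression model to expensive likelihood evaluations and actively choosing where to evaluate next. A minimal illustrative sketch of this idea in Python, assuming a squared-exponential GP on the log-likelihood and an exponentiated-variance-style acquisition rule; the toy likelihood, kernel hyperparameters, and query budget are all assumptions for illustration, not the paper's actual settings:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and variance at test points Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    Kss = rbf_kernel(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    mean = Ks.T @ alpha
    var = np.diag(Kss) - (v**2).sum(0)
    return mean, np.maximum(var, 0.0)

def log_likelihood(theta):
    # stand-in for an expensive likelihood: Gaussian peak at 0.3
    return -0.5 * ((theta - 0.3) / 0.1) ** 2

# candidate grid over a 1-D parameter space
grid = np.linspace(0.0, 1.0, 200)[:, None]

# seed with two evaluations, then query actively
X = np.array([[0.1], [0.9]])
y = np.array([log_likelihood(x[0]) for x in X])

for _ in range(8):
    mean, var = gp_posterior(X, y, grid)
    # exponentiated-variance-style acquisition: favour points where
    # the *likelihood* (not the log-likelihood) is most uncertain
    acq = np.exp(2 * mean + var) * (np.exp(var) - 1)
    x_next = grid[np.argmax(acq)]
    X = np.vstack([X, x_next[None, :]])
    y = np.append(y, log_likelihood(x_next[0]))

best = X[np.argmax(y), 0]
```

The key contrast with plain sampling is that each query is chosen to reduce uncertainty about the posterior surface, so the likelihood budget concentrates where it is most informative.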


Adaptive Gaussian Process Approximation for Bayesian Inference with Expensive Likelihood Functions
Proposes a Gaussian process (GP)-based method to approximate the joint distribution of the unknown parameters and the data, building on recent work, and provides an adaptive algorithm to construct such an approximation.
Efficient Acquisition Rules for Model-Based Approximate Bayesian Computation
Proposes to quantify the uncertainty in the ABC posterior density that is due to a lack of simulations, defines a loss function that measures this uncertainty, and selects the next evaluation location to minimise the expected loss.
Variational Bayesian Monte Carlo
A novel sample-efficient inference framework, Variational Bayesian Monte Carlo (VBMC), which combines variational inference with Gaussian-process based, active-sampling Bayesian quadrature, using the latter to efficiently approximate the intractable integral in the variational objective.
Parallel Gaussian process surrogate method to accelerate likelihood-free inference
Motivated by recent progress in batch Bayesian optimisation, develops batch-sequential strategies in which multiple simulations are adaptively selected to minimise either the expected or the median loss function measuring the uncertainty in the resulting posterior.
Inverse Gaussian Process regression for likelihood-free inference
Presents inverse Gaussian process regression (IGPR), a method to compute an approximation of the posterior with a limited number of model simulations, and provides an adaptive algorithm with a tempering procedure to construct approximations of the marginal posterior distributions.
Ranking Documents Through Stochastic Sampling on Bayesian Network-based Models: A Pilot Study
Experimental results suggest that the performance of the Bayesian network model is at least comparable to baselines such as BM25, and that the framework potentially offers new ways of weighting documents.
Factor Screening using Bayesian Active Learning and Gaussian Process Meta-Modelling
In this paper we propose a data-efficient Bayesian active learning framework for factor screening, which is important when dealing with systems which are expensive to evaluate, such as combat
An Exploration of Acquisition and Mean Functions in Variational Bayesian Monte Carlo
Studies the performance of VBMC under variations of two key components of the framework, and proposes and evaluates a new general family of acquisition functions for active sampling, which includes as special cases the acquisition functions used in the original work.

References

Showing 1-10 of 32 references
Sampling for Inference in Probabilistic Models with Fast Bayesian Quadrature
Introduces a warped model for probabilistic integrands (likelihoods) that are known to be non-negative, permitting a cheap active learning scheme to optimally select sample locations.
A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning
A tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions using the Bayesian technique of setting a prior over the objective function and combining it with evidence to get a posterior function.
Active Learning of Model Evidence Using Bayesian Quadrature
Proposes a novel Bayesian quadrature approach for numerical integration when the integrand is non-negative, such as when computing the marginal likelihood, predictive distribution, or normalising constant of a probabilistic model.
Nested sampling for general Bayesian computation
Nested sampling estimates directly how the likelihood function relates to prior mass. The evidence (alternatively the marginal likelihood, marginal density of the data, or the prior predictive) is…
Approximate Bayesian computational methods
A survey of the various improvements and extensions brought to the original ABC algorithm in recent years.
Active Area Search via Bayesian Quadrature
Presents an algorithm for the case where the positive class is defined by a region's average function value being above some threshold with high probability, a problem the authors call active area search, and shows that it outperforms previous algorithms developed for other active search goals.
On sequential Monte Carlo, partial rejection control and approximate Bayesian computation
Proves that the new sampler can reduce the variance of the incremental importance weights compared with standard sequential Monte Carlo samplers, and provides a central limit theorem.
Markov chain Monte Carlo without likelihoods
A Markov chain Monte Carlo method for generating observations from a posterior distribution without the use of likelihoods is presented, which can be used in frequentist applications, in particular for maximum-likelihood estimation.
Gaussian Process Regression: Active Data Selection and Test Point Rejection
It is found that, for both a two-dimensional toy problem and a real-world benchmark problem, the variance is a reasonable criterion for both active data selection and test point rejection.
Kernel Bayes' rule: Bayesian inference with positive definite kernels
Proposes a kernel method for realizing Bayes' rule, based on representations of probabilities in reproducing kernel Hilbert spaces, with applications including likelihood-free Bayesian computation and filtering with a nonparametric state-space model.