Corpus ID: 229363479

Testing whether a Learning Procedure is Calibrated

@inproceedings{Cockayne2020TestingWA,
  title={Testing whether a Learning Procedure is Calibrated},
  author={Jon Cockayne and Matthew M. Graham and Chris J. Oates and Timothy John Sullivan and Onur Teymur},
  year={2020}
}
A learning procedure takes as input a dataset and performs inference for the parameters θ of a model that is assumed to have given rise to the dataset. Here we consider learning procedures whose output is a probability distribution, representing uncertainty about θ after seeing the dataset. Bayesian inference is a prime example of such a procedure, but one can also construct other learning procedures that return distributional output. This paper studies conditions for a learning procedure to…
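To fix ideas, here is a minimal sketch of a learning procedure in this sense: a map from a dataset to a distribution over θ. The model (Gaussian mean with known noise variance) and the function name are illustrative choices, not taken from the paper.

```python
import numpy as np

def learning_procedure(y, prior_mean=0.0, prior_var=1.0, noise_var=1.0):
    """Map a dataset y to a distribution over theta: here the exact
    conjugate Gaussian posterior for an unknown mean with known variance."""
    n = len(y)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + y.sum() / noise_var)
    return post_mean, post_var  # parameters of N(post_mean, post_var)

rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0)             # theta drawn from the prior
y = rng.normal(theta, 1.0, size=20)      # dataset generated from theta
print(learning_procedure(y))             # distributional output for theta
```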
BayesCG As An Uncertainty Aware Version of CG
TLDR
This work's CG-based implementation of BayesCG under a structure-exploiting prior distribution represents an 'uncertainty-aware' version of CG, consisting of CG iterates together with posterior covariances that can be propagated to subsequent computations.
Probabilistic Iterative Methods for Linear Systems
TLDR
This paper presents a probabilistic perspective on iterative methods for approximating the solution x ∈ ℝ^d of a nonsingular linear system Ax = b, and characterises both the rate of contraction of μ_m to an atomic measure on x and the nature of the uncertainty quantification being provided.
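The construction can be sketched as follows (illustrative only; the matrix, prior, and relaxation parameter are assumptions, not the paper's experiments): a Gaussian μ_0 = N(m_0, C_0) is pushed through the affine map of a stationary iteration, so each μ_m is again Gaussian.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))  # nonsingular test matrix
b = rng.standard_normal(d)

omega = 0.5                      # relaxation parameter, chosen so rho(G) < 1
G = np.eye(d) - omega * A        # Richardson iteration x_{m+1} = G x_m + f
f = omega * b

m, C = np.zeros(d), np.eye(d)    # Gaussian prior mu_0 = N(m, C)
for _ in range(200):
    m = G @ m + f                # mean follows the classical iteration
    C = G @ C @ G.T              # covariance contracts toward zero

print(np.linalg.norm(A @ m - b)) # residual of the output mean
print(np.trace(C))               # remaining uncertainty about the solution
```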
Black Box Probabilistic Numerics
TLDR
This paper proposes to construct probabilistic numerical methods based only on the final output from a traditional method, which massively expands the range of tasks to which probabilistic numerics can be applied, inherits the features and performance of state-of-the-art numerical methods, and enables provably higher orders of convergence to be achieved.
Bayesian Numerical Methods for Nonlinear Partial Differential Equations
TLDR
Proof-of-concept experimental results demonstrate that meaningful probabilistic uncertainty quantification for the unknown solution of the PDE can be performed, while controlling the number of times the right-hand side, initial and boundary conditions are evaluated.

References

Showing 1–10 of 73 references
A general framework for updating belief distributions
TLDR
It is argued that a valid update of a prior belief distribution to a posterior can be made for parameters which are connected to observations through a loss function rather than the traditional likelihood function, which is recovered as a special case.
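In generic notation (a sketch of the general form, not copied from the paper, with w > 0 a loss scale), the update is

π(θ | y) ∝ exp{−w ℓ(θ, y)} π(θ),

which recovers the standard Bayesian posterior when ℓ(θ, y) = −log p(y | θ) and w = 1.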
Calibrated Approximate Bayesian Inference
TLDR
It is shown that the original approximate inference had poor coverage for these data and should not be trusted, by exploiting the symmetry of the coverage error under permutation of low-level group labels and smoothing with Bayesian Additive Regression Trees.
Diagnostic tools for approximate Bayesian computation using the coverage property
TLDR
Diagnostic tools for the choice of the kernel scale parameter based on assessing the coverage property are proposed, which asserts that credible intervals have the correct coverage levels in appropriately designed simulation settings.
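As a sketch of the coverage property being assessed (using an exact conjugate posterior, for which coverage holds by construction; in practice the approximate procedure under scrutiny would be substituted):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, reps, hits = 20, 2000, 0
for _ in range(reps):
    theta = rng.normal()                       # theta ~ N(0, 1) prior
    y = rng.normal(theta, 1.0, size=n)         # simulated dataset
    post_var = 1.0 / (1.0 + n)                 # exact conjugate update
    post_mean = post_var * y.sum()
    lo, hi = norm.interval(0.95, loc=post_mean, scale=np.sqrt(post_var))
    hits += lo <= theta <= hi                  # did the interval cover theta?
print(hits / reps)                             # should be close to 0.95
```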
Calibration Procedures for Approximate Bayesian Credible Sets
TLDR
Two calibration procedures for checking the coverage of approximate Bayesian credible sets including intervals estimated using Monte Carlo methods are developed and applied.
Bayesian fractional posteriors
We consider the fractional posterior distribution that is obtained by updating a prior distribution via Bayes' theorem with a fractional likelihood function, a usual likelihood function raised to a fractional power.
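Concretely, for a fractional power α ∈ (0, 1] the update takes the form (notation here is generic, not copied from the paper)

π_α(θ | y) ∝ p(y | θ)^α π(θ),

so that α = 1 recovers the usual posterior.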
Bayesian Synthetic Likelihood
TLDR
The accuracy and computational efficiency of the Bayesian version of the synthetic likelihood (BSL) approach are explored in comparison to a competitor known as approximate Bayesian computation (ABC), as is its sensitivity to its tuning parameters and assumptions.
Bayesian calibration of computer models
TLDR
A Bayesian calibration technique which improves on the traditional approach in two respects, and which attempts to correct for any inadequacy of the model revealed by a discrepancy between the observed data and the model predictions from even the best-fitting parameter values, is presented.
Validating Bayesian Inference Algorithms with Simulation-Based Calibration
TLDR
It is argued that SBC is a critical part of a robust Bayesian workflow, as well as being a useful tool for those developing computational algorithms and statistical software.
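SBC can be sketched as follows: draw θ from the prior, simulate data, draw L posterior samples, and record the rank of θ among them; if the sampler is correct, the ranks are uniform on {0, …, L}. The exact conjugate posterior below is a stand-in for the algorithm under test.

```python
import numpy as np

rng = np.random.default_rng(3)
n, L, reps = 20, 99, 1000
ranks = np.empty(reps, dtype=int)
for i in range(reps):
    theta = rng.normal()                       # theta ~ prior
    y = rng.normal(theta, 1.0, size=n)         # y ~ likelihood given theta
    post_var = 1.0 / (1.0 + n)                 # exact posterior (stand-in for
    post_mean = post_var * y.sum()             # the algorithm under test)
    draws = rng.normal(post_mean, np.sqrt(post_var), size=L)
    ranks[i] = (draws < theta).sum()           # rank statistic in {0, ..., L}

hist, _ = np.histogram(ranks, bins=10, range=(0, L + 1))
print(hist)                                    # roughly flat if calibrated
```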
Variational Inference: A Review for Statisticians
TLDR
Variational inference (VI), a method from machine learning that approximates probability densities through optimization, is reviewed and a variant that uses stochastic optimization to scale up to massive data is derived.
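A minimal sketch of VI as stochastic optimization (illustrative; the target and step size are assumptions): fit q = N(m, s²) to an unnormalized log-density by ascending a Monte Carlo estimate of the ELBO gradient with reparameterized samples.

```python
import numpy as np

def grad_log_target(x):
    # Score of an unnormalized target density: here N(2, 0.5^2).
    return -(x - 2.0) / 0.25

rng = np.random.default_rng(4)
m, log_s, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    eps = rng.standard_normal()
    x = m + np.exp(log_s) * eps                    # reparameterization trick
    g = grad_log_target(x)
    m += lr * g                                    # stochastic d ELBO / d m
    log_s += lr * (g * eps * np.exp(log_s) + 1.0)  # +1.0 from the entropy term

print(m, np.exp(log_s))                            # approx 2.0 and 0.5
```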
Constructing summary statistics for approximate Bayesian computation: semi‐automatic approximate Bayesian computation
TLDR
This work shows how to construct appropriate summary statistics for ABC in a semi‐automatic manner, and shows that optimal summary statistics are the posterior means of the parameters.
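The semi-automatic construction can be sketched as follows (illustrative; the features of the data are assumptions): regress θ on features of pilot simulations, then use the fitted linear predictor, an estimate of the posterior mean E[θ | y], as the ABC summary statistic.

```python
import numpy as np

rng = np.random.default_rng(5)
n, pilots = 20, 5000
theta = rng.normal(size=pilots)                        # pilot draws from prior
Y = rng.normal(theta[:, None], 1.0, size=(pilots, n))  # pilot datasets

# Regress theta on simple features of the data by least squares.
X = np.column_stack([np.ones(pilots), Y.mean(axis=1), Y.var(axis=1)])
beta, *_ = np.linalg.lstsq(X, theta, rcond=None)

def summary(y):
    """Constructed summary statistic: fitted approximation to E[theta | y]."""
    return beta @ np.array([1.0, y.mean(), y.var()])
```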
...