Testing whether a Learning Procedure is Calibrated
@inproceedings{Cockayne2020TestingWA,
  title  = {Testing whether a Learning Procedure is Calibrated},
  author = {Jon Cockayne and M. Graham and Chris J. Oates and T. J. Sullivan},
  year   = {2020}
}
A learning procedure takes a dataset as input and performs inference for the parameters θ of a model that is assumed to have given rise to the dataset. Here we consider learning procedures whose output is a probability distribution representing uncertainty about θ after seeing the dataset. Bayesian inference is a prime example of such a procedure, but one can also construct other learning procedures that return distributional output. This paper studies conditions for a learning procedure to be…
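To make the notion of a calibrated learning procedure concrete, here is a minimal sketch (not the paper's test) of a standard coverage check: draw a "true" parameter from the prior, simulate a dataset, run the learning procedure, and record how often a nominal 80% credible interval contains the true parameter. The conjugate normal-normal model, noise level, and sample sizes below are illustrative assumptions; for an exact Bayesian procedure, the empirical coverage should match the nominal level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative conjugate model: theta ~ N(0, 1), y_i | theta ~ N(theta, SIGMA^2).
SIGMA = 1.0       # known observation noise (assumed for this sketch)
N_OBS = 20        # observations per simulated dataset
N_TRIALS = 2000   # prior draws in the coverage check
LEVEL = 0.80      # nominal credible level
Z = 1.2816        # standard-normal quantile at 0.90, for a central 80% interval

def learning_procedure(y):
    """Exact Bayesian inference: posterior N(mu, tau^2) for theta."""
    precision = 1.0 + len(y) / SIGMA**2          # prior precision + data precision
    mu = (y.sum() / SIGMA**2) / precision        # posterior mean
    return mu, 1.0 / np.sqrt(precision)          # posterior mean and std

covered = 0
for _ in range(N_TRIALS):
    theta_star = rng.normal(0.0, 1.0)                # sample "true" theta from the prior
    y = rng.normal(theta_star, SIGMA, size=N_OBS)    # simulate a dataset given theta
    mu, tau = learning_procedure(y)
    if abs(theta_star - mu) <= Z * tau:              # central 80% credible interval
        covered += 1

coverage = covered / N_TRIALS
print(f"empirical coverage: {coverage:.3f} (nominal {LEVEL})")
```

Because the procedure here is exact Bayesian inference, the coverage matches the nominal level up to Monte Carlo error; an over- or under-confident approximate procedure would show coverage systematically below or above 80%.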