Corpus ID: 29978167

Using stacking to average Bayesian predictive distributions

@inproceedings{Yao2017UsingST,
  title={Using stacking to average Bayesian predictive distributions},
  author={Yuling Yao and Andrew Gelman},
  year={2017}
}
Abstract

The widely recommended procedure of Bayesian model averaging is flawed in the M-open setting, in which the true data-generating process is not one of the candidate models being fit. We take the idea of stacking from the point estimation literature and generalize it to the combination of predictive distributions, extending the utility function to any proper scoring rule, using Pareto smoothed importance sampling to efficiently compute the required leave-one-out posterior distributions and…
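To make the combination concrete, here is a minimal Python sketch of stacking under the log score: given a matrix of leave-one-out log predictive densities, it finds simplex weights that maximize the summed LOO log score. The names (`stacking_weights`, `lpd`) are illustrative rather than from the paper, and the softmax reparameterization is just one way to enforce the simplex constraint.

```python
import numpy as np
from scipy.optimize import minimize

def stacking_weights(lpd):
    """Stacking weights from an (n, K) array of LOO log predictive densities."""
    n, K = lpd.shape
    # Row-wise stabilization: rescaling each row by a constant shifts the
    # objective by a constant and does not change the optimal weights.
    pd = np.exp(lpd - lpd.max(axis=1, keepdims=True))

    def neg_log_score(z):
        w = np.exp(z) / np.exp(z).sum()   # softmax keeps w on the simplex
        return -np.sum(np.log(pd @ w))    # negative LOO log score

    res = minimize(neg_log_score, np.zeros(K), method="BFGS")
    return np.exp(res.x) / np.exp(res.x).sum()

# Toy usage with fake LOO log densities for three candidate models.
rng = np.random.default_rng(0)
lpd = rng.normal(loc=-1.0, scale=0.3, size=(100, 3))
print(stacking_weights(lpd))  # nonnegative weights summing to 1
```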

References

Showing 1–10 of 41 references
Bayesian Model Assessment and Comparison Using Cross-Validation Predictive Densities
TLDR
This work proposes an approach using cross-validation predictive densities to obtain expected utility estimates and the Bayesian bootstrap to obtain samples from their distributions, and discusses the probabilistic assumptions made and the properties of two practical cross-validation methods, importance sampling and k-fold cross-validation.
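As a worked form of the importance-sampling method mentioned there (notation mine, with draws \(\theta^{(s)}\), \(s = 1, \dots, S\), from the full-data posterior):

\[
\hat{p}(y_i \mid y_{-i}) = \frac{\sum_{s=1}^{S} w_i^{(s)}\, p(y_i \mid \theta^{(s)})}{\sum_{s=1}^{S} w_i^{(s)}},
\qquad
w_i^{(s)} \propto \frac{1}{p(y_i \mid \theta^{(s)})},
\]

which collapses to the harmonic-mean estimator \(\hat{p}(y_i \mid y_{-i}) = \big(\tfrac{1}{S} \sum_{s=1}^{S} 1/p(y_i \mid \theta^{(s)})\big)^{-1}\). Its instability under heavy-tailed weights is what the Pareto smoothing cited below addresses.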
Comparing Bayes Model Averaging and Stacking When Model Approximation Error Cannot be Ignored
  B. Clarke, J. Mach. Learn. Res., 2003
TLDR
Bayes Model Averaging is compared to a non-Bayes form of model averaging called stacking, and the results suggest that stacking has better robustness properties than BMA in the most important settings.
A Bayes interpretation of stacking for M-complete and M-open settings
TLDR
It is shown that the stacking weights also asymptotically minimize a posterior expected loss, formally providing a Bayesian justification for cross-validation.
Turning Bayesian model averaging into Bayesian model combination
TLDR
It is shown that even the most simplistic of Bayesian model combination strategies outperforms the traditional ad hoc techniques of bagging and boosting, as well as BMA, over a wide variety of cases. This suggests that the power of ensembles comes not from their ability to account for model uncertainty but from the changes in representational and preferential bias inherent in combining several different models.
Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC
TLDR
An efficient computation of LOO is introduced using Pareto-smoothed importance sampling (PSIS), a new procedure for regularizing importance weights, and it is demonstrated that PSIS-LOO is more robust in the finite case with weak priors or influential observations.
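A hedged sketch of the smoothing step described there: fit a generalized Pareto distribution to the largest importance ratios and replace them with expected order statistics of the fit. The 20% tail fraction and the plain maximum-likelihood fit via `scipy.stats.genpareto` are my simplifications, not the paper's exact estimator.

```python
import numpy as np
from scipy.stats import genpareto

def pareto_smooth(ratios, tail_frac=0.2):
    """Smooth the upper tail of raw importance ratios; returns (weights, k_hat)."""
    w = np.asarray(ratios, dtype=float).copy()
    raw_max = w.max()
    S = w.size
    M = int(np.ceil(tail_frac * S))            # size of the upper tail
    order = np.argsort(w)
    tail = order[-M:]                          # indices of the M largest ratios
    cutoff = w[order[-M - 1]]                  # largest ratio left untouched

    # Fit a generalized Pareto distribution to exceedances over the cutoff.
    k_hat, _, sigma = genpareto.fit(w[tail] - cutoff, floc=0.0)

    # Replace the tail (already in ascending order) with expected order
    # statistics of the fitted distribution, capped at the observed maximum.
    probs = (np.arange(1, M + 1) - 0.5) / M
    w[tail] = cutoff + genpareto.ppf(probs, k_hat, loc=0.0, scale=sigma)
    np.minimum(w, raw_max, out=w)
    return w, k_hat                            # k_hat diagnoses reliability

# Toy usage: lognormal ratios mimic heavy-tailed importance weights.
raw = np.exp(np.random.default_rng(1).normal(size=4000))
smoothed, k_hat = pareto_smooth(raw)
print(round(k_hat, 2))  # in PSIS practice, large k_hat flags unreliable LOO
```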
Comparison of Bayesian predictive methods for model selection
TLDR
The study demonstrates that model selection can greatly benefit from using cross-validation outside the search process, both for guiding model-size selection and for assessing the predictive performance of the finally selected model.
A framework for probabilistic inferences from imperfect models
TLDR
This work proposes a new concept of absolute model probabilities, which measure the quality of imperfect models, provides simple analytic forms for routine implementation, and shows that these D-probabilities automatically penalize model complexity.
Minimax Optimal Bayesian Aggregation
TLDR
This paper proposes Bayesian convex and linear aggregation approaches motivated by regression applications and shows that the proposed approach is minimax optimal when the true data-generating model is a convex or linear combination of models in the list.
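For concreteness, the two aggregation classes referred to there can be written as (notation mine, not the paper's):

\[
\hat{f} = \sum_{k=1}^{K} w_k f_k,
\qquad \text{convex: } w_k \ge 0,\ \sum_{k=1}^{K} w_k = 1;
\qquad \text{linear: } w \in \mathbb{R}^K,
\]

where \(f_1, \dots, f_K\) are the candidate models and minimax optimality means the aggregate attains the best worst-case rate when the truth lies in the corresponding class of combinations.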
Bayesian Model Averaging: A Tutorial
TLDR
Bayesian model averaging (BMA) provides a coherent mechanism for accounting for model uncertainty and yields improved out-of-sample predictive performance.
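The standard BMA identities behind that claim, for a quantity of interest \(\Delta\) and candidate models \(M_1, \dots, M_K\):

\[
p(\Delta \mid y) = \sum_{k=1}^{K} p(\Delta \mid M_k, y)\, p(M_k \mid y),
\qquad
p(M_k \mid y) = \frac{p(y \mid M_k)\, p(M_k)}{\sum_{j=1}^{K} p(y \mid M_j)\, p(M_j)}.
\]

Asymptotically these weights concentrate on the single model closest in Kullback–Leibler divergence to the data-generating process, which is why the abstract above flags BMA as problematic in the M-open setting.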
Optimal Prediction Pools