Ensemble Learning in Bayesian Neural Networks

@inproceedings{Barber1998EnsembleLI,
  title={Ensemble Learning in Bayesian Neural Networks},
  author={David Barber and Christopher M. Bishop},
  year={1998}
}
Bayesian treatments of learning in neural networks are typically based either on a local Gaussian approximation to a mode of the posterior weight distribution, or on Markov chain Monte Carlo simulations. A third approach, called ensemble learning, was introduced by Hinton and van Camp (1993). It aims to approximate the posterior distribution by minimizing the Kullback-Leibler divergence between the true posterior and a parametric approximating distribution. The original derivation of a…
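
In ensemble learning, the approximating distribution q(w) is fitted by minimizing KL(q || p(w|D)) = ∫ q(w) ln[q(w)/p(w|D)] dw, which is equivalent, up to the constant log-evidence, to minimizing the variational free energy F[q] = E_q[-ln p(D|w)] + KL(q(w) || p(w)). The sketch below illustrates this objective on a toy regression network, assuming a diagonal Gaussian q (as in Hinton and van Camp's original scheme) and a single-sample Monte Carlo estimate of the expected log-likelihood via the reparameterization trick; the paper itself evaluates the expectations analytically, and its contribution is an extension to full-covariance Gaussians. The architecture, data, and optimizer choices here are illustrative, not taken from the paper.

# Minimal sketch of ensemble (variational) learning for a one-hidden-layer
# Bayesian network. Assumptions (not from the paper): diagonal Gaussian q(w),
# standard-normal prior p(w), known observation noise, Monte Carlo gradients.
import torch

torch.manual_seed(0)

# Toy regression data: y = sin(x) + noise.
x = torch.linspace(-3, 3, 50).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

n_hidden = 16
n_weights = n_hidden + n_hidden + n_hidden + 1  # W1, b1, W2, b2

# Variational parameters: mean and log-std of the diagonal Gaussian q(w).
mu = torch.zeros(n_weights, requires_grad=True)
log_sigma = torch.full((n_weights,), -3.0, requires_grad=True)

def unpack(w):
    # Split the flat weight vector into layer parameters.
    i = 0
    W1 = w[i:i + n_hidden].view(1, n_hidden); i += n_hidden
    b1 = w[i:i + n_hidden]; i += n_hidden
    W2 = w[i:i + n_hidden].view(n_hidden, 1); i += n_hidden
    b2 = w[i:]
    return W1, b1, W2, b2

def forward(xb, w):
    W1, b1, W2, b2 = unpack(w)
    return torch.tanh(xb @ W1 + b1) @ W2 + b2

opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)
noise_var = 0.1 ** 2  # assumed known observation noise

for step in range(2000):
    sigma = log_sigma.exp()
    # Reparameterized sample w ~ q(w) = N(mu, diag(sigma^2)).
    w = mu + sigma * torch.randn(n_weights)
    # Expected negative log-likelihood (single-sample Monte Carlo estimate).
    nll = ((y - forward(x, w)) ** 2).sum() / (2 * noise_var)
    # Analytic KL(q || p) between diagonal Gaussians with p = N(0, I).
    kl = 0.5 * (sigma ** 2 + mu ** 2 - 1.0 - 2 * log_sigma).sum()
    loss = nll + kl  # variational free energy F[q]
    opt.zero_grad()
    loss.backward()
    opt.step()

After training, predictions are averaged over samples w ~ q(w); this ensemble of networks drawn from the approximating posterior is what gives the method its name.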

Citations

158 publications cite this paper.

[Chart: Citations per Year, 1998-2015]
Semantic Scholar estimates that this publication has 158 citations based on the available data.


References

Showing 1-10 of 21 publications referenced by this paper.

Neural Networks for Pattern Recognition

  • C. M. Bishop
  • Oxford University Press
  • 1995
Highly Influential

A new view of the EM algorithm that justifies incremental and other variants

  • R. M. Neal, G. E. Hinton
  • Learning in Graphical Models. Kluwer
  • 1998

An introduction to variational methods for graphical models

  • M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, L. K. Saul
  • Learning in Graphical Models. Kluwer
  • 1998

Approximating posteriors via mixture models

  • T. Jaakkola, M. I. Jordan
  • Learning in Graphical Models. Kluwer
  • 1998

Latent variables, mixture distributions and topographic mappings

  • C. M. Bishop
  • Learning in Graphical Models. Kluwer
  • 1998