Publications
An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants
TLDR
It is found that Bagging improves when probabilistic estimates are used in conjunction with no-pruning, as well as when the data are backfit, and that Arc-x4 behaves differently from AdaBoost if reweighting is used instead of resampling, indicating a fundamental difference between the two algorithms.
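For context, here is a minimal sketch of the reweighting-versus-resampling distinction in a single boosting round. This is not the paper's code; the dataset, base learner, and weight-update details are illustrative assumptions in an AdaBoost-style setup.

```python
# Illustrative sketch only: contrasts the two ways a boosting round can use
# example weights. Dataset and base learner are arbitrary choices, not the
# paper's experimental setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)
rng = np.random.default_rng(0)
w = np.full(len(X), 1.0 / len(X))  # uniform initial example weights

# Reweighting: the base learner sees every example, weighted directly.
stump_rw = DecisionTreeClassifier(max_depth=1, random_state=0)
stump_rw.fit(X, y, sample_weight=w)

# Resampling: the base learner sees a bootstrap sample drawn with
# probability proportional to the weights (duplicates/omissions possible).
idx = rng.choice(len(X), size=len(X), replace=True, p=w)
stump_rs = DecisionTreeClassifier(max_depth=1, random_state=0)
stump_rs.fit(X[idx], y[idx])

# AdaBoost-style weight update after the round (assumes 0 < err < 0.5):
miss = stump_rw.predict(X) != y
err = w[miss].sum()
alpha = 0.5 * np.log((1 - err) / err)
w = w * np.exp(np.where(miss, alpha, -alpha))
w /= w.sum()  # renormalize for the next round
```

Under reweighting every example influences each round's fit, while under resampling the learner sees a stochastic, duplicated view of the data; the TLDR's point is that Arc-x4 and AdaBoost respond differently to this choice.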
Update Rules for Parameter Estimation in Bayesian Networks
TLDR
This paper provides a unified framework for parameter estimation that encompasses both on-line and batch learning, with empirical and theoretical results indicating that parameterized EM converges to the maximum-likelihood parameters faster than standard EM.
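A minimal sketch of the "parameterized EM" idea the TLDR refers to: interpolating between the current parameters and the standard EM estimate with a learning rate eta, where eta = 1 recovers standard EM. The mixture-of-binomials model and all names below are illustrative assumptions, not the paper's Bayesian-network setup.

```python
# Illustrative sketch of a learning-rate-parameterized EM update:
#   theta_new = (1 - eta) * theta_old + eta * theta_EM
# eta = 1 is standard EM; eta > 1 can speed convergence but may overshoot
# (step-size choice and convergence conditions are the paper's subject).
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: two coins with biases 0.2 and 0.8, ten flips per trial.
true_p = np.array([0.2, 0.8])
z = rng.integers(0, 2, size=100)        # hidden coin choice per trial
heads = rng.binomial(10, true_p[z])     # observed head counts

def em_step(p, heads, n=10):
    """One EM iteration for a 50/50 mixture of two binomials."""
    # E-step: posterior responsibility of each coin for each trial.
    lik = np.stack([p_k**heads * (1 - p_k)**(n - heads) for p_k in p])
    resp = lik / lik.sum(axis=0)
    # M-step: expected-count MLE of each coin's bias.
    return (resp @ heads) / (resp.sum(axis=1) * n)

eta = 1.2                               # eta = 1 would be standard EM
p = np.array([0.4, 0.6])                # initial parameter guess
for _ in range(50):
    p_em = em_step(p, heads)
    p = (1 - eta) * p + eta * p_em      # parameterized update
print(p)                                # approaches the true biases
```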