• Corpus ID: 9323183

Improving parameter learning of Bayesian nets from incomplete data

@article{Corani2011ImprovingPL,
  title={Improving parameter learning of Bayesian nets from incomplete data},
  author={Giorgio Corani and Cassio Polpo de Campos},
  journal={ArXiv},
  year={2011},
  volume={abs/1110.3239}
}
  • Giorgio Corani, Cassio Polpo de Campos
  • Published 12 October 2011
  • Computer Science
  • ArXiv
This paper addresses the estimation of parameters of a Bayesian network from incomplete data. The task is usually tackled by running the Expectation-Maximization (EM) algorithm several times in order to obtain a high log-likelihood estimate. We argue that choosing the maximum log-likelihood estimate (as well as the maximum penalized log-likelihood and the maximum a posteriori estimate) has severe drawbacks, being affected both by overfitting and model uncertainty. Two ideas are discussed to… 
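As a rough illustration of the standard practice the abstract refers to (running EM from several random starts on incomplete data and keeping the run with the highest log-likelihood), here is a minimal sketch. The two-node network X -> Y, the toy data set, and all helper names are assumptions made for illustration; this is not the authors' code or method.

import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(data, p_x, p_y_given_x):
    """Observed-data log-likelihood; rows are (x, y) with x possibly missing (None)."""
    ll = 0.0
    for x, y in data:
        if x is None:
            # Marginalize over the missing parent X.
            ll += np.log(sum(p_x[v] * p_y_given_x[v][y] for v in (0, 1)))
        else:
            ll += np.log(p_x[x] * p_y_given_x[x][y])
    return ll

def em(data, n_iter=100):
    # Random initialization of P(X) and P(Y|X) for a binary X -> Y network.
    p_x = rng.dirichlet([1, 1])
    p_y_given_x = rng.dirichlet([1, 1], size=2)
    for _ in range(n_iter):
        # E-step: expected counts, filling in missing X by its posterior.
        cx = np.zeros(2)
        cxy = np.zeros((2, 2))
        for x, y in data:
            if x is None:
                post = np.array([p_x[v] * p_y_given_x[v][y] for v in (0, 1)])
                post /= post.sum()
                cx += post
                cxy[:, y] += post
            else:
                cx[x] += 1
                cxy[x, y] += 1
        # M-step: maximum-likelihood re-estimation from expected counts.
        p_x = cx / cx.sum()
        p_y_given_x = cxy / cxy.sum(axis=1, keepdims=True)
    return p_x, p_y_given_x

# Toy incomplete data set (assumed): (x, y) pairs, x missing in some rows.
data = [(0, 0), (0, 1), (1, 1), (None, 1), (None, 0), (1, 1), (None, 1)]

# Multi-restart EM: keep the parameter set with the highest log-likelihood,
# i.e. the estimate whose drawbacks (overfitting, ignored model uncertainty)
# the paper argues against.
best = max((em(data) for _ in range(20)),
           key=lambda params: log_likelihood(data, *params))
print("best P(X):", best[0])
print("best P(Y|X):", best[1])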
1 Citation

Figures and Tables from this paper

Discovering Subgroups of Patients from DNA Copy Number Data Using NMF on Compacted Matrices

TLDR
The aim of this work is to derive a procedure for compacting high-dimensional data, in order to improve the applicability of NMF without compromising clustering quality, particularly for analyzing high-resolution microarray data.

References

SHOWING 1-10 OF 11 REFERENCES

Robust Bayesian Linear Classifier Ensembles

TLDR
The conjugate distribution for one-dependence estimators is developed, and it is shown empirically that uniform averaging is clearly superior to Bayesian model averaging for this family of models, while maximum a posteriori linear mixture weights improve accuracy significantly over uniform aggregation.

Exact model averaging with naive Bayesian classifiers

TLDR
This paper shows that, given N features of interest, it is possible to perform tractable exact model averaging (MA) over all 2^N possible feature-set models, and shows that the resulting classifier can be constructed using the same time and space complexity required to construct a single naive Bayes classifier with MAP parameters.
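For context, a standard algebraic identity (assumed here as background, not necessarily the exact derivation used in this reference) explains why averaging over all 2^N feature subsets can be tractable: a sum over subsets factorizes into a product over features,

\sum_{S \subseteq \{1,\dots,N\}} \; \prod_{i \in S} a_i \;=\; \prod_{i=1}^{N} \left(1 + a_i\right),

so a naive Bayes average over feature-set models reduces to N per-feature factors rather than an exponential enumeration.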

The EM algorithm and extensions

TLDR
The EM Algorithm and Extensions describes the formulation of the EM algorithm, details its methodology, discusses its implementation, and illustrates applications in many statistical contexts, opening the door to the tremendous potential of this remarkably versatile statistical tool.

Bayesian model averaging: a tutorial (with comments by M. Clyde, David Draper and E. I. George, and a rejoinder by the authors)

TLDR
Bayesian model averaging (BMA) provides a coherent mechanism for accounting for model uncertainty and yields improved out-of-sample predictive performance.
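For reference, the textbook BMA predictive average that this tutorial describes (standard notation, assumed here rather than quoted from the entry): given data D, candidate models M_1, ..., M_K, and a quantity of interest \Delta,

p(\Delta \mid D) \;=\; \sum_{k=1}^{K} p(\Delta \mid M_k, D)\, p(M_k \mid D),

i.e. each model's prediction is weighted by its posterior probability, which is how BMA accounts for model uncertainty.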

Credal Networks under Maximum Entropy

TLDR
This work presents a new kind of maximum entropy model, computed sequentially, and shows that for all general Bayesian networks the sequential maximum entropy model coincides with the unique joint distribution.

Combining Statistical Language Models via the Latent Maximum Entropy Principle

We present a unified probabilistic framework for statistical language modeling which can simultaneously incorporate various aspects of natural language, such as local word interaction, syntactic…

The EM algorithm for graphical association models with missing data

The ALARM Monitoring System: A Case Study with two Probabilistic Inference Techniques for Belief Networks

TLDR
Two algorithms were applied to this belief network: a message-passing algorithm by Pearl for probability updating in multiply connected networks using the method of conditioning, and the Lauritzen-Spiegelhalter algorithm for local probability computations on graphical structures.

Maximum likelihood from incomplete data via the EM algorithm, plus discussions on the paper
