Two or more Bayesian network structures are Markov equivalent when the corresponding acyclic digraphs encode the same set of conditional independencies. Therefore, the search space of Bayesian network structures may be organized in equivalence classes, where each of them represents a different set of conditional independencies. The collection of sets of …
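The equivalence classes mentioned above rest on the classical characterization that two DAGs encode the same conditional independencies iff they share the same skeleton and the same v-structures. A minimal sketch of that test (helper names and the parent-dictionary representation are illustrative assumptions, not from the abstract):

```python
# Checking Markov equivalence of two DAGs via the skeleton/v-structure
# criterion.  A DAG is given as {node: set_of_parents}.
from itertools import combinations

def skeleton(dag):
    """Undirected edge set of the DAG."""
    return {frozenset((p, child))
            for child, parents in dag.items() for p in parents}

def v_structures(dag):
    """Colliders a -> c <- b whose endpoints a, b are non-adjacent."""
    skel = skeleton(dag)
    return {(a, child, b)
            for child, parents in dag.items()
            for a, b in combinations(sorted(parents), 2)
            if frozenset((a, b)) not in skel}

def markov_equivalent(d1, d2):
    return skeleton(d1) == skeleton(d2) and v_structures(d1) == v_structures(d2)

g1 = {"X": set(), "Y": set(), "Z": {"X", "Y"}}  # collider X -> Z <- Y
g2 = {"X": set(), "Z": {"X"}, "Y": {"Z"}}       # chain    X -> Z -> Y
g3 = {"Z": set(), "X": {"Z"}, "Y": {"Z"}}       # fork     X <- Z -> Y
print(markov_equivalent(g2, g3))  # True: chain and fork are equivalent
print(markov_equivalent(g1, g2))  # False: the collider is its own class
```

The chain and the fork land in the same equivalence class because both encode only X ⫫ Y | Z, while the collider encodes X ⫫ Y instead.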
Hierarchical latent class (HLC) models are tree-structured Bayesian networks where leaf nodes are observed while internal nodes are hidden. In earlier work, we have demonstrated in principle the possibility of reconstructing HLC models from data. We address the scalability issue and develop a search-based algorithm that can efficiently learn high-quality …
This paper proposes and evaluates the k-greedy equivalence search algorithm (KES) for learning Bayesian networks (BNs) from complete data. The main characteristic of KES is that it allows a trade-off between greediness and randomness, thus exploring different good local optima when run repeatedly. When greediness is set at maximum, KES corresponds to the …
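One plausible reading of such a greediness–randomness trade-off is: among the score-improving neighbours of the current state, sample a fraction k of them and move to the best of that sample, so k = 1 recovers pure greedy search while small k injects randomness. The sketch below runs on a toy integer line standing in for the space of equivalence classes; the exact sampling rule and all names are illustrative assumptions, not the paper's definition of KES:

```python
# k-greedy local search: greedy within a random sample of the improving moves.
import math
import random

def k_greedy_search(start, score, neighbours, k=0.5, rng=random):
    state = start
    while True:
        better = [n for n in neighbours(state) if score(n) > score(state)]
        if not better:
            return state                       # local optimum reached
        sample_size = max(1, math.ceil(k * len(better)))
        candidates = rng.sample(better, sample_size)
        state = max(candidates, key=score)     # greedy within the sample

# Toy example: maximize a bumpy function over the integers.
score = lambda x: -(x - 7) ** 2 + 3 * math.sin(x)
neighbours = lambda x: [x - 1, x + 1]
print(k_greedy_search(0, score, neighbours, k=1.0))  # pure greedy run -> 7
```

Run repeatedly with different k (and different random seeds) from many start states, such a search visits different local optima, which is the behaviour the abstract attributes to KES.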
The search space of Bayesian network structures is usually defined as directed acyclic graphs (DAGs), and the search is done by local transformations of DAGs. But the space of Bayesian networks is ordered with respect to inclusion, and it is natural to consider that a good search policy should take this into account. The first attempt to do this (Chick…
Model complexity is an important factor to consider when selecting among graphical models. When all variables are observed, the complexity of a model can be measured by its standard dimension, i.e. the number of independent parameters. When hidden variables are present, however, standard dimension might no longer be appropriate. One should instead use …
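The gap between standard and effective dimension can be made concrete numerically: the effective dimension is the generic rank of the Jacobian of the map from parameters to the joint distribution. The sketch below does this for a latent class model with one binary hidden variable and two binary observed variables; the setup, finite-difference scheme, and tolerance are illustrative assumptions of this sketch, not taken from the abstract:

```python
# Standard vs effective dimension of a tiny latent class model:
# binary hidden H, binary observed X1, X2.
# Parameters: P(H=1) and P(X_j=1 | H=h) for h, j in {0,1}  ->  5 parameters.
import numpy as np

def joint(params):
    """Map the 5 parameters to the 4-cell joint distribution P(x1, x2)."""
    pi, t = params[0], params[1:].reshape(2, 2)
    p = []
    for x1 in (0, 1):
        for x2 in (0, 1):
            p.append(sum((pi if h else 1 - pi)
                         * (t[h, 0] if x1 else 1 - t[h, 0])
                         * (t[h, 1] if x2 else 1 - t[h, 1])
                         for h in (0, 1)))
    return np.array(p)

params = np.array([0.3, 0.2, 0.7, 0.6, 0.4])   # a generic parameter point
eps = 1e-6
# Central-difference Jacobian, one column per parameter.
J = np.stack([(joint(params + eps * e) - joint(params - eps * e)) / (2 * eps)
              for e in np.eye(5)], axis=1)
print("standard dimension:", 5)
print("effective dimension:", np.linalg.matrix_rank(J, tol=1e-8))  # 3
```

The standard dimension is 5, but the image lives inside the 3-dimensional simplex of distributions over two binary variables, so the Jacobian rank (effective dimension) is only 3: counting raw parameters overstates this model's complexity.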
The inclusion problem deals with how to characterize (in graphical terms) whether all independence statements in the model induced by a DAG K are in the model induced by a second DAG L. Meek (1997) conjectured that this inclusion holds iff there exists a sequence of DAGs from L to K such that only certain 'legal' arrow reversal and 'legal' arrow adding …
There exist many algorithms for the construction of Bayesian networks (BNs). But almost all computations in BNs are carried out by transforming them into another special type of probabilistic model: decomposable models (DMs). This transformation is known to be an NP-hard problem, and today's algorithms for the construction of BNs cannot guarantee the …
Hierarchical latent class (HLC) models are tree-structured Bayesian networks where leaf nodes are observed while internal nodes are latent. There are no theoretically well justified model selection criteria for HLC models in particular and Bayesian networks with latent nodes in general. Nonetheless, empirical studies suggest that the BIC score is a …
A problem in learning latent class models (also known as naive Bayes models with a hidden class variable) is that parameters corresponding to a local maximum of the likelihood are often found. This leads not only to sub-optimal parameters, but also to a wrong number of classes (components) for a hidden variable. The standard solution of having many random starting points for the EM algorithm …
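The standard multiple-restart strategy mentioned above can be sketched for binary data as follows: run EM from several random starting points and keep the run with the best log-likelihood. This is an illustrative implementation of the baseline being discussed, not the paper's proposed method, and all names are assumptions of this sketch:

```python
# EM for a latent class model (naive Bayes with a hidden class) on binary
# data, with multiple random restarts to escape poor local maxima.
import numpy as np

def em_latent_class(X, n_classes, rng, n_iter=100):
    n, d = X.shape
    pi = rng.dirichlet(np.ones(n_classes))               # class prior
    theta = rng.uniform(0.2, 0.8, (n_classes, d))        # P(x_j=1 | class)
    for _ in range(n_iter):
        # E-step: responsibilities P(class | x), computed in log space.
        log_p = np.log(pi) + X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        resp = np.exp(log_p - log_p.max(axis=1, keepdims=True))
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate prior and class-conditional probabilities.
        nk = resp.sum(axis=0)
        pi = nk / n
        theta = np.clip((resp.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    log_p = np.log(pi) + X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
    ll = np.logaddexp.reduce(log_p, axis=1).sum()        # final log-likelihood
    return ll, pi, theta

def em_with_restarts(X, n_classes, n_restarts=10, seed=0):
    rng = np.random.default_rng(seed)
    runs = [em_latent_class(X, n_classes, rng) for _ in range(n_restarts)]
    return max(runs, key=lambda t: t[0])                 # best log-likelihood

# Toy data: two well-separated classes of 5 binary features.
rng = np.random.default_rng(1)
z = rng.integers(0, 2, size=300)
X = (rng.random((300, 5)) < np.where(z[:, None] == 0, 0.9, 0.1)).astype(float)
ll, pi, theta = em_with_restarts(X, n_classes=2, n_restarts=5)
print(ll)
```

Keeping only the best of several runs reduces, but does not eliminate, the risk the abstract describes: a local maximum can still distort both the parameters and the apparent number of classes.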