Effective and efficient structure learning with pruning and model averaging strategies

@article{Constantinou2021EffectiveAE,
  title={Effective and efficient structure learning with pruning and model averaging strategies},
  author={Anthony C. Constantinou and Yang Liu and Neville Kenneth Kitson and Kiattikun Chobtham and Zhi-gao Guo},
  journal={Int. J. Approx. Reason.},
  year={2021},
  volume={151},
  pages={292-321}
}

A survey of Bayesian Network structure learning

This paper provides a comprehensive review of combinatorial algorithms proposed for learning BN structure from data, describing 74 algorithms including prototypical, well-established and state-of-the-art approaches.

Information fusion between knowledge and data in Bayesian network structure learning

The overall results show that knowledge generally becomes less important with big data, because the higher learning accuracy achievable from large samples renders it less influential, although some of the knowledge approaches are actually found to be more important with big data.

The impact of prior knowledge on causal structure learning

The main conclusion is that a reduced search space obtained from knowledge does not always imply reduced computational complexity, perhaps because the relationships implied by the data and the knowledge are in tension.

References


Approximate Learning of High Dimensional Bayesian Network Structures via Pruning of Candidate Parent Sets

This paper explores a strategy towards pruning the size of candidate parent sets, and which could form part of existing score-based algorithms as an additional pruning phase aimed at high dimensionality problems.
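The pruning idea summarized above — discarding candidate parent sets that cannot improve on their own subsets under a decomposable score — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the BIC scoring routine, the function names, and the subset-pruning rule shown here are generic textbook versions assumed for demonstration.

```python
import math
from itertools import combinations

def bic_score(data, child, parents):
    """BIC local score for a discrete child variable given a candidate
    parent set. data: list of dicts mapping variable name -> value."""
    n = len(data)
    joint, marg = {}, {}
    for row in data:
        p = tuple(row[v] for v in parents)
        joint[(p, row[child])] = joint.get((p, row[child]), 0) + 1
        marg[p] = marg.get(p, 0) + 1
    # Maximized log-likelihood of child given its parent configuration.
    loglik = sum(c * math.log(c / marg[p]) for (p, x), c in joint.items())
    child_states = len({row[child] for row in data})
    parent_configs = 1
    for v in parents:
        parent_configs *= len({row[v] for row in data})
    # BIC complexity penalty: free parameters per parent configuration.
    penalty = 0.5 * math.log(n) * (child_states - 1) * parent_configs
    return loglik - penalty

def pruned_candidate_sets(data, child, others, max_size=2):
    """Keep a candidate parent set only if its score beats every kept
    proper subset -- a standard pruning rule for decomposable scores."""
    kept = {(): bic_score(data, child, ())}
    for k in range(1, max_size + 1):
        for s in combinations(others, k):
            score = bic_score(data, child, s)
            if all(score > kept[sub] for sub in kept
                   if set(sub) < set(s)):
                kept[s] = score
    return kept
```

Because the score decomposes per variable, this pruning phase can run independently for each node before any score-based search, which is what makes it attractive for high-dimensional problems.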

The max-min hill-climbing Bayesian network structure learning algorithm

The first empirical results simultaneously comparing most of the major Bayesian network algorithms against each other are presented, namely the PC, Sparse Candidate, Three Phase Dependency Analysis, Optimal Reinsertion, Greedy Equivalence Search, and Greedy Search.

Evaluating structure learning algorithms with a balanced scoring function

A Balanced Scoring Function (BSF) is proposed that eliminates this bias by adjusting the reward function based on the difficulty of discovering an edge, or no edge, proportional to their occurrence rate in the ground truth graph.

Maximal ancestral graph structure learning via exact search

This work develops methodology for score-based structure learning of directed maximal ancestral graphs employing a linear Gaussian BIC score, as well as score pruning techniques, which are essential for exact structure learning approaches.

Bayesian network learning with cutting planes

The problem of learning the structure of Bayesian networks from complete discrete data with a limit on parent set size is considered and it is shown that this is a particularly fast method for exact BN learning.

A Gibbs Sampler for Learning DAGs

The proposed Gibbs sampler for structure learning in directed acyclic graph (DAG) models gives robust results in diverse settings, outperforming several existing Bayesian and frequentist methods.

Learning Equivalence Classes of Bayesian-Network Structures

It is argued that it is often appropriate to search among equivalence classes of network structures, as opposed to the more common approach of searching among individual Bayesian-network structures. A convenient graphical representation for an equivalence class of structures is described, along with a set of operators that a search algorithm can apply to that representation to move among equivalence classes.

Finding the k-best Equivalence Classes of Bayesian Network Structures for Model Averaging

This algorithm goes beyond the maximum-a-posteriori (MAP) model by listing the most likely network structures and their relative likelihood and therefore has important applications in causal structure discovery.

Learning Bayesian Networks That Enable Full Propagation of Evidence

The results suggest that the proposed algorithm discovers satisfactorily accurate connected DAGs in cases where other algorithms produce multiple disjoint subgraphs that often underfit the true graph.