Corpus ID: 244773620

Effective and efficient structure learning with pruning and model averaging strategies

@article{Constantinou2021EffectiveAE,
  title={Effective and efficient structure learning with pruning and model averaging strategies},
  author={Anthony C. Constantinou and Yang Liu and Neville Kenneth Kitson and Kiattikun Chobtham and Zhi-gao Guo},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.00398}
}
Learning the structure of a Bayesian Network (BN) with score-based solutions involves exploring the search space of possible graphs and moving towards the graph that maximises a given objective function. Some algorithms offer exact solutions that guarantee to return the graph with the highest objective score, while others offer approximate solutions in exchange for reduced computational complexity. This paper describes an approximate BN structure learning algorithm, which we call Model…
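
As a rough illustration of the score-based search loop the abstract describes, the sketch below runs a greedy hill-climb over single-edge additions, removals and reversals against a user-supplied decomposable score such as BIC. It is a generic approximation, not the paper's model-averaging algorithm; the function names and the DAG-as-children-dict format are assumptions made for the example.

```python
# A minimal sketch of score-based structure search, assuming a user-supplied
# decomposable score(dag) -> float (e.g. BIC). This is a generic greedy
# hill-climber over single-edge moves, not the paper's model-averaging
# algorithm; names and the DAG-as-children-dict format are illustrative.
from itertools import permutations


def creates_cycle(dag, parent, child):
    """Return True if adding parent -> child would create a directed cycle."""
    stack, seen = [child], set()
    while stack:
        node = stack.pop()
        if node == parent:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(dag[node])          # follow existing child -> ... edges
    return False


def neighbours(dag):
    """Yield DAGs differing from `dag` by one edge addition, removal or reversal."""
    for a, b in permutations(list(dag), 2):
        if b in dag[a]:                                   # existing edge a -> b
            removed = {n: set(ch) for n, ch in dag.items()}
            removed[a].discard(b)
            yield removed                                 # removal
            if not creates_cycle(removed, b, a):
                rev = {n: set(ch) for n, ch in removed.items()}
                rev[b].add(a)
                yield rev                                 # reversal
        elif not creates_cycle(dag, a, b):
            added = {n: set(ch) for n, ch in dag.items()}
            added[a].add(b)
            yield added                                   # addition


def hill_climb(variables, score):
    """Greedy best-improvement search starting from the empty graph."""
    dag = {v: set() for v in variables}
    best = score(dag)
    improved = True
    while improved:
        improved = False
        for candidate in neighbours(dag):
            s = score(candidate)
            if s > best:
                dag, best, improved = candidate, s, True
    return dag, best
```

Exact solvers explore the same space of DAGs but with optimality guarantees and, typically, far higher computational cost, which is the trade-off the abstract refers to.
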
Citations

Information fusion between knowledge and data in Bayesian network structure learning
TLDR
The overall results show that knowledge becomes less important with big data, because the higher learning accuracy achievable with more data reduces the need for knowledge, although some of the knowledge approaches are actually found to be more important with big data.
The impact of prior knowledge on causal structure learning
TLDR
The main conclusions include the observation that a reduced search space obtained from knowledge does not always imply reduced computational complexity, and that some of the knowledge approaches are found to be more important with big data.
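
The knowledge approaches evaluated in the two papers above are not reproduced here, but the basic mechanism they share can be sketched: prior knowledge, such as required or forbidden directed edges, prunes candidate graphs (or candidate moves) before they are scored. The constraint format below is an assumption for illustration only.

```python
# Generic sketch of the mechanism shared by knowledge-based approaches:
# prior knowledge such as required or forbidden directed edges removes
# candidate graphs (or candidate moves) from the search space before
# scoring. The constraint format below is assumed for illustration; the
# papers above evaluate several distinct knowledge approaches.
def satisfies_knowledge(dag, required=frozenset(), forbidden=frozenset()):
    """dag maps each node to its set of parents; constraints are (parent, child) pairs."""
    edges = {(p, child) for child, parents in dag.items() for p in parents}
    return required <= edges and not (forbidden & edges)


candidate = {"A": set(), "B": {"A"}, "C": {"B"}}
print(satisfies_knowledge(candidate, required={("A", "B")}, forbidden={("C", "A")}))  # True
```
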

References

Showing 1-10 of 34 references
The max-min hill-climbing Bayesian network structure learning algorithm
TLDR
The first empirical results simultaneously comparing most of the major Bayesian network algorithms against each other are presented, namely the PC, Sparse Candidate, Three Phase Dependency Analysis, Optimal Reinsertion, Greedy Equivalence Search, and Greedy Search.
Evaluating structure learning algorithms with a balanced scoring function
TLDR
A Balanced Scoring Function (BSF) is proposed that eliminates this bias by adjusting the reward function based on the difficulty of discovering an edge, or no edge, proportional to their occurrence rate in the ground truth graph.
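
A minimal sketch of such a balanced metric follows, under the assumption that it takes the form 0.5(TP/a + TN/i - FP/i - FN/a), with a the number of edges and i the number of absent edges in the ground-truth graph; treat the exact formula as an assumption rather than a quotation from the paper.

```python
# Sketch of a balanced structural metric, assuming the form
# BSF = 0.5 * (TP/a + TN/i - FP/i - FN/a), where a is the number of edges in
# the ground-truth graph and i the number of absent edges (independencies).
# Treat the exact formula as an assumption, not a quotation from the paper.
def balanced_score(tp, tn, fp, fn, n_nodes, n_true_edges):
    """Weight hits and misses by how often edges / non-edges occur in the
    ground truth, so sparse graphs do not reward trivially empty answers."""
    a = n_true_edges                        # edges present in the true graph
    i = n_nodes * (n_nodes - 1) // 2 - a    # possible edges that are absent
    return 0.5 * (tp / a + tn / i - fp / i - fn / a)


# Example: 8 variables and 7 true edges; the learner recovers 5 edges,
# adds 2 spurious ones and misses 2.
print(balanced_score(tp=5, tn=19, fp=2, fn=2, n_nodes=8, n_true_edges=7))  # ~0.62
```
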
Maximal ancestral graph structure learning via exact search
TLDR
This work develops a methodology for score-based structure learning of directed maximal ancestral graphs employing a linear Gaussian BIC score, as well as score pruning techniques, which are essential for exact structure learning approaches.
Structure learning of Bayesian networks using constraints
  • Cassio Polpo de Campos, Zhi Zeng, Q. Ji
  • ICML '09, 2009
TLDR
This paper addresses exact learning of Bayesian network structure from data and expert knowledge, based on decomposable score functions, by presenting a branch-and-bound algorithm that integrates parameter and structural constraints with data in a way that guarantees global optimality with respect to the score function.
Bayesian network learning with cutting planes
TLDR
The problem of learning the structure of Bayesian networks from complete discrete data, subject to a limit on parent set size, is considered, and the proposed cutting-plane approach is shown to be a particularly fast method for exact BN learning.
A Gibbs Sampler for Learning DAGs
TLDR
The proposed Gibbs sampler for structure learning in directed acyclic graph (DAG) models gives robust results in diverse settings, outperforming several existing Bayesian and frequentist methods.
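
As a simpler stand-in illustration of Bayesian structure sampling (not the Gibbs sampler of the cited paper), the sketch below runs a Metropolis-Hastings chain with single-edge flip proposals against a user-supplied log score assumed proportional to the log posterior.

```python
# Stand-in illustration only: the cited paper proposes a Gibbs sampler over
# DAGs, whereas the sketch below runs a simpler Metropolis-Hastings chain
# with single-edge "flip" proposals against a user-supplied log_score(dag)
# assumed to be proportional to the log posterior. DAGs map nodes to their
# sets of children; `variables` is a list of node names.
import math
import random


def stays_acyclic(dag, parent, child):
    """True if adding parent -> child keeps the graph acyclic."""
    stack, seen = [child], set()
    while stack:
        node = stack.pop()
        if node == parent:
            return False
        if node not in seen:
            seen.add(node)
            stack.extend(dag[node])
    return True


def sample_structures(variables, log_score, steps=10_000, seed=0):
    rng = random.Random(seed)
    dag = {v: set() for v in variables}          # start from the empty graph
    current = log_score(dag)
    samples = []
    for _ in range(steps):
        a, b = rng.sample(variables, 2)          # propose flipping edge a -> b
        proposal = {v: set(ch) for v, ch in dag.items()}
        if b in proposal[a]:
            proposal[a].discard(b)               # remove an existing edge
        elif stays_acyclic(proposal, a, b):
            proposal[a].add(b)                   # add the edge if it stays a DAG
        else:
            samples.append(dag)                  # invalid move: keep current state
            continue
        proposed = log_score(proposal)
        if proposed >= current or rng.random() < math.exp(proposed - current):
            dag, current = proposal, proposed    # Metropolis acceptance
        samples.append(dag)
    return samples                               # estimate edge marginals after burn-in
```
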
Learning Equivalence Classes of Bayesian-Network Structures
TLDR
It is argued that it is often appropriate to search among equivalence classes of network structures, as opposed to the more common approach of searching among individual Bayesian-network structures; a convenient graphical representation for an equivalence class of structures is described, and a set of operators that a search algorithm can apply to that representation to move among equivalence classes is introduced.
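
The notion that makes class-based search well defined is Markov equivalence: by the standard characterisation, two DAGs are equivalent iff they share the same skeleton and the same v-structures. A small sketch of that check, with a DAG-as-parent-sets format assumed for illustration, is given below.

```python
# Why class-based search is well defined: by the standard characterisation,
# two DAGs are Markov equivalent iff they share the same skeleton and the
# same v-structures (colliders a -> c <- b with a, b non-adjacent). The
# sketch below checks this; DAGs map each node to its set of parents.
from itertools import combinations


def skeleton(dag):
    return {frozenset((p, child)) for child, parents in dag.items() for p in parents}


def v_structures(dag):
    skel = skeleton(dag)
    found = set()
    for child, parents in dag.items():
        for a, b in combinations(sorted(parents), 2):
            if frozenset((a, b)) not in skel:          # parents must be non-adjacent
                found.add((frozenset((a, b)), child))
    return found


def markov_equivalent(d1, d2):
    return skeleton(d1) == skeleton(d2) and v_structures(d1) == v_structures(d2)


chain     = {"X": set(), "Y": {"X"}, "Z": {"Y"}}       # X -> Y -> Z
rev_chain = {"X": {"Y"}, "Y": {"Z"}, "Z": set()}       # X <- Y <- Z
collider  = {"X": set(), "Y": {"X", "Z"}, "Z": set()}  # X -> Y <- Z
print(markov_equivalent(chain, rev_chain))  # True: same class
print(markov_equivalent(chain, collider))   # False: new v-structure
```
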
On scoring Maximal Ancestral Graphs with the Max-Min Hill Climbing algorithm
Finding the k-best Equivalence Classes of Bayesian Network Structures for Model Averaging
TLDR
An algorithm to find the k-best equivalence classes of Bayesian networks goes beyond the maximum a posteriori (MAP) model by listing the most likely network structures and their relative likelihoods, and therefore has important applications in causal structure discovery.
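
A hedged sketch of how a k-best list can be turned into averaged edge probabilities follows, assuming each structure carries a log score proportional to its log posterior; the data format and weighting are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch of model averaging over a k-best list: if each
# structure carries a log score assumed proportional to its log posterior,
# edge probabilities can be estimated as score-weighted averages of edge
# indicators. The (log_score, dag) format and parent-set dicts are
# assumptions for this example, not the paper's data structures.
import math


def edge_probabilities(scored_dags):
    """scored_dags: list of (log_score, dag) pairs; dag maps node -> parent set."""
    max_log = max(log_s for log_s, _ in scored_dags)
    weights = [math.exp(log_s - max_log) for log_s, _ in scored_dags]  # stable softmax
    total = sum(weights)
    probs = {}
    for w, (_, dag) in zip(weights, scored_dags):
        for child, parents in dag.items():
            for parent in parents:
                probs[(parent, child)] = probs.get((parent, child), 0.0) + w / total
    return probs


k_best = [
    (-100.2, {"A": set(), "B": {"A"}, "C": {"B"}}),
    (-100.9, {"A": set(), "B": {"A"}, "C": {"A"}}),
    (-103.5, {"A": {"B"}, "B": set(), "C": {"B"}}),
]
for edge, p in sorted(edge_probabilities(k_best).items(), key=lambda kv: -kv[1]):
    print(edge, round(p, 3))
```
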
Bayesian Network Structure Learning
TLDR
This work distills a set of criteria for comparing structure learning algorithms: time and space complexity, completeness of the search space, search optimality, structural correctness, and classification accuracy.