Corpus ID: 7483686

Evaluating Anytime Algorithms for Learning Optimal Bayesian Networks

@article{Malone2013EvaluatingAA,
  title={Evaluating Anytime Algorithms for Learning Optimal Bayesian Networks},
  author={Brandon M. Malone and Changhe Yuan},
  journal={ArXiv},
  year={2013},
  volume={abs/1309.6844}
}
Exact algorithms for learning Bayesian networks are guaranteed to find provably optimal networks. However, they may fail on difficult learning tasks due to limited time or memory. In this research we adapt several anytime heuristic search-based algorithms to learn Bayesian networks. These algorithms find high-quality solutions quickly and then continually improve the incumbent solution or prove its optimality before resources are exhausted. Empirical results show that the anytime window A* algorithm… 
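
The anytime pattern described in the abstract can be made concrete with a small sketch. The Python outline below assumes a hypothetical `run_search(window)` callable that performs one windowed search pass and reports the best network found, its score (a cost to be minimized, e.g. MDL), and whether optimality was proved; it illustrates the general loop, not the authors' implementation.

```python
import time

def anytime_search(run_search, time_budget_s):
    """Keep the best network found so far while repeatedly re-searching
    with a widening window, in the spirit of anytime window A*."""
    incumbent, best_score = None, float("inf")
    window = 1
    deadline = time.monotonic() + time_budget_s
    while time.monotonic() < deadline:
        network, score, proved_optimal = run_search(window)  # hypothetical callable
        if score < best_score:           # lower score = better network (cost)
            incumbent, best_score = network, score
        if proved_optimal:               # bound closed: incumbent is optimal
            break
        window += 1                      # widen the window: prune less, search more
    return incumbent, best_score
```

Each pass either improves the incumbent or, once the window is wide enough, certifies it, so interrupting the loop at any time still yields the best network found so far.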

Citations

An Experimental Analysis of Anytime Algorithms for Bayesian Network Structure Learning

TLDR
An extensive evaluation of the anytime behavior of the current state-of-the-art algorithms for Bayesian network structure learning (BNSL) finds that a local search algorithm based on memetic search dominates the other state-of-the-art algorithms when anytime behavior is considered.

Predicting the Hardness of Learning Bayesian Networks

TLDR
The empirical results, based on the largest evaluation of state-of-the-art Bayesian network structure learning algorithms to date, demonstrate that the learned models can predict runtimes to a reasonable degree of accuracy and effectively select algorithms that perform well on a particular instance.

Empirical hardness of finding optimal Bayesian network structures: algorithm selection and runtime prediction

TLDR
It is shown that, for a given solver, the hardness of a problem instance can be efficiently predicted from a collection of non-trivial features that go beyond basic instance-size parameters, enabling effective selection of solvers with good runtimes on a particular instance.

Finding Optimal Bayesian Network Structures with Constraints Learned from Data

TLDR
The paper observes that there is useful information implicit in the potential optimal parent sets (POPS) and shows that solving the resulting constrained subproblems significantly improves the efficiency and scalability of heuristic search-based structure learning algorithms.
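
One plausible reading of how the POPS constraints decompose the problem is sketched below, under assumed data structures: if `pops` maps each variable to its collection of candidate parent sets, then drawing an edge u → v whenever u appears in some candidate parent set of v yields a graph whose strongly connected components can be solved as smaller, independent subproblems in topological order. The `pops` format and the helper are illustrative assumptions, not the paper's code.

```python
import networkx as nx

def pops_components(pops):
    """Sketch: add edge u -> v whenever u occurs in some candidate parent
    set of v. Variables in different strongly connected components can never
    form a directed cycle together, so each component is an independent,
    smaller structure-learning subproblem."""
    g = nx.DiGraph()
    g.add_nodes_from(pops)
    for v, parent_sets in pops.items():  # pops: {var: [frozenset(parents), ...]}
        for ps in parent_sets:
            for u in ps:
                g.add_edge(u, v)
    return list(nx.strongly_connected_components(g))
```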

An Improved Lower Bound for Bayesian Network Structure Learning

TLDR
A new partition method based on information extracted from the potential optimal parent sets (POPS) of a data set's variables can significantly improve the efficiency and scalability of heuristic search-based structure learning algorithms.

Tightening Bounds for Bayesian Network Structure Learning

TLDR
Methods are introduced for tightening the bounds of a breadth-first branch and bound algorithm by using more informed variable groupings when creating the pattern databases, and by using an anytime learning algorithm.
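
A sketch of the static pattern-database idea, under stated assumptions: the variables are partitioned into groups, and for each group a table records, for every subset R of the group, the exact cheapest cost of placing R when everything outside R may serve as a parent. Ordering constraints across groups are relaxed away, which keeps the bound admissible; the choice of grouping is precisely what "more informed variable groupings" improve. Here `local_scores` is assumed to map `(variable, frozenset(parents))` to a cost with an entry for every allowed parent set; this is an illustration, not the paper's implementation.

```python
from itertools import combinations

def build_pattern_db(group, all_vars, local_scores):
    """pd[R]: cheapest cost of placing R (a subset of `group`) when parents
    may come from anywhere outside R; a relaxation, hence a lower bound."""
    V = frozenset(all_vars)
    pd = {frozenset(): 0.0}
    for k in range(1, len(group) + 1):
        for R in map(frozenset, combinations(group, k)):
            pd[R] = min(
                pd[R - {v}]                      # v is placed first among R
                + min(s for (var, ps), s in local_scores.items()
                      if var == v and ps <= V - R)
                for v in R)
    return pd

def pattern_db_heuristic(placed, groups, pdbs, all_vars):
    """Additive bound: sum each group's table entry for its unplaced part."""
    remaining = frozenset(all_vars) - frozenset(placed)
    return sum(pd[remaining & frozenset(g)] for g, pd in zip(groups, pdbs))
```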

Improved Local Search with Momentum for Bayesian Networks Structure Learning

TLDR
This paper designs a framework that incorporates an influential perturbation factor, realized through three proposed operators, to escape local optima and mitigate the tendency of solutions to become trapped in them.

Empirical Behavior of Bayesian Network Structure Learning Algorithms

TLDR
Empirical results show that machine learning techniques based on problem-dependent characteristics can often accurately predict the algorithms' running times; exact and approximate search techniques are also compared.

References

Showing 1-10 of 25 references

An Improved Admissible Heuristic for Learning Optimal Bayesian Networks

TLDR
An improved admissible heuristic is introduced that tries to avoid directed cycles within small groups of variables, along with a sparse representation that stores only the unique optimal parent choices.
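
For context, the baseline heuristic that this paper improves on can be sketched as follows, assuming `local_scores` maps `(variable, frozenset(parents))` to a cost with lower being better: each unplaced variable simply takes its overall cheapest parent set, ignoring acyclicity among the unplaced variables. Because it solves a relaxation, it never overestimates the true remaining cost; the cited improvement tightens it by additionally forbidding directed cycles within small groups of variables.

```python
def simple_heuristic(remaining, local_scores):
    """Admissible bound: each unplaced variable takes its overall cheapest
    parent set; acyclicity among `remaining` is ignored (a relaxation)."""
    return sum(min(s for (var, ps), s in local_scores.items() if var == v)
               for v in remaining)
```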

Ordering-Based Search: A Simple and Effective Algorithm for Learning Bayesian Networks

TLDR
It is shown that ordering-based search outperforms the standard baseline, and is competitive with recent algorithms that are much harder to implement.
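
The core idea lends itself to a compact sketch: given an ordering, the best consistent network decomposes, since each variable independently picks its cheapest parent set among its predecessors; local search then perturbs the ordering. The sketch below uses adjacent swaps and assumes `local_scores` maps `(variable, frozenset(parents))` to a cost and includes the empty parent set for every variable; it illustrates the idea rather than reproducing the paper's exact operators.

```python
def score_ordering(order, local_scores):
    """Cost of the best network consistent with `order`: each variable
    picks its cheapest parent set drawn from its predecessors."""
    total, preds = 0.0, set()
    for v in order:
        total += min(s for (var, ps), s in local_scores.items()
                     if var == v and ps <= preds)
        preds.add(v)
    return total

def ordering_based_search(order, local_scores):
    """Hill climbing over orderings using adjacent swaps."""
    order = list(order)
    best = score_ordering(order, local_scores)
    improved = True
    while improved:
        improved = False
        for i in range(len(order) - 1):
            order[i], order[i + 1] = order[i + 1], order[i]      # try a swap
            s = score_ordering(order, local_scores)
            if s < best:
                best, improved = s, True                         # keep the swap
            else:
                order[i], order[i + 1] = order[i + 1], order[i]  # undo it
    return order, best
```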

A Simple Approach for Finding the Globally Optimal Bayesian Network Structure

TLDR
It is shown that it is possible to learn the best Bayesian network structure with over 30 variables, which covers many practically interesting cases and enables efficient exploration of the best networks consistent with different variable orderings.
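
The underlying dynamic program can be sketched in a few lines: the optimal cost over a subset S is obtained by choosing which variable of S is ordered last and letting it pick its cheapest parent set from the rest of S. The `local_scores` format (mapping `(variable, frozenset(parents))` to a cost, including the empty parent set) is an assumption for illustration; the table has 2^n entries, which is what limits the approach to roughly the 30-variable range mentioned above.

```python
from itertools import combinations

def dp_optimal_score(variables, local_scores):
    """best[S] = cost of the optimal network over subset S, built by
    choosing which variable of S comes last in some ordering of S."""
    best = {frozenset(): 0.0}
    for k in range(1, len(variables) + 1):
        for S in map(frozenset, combinations(variables, k)):
            best[S] = min(
                best[S - {v}]                         # optimal net over the rest
                + min(s for (var, ps), s in local_scores.items()
                      if var == v and ps <= S - {v})  # v's cheapest parents in S-{v}
                for v in S)
    return best[frozenset(variables)]
```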

Improving the Scalability of Optimal Bayesian Network Learning with External-Memory Frontier Breadth-First Branch and Bound Search

TLDR
This paper develops a memory-efficient heuristic search algorithm for learning the structure of a Bayesian network that leverages the layered structure of the problem's search graph so that no more than two layers of the graph need to be stored in memory at a time.
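
The two-layer property follows from the structure of the order graph over variable subsets: every edge goes from a subset of size k to one of size k+1, so a breadth-first sweep by layer only ever needs the current layer and the one being generated. A minimal sketch under the same assumed `local_scores` cost format as the earlier sketches; the paper's algorithm additionally prunes with bounds and spills layers to external memory, which this sketch omits.

```python
def layered_search_score(variables, local_scores):
    """Sweep the order graph one layer (= subset size) at a time; only the
    current and next layers are resident, mirroring the two-layer bound."""
    V = frozenset(variables)
    layer = {frozenset(): 0.0}
    for _ in range(len(V)):
        nxt = {}
        for S, g in layer.items():
            for v in V - S:
                cost = g + min(s for (var, ps), s in local_scores.items()
                               if var == v and ps <= S)
                T = S | {v}
                if cost < nxt.get(T, float("inf")):   # duplicate detection
                    nxt[T] = cost                     # within the new layer
        layer = nxt                                   # drop the old layer
    return layer[V]
```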

Efficient Structure Learning of Bayesian Networks using Constraints

  • Cassio Polpo de Campos, Qiang Ji
  • Computer Science
  • J. Mach. Learn. Res.
  • 2011
TLDR
A branch-and-bound algorithm is presented that integrates structural constraints with data in a way that guarantees global optimality; the properties benefit state-of-the-art methods as well as the new algorithm, which is able to handle larger data sets than before.

Learning Bayesian Network Structure using LP Relaxations

TLDR
This work proposes to solve the combinatorial problem of finding the highest scoring Bayesian network structure from data by maintaining an outer bound approximation to the polytope and iteratively tightening it by searching over a new class of valid constraints.
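
The iterative tightening described here follows the standard cutting-plane pattern, sketched below with two hypothetical callables: `solve_relaxation(constraints)` returns a solution of the current outer LP bound and its objective, and `find_violated_constraint(solution)` searches the paper's class of valid constraints for one the solution violates. Both names are placeholders for illustration, not the paper's API.

```python
def cutting_plane_loop(solve_relaxation, find_violated_constraint,
                       max_rounds=100):
    """Generic cutting-plane pattern: solve the outer bound, look for a
    violated valid constraint, add it, and repeat."""
    constraints = []
    solution = bound = None
    for _ in range(max_rounds):
        solution, bound = solve_relaxation(constraints)
        cut = find_violated_constraint(solution)
        if cut is None:            # no violated constraint: the relaxation
            break                  # is tight at this point
        constraints.append(cut)    # tighten the outer approximation
    return solution, bound
```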

Learning Optimal Bayesian Networks Using A* Search

TLDR
Empirical results show that the A* search algorithm significantly improves the time and space efficiency of existing methods on a set of benchmark datasets.
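
A compact sketch of A* over the order graph, assuming the same `local_scores` cost format as the earlier sketches (with an entry for every allowed parent set, including the empty one): a state is the set of variables already placed, g is the exact cost of the best subnetwork over that set, and h relaxes acyclicity for the unplaced variables, exactly the simple admissible heuristic sketched earlier.

```python
import heapq
from itertools import count

def astar_optimal_score(variables, local_scores):
    """A* over the order graph; returns the optimal total cost."""
    V = frozenset(variables)

    def best_parent(v, allowed):       # v's cheapest parents within `allowed`
        return min(s for (var, ps), s in local_scores.items()
                   if var == v and ps <= allowed)

    def h(S):                          # relaxed bound for unplaced variables
        return sum(min(s for (var, ps), s in local_scores.items() if var == v)
                   for v in V - S)

    tie = count()                      # tiebreaker: never compare frozensets
    frontier = [(h(frozenset()), next(tie), 0.0, frozenset())]
    best_g = {frozenset(): 0.0}
    while frontier:
        _, _, g, S = heapq.heappop(frontier)
        if S == V:
            return g                   # h is consistent, so this g is optimal
        if g > best_g.get(S, float("inf")):
            continue                   # stale queue entry
        for v in V - S:
            g2, T = g + best_parent(v, S), S | {v}
            if g2 < best_g.get(T, float("inf")):
                best_g[T] = g2
                heapq.heappush(frontier, (g2 + h(T), next(tie), g2, T))
    raise ValueError("search space exhausted without reaching the goal")
```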

Learning Bayesian Networks is NP-Complete

TLDR
It is shown that the search problem of identifying a Bayesian network—among those where each node has at most K parents—that has a relative posterior probability greater than a given constant is NP-complete, when the BDe metric is used.

Optimal Reinsertion: A New Search Operator for Accelerated and More Accurate Bayesian Network Structure Learning

TLDR
A new algorithm called ORSearch is introduced that allows each optimal reinsertion step to be computed efficiently on large datasets, and Optimal Reinsertion is compared against a highly tuned implementation of multi-restart hill climbing.

Memory-Efficient Dynamic Programming for Learning Optimal Bayesian Networks

TLDR
A memory-efficient implementation of a dynamic programming algorithm for learning the optimal structure of a Bayesian network from training data that runs up to an order of magnitude faster and scales to datasets with more variables than previous approaches.