Learning Bayesian Networks with Non-Decomposable Scores

@inproceedings{Chen2015LearningBN,
  title={Learning Bayesian Networks with Non-Decomposable Scores},
  author={Eunice Yuh-Jie Chen and Arthur Choi and Adnan Darwiche},
  booktitle={GKR},
  year={2015}
}
Modern approaches for optimally learning Bayesian network structures require decomposable scores. Such approaches include those based on dynamic programming and heuristic search methods. These approaches operate in a search space called the order graph, which has been investigated extensively in recent years. In this paper, we break from this tradition, and show that one can effectively learn structures using non-decomposable scores by exploring a more complex search space that leverages state… 
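The abstract's premise can be made concrete with a small sketch of what "decomposable" means: a score such as BIC evaluates a DAG as a sum of independent per-family terms, which is exactly what order-graph methods exploit. The dataset layout, variable names, and binary-variable restriction below are illustrative assumptions, not from the paper.

```python
import math

def local_bic(data, child, parents):
    """BIC contribution of one family (child, parent set).

    All variables are assumed binary; `data` is a list of dicts mapping
    variable name -> 0/1.
    """
    n = len(data)
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        counts.setdefault(key, [0, 0])[row[child]] += 1
    loglik = 0.0
    for c0, c1 in counts.values():
        total = c0 + c1
        for c in (c0, c1):
            if c:
                loglik += c * math.log(c / total)
    # one free parameter per parent configuration for a binary child
    num_params = 2 ** len(parents)
    return loglik - 0.5 * num_params * math.log(n)

def bic_score(data, dag):
    """Decomposability: the global score is a sum of per-family local scores,
    so editing one family never changes the other families' contributions."""
    return sum(local_bic(data, child, ps) for child, ps in dag.items())
```

Because the global score is this sum, dynamic programming and heuristic search can cache each family's best local score once and reuse it everywhere; non-decomposable scores break exactly this caching, which is the difficulty the paper addresses.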
On pruning with the MDL Score
Finding All Bayesian Network Structures within a Factor of Optimal
TLDR
This paper proposes a novel approach to model averaging inspired by performance guarantees in approximation algorithms that is more efficient and scales to significantly larger Bayesian networks than existing approaches.
Learning All Credible Bayesian Network Structures for Model Averaging
Learning Bayesian Networks Using Fast Heuristics
TLDR
This thesis addresses score-based learning of Bayesian networks from data using a few fast heuristics, and reveals a computationally simple ordering strategy, Randomised Pairing Greedy Weight, that works well as an adjunct to ASOBS.
Enumerating Equivalence Classes of Bayesian Networks using EC Graphs
TLDR
A new search space for A* search is proposed, called the EC graph, that facilitates the enumeration of equivalence classes, by representing the space of completed, partially directed acyclic graphs, which improves the efficiency of enumeration.
Batch Bayesian Optimization on Permutations using Acquisition Weighted Kernels
TLDR
A batch Bayesian optimization method for combinatorial problems on permutations, well suited to expensive cost functions, together with a regret analysis that gives insight into the method's theoretical properties.
Bayesian Network Model Averaging Classifiers by Subbagging
TLDR
The main idea of this study is to improve classification accuracy using subbagging, a modification of bagging that samples randomly without replacement, to reduce the posterior standard error of each structure in model averaging, using the K-best method with the ML score.
Bayesian Network Model Averaging Classifiers by Subbagging
TLDR
This study improves classification accuracy by using subbagging to reduce the posterior standard error of the structures in model averaging, combined with the K-best method and the ML score, which provides more accurate classification for small data than earlier methods do.

References

Showing 1–10 of 36 references
A Simple Approach for Finding the Globally Optimal Bayesian Network Structure
TLDR
It is shown that it is possible to learn the best Bayesian network structure with over 30 variables, which covers many practically interesting cases and offers a possibility for efficient exploration of the best networks consistent with different variable orderings.
Efficient Structure Learning of Bayesian Networks using Constraints
  • Cassio Polpo de Campos, Q. Ji
  • Computer Science
    J. Mach. Learn. Res.
  • 2011
TLDR
A branch-and-bound algorithm is presented that integrates structural constraints with data in a way that guarantees global optimality; the derived properties benefit both state-of-the-art methods and the new algorithm, which can handle larger data sets than before.
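One standard ingredient behind such exact solvers can be sketched: a candidate parent set is provably useless if some subset of it scores at least as well, since substituting the subset can only improve the score and cannot create cycles. The `local_score` callable below is an assumed black box (e.g. BIC), not the paper's implementation.

```python
def prune_parent_sets(candidates, local_score, child):
    """Keep only parent sets not dominated by one of their own subsets.

    Checking against already-kept sets suffices: if a pruned subset would
    dominate `ps`, the smaller set that pruned it dominates `ps` too.
    """
    kept = []
    for ps in sorted(candidates, key=len):
        dominated = any(
            set(q) < set(ps) and local_score(child, q) >= local_score(child, ps)
            for q in kept
        )
        if not dominated:
            kept.append(ps)
    return kept
```

In exact solvers this pruning typically runs once per variable as a preprocessing step, shrinking the candidate lists the branch-and-bound search must enumerate.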
Ordering-Based Search: A Simple and Effective Algorithm for Learning Bayesian Networks
TLDR
It is shown that ordering-based search outperforms the standard baseline, and is competitive with recent algorithms that are much harder to implement.
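The key property this algorithm exploits is easy to sketch: once a total order on the variables is fixed, acyclicity is automatic, so each variable independently picks its best parent set from among its predecessors. `local_score` below is an assumed black-box local score, and the in-degree bound is an illustrative simplification.

```python
from itertools import combinations

def best_dag_for_order(order, local_score, max_parents=2):
    """Return the best-scoring DAG consistent with `order` and its total score."""
    dag, total = {}, 0.0
    for i, child in enumerate(order):
        preds = order[:i]
        # every subset of predecessors (up to max_parents) is a legal parent set
        best = max(
            (ps for k in range(min(max_parents, len(preds)) + 1)
             for ps in combinations(preds, k)),
            key=lambda ps: local_score(child, ps),
        )
        dag[child] = best
        total += local_score(child, best)
    return dag, total
```

An outer loop then performs local search over orders, e.g. swapping adjacent variables and re-scoring only the affected families, which is what makes the method simple yet competitive.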
Learning Bayesian Network Structure using LP Relaxations
TLDR
This work proposes to solve the combinatorial problem of finding the highest-scoring Bayesian network structure from data by maintaining an outer-bound approximation to the polytope and iteratively tightening it by searching over a new class of valid constraints.
Predicting the Hardness of Learning Bayesian Networks
TLDR
The empirical results, based on the largest evaluation of state-of-the-art BNS learning algorithms to date, demonstrate that they can predict the runtimes to a reasonable degree of accuracy, and effectively select algorithms that perform well on a particular instance.
Annealed Importance Sampling for Structure Learning in Bayesian Networks
TLDR
This work presents a new sampling approach to Bayesian learning of the Bayesian network structure that replaces the usual Markov chain Monte Carlo method with annealed importance sampling (AIS), and shows that AIS is not only competitive with MCMC in exploring the posterior, but also superior in two ways.
Learning Bayesian Networks: Search Methods and Experimental Results
TLDR
Describes a metric, developed by Heckerman et al. (1994a,b,c), for computing the relative posterior probability of a network structure given data, which has a property useful for inferring causation from data.
Being Bayesian about Network Structure
TLDR
This paper shows how to efficiently compute a sum over the exponential number of networks that are consistent with a fixed ordering over network variables, and uses this result as the basis for an algorithm that approximates the Bayesian posterior of a feature.
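The closed form this result rests on can be sketched: for a fixed ordering, the (unnormalized) marginal likelihood factorizes into a product, over variables, of a sum over candidate parent sets drawn from each variable's predecessors, so the exponential sum over DAGs collapses to polynomially many terms per variable (under an in-degree bound). `family_marginal` below is an assumed black-box family marginal likelihood.

```python
from itertools import combinations

def order_marginal(order, family_marginal, max_parents=2):
    """Unnormalized marginal likelihood of a fixed variable ordering:
    a product over variables of a sum over legal parent sets."""
    total = 1.0
    for i, child in enumerate(order):
        preds = order[:i]
        total *= sum(
            family_marginal(child, ps)
            for k in range(min(max_parents, len(preds)) + 1)
            for ps in combinations(preds, k)
        )
    return total
```

Posterior feature probabilities (e.g. of a directed edge) follow the same pattern, with the edge's indicator folded into the inner sum for the relevant child.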
Learning Optimal Bayesian Networks: A Shortest Path Perspective
TLDR
An A* search algorithm that learns an optimal Bayesian network structure by only searching the most promising part of the solution space and a heuristic function that reduces the amount of relaxation by avoiding directed cycles within some groups of variables.
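The shortest-path view can be sketched in a few lines, here with the heuristic set to zero (uniform-cost search) rather than the paper's informed heuristic: order-graph nodes are variable subsets, and the edge S → S ∪ {X} costs the best (negated) local score of X with parents drawn from S. The `step_cost` callable is assumed to be such a precomputed table.

```python
import heapq
from itertools import count

def shortest_path_order(variables, step_cost):
    """Uniform-cost search (A* with h = 0) from {} to the full variable set."""
    full = frozenset(variables)
    dist = {frozenset(): 0.0}
    tie = count()  # tiebreaker so the heap never compares frozensets
    pq = [(0.0, next(tie), frozenset(), ())]
    while pq:
        d, _, s, order = heapq.heappop(pq)
        if s == full:
            return order, d
        if d > dist.get(s, float("inf")):
            continue  # stale queue entry
        for x in full - s:
            nd = d + step_cost(x, s)
            ns = s | {x}
            if nd < dist.get(ns, float("inf")):
                dist[ns] = nd
                heapq.heappush(pq, (nd, next(tie), ns, order + (x,)))
    return None
```

The returned order yields an optimal DAG by giving each variable its best parent set among its predecessors; a good admissible heuristic is what lets A* visit only a small fraction of the 2^n subsets.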
Exact Bayesian Structure Discovery in Bayesian Networks
TLDR
This work presents an algorithm that computes the exact posterior probability of a subnetwork, e.g., a directed edge, and shows that also in domains with a large number of variables, exact computation is feasible, given suitable a priori restrictions on the structures.