Corpus ID: 238634741

Efficient Bayesian network structure learning via local Markov boundary search

@article{Gao2021EfficientBN,
  title={Efficient Bayesian network structure learning via local Markov boundary search},
  author={Ming Gao and Bryon Aragam},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.06082}
}
  • Published 12 October 2021
  • Mathematics, Computer Science
  • ArXiv
We analyze the complexity of learning directed acyclic graphical models from observational data in general settings without specific distributional assumptions. Our approach is information-theoretic and uses a local Markov boundary search procedure in order to recursively construct ancestral sets in the underlying graphical model. Perhaps surprisingly, we show that for certain graph ensembles, a simple forward greedy search algorithm (i.e. without a backward pruning phase) suffices to learn the…
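The forward greedy Markov boundary search mentioned in the abstract can be illustrated with a toy sketch on discrete data. This is not the authors' exact procedure: the plug-in conditional mutual information estimator, the stopping threshold `eps`, and all function names here are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def emp_cmi(data, i, j, S):
    """Plug-in estimate of the conditional mutual information
    I(X_i; X_j | X_S) from discrete samples (rows of `data`)."""
    n = len(data)
    S = list(S)
    cxyz = Counter((r[i], r[j], tuple(r[k] for k in S)) for r in data)
    cxz = Counter((r[i], tuple(r[k] for k in S)) for r in data)
    cyz = Counter((r[j], tuple(r[k] for k in S)) for r in data)
    cz = Counter(tuple(r[k] for k in S) for r in data)
    mi = 0.0
    for (x, y, z), c in cxyz.items():
        # (c/n) * log[ p(x,y|z) / (p(x|z) p(y|z)) ], counts cancel the 1/n factors
        mi += (c / n) * np.log(c * cz[z] / (cxz[(x, z)] * cyz[(y, z)]))
    return mi

def forward_markov_boundary(data, target, candidates, eps=0.01):
    """Forward greedy search (no backward pruning): repeatedly add the
    candidate with the largest conditional MI with the target given the
    variables selected so far, stopping when the gain falls below eps."""
    M = []
    rest = set(candidates) - {target}
    while rest:
        k = max(rest, key=lambda v: emp_cmi(data, target, v, M))
        if emp_cmi(data, target, k, M) < eps:
            break
        M.append(k)
        rest.remove(k)
    return M
```

On a Markov chain X0 → X1 → X2, for example, the sketch recovers {X1} as the boundary of X0 (X2 carries no information about X0 once X1 is conditioned on) and {X0, X2} as the boundary of X1.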


References

Showing 1–10 of 53 references
Parameterized Complexity Results for Exact Bayesian Network Structure Learning
This paper studies the computational worst-case complexity of exact Bayesian network structure learning under graph-theoretic restrictions on the (directed) super-structure and introduces the directed super-structure as a natural generalization of its undirected counterpart.
Learning Identifiable Gaussian Bayesian Networks in Polynomial Time and Sample Complexity
A provably polynomial-time algorithm is proposed for learning sparse Gaussian Bayesian networks with equal noise variance --- a class of Bayesian networks for which the DAG structure can be uniquely identified from observational data --- under high-dimensional settings.
Provable Efficient Skeleton Learning of Encodable Discrete Bayes Nets in Poly-Time and Sample Complexity
  • Adarsh Barik, J. Honorio
  • Computer Science
  • 2020 IEEE International Symposium on Information Theory (ISIT)
  • 2020
A primal-dual witness construction is used to prove that, under some technical conditions on the interaction between node pairs, the authors can exactly recover the parents and children of a node by performing group ℓ1,2-regularized multivariate regression, and thereby recover the true Bayesian network skeleton.
Optimal Structure Identification With Greedy Search
This paper proves the so-called "Meek Conjecture": if a DAG H is an independence map of another DAG G, then there exists a finite sequence of edge additions and covered edge reversals in G such that H remains an independence map of G and, after all modifications, G = H.
Learning Bayesian Network Structure from Massive Datasets: The "Sparse Candidate" Algorithm
An algorithm that achieves faster learning by restricting the search space, which restricts the parents of each variable to belong to a small subset of candidates, and is evaluated on both synthetic and real-life data.
Learning Bayesian Networks with Low Rank Conditional Probability Tables
This paper proposes a method which learns the exact directed structure of a "low rank" Bayesian network using very few queries, and formally proves that the method correctly recovers the true directed structure, runs in polynomial time, and needs only polynomially many samples with respect to the number of nodes.
Learning Bayesian Networks: The Combination of Knowledge and Statistical Data
A methodology is developed for assessing the informative priors needed to learn Bayesian networks from a combination of prior knowledge and statistical data, and it is shown how to compute the relative posterior probabilities of network structures given data.
Learning linear structural equation models in polynomial time and sample complexity
A new algorithm is developed that recovers the directed acyclic graph (DAG) structure of the SEM under an identifiability condition that is more general than those considered in the literature, and without faithfulness assumptions.
Globally optimal score-based learning of directed acyclic graphs in high-dimensions
We prove that $\Omega(s\log p)$ samples suffice to learn a sparse Gaussian directed acyclic graph (DAG) from data, where $s$ is the maximum Markov blanket size. This improves upon recent results that…
SparsityBoost: A New Scoring Function for Learning Bayesian Network Structure
A polynomial sample complexity result is proved, showing that maximizing this score is guaranteed to correctly learn a structure with no false edges and a distribution close to the generating distribution, whenever there exists a Bayesian network which is a perfect map for the data-generating distribution.