Learning Bayesian Networks with Local Structure

@article{Friedman1996LearningBN,
  title={Learning Bayesian Networks with Local Structure},
  author={Nir Friedman and Mois{\'e}s Goldszmidt},
  journal={ArXiv},
  year={1996},
  volume={abs/1302.3577}
}
In this paper we examine a novel addition to the known methods for learning Bayesian networks from data, one that improves the quality of the learned networks. Our approach explicitly represents and learns the local structure in the conditional probability tables (CPTs) that quantify these networks. This increases the space of possible models, enabling the representation of CPTs with a variable number of parameters that depends on the learned local structures. The resulting learning procedure is…
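
To make the idea concrete, here is a minimal sketch (our illustration, not code from the paper) contrasting a full tabular CPT with a tree-structured one; the variable names and probabilities are hypothetical:

```python
# Hypothetical illustration (not from the paper): a CPT for Y with three
# binary parents A, B, C needs 2**3 = 8 rows; a tree-structured CPT can
# share one parameter across many parent contexts.
from itertools import product

# Full table: one parameter P(Y=1 | a, b, c) per parent configuration.
full_cpt = {cfg: 0.5 for cfg in product([0, 1], repeat=3)}  # 8 parameters

def tree_cpt(a, b, c):
    """Tree-structured CPT: B and C matter only in the context A = 1."""
    if a == 0:
        return 0.9                 # one leaf covers all contexts with A = 0
    return 0.7 if b == 1 else (0.2 if c == 1 else 0.4)

print(len(full_cpt), "table parameters vs 4 tree leaves")
print(tree_cpt(0, 1, 1), tree_cpt(1, 0, 1))  # 0.9 0.2
```

With n binary parents a full table needs 2^n parameters while a tree needs one per leaf, so the saving can be exponential when the local structure is strong.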

Learning Bayesian networks with local structure, mixed variables, and exact algorithms

A Bayesian Approach to Learning Bayesian Networks with Local Structure

A Bayesian approach to learning Bayesian networks that contain the more general decision-graph representations of the CPDs is investigated, and it is described how to evaluate the posterior probability (that is, the Bayesian score) of such a network given a database of observed cases.
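
For orientation, the Bayesian score in question is the marginal likelihood of the data given the structure; under the standard multinomial-Dirichlet assumptions it has the familiar closed form (standard result with assumed notation, not a formula quoted from the paper):

$$
P(D \mid G) \;=\; \prod_{i=1}^{n} \prod_{j=1}^{q_i}
\frac{\Gamma(\alpha_{ij})}{\Gamma(\alpha_{ij} + N_{ij})}
\prod_{k=1}^{r_i} \frac{\Gamma(\alpha_{ijk} + N_{ijk})}{\Gamma(\alpha_{ijk})}
$$

where $N_{ijk}$ counts the cases with $X_i = k$ in parent configuration $j$, $N_{ij} = \sum_k N_{ijk}$, and the $\alpha$'s are Dirichlet hyperparameters. In the decision-graph variant the product over parent configurations $j$ is taken over the leaves of the graph instead.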

Scalable Bayesian Network Structure Learning with Splines

A novel approach is presented that learns the graph of a BN while simultaneously modelling linear and non-linear local probabilistic relationships between variables with Multivariate Adaptive Regression Splines (MARS), polynomial regression models represented as piecewise spline functions.
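
As a rough sketch of the spline idea (illustrative only; not the cited implementation), a MARS-style local model regresses a child variable on a parent through hinge basis functions max(0, x - t):

```python
# Illustrative MARS-style local model (assumed setup, not the cited
# implementation): hinge functions max(0, x - t) as a regression basis.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)       # non-linear target

knots = [-1.0, 0.0, 1.0]
cols = [np.ones_like(x), x] + [np.maximum(0.0, x - t) for t in knots]
X = np.column_stack(cols)                            # piecewise-linear basis

beta, *_ = np.linalg.lstsq(X, y, rcond=None)         # least-squares fit
print("fitted coefficients:", np.round(beta, 2))
```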

Learning Module Networks

Evaluation on real data in the domains of gene expression and the stock market shows that module networks generalize better than Bayesian networks, and that the learned module network structure reveals regularities that are obscured in learned Bayesian networks.

Full Bayesian network classifiers

This paper presents a novel, efficient decision-tree learning algorithm that is also effective in the context of FBC learning and demonstrates better performance in both classification and ranking compared with other state-of-the-art learning algorithms.

Learning with Bayesian networks and probability trees to approximate a joint distribution

This paper carries out an experimental evaluation of several combined strategies based on trees and tables using a greedy hill-climbing algorithm and compares the results with a restricted search procedure (the Max-Min hill-climbing algorithm).
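
Greedy hill climbing over structures follows a generic pattern; the sketch below is an assumed, simplified rendition in which `score` and `neighbors` stand in for a decomposable network score and single-edge moves:

```python
# Generic greedy hill climbing over structures (sketch; in BN learning
# `score` would be a decomposable score such as BDeu or MDL, and
# `neighbors` would enumerate single-edge add/delete/reverse moves).
def hill_climb(initial, neighbors, score):
    current, current_score = initial, score(initial)
    while True:
        improved = None
        for cand in neighbors(current):
            s = score(cand)
            if s > current_score:
                improved, current_score = cand, s
        if improved is None:                  # local optimum reached
            return current, current_score
        current = improved

# Toy demo on edge sets (hypothetical score rewarding two target edges).
edges = [("A", "B"), ("B", "C"), ("A", "C")]
target = {("A", "B"), ("B", "C")}
neighbors = lambda g: [g ^ {e} for e in edges]       # flip one edge
score = lambda g: len(g & target) - len(g - target)  # toy score
print(hill_climb(frozenset(), neighbors, score))
```

In practice the score is evaluated from data and candidate moves are restricted to those that keep the graph acyclic.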

Finding Optimal Bayesian Networks with Local Structure

A class of dyadic decision trees—proposed previously only for continuous conditioning variables—is augmented by incorporating categorical variables with arbitrary context-specific recursive splitting of their state spaces, and it is shown that the resulting model class admits computationally feasible maximization of a Bayes score in a range of moderate-size problem instances.

Sequential Update of Bayesian Network Structure

This paper introduces a new approach that allows for the flexible manipulation of the tradeoff between the quality of the learned networks and the amount of information that is maintained about past observations, and evaluates its effectiveness through an empirical study.

On the Sample Complexity of Learning Bayesian Networks

The sample complexity of MDL-based learning procedures for Bayesian networks is examined, and the number of samples needed to learn an ε-close approximation with confidence δ is shown to be a low-order polynomial in the error threshold and sub-linear in the confidence bound.
...

References


On the Sample Complexity of Learning Bayesian Networks

The sample complexity of MDL-based learning procedures for Bayesian networks is examined, and the number of samples needed to learn an ε-close approximation with confidence δ is shown to be a low-order polynomial in the error threshold and sub-linear in the confidence bound.

Learning Bayesian Networks is NP-Complete

It is shown that the search problem of identifying a Bayesian network—among those where each node has at most K parents—that has a relative posterior probability greater than a given constant is NP-complete, when the BDe metric is used.

A Tutorial on Learning with Bayesian Networks

D. Heckerman. In Innovations in Bayesian Networks, 1998.
Methods for constructing Bayesian networks from prior knowledge are discussed and methods for using data to improve these models are summarized, including techniques for learning with incomplete data.

Context-Specific Independence in Bayesian Networks

This paper proposes a formal notion of context-specific independence (CSI), based on regularities in the conditional probability tables (CPTs) at a node, and proposes a technique, analogous to (and based on) d-separation, for determining when such independence holds in a given network.
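
A concrete (hypothetical) instance of CSI: with parents A and B of Y, the CPT rows for A = 0 may coincide, so Y is independent of B in the context A = 0 even though B is a parent of Y:

```python
# Hypothetical CPT exhibiting context-specific independence (CSI):
# in the context A = 0, P(Y=1 | A, B) does not depend on B.
cpt = {
    (0, 0): 0.8, (0, 1): 0.8,   # A = 0: identical rows, Y independent of B
    (1, 0): 0.3, (1, 1): 0.6,   # A = 1: Y genuinely depends on B
}

def csi_holds(cpt, a_context):
    """True if Y is independent of B in the context A = a_context."""
    return len({p for (a, b), p in cpt.items() if a == a_context}) == 1

print(csi_holds(cpt, 0), csi_holds(cpt, 1))   # True False
```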

Local Learning in Probabilistic Networks with Hidden Variables

It is shown that networks with fixed structure containing hidden variables can be learned automatically from data using a gradient-descent mechanism similar to that used in neural networks, which is extended to networks with intensionally represented distributions.
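
A toy numeric sketch of that idea, assuming the simplest possible network H -> X with H hidden (our example, not the cited system):

```python
# Toy gradient ascent on the log-likelihood of H -> X with hidden H
# (our example; the cited work handles general fixed-structure networks
# and computes gradients locally rather than by finite differences).
import numpy as np

def mean_log_lik(theta, x):
    p, q1, q0 = theta                  # P(H=1), P(X=1|H=1), P(X=1|H=0)
    px = (p * np.where(x == 1, q1, 1 - q1)
          + (1 - p) * np.where(x == 1, q0, 1 - q0))
    return np.mean(np.log(px))

def grad(theta, x, eps=1e-5):
    g = np.zeros_like(theta)
    for i in range(theta.size):        # finite-difference gradient
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (mean_log_lik(theta + d, x)
                - mean_log_lik(theta - d, x)) / (2 * eps)
    return g

rng = np.random.default_rng(1)
x = (rng.random(500) < 0.7).astype(int)   # only X is observed
theta = np.array([0.5, 0.6, 0.4])
for _ in range(500):                      # projected gradient ascent
    theta = np.clip(theta + 0.1 * grad(theta, x), 1e-3, 1 - 1e-3)
print(np.round(theta, 2), round(mean_log_lik(theta, x), 4))
```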

LEARNING BAYESIAN BELIEF NETWORKS: AN APPROACH BASED ON THE MDL PRINCIPLE

A new approach for learning Bayesian belief networks from raw data is presented, based on Rissanen's minimal description length (MDL) principle, which can learn unrestricted multiply-connected belief networks and allows trading off accuracy against complexity in the learned model.
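
That trade-off rests on the standard two-part MDL score (standard formulation with assumed notation, not quoted from the paper):

$$
\mathrm{DL}(B \mid D) \;=\; \frac{\log N}{2}\,|\Theta_B| \;-\; \sum_{n=1}^{N} \log P_B(d_n)
$$

where $N$ is the number of cases, $|\Theta_B|$ the number of free parameters of network $B$, and the second term is the negative log-likelihood of the data; minimizing $\mathrm{DL}$ penalizes complex networks unless they fit the data substantially better.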

Connectionist Learning of Belief Networks

Belief network induction

This dissertation describes BNI (Belief Network Inductor), a tool that automatically induces a belief network from a database, providing a theoretically sound method of inducing a model from data and performing inference over that model.

Theory Refinement on Bayesian Networks

A theory of learning classification rules

A Bayesian theory of learning classification rules, a comparison of this theory with some previous theories of learning, and two extensive applications of the theory to the problems of learning class probability trees and bounding error when learning logical rules are reported.