Learning Probabilistic Relational Models

Abstract

A large portion of real-world data is stored in commercial relational database systems. In contrast, most statistical learning methods work only with “flat” data representations. Thus, to apply these methods, we are forced to convert our data into a flat form, thereby losing much of the relational structure present in our database. This paper builds on recent work on probabilistic relational models (PRMs) and describes how to learn them from databases. PRMs allow the properties of an object to depend probabilistically both on other properties of that object and on properties of related objects. Although PRMs are significantly more expressive than standard models, such as Bayesian networks, we show how to extend well-known statistical methods for learning Bayesian networks to learn these models. We describe both parameter estimation and structure learning, the automatic induction of the dependency structure of a model. Moreover, we show how the learning procedure can exploit standard database retrieval techniques for efficient learning from large datasets. We present experimental results on both real and synthetic relational databases.
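The key computational idea is concrete enough to sketch. For a fixed dependency structure, the maximum-likelihood parameters of a PRM are ratios of counts, and each dependency's sufficient statistics can be gathered with a single relational join and aggregation, which is how the learning procedure exploits standard database retrieval. The Python sketch below illustrates this under assumed, hypothetical names: a school-style schema (student, course, reg) and a dependency of reg.grade on the related student.intelligence and course.difficulty. It illustrates the counting step only; it is not the authors' implementation.

```python
# Minimal sketch of ML parameter estimation for one PRM dependency.
# Hypothetical schema: reg.grade depends on the related student.intelligence
# and course.difficulty. Not the authors' code; it illustrates only the idea
# that sufficient statistics come from a standard JOIN + GROUP BY query.
import sqlite3
from collections import defaultdict

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE student (sid INTEGER PRIMARY KEY, intelligence TEXT);
    CREATE TABLE course  (cid INTEGER PRIMARY KEY, difficulty   TEXT);
    CREATE TABLE reg     (sid INTEGER, cid INTEGER, grade TEXT);
    INSERT INTO student VALUES (1,'high'),(2,'low'),(3,'high');
    INSERT INTO course  VALUES (1,'hard'),(2,'easy');
    INSERT INTO reg     VALUES (1,1,'A'),(1,2,'A'),(2,1,'C'),(2,2,'B'),(3,1,'B');
""")

# One aggregate query per dependency collects all of its sufficient
# statistics N(grade, intelligence, difficulty) in a single pass.
counts = defaultdict(dict)
query = """
    SELECT s.intelligence, c.difficulty, r.grade, COUNT(*)
    FROM reg r
    JOIN student s ON r.sid = s.sid
    JOIN course  c ON r.cid = c.cid
    GROUP BY s.intelligence, c.difficulty, r.grade
"""
for intel, diff, grade, n in conn.execute(query):
    counts[(intel, diff)][grade] = n

# ML estimate of the CPD: P(grade | intelligence, difficulty)
#   = N(grade, i, d) / N(i, d).
for parents, dist in sorted(counts.items()):
    total = sum(dist.values())
    print(parents, {g: n / total for g, n in dist.items()})
```

Structure learning can then score candidate parent sets with the same kind of aggregate query, which is what lets the search run directly against a large database rather than a flattened table.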

DOI: 10.1007/3-540-44914-0_25


[Citations per Year, 2000–2016: 1,157 total citations (Semantic Scholar estimate)]

Cite this paper

@inproceedings{Friedman1999LearningPR,
  title     = {Learning Probabilistic Relational Models},
  author    = {Nir Friedman and Lise Getoor and Daphne Koller and Avi Pfeffer},
  booktitle = {IJCAI},
  year      = {1999}
}