Learning Statistical Models of Time-Varying Relational Data

Abstract

Formalisms that can represent objects and relations, as opposed to just variables, have a long history in AI. Recently, significant progress has been made in combining them with a principled treatment of uncertainty. In particular, probabilistic relational models (PRMs) [4] extend Bayesian networks to allow reasoning with classes, objects, and relations. Although PRMs have been successfully applied to a wide range of domains, they cannot capture the temporal dynamics of the real world. In most real-world systems, objects are created, modified, and even deleted over time, and the relationships between objects change as time progresses. For example, consider the problem of predicting the set of research topics that become “hot” (e.g., as measured by the number of papers published about them) over time, the changing distribution of these topics among conferences, and the evolving interests of authors and the collaborations between them. It would be difficult to learn a PRM that modeled this time-varying behavior. Currently, the most powerful representation available for capturing sequential phenomena is the dynamic Bayesian network (DBN) [1], but DBNs cannot compactly represent many real-world domains that contain multiple objects and classes of objects, as well as multiple kinds of relations among them. DBNs are even more awkward when one wishes to model objects and relations that appear and disappear over time. Thus, our research has focused on a new representation, dynamic probabilistic relational models (DPRMs), which combine PRMs with DBNs. Previously, we explored the problem of efficient inference in DPRMs [8]; this paper outlines our thoughts on learning DPRMs.
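
To make the combination concrete, below is a minimal Python sketch, not taken from the paper, of the kind of structure a DPRM captures: a two-slice transition template, as in a DBN, whose conditional distribution is shared across every object of a class and conditions on an aggregate over related objects, as in a PRM. All names and parameters (Topic, hot, papers_about, p_stay_hot, p_become_hot) are hypothetical illustrations.

import random

class Topic:
    """One object of class Topic; 'hot' is its time-varying attribute."""
    def __init__(self, name, hot=False):
        self.name = name
        self.hot = hot

def transition(topic, papers_about, p_stay_hot=0.8, p_become_hot=0.3):
    """Two-slice template: P(hot_t | hot_{t-1}, #papers_{t-1}).

    The same parameters apply to every Topic object, which is what keeps
    the relational template compact, whereas a ground DBN would duplicate
    this conditional distribution for each topic variable.
    """
    if topic.hot:
        topic.hot = random.random() < p_stay_hot
    else:
        # Relational dependence: a topic's heat depends on an aggregate
        # over related objects (papers published about it last step).
        topic.hot = random.random() < min(1.0, p_become_hot * papers_about / 10)

# Usage: roll the shared template forward over a small object skeleton.
random.seed(0)
topics = [Topic("planning"), Topic("relational learning", hot=True)]
paper_counts = {"planning": 25, "relational learning": 4}
for t in range(3):
    for topic in topics:
        transition(topic, paper_counts[topic.name])
    print(t, [(tp.name, tp.hot) for tp in topics])

A full DPRM would also let objects and relations themselves appear and disappear between slices; the sketch only shows the shared, relational transition model that distinguishes DPRMs from flat DBNs.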

Cite this paper

@inproceedings{Sanghai2003LearningSM,
  title  = {Learning Statistical Models of Time-Varying Relational Data},
  author = {Sumit K. Sanghai and Pedro Domingos and Daniel S. Weld},
  year   = {2003}
}