Formalisms that can represent objects and relations, as opposed to just variables, have a long history in AI. Recently, significant progress has been made in combining them with a principled treatment of uncertainty. In particular, probabilistic relational models (PRMs) are an extension of Bayesian networks that allows reasoning with classes, objects, and relations. Although PRMs have been successfully applied to many domains, they cannot capture the temporal dynamics of the real world. In most real-world systems, objects are created, modified, and even deleted over time, and the relationships between objects likewise change as time progresses. For example, consider the problem of predicting the set of research topics that become "hot" (e.g., as measured by the number of papers published about them) over time, the changing distribution of these topics among conferences, and the evolving interests of authors and the collaborations between them. It would be difficult to learn a PRM that modeled this time-varying behavior. Currently, the most powerful representation available for capturing sequential phenomena is dynamic Bayesian networks (DBNs), but DBNs cannot compactly represent many real-world domains that contain multiple objects and classes of objects, as well as multiple kinds of relations among them. DBNs are even more awkward when one wishes to model objects and relations that appear and disappear over time. Our research has therefore focused on a new representation, dynamic probabilistic relational models (DPRMs), which combines PRMs with DBNs. Previously, we explored the problem of efficient inference; this paper outlines our thoughts on learning DPRMs.
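To make the running bibliographic example concrete, the kind of time-varying relational structure described above might be sketched as follows. This is only an illustrative sketch, not part of any DPRM formalism or implementation: the class names (`Author`, `Paper`) and the `hot_topics` helper are hypothetical, chosen solely to mirror the example of topics becoming "hot" as paper counts change between time steps.

```python
from dataclasses import dataclass, field

@dataclass
class Author:
    # An object of class Author; its interests attribute may change over time.
    name: str
    interests: set = field(default_factory=set)

@dataclass
class Paper:
    # An object of class Paper, linked to Author objects via a relation.
    title: str
    topic: str
    authors: list = field(default_factory=list)

def hot_topics(papers):
    """Count papers per topic in one time-step snapshot of the domain.

    In a temporal model, each snapshot would condition the next, so objects
    and relations can appear, change, or disappear between steps.
    """
    counts = {}
    for p in papers:
        counts[p.topic] = counts.get(p.topic, 0) + 1
    return counts
```

For instance, a snapshot with two papers on "learning" and one on "inference" yields `{"learning": 2, "inference": 1}`; comparing such counts across successive snapshots is one way to operationalize a topic becoming "hot."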