Gradient-based boosting for statistical relational learning: The relational dependency network case
TLDR: We propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting.
Counting Belief Propagation
TLDR: We present a new and simple BP algorithm, called counting BP, that exploits additional symmetries not reflected in the graphical structure and hence not exploitable by efficient inference techniques.
Dynamic preferences in multi-criteria reinforcement learning
TLDR: In this paper, we consider the problem of learning in the presence of time-varying preferences among multiple objectives, using numeric weights to represent their importance.
Statistical Relational Artificial Intelligence: Logic, Probability, and Computation
TLDR: An intelligent agent interacting with the real world will encounter individual people, courses, test results, drug prescriptions, chairs, boxes, etc., and needs to reason about properties of these individuals and relations among them, as well as cope with uncertainty.
A Decision-Theoretic Model of Assistance
TLDR: We formulate the problem of intelligent assistance in a decision-theoretic framework, and present both theoretical and empirical results.
Exploiting symmetries for scaling loopy belief propagation and relational training
TLDR: In this paper, we show that inference and training can indeed benefit from exploiting symmetries.
Learning Markov Logic Networks via Functional Gradient Boosting
TLDR: We present an MLN-learning approach that learns both the weights and the structure of the MLN simultaneously.
Transfer in variable-reward hierarchical reinforcement learning
TLDR: We introduce the problem of Variable-Reward Transfer Learning, where the objective is to speed up learning in a new SMDP by transferring experience from previous MDPs that share the same dynamics but have different rewards.
Learning first-order probabilistic models with combining rules
TLDR: This paper presents algorithms for learning with combining rules in first-order relational probabilistic models.
Mixed Sum-Product Networks: A Deep Architecture for Hybrid Domains
TLDR: We propose the first trainable probabilistic deep architecture for hybrid domains that features tractable queries.