Publications
Most likely heteroscedastic Gaussian process regression
TLDR
This paper follows Goldberg et al.'s approach and models the noise variance using a second GP in addition to the GP governing the noise-free output value, but replaces Goldberg et al.'s Markov chain Monte Carlo approximation of the posterior noise variance with a most likely (hard EM style) estimate.
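As a rough illustration of the alternating scheme this suggests, here is a minimal NumPy sketch of a most-likely heteroscedastic GP loop: a standard GP is fit, log squared residuals serve as noise targets for a second GP, and the smoothed noise re-enters the first GP's covariance. The RBF kernel, its hyperparameters, and the fixed iteration count are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of a most-likely heteroscedastic GP loop; LENGTHSCALE,
# SIGNAL_VAR, and n_iter are illustrative choices, not from the paper.
import numpy as np

LENGTHSCALE, SIGNAL_VAR, JITTER = 1.0, 1.0, 1e-6

def rbf(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return SIGNAL_VAR * np.exp(-0.5 * d2 / LENGTHSCALE ** 2)

def gp_posterior_mean(X, y, noise_var, Xq):
    K = rbf(X, X) + np.diag(noise_var) + JITTER * np.eye(len(X))
    return rbf(Xq, X) @ np.linalg.solve(K, y)

def mlhgp(X, y, n_iter=5):
    noise_var = np.full(len(X), y.var())          # start homoscedastic
    for _ in range(n_iter):
        # mean GP under the current (input-dependent) noise model
        mu = gp_posterior_mean(X, y, noise_var, X)
        # "most likely" noise targets: log squared residuals
        z = np.log((y - mu) ** 2 + 1e-8)
        # second GP smooths the log-noise over the inputs
        z_smooth = gp_posterior_mean(X, z, np.full(len(X), 1.0), X)
        noise_var = np.exp(z_smooth)
    return mu, noise_var
```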
Lifted Probabilistic Inference with Counting Formulas
TLDR
This paper presents a new lifted inference algorithm, C-FOVE, that not only handles counting formulas in its input, but also creates counting formulas for use in intermediate potentials, and achieves asymptotic speed improvements compared to FOVE.
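To see why counting formulas pay off, consider summing out n interchangeable Boolean atoms: grouping assignments by how many atoms are true collapses 2^n terms into n+1. The toy potentials phi and psi below are invented for illustration; this is only the counting idea in isolation, not C-FOVE itself, which performs such grouping inside a full lifted variable-elimination procedure.

```python
# Toy illustration of counting elimination: a sum over 2**n assignments
# becomes a sum over n+1 counts, weighted by binomial coefficients.
# phi and psi are made-up potentials, not from the paper.
from math import comb
from itertools import product

def eliminate_counting(n, phi, psi):
    # sum over all assignments of prod_i phi(x_i) * psi(#true), in O(n)
    return sum(comb(n, k) * phi(True) ** k * phi(False) ** (n - k) * psi(k)
               for k in range(n + 1))

def brute_force(n, phi, psi):
    total = 0.0
    for xs in product([False, True], repeat=n):
        w = 1.0
        for x in xs:
            w *= phi(x)
        total += w * psi(sum(xs))
    return total

phi = lambda x: 2.0 if x else 1.0
psi = lambda k: k + 1.0
assert abs(eliminate_counting(5, phi, psi) - brute_force(5, phi, psi)) < 1e-9
```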
TUDataset: A collection of benchmark datasets for learning with graphs
TLDR
The TUDataset for graph classification and regression is introduced, which consists of over 120 datasets of varying sizes from a wide range of applications and provides Python-based data loaders, kernel and graph neural network baseline implementations, and evaluation tools.
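A minimal loading example, assuming PyTorch Geometric (one of several libraries that wrap the collection) is installed; the first call downloads the chosen dataset:

```python
# Load a TUDataset benchmark via PyTorch Geometric's wrapper and iterate
# over mini-batches of graphs; MUTAG is used here as a small example.
from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DataLoader

dataset = TUDataset(root="data/TUDataset", name="MUTAG")
print(len(dataset), dataset.num_classes, dataset.num_node_features)

loader = DataLoader(dataset, batch_size=32, shuffle=True)
for batch in loader:
    # each batch carries node features, an edge index, and graph labels
    print(batch.x.shape, batch.edge_index.shape, batch.y.shape)
    break
```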
Bayesian Logic Programs
TLDR
This work introduces a generalization of Bayesian networks, called Bayesian logic programs, to overcome some of the limitations of propositional logic; it combines Bayesian networks with definite clause logic by establishing a one-to-one mapping between ground atoms and random variables.
Probabilistic Inductive Logic Programming
TLDR
This chapter outlines three classical settings for inductive logic programming, namely learning from entailment, learning from interpretations, and learning from proofs or traces, and shows how they can be adapted to cover state-of-the-art statistical relational learning approaches.
Gradient-based boosting for statistical relational learning: The relational dependency network case
TLDR
This work proposes to turn the problem of learning Relational Dependency Networks (RDNs) into a series of relational function-approximation problems using gradient-based boosting, and shows that this boosting method results in efficient learning of RDNs when compared to state-of-the-art statistical relational learning approaches.
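A propositional sketch of the functional-gradient idea: each round fits a regression tree to the pointwise gradient y - P(y=1 | x) of the log-likelihood and adds it to the model. The actual method learns relational regression trees over ground atoms; ordinary scikit-learn trees stand in here to keep the example self-contained.

```python
# Functional-gradient boosting sketch (propositional stand-in for the
# relational version): trees are fit to the gradient y - p of the
# log-likelihood and summed into an additive potential function.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost(X, y, n_rounds=20, max_depth=3):
    psi = np.zeros(len(y))                 # additive potential function
    trees = []
    for _ in range(n_rounds):
        p = 1.0 / (1.0 + np.exp(-psi))     # current P(y=1 | x)
        grad = y - p                       # pointwise functional gradient
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, grad)
        trees.append(tree)
        psi += tree.predict(X)             # gradient step in function space
    return trees

def predict_proba(trees, X):
    psi = sum(t.predict(X) for t in trees)
    return 1.0 / (1.0 + np.exp(-psi))
```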
DeepDB: Learn from Data, not from Queries!
TLDR
The results of the empirical evaluation demonstrate that the data-driven approach not only provides better accuracy than state-of-the-art learned components but also generalizes better to unseen queries.
Towards Combining Inductive Logic Programming with Bayesian Networks
TLDR
This paper positively answers Koller and Pfeffer's question of whether techniques from inductive logic programming (ILP) can help to learn the logical component of first-order probabilistic models.
Statistical Relational Artificial Intelligence: Logic, Probability, and Computation
TLDR
This book focuses on two representations in detail: Markov logic networks, a relational extension of undirected graphical models built from weighted first-order predicate calculus formulas, and ProbLog, a probabilistic extension of logic programs that can also be viewed as a Turing-complete relational extension of Bayesian networks.
Propagation kernels: efficient graph kernels from propagated information
TLDR
It is shown that if the graphs at hand have a regular structure, one can exploit this regularity to scale the kernel computation to large databases of graphs with thousands of nodes; the resulting propagation kernels can be considerably faster than state-of-the-art approaches without sacrificing predictive performance.
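The core propagation step is easy to sketch: node label distributions are diffused over each graph and, after every step, quantized (a crude stand-in for the locality-sensitive hashing used in the paper) and counted into feature vectors whose dot product gives the kernel value. All constants below are illustrative assumptions.

```python
# Simplified propagation-kernel sketch: diffuse one-hot label
# distributions, quantize them after each step as a hash of the node
# state, and count occurrences; the kernel is a dot product of counts.
import numpy as np
from collections import Counter

def propagation_features(A, labels, n_labels, t_max=3, bins=4):
    # A: adjacency matrix; labels: integer node labels
    P = A / np.maximum(A.sum(1, keepdims=True), 1)   # row-normalized transition
    dist = np.eye(n_labels)[labels]                  # one-hot label distributions
    feats = Counter()
    for t in range(t_max):
        for row in np.floor(dist * bins).astype(int):
            feats[(t, tuple(row))] += 1              # quantized state as hash
        dist = P @ dist                              # one propagation step
    return feats

def propagation_kernel(fa, fb):
    return sum(fa[k] * fb[k] for k in set(fa) & set(fb))
```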
...