Most likely heteroscedastic Gaussian process regression
This paper follows Goldberg et al.'s approach and models the noise variance with a second GP, in addition to the GP governing the noise-free output value, using a Markov chain Monte Carlo method to approximate the posterior noise variance.
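As a toy illustration of the heteroscedastic setting (not the paper's MCMC scheme), the sketch below plugs a separate noise variance for each training point into the diagonal of a two-point GP's kernel matrix; in the full model those variances would come from the second GP's posterior. All names are illustrative:

```python
import math

def rbf(x1, x2, length=1.0):
    # Squared-exponential kernel.
    return math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def gp_mean_2pt(xs, ys, noise_vars, x_star):
    # Posterior GP mean at x_star for two training points, with a
    # *different* noise variance per point (the heteroscedastic case;
    # a homoscedastic GP would use one shared variance).
    a = rbf(xs[0], xs[0]) + noise_vars[0]
    b = rbf(xs[0], xs[1])
    c = rbf(xs[1], xs[1]) + noise_vars[1]
    det = a * c - b * b
    # alpha = K^{-1} y, written out for the 2x2 case.
    alpha0 = (c * ys[0] - b * ys[1]) / det
    alpha1 = (-b * ys[0] + a * ys[1]) / det
    return rbf(x_star, xs[0]) * alpha0 + rbf(x_star, xs[1]) * alpha1
```

Raising one point's noise variance pulls the posterior mean away from that observation while leaving the fit to low-noise points essentially unchanged, which is the behaviour the input-dependent noise model is after.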
Lifted Probabilistic Inference with Counting Formulas
- Brian Milch, Luke Zettlemoyer, K. Kersting, Michael Haimes, L. Kaelbling
- Computer Science, AAAI
- 13 July 2008
This paper presents a new lifted inference algorithm, C-FOVE, that not only handles counting formulas in its input, but also creates counting formulas for use in intermediate potentials, and achieves asymptotic speed improvements compared to FOVE.
TUDataset: A collection of benchmark datasets for learning with graphs
- Christopher Morris, Nils M. Kriege, Franka Bause, K. Kersting, Petra Mutzel, Marion Neumann
- Computer Science, ArXiv
- 16 July 2020
The TUDataset for graph classification and regression is introduced, which consists of over 120 datasets of varying sizes from a wide range of applications and provides Python-based data loaders, kernel and graph neural network baseline implementations, and evaluation tools.
Bayesian Logic Programs
This work introduces a generalization of Bayesian networks, called Bayesian logic programs, to overcome some of the limitations of propositional logic, and combines Bayesian networks with definite clause logic by establishing a one-to-one mapping between ground atoms and random variables.
Probabilistic Inductive Logic Programming
This chapter outlines three classical settings for inductive logic programming, namely learning from entailment, learning from interpretations, and learning from proofs or traces, and shows how they can be adapted to cover state-of-the-art statistical relational learning approaches.
Gradient-based boosting for statistical relational learning: The relational dependency network case
- Sriraam Natarajan, Tushar Khot, K. Kersting, Bernd Gutmann, J. Shavlik
- Computer Science, Machine Learning
This work proposes to turn the problem of learning relational dependency networks (RDNs) into a series of relational function-approximation problems using gradient-based boosting, and shows that this boosting method yields efficient learning of RDNs compared to state-of-the-art statistical relational learning approaches.
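The functional-gradient view can be sketched on a propositional toy problem: each boosting stage fits the pointwise gradient y - p of the log-likelihood, with a single-feature stump standing in for the paper's relational regression trees. The setup and names are illustrative, not from the paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def boost(xs, ys, n_stages=20, lr=1.0):
    # Functional gradient boosting of a conditional probability.
    # Each stage fits the residuals y - sigmoid(f) with a trivial
    # "regression tree": the mean residual per value of one binary
    # feature. The additive model f accumulates the stage outputs.
    f = {0: 0.0, 1: 0.0}
    for _ in range(n_stages):
        for v in (0, 1):
            idx = [i for i, x in enumerate(xs) if x == v]
            if not idx:
                continue
            resid = [ys[i] - sigmoid(f[v]) for i in idx]
            f[v] += lr * sum(resid) / len(resid)
    return lambda x: sigmoid(f[x])
```

Because each stage moves f in the direction of the likelihood gradient, the predicted probabilities converge to the empirical conditionals; in the relational setting the stump is replaced by a tree over first-order features.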
DeepDB: Learn from Data, not from Queries!
- Benjamin Hilprecht, Andreas Schmidt, Moritz Kulessa, Alejandro Molina, K. Kersting, Carsten Binnig
- Computer Science, Proc. VLDB Endow.
- 2 September 2019
The results of the empirical evaluation demonstrate that the data-driven approach not only provides better accuracy than state-of-the-art learned components but also generalizes better to unseen queries.
Towards Combining Inductive Logic Programming with Bayesian Networks
This paper positively answers Koller and Pfeffer's question of whether techniques from ILP can help to learn the logical component of first-order probabilistic models.
Statistical Relational Artificial Intelligence: Logic, Probability, and Computation
- L. D. Raedt, K. Kersting, Sriraam Natarajan, D. Poole
- Computer Science, Statistical Relational Artificial Intelligence…
- 24 March 2016
This book focuses on two representations in detail: Markov logic networks, a relational extension of undirected graphical models and weighted first-order predicate calculus formulas, and ProbLog, a probabilistic extension of logic programs that can also be viewed as a Turing-complete relational extension of Bayesian networks.
Propagation kernels: efficient graph kernels from propagated information
- Marion Neumann, R. Garnett, C. Bauckhage, K. Kersting
- Computer Science, Machine Learning
- 1 February 2016
It is shown that if the graphs at hand have a regular structure, one can exploit this regularity to scale the kernel computation to large databases of graphs with thousands of nodes, and that the computation can be considerably faster than state-of-the-art approaches without sacrificing predictive performance.
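A minimal sketch of the propagation-kernel idea: node states are label distributions, each iteration averages the neighbours' distributions, and states are binned by rounding (a crude stand-in for the paper's locality-sensitive hashing) so that graphs can be compared by counting how many nodes fall into each bin. All names are illustrative:

```python
from collections import Counter

def propagation_features(adj, labels, n_labels, t_max=2, bin_w=0.1):
    # One graph -> a Counter of (iteration, binned-state) occupancy.
    # Initial state of each node is a one-hot label distribution.
    states = [[1.0 if lab == k else 0.0 for k in range(n_labels)]
              for lab in labels]
    feats = Counter()
    for t in range(t_max + 1):
        for s in states:
            key = (t, tuple(round(p / bin_w) for p in s))
            feats[key] += 1
        # Propagate: each node takes the mean of its neighbours' states.
        states = [[sum(states[j][k] for j in adj[i]) / max(len(adj[i]), 1)
                   for k in range(n_labels)]
                  for i in range(len(labels))]
    return feats

def propagation_kernel(g1, g2):
    # Linear kernel between the two graphs' bin-count vectors.
    f1, f2 = propagation_features(*g1), propagation_features(*g2)
    return sum(f1[k] * f2[k] for k in f1)
```

Counting bin occupancies rather than comparing nodes pairwise is what makes the per-iteration cost linear in the number of nodes, which is the source of the scalability the summary refers to.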