Markov logic networks

Matthew Richardson and Pedro M. Domingos. Machine Learning.

We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC… 
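The semantics described above can be sketched in a few lines: ground every formula over the domain constants, give each grounding the formula's weight, and score a possible world by the exponentiated weighted count of satisfied groundings. The toy formula Smokes(x) => Cancer(x), its weight, and the constants below are invented for illustration, and exact enumeration stands in for the MCMC inference the paper actually uses:

```python
import itertools
import math

# Hypothetical toy MLN: one formula, Smokes(x) => Cancer(x), with weight 1.5.
constants = ["Anna", "Bob"]
weight = 1.5

# Ground atoms: Smokes(c) and Cancer(c) for each constant in the domain.
atoms = [f"Smokes({c})" for c in constants] + [f"Cancer({c})" for c in constants]

def n_true_groundings(world):
    """Count groundings of Smokes(x) => Cancer(x) satisfied in a world
    (a dict mapping ground atoms to truth values)."""
    return sum(
        (not world[f"Smokes({c})"]) or world[f"Cancer({c})"]
        for c in constants
    )

# Enumerate all possible worlds; each gets weight exp(w * n_true_groundings).
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]
scores = [math.exp(weight * n_true_groundings(w)) for w in worlds]
Z = sum(scores)  # partition function

def prob(event):
    """Marginal probability of an event (a predicate over worlds)."""
    return sum(s for w, s in zip(worlds, scores) if event(w)) / Z

# Conditional query: P(Cancer(Anna) | Smokes(Anna)).
p = (prob(lambda w: w["Cancer(Anna)"] and w["Smokes(Anna)"])
     / prob(lambda w: w["Smokes(Anna)"]))
```

Because the single formula factorizes per constant, the conditional works out to exp(1.5) / (1 + exp(1.5)), illustrating how a harder (higher-weight) formula pushes worlds that violate it toward, but never to, zero probability.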

Learning the structure of Markov logic networks

An algorithm for learning the structure of MLNs from relational databases is developed, combining ideas from inductive logic programming (ILP) and feature induction in Markov networks.

Scalable Learning for Structure in Markov Logic Networks

This paper proposes a random walk-based approach to learn MLN structure in a scalable manner that uses the interactions existing among the objects to constrain the search space of candidate clauses.

Bottom-up learning of Markov logic network structure

This work presents a novel algorithm for learning MLN structure that follows a more bottom-up approach and significantly improves accuracy and learning time over the existing top-down approach in three real-world domains.

Structure learning in Markov logic networks

A series of algorithms that efficiently and accurately learn MLN structure by combining ideas from inductive logic programming (ILP) and feature induction in Markov networks is presented in the MSL system, and a variant of MRC is applied to the long-standing AI problem of extracting knowledge from text.

Integrating Logic and Probability: Algorithmic Improvements in Markov Logic Networks

This dissertation proposes a novel generative structure learning algorithm based on the iterated local search metaheuristic and extends IRoTS by proposing MC-IRoTS, an algorithm that combines MCMC methods and SAT solvers for the problem of conditional inference in MLNs.

Discriminative Training of Markov Logic Networks

This paper extends Collins’s (2002) voted perceptron algorithm for HMMs to MLNs by replacing the Viterbi algorithm with a weighted satisfiability solver, and proposes a discriminative approach to training MLNs.

Coherence and Compatibility of Markov Logic Networks

This paper develops a general framework for measuring the coherence of Markov logic networks by comparing the resulting probabilities in the model with the weights given to the formulas, and takes the interdependence of different formulas into account.

Improving Learning of Markov Logic Networks using Transfer and Bottom-Up Induction

The main contributions of this proposal are two algorithms for learning the structure of MLNs that proceed in a more data-driven fashion, in contrast to most existing SRL algorithms.

Modelling (Bio)Logical Sequences through Markov Logic Networks

A simple temporal extension of MLNs that can deal with sequences of logical atoms is proposed, together with iterated robust tabu search (IRoTS) for MAP inference and Markov Chain IRoTS (MC-IRoTS) for conditional inference in the new framework.

Practical Markov Logic Containing First-Order Quantifiers with Application to Identity Uncertainty

Markov logic is a highly expressive language recently introduced to specify the connectivity of a Markov network using first-order logic. While Markov logic is capable of constructing arbitrary…

Markov Logic

Markov logic attaches weights to first-order formulas and views them as templates for features of Markov networks; it is the basis of the open-source Alchemy system.

Towards Combining Inductive Logic Programming with Bayesian Networks

This paper positively answers Koller and Pfeffer's question of whether techniques from ILP can help learn the logical component of first-order probabilistic models.

A Comparison of Stochastic Logic Programs and Bayesian Logic Programs

Relations between the semantics of SLPs and BLPs are demonstrated; it is argued that SLPs can encode the same knowledge as a subclass of BLPs, and extended SLPs are introduced that lift the latter result to arbitrary BLPs.

Stochastic Logic Programs

Stochastic logic programs are introduced as a means of providing a structured definition of such a probability distribution; it is shown that the probabilities can be computed directly for fail-free logic programs and by normalisation for arbitrary logic programs.

Approximate inference for first-order probabilistic languages

This work considers two extensions to the basic relational probability models (RPMs) defined by Koller and Pfeffer, and identifies types of probability distributions that allow local decomposition of inference while encoding possible domains in a plausible way.

Probabilistic Constraint Logic Programming

An algorithm for estimating the parameters and selecting the properties of log-linear models from incomplete data is presented, along with an approach for finding the most probable analyses under the probabilistic constraint logic programming model.

Probabilistic Horn Abduction and Bayesian Networks

D. Poole. Artificial Intelligence, 1993.

Feature Extraction Languages for Propositionalized Relational Learning

This work develops and studies a flexible knowledge representation for structured data, with an associated language that provides the syntax and a well-defined equivalent semantics for expressing complex structured data succinctly; the language is used to automate the process of feature construction.

Dynamic Probabilistic Relational Models

This paper successfully applies dynamic probabilistic relational models (DPRMs) to execution monitoring and fault diagnosis of an assembly plan, in which a complex product is gradually constructed from subparts.

Probabilistic Inductive Logic Programming

This chapter outlines three classical settings for inductive logic programming, namely learning from entailment, learning from interpretations, and learning from proofs or traces, and shows how they can be adapted to cover state-of-the-art statistical relational learning approaches.