Corpus ID: 1712787

Understanding the Complexity of Lifted Inference and Asymmetric Weighted Model Counting

@inproceedings{Gribkoff2014UnderstandingTC,
  title={Understanding the Complexity of Lifted Inference and Asymmetric Weighted Model Counting},
  author={Eric Gribkoff and Guy Van den Broeck and Dan Suciu},
  booktitle={AAAI Workshop: Statistical Relational Artificial Intelligence},
  year={2014}
}
We highlight our work on lifted inference for the asymmetric Weighted First-Order Model Counting problem (WFOMC), which counts the assignments that satisfy a given sentence in first-order logic. First, we discuss how adding negation can lower the query complexity, and describe the essential element (resolution) necessary to extend a previous algorithm for positive queries to handle queries with negation. Second, we describe our novel dichotomy result for a non-trivial fragment of first-order…
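To make the WFOMC task in the abstract concrete, the sketch below brute-forces a tiny instance: it grounds the sentence ∀x. S(x) ∨ R(x) over a two-element domain and sums the weights of the satisfying worlds. The function and variable names are our own illustration, not code from the paper; with per-atom (asymmetric) weights of 0.5 for both truth values, the weighted count equals the probability of the sentence under independent fair coins.

```python
from itertools import product

def wmc(atoms, weights, sentence):
    """Sum of world weights over all truth assignments satisfying `sentence`.

    atoms    : list of ground-atom names
    weights  : dict atom -> (weight if true, weight if false)
    sentence : predicate taking a dict atom -> bool
    """
    total = 0.0
    for bits in product([True, False], repeat=len(atoms)):
        world = dict(zip(atoms, bits))          # one truth assignment ("world")
        if sentence(world):
            w = 1.0
            for a, v in world.items():          # product of per-atom weights
                w *= weights[a][0] if v else weights[a][1]
            total += w
    return total

# Grounding of  forall x: S(x) v R(x)  over the domain {a, b}:
atoms = ["S(a)", "S(b)", "R(a)", "R(b)"]
weights = {a: (0.5, 0.5) for a in atoms}        # uniform weights -> a probability

def sentence(w):
    return all(w[f"S({c})"] or w[f"R({c})"] for c in ("a", "b"))

print(wmc(atoms, weights, sentence))            # 0.5625 = (3/4)**2
```

This exhaustive enumeration is exponential in the number of ground atoms; the point of lifted inference, as studied in the paper, is to exploit first-order structure to avoid exactly this blowup.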

Citations

Lifted Inference with Tree Axioms
TLDR
The two-variable fragment of first-order logic is extended with a tree axiom, stating that some distinguished binary relation in a sentence ϕ forms a tree in the graph-theoretic sense.
Symmetric Weighted First-Order Model Counting
TLDR
This paper proves that all γ-acyclic queries have polynomial-time data complexity, and that, for every fragment FO^k, k ≥ 2, the combined complexity of FOMC (or WFOMC) is #P-complete.
Open-World Probabilistic Databases: An Abridged Report
TLDR
This paper lifts the existing data complexity dichotomy of probabilistic databases, and proposes an efficient evaluation algorithm for unions of conjunctive queries, which shows that query evaluation can become harder for non-monotone queries.
Dichotomies for Queries with Negation in Probabilistic Databases
TLDR
The tractability frontier of two classes of relational algebra queries in tuple-independent probabilistic databases is charted, which consists of queries with join, projection, selection, and negation but without repeating relation symbols and union.
A Tutorial on Query Answering and Reasoning over Probabilistic Knowledge Bases
TLDR
This tutorial is dedicated to giving an understanding of the various query answering and reasoning tasks that can be used to exploit the full potential of probabilistic knowledge bases.
On Constrained Open-World Probabilistic Databases
TLDR
This work provides an algorithm for one class of queries, establishes a basic hardness result for another, and proposes an efficient and tight approximation for a large class of queries.
A Query Engine for Probabilistic Preferences
TLDR
An implementation of a query engine that supports querying probabilistic preferences alongside relational data and a novel inference algorithm for conjunctive queries over RIM, which significantly outperforms the state of the art in terms of both asymptotic and empirical execution cost.
Lifted Probabilistic Inference for Asymmetric Graphical Models
TLDR
This work presents a framework for probabilistic sampling-based inference that only uses the induced approximate symmetries to propose steps in a Metropolis-Hastings style Markov chain, which leads to improved probability estimates while remaining unbiased.
SlimShot: In-Database Probabilistic Inference for Knowledge Bases
TLDR
SlimShot is described: a probabilistic inference engine for knowledge bases that uses simple Monte Carlo-based inference with key enhancements, including combining sampling with safe query evaluation and estimating a conditional probability by jointly computing the numerator and denominator.

References

Showing 1–10 of 43 references
Lifted Inference and Learning in Statistical Relational Models
TLDR
A new method for logical inference, called first-order knowledge compilation, is proposed; by compiling relational models into a new circuit language, hard inference problems become tractable to solve, leading to a new state-of-the-art lifted probabilistic inference algorithm.
Lifted Inference Seen from the Other Side: The Tractable Features
TLDR
This paper defines a set of rules that look only at the logical representation to identify models for which exact efficient inference is possible and yields new tractable classes that could not be solved efficiently by any of the existing techniques.
On the Completeness of First-Order Knowledge Compilation for Lifted Probabilistic Inference
TLDR
This paper introduces a formal definition of lifted inference that allows us to reason about the completeness of lifted inference algorithms relative to a particular class of probabilistic models, and shows how to obtain a completeness result using a first-order knowledge compilation approach for theories of formulae containing up to two logical variables.
Liftability of Probabilistic Inference: Upper and Lower Bounds
TLDR
Main results are that lifted inference is infeasible for general quantifier-free first-order probabilistic knowledge bases, but becomes tractable when formulas are restricted to the 2-variable fragment of quantifier-free first-order logic.
Lifted Probabilistic Inference by First-Order Knowledge Compilation
TLDR
This work develops a model-theoretic approach to lifted probabilistic inference, which compiles a first-order probabilistic theory into a first-order deterministic decomposable negation normal form (d-DNNF) circuit and effectively exploits the logical structure within the first-order model, allowing more computation to be done at the lifted level.
On the Complexity and Approximation of Binary Evidence in Lifted Inference
Guy Van den Broeck. AAAI Workshop: Statistical Relational Artificial Intelligence, 2013.
TLDR
This paper identifies Boolean rank of the evidence as a key parameter in the complexity of conditioning and opens up the possibility of approximating evidence by a low-rank Boolean matrix factorization that maintains the model's symmetries and admits efficient lifted inference.
The dichotomy of probabilistic inference for unions of conjunctive queries
TLDR
This work considers unions of conjunctive queries (UCQ), which are equivalent to positive, existential first-order logic sentences, and also to nonrecursive datalog programs, and proves a dichotomy theorem for their data complexity.
Skolemization for Weighted First-Order Model Counting
TLDR
A Skolemization algorithm is presented for model counting problems that eliminates existential quantifiers from a first-order logic theory without changing its weighted model count, simplifying the design of lifted model counting algorithms.
Inference in Probabilistic Logic Programs using Weighted CNF's
TLDR
This paper develops efficient inference algorithms for classical probabilistic inference tasks, such as MAP and computing marginals, based on a conversion of the probabilistic logic program, together with the query and evidence, to a weighted CNF formula.
On probabilistic inference by weighted model counting