• Corpus ID: 159041355

Strong equivalence for $\rm LP^{MLN}$ programs

@inproceedings{Luo2019StrongEF,
  title={Strong equivalence for {$\rm LP^{MLN}$} programs},
  author={Man Luo},
  year={2019}
}
  • Man Luo
  • Published 18 May 2019
  • Computer Science
Strong equivalence is a well-studied and important concept in answer set programming (ASP). LPMLN is a probabilistic extension of answer set programs with the weight scheme adapted from Markov Logic. Because of the semantic differences, strong equivalence for ASP does not simply carry over to LPMLN. I study the concept of strong equivalence with the goal of extending it to LPMLN programs. My study shows that the verification of strong equivalence in LPMLN can be reduced to equivalence… 
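
For context, the ASP notion referred to above can be stated in its standard form (this is the textbook definition, not quoted from the truncated abstract): programs $P$ and $Q$ are strongly equivalent if

  \[ \mathrm{AS}(P \cup R) = \mathrm{AS}(Q \cup R) \quad \text{for every program } R, \]

where $\mathrm{AS}(\cdot)$ denotes the set of answer sets. In $\rm LP^{MLN}$, rules additionally carry weights in the style of Markov Logic, so, roughly speaking, the analogous question must account for the probability distribution a program induces over its stable models rather than only the set of answer sets.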

Citations

On the Strong Equivalences for LPMLN Programs

The results presented in this paper provide a better understanding of LPMLN programming and open a way to study strong equivalence for other logic formalisms by translating them into LPMLN.

A Syntactic Approach to Studying Strongly Equivalent Logic Programs

This paper presents a syntactic approach to studying the strong equivalence of logic programs, together with a fully automatic algorithm that discovers syntactic conditions preserving strong equivalence (SE) of ASP and LPMLN programs.

References


Answer Sets for Propositional Theories

A new definition of equilibrium logic is proposed that uses the concept of a reduct, as in the standard definition of an answer set, along with a semantics for weight constraints that covers the problematic case of negative weights.
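
The standard formulation of this reduct (restated here for orientation, not quoted from the paper): the reduct $F^X$ of a propositional formula $F$ relative to a set $X$ of atoms is obtained by replacing every maximal subformula of $F$ that is not satisfied by $X$ with $\bot$, and

  \[ X \text{ is an answer set of } F \iff X \text{ is a minimal model of } F^X. \]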

Strongly equivalent logic programs

The main theorem shows that the verification of strong equivalence can be accomplished by checking the equivalence of formulas in a monotonic logic, called the logic of here-and-there, which is intermediate between classical logic and intuitionistic logic.
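
In symbols (the standard statement of this theorem, added for orientation): an HT-interpretation is a pair $(H, T)$ of sets of atoms with $H \subseteq T$, and

  \[ P \equiv_s Q \iff P \text{ and } Q \text{ have the same models in the logic of here-and-there,} \]

so checking strong equivalence never requires quantifying over all possible added programs $R$.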

TensorLog: A Differentiable Deductive Database

A probabilistic deductive database, called TensorLog, in which reasoning uses a differentiable process, and it is shown that these functions can be composed recursively to perform inference in non-trivial logical theories containing multiple interrelated clauses and predicates.
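
As a rough illustration of "reasoning as a differentiable process" (a conceptual sketch only; the clause, entity names, and matrices below are made up for illustration and this is not TensorLog's actual API):

  # Conceptual sketch: the clause  uncle(X, Y) :- brother(X, Z), parent(Z, Y)
  # evaluated as a chain of matrix products over one-hot entity vectors.
  import numpy as np

  entities = ["alice", "bob", "carol", "dave"]
  n = len(entities)
  idx = {e: i for i, e in enumerate(entities)}

  # Relation matrices: M[i, j] = 1.0 iff relation(entity_i, entity_j) holds.
  brother = np.zeros((n, n)); brother[idx["bob"], idx["alice"]] = 1.0
  parent  = np.zeros((n, n)); parent[idx["alice"], idx["carol"]] = 1.0

  def uncle_scores(x):
      # One-hot vector for x, pushed through the body of the clause.
      v = np.zeros(n); v[idx[x]] = 1.0
      return v @ brother @ parent  # differentiable w.r.t. the relation matrices

  print(entities[int(np.argmax(uncle_scores("bob")))])  # -> carol

Because the score vector is a differentiable function of the relation matrices, gradients can flow through the inference step, which is the key property the reference describes.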

Equilibrium logic

  • D. Pearce
  • Computer Science
    Annals of Mathematics and Artificial Intelligence
  • 2006
This work presents an overview of equilibrium logic, its main properties, and its uses, and shows that it provides a logical foundation for ASP while extending the basic syntax of answer set programs.
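
The core definition, stated in the standard way (added here for context, not quoted from the overview): an HT-interpretation $(T, T)$ is an equilibrium model of a theory $\Gamma$ if $(T, T) \models \Gamma$ and there is no $H \subset T$ with $(H, T) \models \Gamma$; the answer sets of a program are exactly the sets $T$ such that $(T, T)$ is an equilibrium model.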

Logic Tensor Networks for Semantic Image Interpretation

Experiments show that the use of background knowledge in the form of logical constraints can improve the performance of purely data-driven approaches, including the state-of-the-art Fast Region-based Convolutional Neural Networks (Fast R-CNN) and add robustness to the learning system when errors are present in the labels of the training data.

Markov logic networks

Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach to combining first-order logic and probabilistic graphical models in a single representation.
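
The weight scheme that $\rm LP^{MLN}$ adapts is the Markov logic log-linear model; its standard form (restated here for reference) is

  \[ P(X = x) \;=\; \frac{1}{Z} \exp\Big( \sum_i w_i\, n_i(x) \Big), \]

where $n_i(x)$ is the number of true groundings of formula $F_i$ in world $x$, $w_i$ is its weight, and $Z$ is the normalization constant.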