# Exploring Markov Logic Networks for Question Answering

    @inproceedings{Khot2015ExploringML,
      title={Exploring Markov Logic Networks for Question Answering},
      author={Tushar Khot and Niranjan Balasubramanian and Eric Gribkoff and Ashish Sabharwal and Peter E. Clark and Oren Etzioni},
      booktitle={EMNLP},
      year={2015}
    }

Elementary-level science exams pose significant knowledge acquisition and reasoning challenges for automatic question answering. [...] First, we simply use the extracted science rules directly as MLN clauses and exploit the structure present in hard constraints to improve tractability. Second, we interpret science rules as describing prototypical entities, resulting in a drastically simplified but brittle network.
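The first method above treats each extracted science rule as a weighted first-order clause and grounds it over the entities in the question. A minimal sketch of that idea, with an entirely made-up rule, weight, and domain (none of this is from the paper):

```python
# Hypothetical science rule encoded as a weighted MLN clause:
#   1.5 : Eats(x, plants) => IsA(x, herbivore)
# Rule, weight, entities, and evidence are illustrative only.
weight = 1.5
domain = ["deer", "wolf"]

evidence = {("Eats", "deer", "plants"): True,
            ("Eats", "wolf", "plants"): False}

def ground_clause(domain):
    """Ground the rule for every entity, yielding (body, head) atom pairs."""
    for x in domain:
        yield (("Eats", x, "plants"), ("IsA", x, "herbivore"))

# Groundings whose body is false under the evidence are trivially
# satisfied, so only the rest need to enter the ground network.
active = [g for g in ground_clause(domain)
          if evidence.get(g[0], False)]
print(active)
```

Keeping only the active groundings is one way the structure of hard constraints and evidence can shrink the network before inference.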

#### 34 Citations

Question Answering via Integer Programming over Semi-Structured Knowledge

- Computer Science
- IJCAI
- 2016

This work proposes a structured inference system for this task, formulated as an Integer Linear Program (ILP), that answers natural language questions using a semi-structured knowledge base derived from text, including questions requiring multi-step inference and a combination of multiple facts.
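The flavor of the ILP objective can be illustrated with a toy version: choose a small subset of facts whose combined question-alignment and fact-linking scores are maximal, under a cardinality constraint. The facts, scores, and limit `k` below are invented for illustration; the real system optimizes over a semi-structured table store with an ILP solver rather than brute force:

```python
from itertools import combinations

# fact -> alignment score with the question/answer (illustrative values)
facts = {"F1": 0.9, "F2": 0.7, "F3": 0.2}
# pairwise fact-fact link score, rewarding chains of facts
link = {("F1", "F2"): 0.5}
k = 2  # at most k facts in the support

# Brute-force stand-in for the ILP: maximize total alignment + link score.
best = max(
    (sum(facts[f] for f in subset)
     + sum(link.get(p, 0.0) for p in combinations(subset, 2)),
     subset)
    for r in range(1, k + 1)
    for subset in combinations(sorted(facts), r)
)
print(best)
```

Here the linked pair F1, F2 wins over any single fact, mirroring how the ILP prefers coherent multi-fact support over isolated lookups.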

What’s in an Explanation? Characterizing Knowledge and Inference Requirements for Elementary Science Exams

- Computer Science
- COLING
- 2016

This work develops an explanation-based analysis of knowledge and inference requirements, which supports a fine-grained characterization of the challenges, and compares a retrieval and an inference solver on 212 questions.

Combining Retrieval, Statistics, and Inference to Answer Elementary Science Questions

- Computer Science
- AAAI
- 2016

This paper evaluates the methods on six years of unseen, unedited exam questions from the NY Regents Science Exam, and shows that the overall system's score is 71.3%, an improvement of 23.8% (absolute) over the MLN-based method described in previous work.

Reasoning-Driven Question-Answering for Natural Language Understanding

- Computer Science
- ArXiv
- 2019

This thesis proposes a formulation for abductive reasoning in natural language and shows its effectiveness, especially in domains with limited training data, and presents the first formal framework for multi-step reasoning algorithms, in the presence of a few important properties of language use.

WorldTree: A Corpus of Explanation Graphs for Elementary Science Questions supporting Multi-Hop Inference

- Computer Science
- LREC
- 2018

A corpus of explanations for standardized science exams, a recent challenge task for question answering, is presented and an explanation-centered tablestore is provided, a collection of semi-structured tables that contain the knowledge to construct these elementary science explanations.

Taking a Closed-Book Examination: Decoupling KB-Based Inference by Virtual Hypothesis for Answering Real-World Questions

- Medicine, Computer Science
- Comput. Intell. Neurosci.
- 2021

This work proposes decoupling KB-based inference by transforming a question into a high-level triplet in the KB, which makes it possible to apply KB-based inference methods to answer complex questions.

Answering Complex Questions Using Open Information Extraction

- Computer Science
- ACL
- 2017

This work develops a new inference model for Open IE that can work effectively with multiple short facts, noise, and the relational structure of tuples, and significantly outperforms a state-of-the-art structured solver on complex questions of varying difficulty.

Tables as Semi-structured Knowledge for Question Answering

- Computer Science
- ACL
- 2016

This paper first uses the structure of tables to guide the construction of a dataset of over 9000 multiple-choice questions with rich alignment annotations, then uses this annotated data to train a semi-structured feature-driven model for question answering that uses tables as a knowledge base.

Fine-Grained Explanations Using Markov Logic

- Computer Science
- ECML/PKDD
- 2019

Ground-explanations are extracted from importance weights defined over the MLN formulas that encode the contribution of formulas towards the final inference results and are richer than state-of-the-art non-relational explainers such as LIME.

A Study of Automatically Acquiring Explanatory Inference Patterns from Corpora of Explanations: Lessons from Elementary Science Exams

- Computer Science
- AKBC@NIPS
- 2017

The possibility of generating large explanations with an average of six facts by automatically extracting common explanatory patterns from a corpus of manually authored elementary science explanations represented as lexically-connected explanation graphs grounded in a semi-structured knowledge base of tables is explored.

#### References

Showing 1–10 of 25 references

Speeding Up Inference in Markov Logic Networks by Preprocessing to Reduce the Size of the Resulting Grounded Network

- Mathematics, Computer Science
- IJCAI
- 2009

A preprocessing algorithm is proposed that can substantially reduce the effective size of Markov Logic Networks (MLNs) by rapidly counting how often the evidence satisfies each formula, regardless of the truth values of the query literals.
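The counting idea can be sketched in a few lines: groundings of a clause that the evidence alone already satisfies contribute a constant to every world, so they can be counted and dropped before grounding. The clause, domain, and evidence below are illustrative, not from the paper:

```python
# Sketch: for the clause Smokes(x) => Cancer(x), a grounding is already
# satisfied by the evidence whenever its body Smokes(x) is false.
# Domain and evidence values are made up for illustration.
domain = ["a", "b", "c", "d"]
evidence_smokes = {"a": True, "b": False, "c": False, "d": True}

# body false => clause true regardless of the query atom Cancer(x)
satisfied = [x for x in domain if not evidence_smokes[x]]
remaining = [x for x in domain if evidence_smokes[x]]

# The satisfied groundings contribute weight * len(satisfied) to every
# world's score, so only `remaining` needs to enter the ground network.
print(len(satisfied), remaining)
```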

Memory-Efficient Inference in Relational Domains

- Computer Science
- AAAI
- 2006

LazySAT is proposed, a variation of the WalkSAT solver that avoids the memory blowup of full grounding by taking advantage of the extreme sparseness that is typical of relational domains, reducing memory usage by orders of magnitude.

Efficient Markov Logic Inference for Natural Language Semantics

- Computer Science
- AAAI Workshop: Statistical Relational Artificial Intelligence
- 2014

A new inference algorithm based on SampleSearch that computes probabilities of complete formulae rather than ground atoms and introduces a modified closed-world assumption that significantly reduces the size of the ground network, thereby making inference feasible.

Tuffy: Scaling up Statistical Inference in Markov Logic Networks using an RDBMS

- Computer Science
- Proc. VLDB Endow.
- 2011

This work presents Tuffy, a scalable Markov Logic Networks framework that achieves scalability via three novel contributions: a bottom-up approach to grounding, a novel hybrid architecture that allows it to perform AI-style local search efficiently using an RDBMS, and a theoretical insight that shows when one can improve the efficiency of stochastic local search.

Constraint Propagation for Efficient Inference in Markov Logic

- Mathematics, Computer Science
- CP
- 2011

This work proposes a generalized arc consistency algorithm that prunes the domains of predicates by propagating hard constraints, avoiding the need to explicitly ground the hard constraints during the pre-processing phase, yielding a potentially exponential savings in space and time.
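A tiny propagation example, with a made-up hard rule and evidence: instead of grounding every instance of the constraint, forced atoms are fixed by iterating to a fixed point, pruning them from the inference problem.

```python
# Illustrative hard constraint: Smokes(x) ^ Friends(x, y) => Smokes(y).
# Given Smokes(a) as evidence, propagate until no atom changes.
friends = [("a", "b"), ("b", "c")]
smokes = {"a": True}

changed = True
while changed:
    changed = False
    for x, y in friends:
        if smokes.get(x) and smokes.get(y) is None:
            smokes[y] = True  # forced true by the hard constraint
            changed = True

print(smokes)  # all three atoms fixed without grounding the rule
```

Every atom fixed this way, plus every grounding made trivially true, never needs to appear in the ground network.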

A Study of the AKBC Requirements for Passing an Elementary Science Test

- Computer Science
- 2013

The result of the analysis suggests that as well as fact extraction from text and statistically driven rule extraction, three other styles of AKBC would be useful: acquiring definitional knowledge, direct “reading” of rules from texts that state them, and, given a particular representational framework, acquisition of specific instances of those models from text.

Sound and Efficient Inference with Probabilistic and Deterministic Dependencies

- Computer Science
- AAAI
- 2006

MC-SAT is an inference algorithm that combines ideas from MCMC and satisfiability, based on Markov logic, which defines Markov networks using weighted clauses in first-order logic and greatly outperforms Gibbs sampling and simulated tempering over a broad range of problem sizes and degrees of determinism.
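The core MC-SAT loop is small enough to sketch: at each step, every clause satisfied by the current state is kept as a must-satisfy constraint with probability 1 − e^(−w), and the next state is drawn uniformly from the assignments satisfying the kept clauses. The two-atom model, weights, and rejection sampler below are toys for illustration (a real implementation uses SampleSAT, not rejection):

```python
import math
import random

random.seed(0)

# Weighted ground clauses: (weight, [(atom, required_polarity), ...]).
clauses = [(2.0, [("A", True)]),                # soft: A
           (1.0, [("A", False), ("B", True)])]  # soft: A => B
VARS = ["A", "B"]

def satisfied(clause, state):
    return any(state[v] == pol for v, pol in clause[1])

def sample_uniform(musts):
    """Uniform sample over states satisfying `musts` (rejection
    sampling; fine only for a tiny domain like this one)."""
    while True:
        state = {v: random.random() < 0.5 for v in VARS}
        if all(satisfied(c, state) for c in musts):
            return state

state = {v: False for v in VARS}
samples = []
for _ in range(2000):
    # Slice step: each currently-satisfied clause must stay satisfied
    # with probability 1 - exp(-weight).
    musts = [c for c in clauses
             if satisfied(c, state) and random.random() < 1 - math.exp(-c[0])]
    state = sample_uniform(musts)
    samples.append(state["A"])

p_a = sum(samples) / len(samples)
print(p_a)  # estimate of P(A); should be well above 0.5 given weight 2.0 on A
```

Because deterministic (infinite-weight) clauses are always kept once satisfied, the same loop handles mixed probabilistic and hard dependencies, which is the point of the paper.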

Predicting Learnt Clauses Quality in Modern SAT Solvers

- Computer Science
- IJCAI
- 2009

A key observation of CDCL solver behavior on this family of benchmarks is reported, and an unsuspected side effect of their particular clause-learning scheme is explained, allowing this work to address an important, still-open question: how to design a fast, static, accurate, and predictive measure of a new learnt clause's pertinence.

Lifted First-Order Probabilistic Inference

- Computer Science
- IJCAI
- 2005

This paper presents the first exact inference algorithm that operates directly on a first-order level, and that can be applied to any first-order model (specified in a language that generalizes undirected graphical models).

Entity Resolution with Markov Logic

- Computer Science
- Sixth International Conference on Data Mining (ICDM'06)
- 2006

A well-founded, integrated solution to the entity resolution problem based on Markov logic, which combines first-order logic and probabilistic graphical models by attaching weights to first-order formulas, and viewing them as templates for features of Markov networks.