Exploring Markov Logic Networks for Question Answering

@inproceedings{Khot2015ExploringML,
  title={Exploring Markov Logic Networks for Question Answering},
  author={Tushar Khot and Niranjan Balasubramanian and Eric Gribkoff and Ashish Sabharwal and Peter Clark and Oren Etzioni},
  booktitle={Conference on Empirical Methods in Natural Language Processing},
  year={2015}
}
Elementary-level science exams pose significant knowledge acquisition and reasoning challenges for automatic question answering. […] First, we simply use the extracted science rules directly as MLN clauses and exploit the structure present in hard constraints to improve tractability. Second, we interpret science rules as describing prototypical entities, resulting in a drastically simplified but brittle network.
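
To make the setup concrete: an MLN attaches a weight w_i to each first-order formula and defines P(X = x) = (1/Z) exp(Σ_i w_i n_i(x)), where n_i(x) counts the satisfied groundings of formula i in world x. Below is a minimal brute-force sketch of this semantics in Python, using a single invented soft science rule rather than one of the paper's actual extracted rules:

    # Toy sketch of MLN semantics with one invented soft rule:
    #   w = 1.8 :  Hibernates(x) => SurvivesWinter(x)
    # P(world) = exp(w * n(world)) / Z, where n counts satisfied groundings.
    import itertools, math

    animals = ["rex", "tod"]
    w = 1.8

    def n_satisfied(hib, srv):
        # Count groundings of Hibernates(x) => SurvivesWinter(x) that hold.
        return sum((not hib[a]) or srv[a] for a in animals)

    worlds = []
    for bits in itertools.product([False, True], repeat=2 * len(animals)):
        hib = dict(zip(animals, bits[:len(animals)]))
        srv = dict(zip(animals, bits[len(animals):]))
        worlds.append((hib, srv, math.exp(w * n_satisfied(hib, srv))))

    Z = sum(score for _, _, score in worlds)
    p = sum(score for hib, srv, score in worlds if srv["rex"]) / Z
    print(round(p, 3))  # > 0.5: the soft rule pulls the marginal up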

Citations

Question Answering via Integer Programming over Semi-Structured Knowledge

This work proposes a structured inference system for this task, formulated as an Integer Linear Program (ILP), which answers natural language questions using a semi-structured knowledge base derived from text, including questions requiring multi-step inference and a combination of multiple facts.
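
As a rough illustration of the ILP idea (a toy model with invented options and scores, not the paper's actual formulation), answer selection can be posed as maximizing alignment scores subject to a one-answer constraint; the sketch below assumes the PuLP library:

    # Toy sketch of ILP-based answer selection, assuming PuLP is installed.
    # Options and support scores are invented for illustration.
    import pulp

    options = ["A", "B", "C", "D"]
    support = {"A": 0.2, "B": 0.9, "C": 0.4, "D": 0.1}  # alignment per option

    prob = pulp.LpProblem("answer_selection", pulp.LpMaximize)
    choose = pulp.LpVariable.dicts("choose", options, cat="Binary")

    # Objective: total alignment score of the chosen option.
    prob += pulp.lpSum(support[o] * choose[o] for o in options)
    # Constraint: exactly one answer option is selected.
    prob += pulp.lpSum(choose[o] for o in options) == 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([o for o in options if choose[o].value() == 1])  # -> ['B']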

What’s in an Explanation? Characterizing Knowledge and Inference Requirements for Elementary Science Exams

This work develops an explanation-based analysis of knowledge and inference requirements, which supports a fine-grained characterization of the challenges, and compares a retrieval and an inference solver on 212 questions.

Combining Retrieval, Statistics, and Inference to Answer Elementary Science Questions

This paper describes an alternative approach that operates at three levels of representation and reasoning: information retrieval, corpus statistics, and simple inference over a semi-automatically constructed knowledge base, achieving substantially improved results.

WorldTree: A Corpus of Explanation Graphs for Elementary Science Questions supporting Multi-Hop Inference

A corpus of explanations for standardized science exams, a recent challenge task for question answering, is presented and an explanation-centered tablestore is provided, a collection of semi-structured tables that contain the knowledge to construct these elementary science explanations.

Taking a Closed-Book Examination: Decoupling KB-Based Inference by Virtual Hypothesis for Answering Real-World Questions

This work proposes decoupling KB-based inference by transforming a question into a high-level triplet in the KB, which makes it possible to apply KB-based inference methods to answer complex questions.

Answering Complex Questions Using Open Information Extraction

This work develops a new inference model for Open IE that can work effectively with multiple short facts, noise, and the relational structure of tuples, and significantly outperforms a state-of-the-art structured solver on complex questions of varying difficulty.

Tables as Semi-structured Knowledge for Question Answering

This paper first uses the structure of tables to guide the construction of a dataset of over 9000 multiple-choice questions with rich alignment annotations, then uses this annotated data to train a semi-structured feature-driven model for question answering that uses tables as a knowledge base.

Fine-Grained Explanations Using Markov Logic

Ground explanations are extracted from importance weights defined over the MLN formulas, which encode each formula's contribution to the final inference result; these explanations are richer than those produced by state-of-the-art non-relational explainers such as LIME.
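
Given the MLN log-linear form log P(x) = Σ_i w_i n_i(x) − log Z, one plausible way to picture formula-level importance (a toy sketch under that assumption, not the paper's actual definition) is the per-formula term w_i · n_i(x) at the inferred world:

    # Toy sketch: per-formula contribution to an MLN's log-score at a world x.
    # Formula names, weights, and counts are invented for illustration.
    weights = {"hibernate_rule": 1.8, "warmth_rule": 0.7}       # w_i
    satisfied_counts = {"hibernate_rule": 3, "warmth_rule": 1}  # n_i(x)

    contributions = {f: weights[f] * satisfied_counts[f] for f in weights}
    for formula, score in sorted(contributions.items(), key=lambda kv: -kv[1]):
        print(f"{formula}: {score:.2f}")  # larger -> more influence on x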

A Study of Automatically Acquiring Explanatory Inference Patterns from Corpora of Explanations: Lessons from Elementary Science Exams

This work explores the possibility of generating large explanations, averaging six facts each, by automatically extracting common explanatory patterns from a corpus of manually authored elementary science explanations, represented as lexically connected explanation graphs grounded in a semi-structured knowledge base of tables.

Learning What is Essential in Questions

This paper develops a classifier that reliably identifies and ranks essential terms in questions, and demonstrates that the notion of question term essentiality allows state-of-the-art QA solvers for elementary-level science questions to make better and more informed decisions, improving performance by up to 5%.

References

Speeding Up Inference in Markov Logic Networks by Preprocessing to Reduce the Size of the Resulting Grounded Network

A preprocessing algorithm is proposed that can substantially reduce the effective size of Markov Logic Networks (MLNs) by rapidly counting how often the evidence satisfies each formula, regardless of the truth values of the query literals.
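
The pruning side of this idea can be pictured with a toy propositional sketch (invented clauses, not the paper's algorithm): a ground clause that the evidence alone already satisfies no longer depends on the query atoms and can be dropped, and for soft formulas the count of such groundings folds into a constant term of the log-score.

    # Toy sketch: prune ground clauses that the evidence already satisfies.
    # Clauses are sets of literals; "!" marks negation. Names are invented.
    evidence = {"Animal_rex": True, "Hibernates_rex": False}

    ground_clauses = [
        {"!Animal_rex", "Hibernates_rex", "Migrates_rex"},  # needs query atoms
        {"Animal_rex", "Migrates_rex"},                     # evidence-satisfied
    ]

    def satisfied_by_evidence(clause):
        # True if some literal is already made true by the evidence alone.
        for lit in clause:
            atom, positive = (lit[1:], False) if lit.startswith("!") else (lit, True)
            if atom in evidence and evidence[atom] == positive:
                return True
        return False

    reduced = [c for c in ground_clauses if not satisfied_by_evidence(c)]
    print(len(ground_clauses), "->", len(reduced))  # 2 -> 1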

Memory-Efficient Inference in Relational Domains

LazySAT, a variation of the WalkSAT solver, is proposed; it avoids the memory blowup of grounding out the full network by taking advantage of the extreme sparseness that is typical of relational domains, reducing memory usage by orders of magnitude.

Efficient Markov Logic Inference for Natural Language Semantics

A new inference algorithm based on SampleSearch that computes probabilities of complete formulae rather than ground atoms and introduces a modified closed-world assumption that significantly reduces the size of the ground network, thereby making inference feasible.

Tuffy: Scaling up Statistical Inference in Markov Logic Networks using an RDBMS

This work presents Tuffy, a scalable Markov Logic Networks framework that achieves scalability via three novel contributions: a bottom-up approach to grounding, a novel hybrid architecture that allows AI-style local search to be performed efficiently inside an RDBMS, and a theoretical insight that shows when one can improve the efficiency of stochastic local search.
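
The bottom-up grounding idea can be illustrated in miniature (hypothetical tables and tuples, not Tuffy's actual schema): grounding a rule such as Smokes(x) ∧ Friends(x, y) ⇒ Smokes(y) amounts to a relational join over the evidence tables.

    # Toy sketch: grounding an MLN rule with a SQL join (sqlite3, in-memory).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE smokes(person TEXT);
        CREATE TABLE friends(p1 TEXT, p2 TEXT);
        INSERT INTO smokes VALUES ('anna');
        INSERT INTO friends VALUES ('anna', 'bob'), ('anna', 'carol');
    """)

    # One grounding of Smokes(x) ^ Friends(x, y) => Smokes(y) per joined row.
    rows = conn.execute("""
        SELECT s.person, f.p2
        FROM smokes s JOIN friends f ON s.person = f.p1
    """).fetchall()
    print(rows)  # [('anna', 'bob'), ('anna', 'carol')]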

Sound and Efficient Inference with Probabilistic and Deterministic Dependencies

MC-SAT is an inference algorithm that combines ideas from MCMC and satisfiability. It is based on Markov logic, which defines Markov networks using weighted clauses in first-order logic, and greatly outperforms Gibbs sampling and simulated tempering over a broad range of problem sizes and degrees of determinism.
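
A minimal sketch of the MC-SAT loop on a tiny propositional example (invented clauses; real MC-SAT uses SampleSAT rather than the brute-force enumeration used here): each step keeps every currently satisfied clause with probability 1 − e^(−w), then samples uniformly from the assignments satisfying the kept clauses.

    # Toy MC-SAT over a tiny propositional MLN; clauses/weights are invented.
    import itertools, math, random

    atoms = ["A", "B", "C"]
    soft_clauses = [(1.5, {"!A", "B"}), (0.8, {"B", "C"})]  # (weight, clause)

    def satisfies(state, clause):
        return any(state[l.lstrip("!")] != l.startswith("!") for l in clause)

    def sample_uniform_model(must_satisfy):
        # Enumerate all assignments, pick one uniformly among the models.
        models = [dict(zip(atoms, bits))
                  for bits in itertools.product([False, True], repeat=len(atoms))
                  if all(satisfies(dict(zip(atoms, bits)), c) for c in must_satisfy)]
        return random.choice(models)

    state = {a: False for a in atoms}
    counts = {a: 0 for a in atoms}
    for _ in range(5000):
        # Keep each currently satisfied clause with probability 1 - exp(-w).
        kept = [c for w, c in soft_clauses
                if satisfies(state, c) and random.random() < 1 - math.exp(-w)]
        state = sample_uniform_model(kept)
        counts = {a: counts[a] + state[a] for a in atoms}
    print({a: counts[a] / 5000 for a in atoms})  # approximate marginals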

Predicting Learnt Clauses Quality in Modern SAT Solvers

A key observation about the behavior of CDCL solvers on this family of benchmarks is reported, and an unsuspected side effect of their particular clause learning scheme is explained, addressing an important, still open question: how to design a fast, static, accurate, and predictive measure of the pertinence of newly learnt clauses.

Lifted First-Order Probabilistic Inference

This paper presents the first exact inference algorithm that operates directly on a first-order level, and that can be applied to any first-order model (specified in a language that generalizes undirected graphical models).
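
A toy illustration of the counting idea that lifting exploits (not this paper's actual algorithm): with n interchangeable ground atoms, the 2^n assignments can be grouped by how many atoms are true, shrinking an exponential sum to n + 1 binomially weighted terms.

    # Toy sketch: a partition function over n interchangeable atoms, computed
    # liftedly in n + 1 terms instead of 2**n. The model (one factor exp(w)
    # per true atom) is invented for illustration.
    import math

    n, w = 20, 0.3
    Z_lifted = sum(math.comb(n, k) * math.exp(w * k) for k in range(n + 1))

    # Sanity check against the closed form for independent atoms.
    assert math.isclose(Z_lifted, (1 + math.exp(w)) ** n)
    print(Z_lifted)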

Entity Resolution with Markov Logic

A well-founded, integrated solution to the entity resolution problem based on Markov logic, which combines first-order logic and probabilistic graphical models by attaching weights to first-order formulas and viewing them as templates for features of Markov networks.

Probabilistic theorem proving

This work proposes the first method that has the full power of both graphical model inference and first-order theorem proving (in finite domains with Herbrand interpretations), along with an algorithm for approximate PTP that is shown to be superior to lifted belief propagation.

Parameter Learning of Logic Programs for Symbolic-Statistical Modeling

This work presents a logical/mathematical framework for statistical parameter learning of parameterized logic programs, i.e., definite clause programs containing probabilistic facts with a parameterized distribution, together with a new EM algorithm that can significantly outperform the Inside-Outside algorithm.