Corpus ID: 7145148

Question Answering via Integer Programming over Semi-Structured Knowledge

@inproceedings{Khashabi2016QuestionAV,
  title={Question Answering via Integer Programming over Semi-Structured Knowledge},
  author={Daniel Khashabi and Tushar Khot and Ashish Sabharwal and Peter Clark and Oren Etzioni and Dan Roth},
  booktitle={IJCAI},
  year={2016}
}
Answering science questions posed in natural language is an important AI challenge. [...] Finally, we show our approach is substantially more robust to a simple answer perturbation compared to statistical correlation methods.
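
To make the abstract's central idea concrete, the following is a minimal, hedged sketch of what an integer program over semi-structured knowledge can look like: it picks exactly one answer option and a small set of knowledge-table rows that lexically connect the question to that answer. This is not the authors' actual TableILP formulation; the PuLP library, the toy question, the candidate answers, the knowledge rows, and the overlap-based scoring are all illustrative assumptions.

# A minimal, illustrative ILP (requires: pip install pulp). All names and data
# below are invented for this sketch and do not come from the paper.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

question = {"which", "form", "of", "energy", "melts", "ice"}
options = {"A": {"mechanical"}, "B": {"magnetic"}, "C": {"heat"}, "D": {"sound"}}
rows = [  # toy "semi-structured" knowledge rows, each reduced to a set of terms
    {"heat", "energy", "changes", "a", "solid", "into", "a", "liquid"},
    {"melting", "changes", "a", "solid", "into", "a", "liquid"},
    {"magnets", "attract", "iron"},
]
K = 2  # maximum number of supporting rows (limits the inference chain)

def overlap(a, b):  # crude lexical-overlap score between two term sets
    return len(a & b)

prob = LpProblem("qa_ilp", LpMaximize)
pick = {o: LpVariable(f"pick_{o}", cat=LpBinary) for o in options}
# support[i, o] = 1 iff row i is used as evidence for answer option o
support = {(i, o): LpVariable(f"sup_{i}_{o}", cat=LpBinary)
           for i in range(len(rows)) for o in options}

# Objective: reward evidence rows that overlap both the question and the answer.
prob += lpSum(
    support[i, o] * (overlap(rows[i], question) + overlap(rows[i], options[o]))
    for i in range(len(rows)) for o in options
)

prob += lpSum(pick.values()) == 1          # choose exactly one answer option
for (i, o), var in support.items():
    prob += var <= pick[o]                 # evidence must back the chosen option
prob += lpSum(support.values()) <= K       # cap the number of supporting rows

prob.solve(PULP_CBC_CMD(msg=False))
print("answer:", next(o for o in options if pick[o].value() == 1))

With this toy data the solver selects option C ("heat"), using the first knowledge row as supporting evidence; the paper's actual system optimizes a far richer objective over alignments between question constituents and table rows, columns, and cells, subject to additional structural constraints.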

Citations

Answering Complex Questions Using Open Information Extraction

TLDR
This work develops a new inference model for Open IE that can work effectively with multiple short facts, noise, and the relational structure of tuples, and significantly outperforms a state-of-the-art structured solver on complex questions of varying difficulty.

Reasoning-Driven Question-Answering for Natural Language Understanding

TLDR
This thesis proposes a formulation for abductive reasoning in natural language and shows its effectiveness, especially in domains with limited training data, and presents the first formal framework for multi-step reasoning algorithms in the presence of a few important properties of language use.

Answering Science Exam Questions Using Query Rewriting with Background Knowledge

TLDR
This work presents a system that rewrites a given question into queries used to retrieve supporting text from a large corpus of science-related text, and shows that it outperforms several strong baselines on the ARC dataset.

Question Answering as Global Reasoning Over Semantic Abstractions

TLDR
This work presents the first system that reasons over a wide range of semantic abstractions of the text, which are derived using off-the-shelf, general-purpose, pre-trained natural language modules such as semantic role labelers, coreference resolvers, and dependency parsers.

Improving Retrieval-Based Question Answering with Deep Inference Models

TLDR
The proposed two-step model outperforms the best retrieval-based solver by over 3% in absolute accuracy and can answer both simple factoid questions and more complex questions that require reasoning or inference.

Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge

TLDR
A new question set, text corpus, and baselines assembled to encourage AI research in advanced question answering constitute the AI2 Reasoning Challenge (ARC), which requires far more powerful knowledge and reasoning than previous challenges such as SQuAD or SNLI.

QASC: A Dataset for Question Answering via Sentence Composition

TLDR
This work presents a multi-hop reasoning dataset, Question Answering via Sentence Composition (QASC), that requires retrieving facts from a large corpus and composing them to answer a multiple-choice question, and provides annotation for supporting facts as well as their composition.

Multi-hop Inference for Sentence-level TextGraphs: How Challenging is Meaningfully Combining Information for Science Question Answering?

TLDR
This work empirically characterizes the difficulty of building or traversing a graph of sentences connected by lexical overlap, by evaluating chance sentence aggregation quality through 9,784 manually-annotated judgements across knowledge graphs built from three free-text corpora.

KG^2: Learning to Reason Science Exam Questions with Contextual Knowledge Graph Embeddings

TLDR
This paper proposes a novel framework for answering science exam questions, which mimics the human solving process in an open-book exam and outperforms previous state-of-the-art QA systems.

Assertion-based QA with Question-Aware Open Information Extraction

TLDR
A new dataset called WebAssertions is introduced, which includes hand-annotated QA labels for 358,427 assertions in 55,960 web passages and shows that ABQA features significantly improve the accuracy on passage-based QA.
...

References

SHOWING 1-10 OF 51 REFERENCES

Exploring Markov Logic Networks for Question Answering

TLDR
A system that reasons with knowledge derived from textbooks, represented in a subset of first-order logic, called Praline, which demonstrates a 15% accuracy boost and a 10x reduction in runtime compared to other MLN-based methods, and comparable accuracy to word-based baseline approaches.

Open question answering over curated and extracted knowledge bases

TLDR
This paper presents OQA, the first approach to leverage both curated and extracted KBs, and demonstrates that it achieves up to twice the precision and recall of a state-of-the-art Open QA system.

Combining Retrieval, Statistics, and Inference to Answer Elementary Science Questions

TLDR
This paper describes an alternative approach that operates at three levels of representation and reasoning: information retrieval, corpus statistics, and simple inference over a semi-automatically constructed knowledge base, to achieve substantially improved results.

Natural language inference

TLDR
This dissertation explores a range of approaches to NLI, beginning with methods which are robust but approximate, and proceeding to progressively more precise approaches, and greatly extends past work in natural logic to incorporate both semantic exclusion and implicativity.

Natural language question answering over RDF: a graph data driven approach

TLDR
A semantic query graph is proposed to model the query intention of the natural language question in a structural way; based on this, RDF Q/A is reduced to a subgraph-matching problem, and the ambiguity of the natural language question is resolved at the time when matches of the query are found.

Scaling question answering to the web

TLDR
Mulder is introduced, which is believed to be the first general-purpose, fully-automated question-answering system available on the web, and its architecture is described, which relies on multiple search-engine queries, natural-language parsing, and a novel voting procedure to yield reliable answers coupled with high recall.

Information Extraction over Structured Data: Question Answering with Freebase

TLDR
It is shown that relatively modest information extraction techniques, when paired with a web-scale corpus, can outperform more sophisticated semantic parsing approaches by roughly 34% relative gain.

Automatic Construction of Inference-Supporting Knowledge Bases

TLDR
This paper describes work on automatically constructing an inferential knowledge base and applying it to a question-answering task, and discusses several challenges this approach poses along with the innovative, partial solutions that have been developed.

Semantic Parsing for Single-Relation Question Answering

TLDR
A semantic parsing framework based on semantic similarity for open-domain question answering (QA) is presented; it achieves higher precision across different recall points than the previous approach and improves F1 by 7 points.

A Linear Programming Formulation for Global Inference in Natural Language Tasks

TLDR
This work develops a linear programming formulation for this problem and evaluates it in the context of simultaneously learning named entities and relations, efficiently incorporating domain- and task-specific constraints at decision time and resulting in significant improvements in the accuracy and the "human-like" quality of the inferences.
...