Controlling Information Aggregation for Complex Question Answering

@inproceedings{Kwon2018ControllingIA,
  title={Controlling Information Aggregation for Complex Question Answering},
  author={Heeyoung Kwon and Harsh Trivedi and Peter Jansen and Mihai Surdeanu and Niranjan Balasubramanian},
  booktitle={ECIR},
  year={2018}
}
Complex question answering, the task of answering complex natural language questions that rely on inference, requires the aggregation of information from multiple sources. [...] Key Method: In particular, the paper develops unsupervised and supervised mechanisms to control random walks on Open Information Extraction (OIE) knowledge graphs. Empirical evaluation on an elementary science exam benchmark shows that the proposed methods enable effective aggregation even over larger graphs and demonstrate the …
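The key method above, steering random walks over an OIE knowledge graph so that aggregation stays on question-relevant nodes, can be sketched minimally. The toy graph, the overlap-based bias heuristic, and every name below are illustrative assumptions, not the paper's actual model:

```python
import random

# Toy Open IE knowledge graph: nodes are phrases, edges carry relation
# labels. Contents are illustrative only.
graph = {
    "metal": [("conducts", "electricity"), ("is a", "material")],
    "electricity": [("flows through", "wire")],
    "material": [("has", "properties")],
    "wire": [("made of", "metal")],
}

def controlled_walk(start, steps, bias_terms, rng):
    """Random walk that prefers edges whose target overlaps bias_terms.

    An unsupervised control heuristic in the spirit of the paper's idea
    (bias aggregation toward question-relevant nodes); the paper's
    actual scoring mechanisms differ.
    """
    node, visited = start, [start]
    for _ in range(steps):
        edges = graph.get(node, [])
        if not edges:
            break  # dead end: no outgoing tuples for this node
        # Double the weight of edges leading into the bias set.
        weights = [2.0 if tgt in bias_terms else 1.0 for _, tgt in edges]
        _, node = rng.choices(edges, weights=weights, k=1)[0]
        visited.append(node)
    return visited

path = controlled_walk("metal", 3, {"electricity"}, random.Random(0))
print(path)
```

A supervised variant would replace the fixed 2.0 weight with a learned edge score.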
Multi-hop Inference for Sentence-level TextGraphs: How Challenging is Meaningfully Combining Information for Science Question Answering?
TLDR
This work empirically characterizes the difficulty of building or traversing a graph of sentences connected by lexical overlap, by evaluating chance sentence aggregation quality through 9,784 manually-annotated judgements across knowledge graphs built from three free-text corpora.
Extracting Common Inference Patterns from Semi-Structured Explanations
TLDR
This work presents a prototype tool for identifying common inference patterns from corpora of semi-structured explanations, and uses it to successfully extract 67 inference patterns from a “matter” subset of standardized elementary science exam questions that span scientific and world knowledge.
Learning to Attend On Essential Terms: An Enhanced Retriever-Reader Model for Scientific Question Answering
TLDR
This paper proposes a retriever-reader model that learns to attend on essential terms during the question answering process and builds an essential-term-aware ‘retriever’ and an enhanced ‘reader’ to distinguish between essential terms and distracting words to predict the answer.
A Knowledge-based Approach for Answering Complex Questions in Persian
TLDR
This work handles multi-constraint and multi-hop questions by building their set of possible corresponding logical forms; Multilingual-BERT is then used to select the logical form that best describes the input complex question syntactically and semantically.
Learning to Attend On Essential Terms: An Enhanced Retriever-Reader Model for Open-domain Question Answering
TLDR
This paper proposes a retriever-reader model that learns to attend on essential terms during the question answering process, and builds an essential term selector which first identifies the most important words in a question, then reformulates the query and searches for related evidence.
Improving Question Answering with External Knowledge
TLDR
This work explores simple yet effective methods for exploiting two sources of unstructured external knowledge for subject-area QA, evaluated on multiple-choice question answering tasks in subject areas such as science.
Red Dragon AI at TextGraphs 2019 Shared Task: Language Model Assisted Explanation Generation
The TextGraphs-13 Shared Task on Explanation Regeneration (Jansen and Ustalov, 2019) asked participants to develop methods to reconstruct gold explanations for elementary science questions. Red…
ChiSquareX at TextGraphs 2020 Shared Task: Leveraging Pretrained Language Models for Explanation Regeneration
In this work, we describe the system developed by a group of undergraduates from the Indian Institutes of Technology, for the Shared Task at TextGraphs-14 on Multi-Hop Inference Explanation…

References

SHOWING 1-10 OF 19 REFERENCES
Answering Complex Questions Using Open Information Extraction
TLDR
This work develops a new inference model for Open IE that can work effectively with multiple short facts, noise, and the relational structure of tuples, and significantly outperforms a state-of-the-art structured solver on complex questions of varying difficulty.
Exploring Markov Logic Networks for Question Answering
TLDR
A system that reasons with knowledge derived from textbooks, represented in a subset of first-order logic, called Praline, which demonstrates a 15% accuracy boost and a 10x reduction in runtime as compared to other MLN-based methods, and comparable accuracy to word-based baseline approaches.
Learning What is Essential in Questions
TLDR
This paper develops a classifier that reliably identifies and ranks essential terms in questions and demonstrates that the notion of question term essentiality allows a state-of-the-art QA solver for elementary-level science questions to make better and more informed decisions, improving performance by up to 5%.
Higher-order Lexical Semantic Models for Non-factoid Answer Reranking
TLDR
This work introduces a higher-order formalism that allows all these lexical semantic models to chain direct evidence to construct indirect associations between question and answer texts, by casting the task as the traversal of graphs that encode direct term associations.
Combining Retrieval, Statistics, and Inference to Answer Elementary Science Questions
TLDR
This paper evaluates the methods on six years of unseen, unedited exam questions from the NY Regents Science Exam, and shows that the overall system's score is 71.3%, an improvement of 23.8% (absolute) over the MLN-based method described in previous work.
Framing QA as Building and Ranking Intersentence Answer Justifications
TLDR
A question answering approach for standardized science exams that both identifies correct answers and produces compelling human-readable justifications for why those answers are correct is proposed, and it is shown that information aggregation is key to addressing the information need in complex questions.
What’s in an Explanation? Characterizing Knowledge and Inference Requirements for Elementary Science Exams
TLDR
This work develops an explanation-based analysis of knowledge and inference requirements, which supports a fine-grained characterization of the challenges, and compares a retrieval and an inference solver on 212 questions.
Open Language Learning for Information Extraction
Open Information Extraction (IE) systems extract relational tuples from text, without requiring a pre-specified vocabulary, by identifying relation phrases and associated arguments in arbitrary…
Spinning Straw into Gold: Using Free Text to Train Monolingual Alignment Models for Non-factoid Question Answering
TLDR
It is shown that these alignment models trained directly from discourse structures imposed on free text improve performance considerably over an information retrieval baseline and a neural network language model trained on the same data.
Semi-supervised ranking on very large graphs with rich metadata
TLDR
This paper defines a semi-supervised learning framework for ranking of nodes on a very large graph and derives within this framework an efficient algorithm called Semi-Supervised PageRank, which can outperform previous algorithms on several tasks.
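For context, the classic unsupervised PageRank that the semi-supervised variant extends can be sketched as a power iteration over an adjacency dict. This is the textbook algorithm only; the paper's supervised weighting and metadata features are not reproduced, and the example graph is made up:

```python
def pagerank(adj, damping=0.85, iters=50):
    """Standard PageRank by power iteration.

    adj maps each node to a list of nodes it links to. Returns a dict
    of ranks summing to 1. Not the Semi-Supervised PageRank algorithm,
    just the unsupervised baseline it builds on.
    """
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in adj.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for u in outs:
                    new[u] += share
            else:  # dangling node: spread its mass uniformly
                for u in nodes:
                    new[u] += damping * rank[v] / n
        rank = new
    return rank

# Tiny graph: "c" is linked from both "a" and "b", so it ranks highest.
r = pagerank({"a": ["c"], "b": ["c"], "c": ["a"]})
```

The semi-supervised setting would learn edge weights from labeled node pairs instead of splitting rank mass uniformly over out-links.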