Learning Answer-Entailing Structures for Machine Comprehension

@inproceedings{Sachan2015LearningAS,
  title={Learning Answer-Entailing Structures for Machine Comprehension},
  author={Mrinmaya Sachan and Kumar Avinava Dubey and Eric P. Xing and Matthew Richardson},
  booktitle={ACL},
  year={2015}
}
Understanding open-domain text is one of the primary challenges in NLP. Machine comprehension evaluates a system's ability to understand text through a series of question-answering tasks on short pieces of text, such that the correct answer can be found only in the given text. For this task, we posit that there is a hidden (latent) structure that explains the relation between the question, the correct answer, and the text. We call this the answer-entailing structure; given the structure, the…
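The abstract's idea of a latent structure linking question, answer, and text can be illustrated with a minimal sketch (not the paper's implementation): each candidate answer is combined with the question into a hypothesis, and the latent structure is modeled here as a simple word-to-word alignment against each passage sentence. The function names and the toy exact-match similarity are assumptions for illustration only.

```python
# Illustrative sketch, assuming a toy exact-match similarity; the paper's
# actual model is a max-margin learner over richer entailing structures.

def similarity(w1, w2):
    """Toy word-match feature: 1.0 for an exact (case-insensitive) match."""
    return 1.0 if w1.lower() == w2.lower() else 0.0

def best_alignment_score(hypothesis, sentence):
    """The latent structure here is a word alignment: each hypothesis word
    greedily aligns to its best-matching word in the sentence."""
    return sum(max(similarity(h, w) for w in sentence) for h in hypothesis)

def answer_question(text_sentences, question, candidates):
    """Pick the candidate whose hypothesis best aligns with some sentence."""
    def score(answer):
        hypothesis = question + answer  # question words plus answer words
        return max(best_alignment_score(hypothesis, s) for s in text_sentences)
    return max(candidates, key=score)

sentences = [["Alice", "went", "to", "the", "park"],
             ["Bob", "stayed", "home"]]
question = ["Who", "went", "to", "the", "park"]
print(answer_question(sentences, question, [["Alice"], ["Bob"]]))  # ['Alice']
```

In the paper itself the alignment-like structure is latent and its scoring weights are learned with a max-margin objective, rather than fixed by hand as in this sketch.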

Citations

Machine Comprehension using Rich Semantic Representations
TLDR
A unified max-margin framework is presented that learns to find a latent mapping of the question-answer meaning representation graph onto the text meaning representation graph that explains the answer, and uses what it learns to answer questions on novel texts.
Robust Question Answering Through Sub-part Alignment
TLDR
This work model question answering as an alignment problem, decomposing both the question and context into smaller units based on off-the-shelf semantic representations, and align the question to a subgraph of the context in order to find the answer.
Attention-Based Convolutional Neural Network for Machine Comprehension
TLDR
This work comes up with a neural network framework, named hierarchical attention-based convolutional neural network (HABCNN), to address this task without any manually designed features, and shows that HABCNN outperforms prior deep learning approaches by a big margin.
Deep Learning in Question Answering
TLDR
This chapter briefly introduces recent advances in deep learning methods on two typical and popular question answering tasks: deep learning in question answering over knowledge bases (KBQA) and deep learning in machine comprehension (MC).
A Constituent-Centric Neural Architecture for Reading Comprehension
TLDR
A constituent-centric neural architecture is designed where the generation of candidate answers and their representation learning are both based on constituents and guided by the parse tree, which contributes to better representation learning of the candidate answers.
SQuAD: 100,000+ Questions for Machine Comprehension of Text
TLDR
A strong logistic regression model is built, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%).
A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data
TLDR
The Parallel-Hierarchical model sets a new state of the art for MCTest, outperforming previous feature-engineered approaches slightly and previous neural approaches by a significant margin (over 15% absolute).
Evidence Sentence Extraction for Machine Reading Comprehension
TLDR
This paper focuses on extracting evidence sentences that can explain or support the answers of multiple-choice MRC tasks, where the majority of answer options cannot be directly extracted from reference documents.
Self-Training for Jointly Learning to Ask and Answer Questions
TLDR
This work proposes a self-training method for jointly learning to ask as well as answer questions, leveraging unlabeled text along with labeled question answer pairs for learning, and demonstrates significant improvements over a number of established baselines.
Deep neural networks for identification of sentential relations
TLDR
This dissertation presents deep neural networks (DNNs) developed to handle the sentential relation identification problem, and proposes a dynamic "attentive pooling" to select phrase alignments of different intensities for different task categories.
...

References

Showing 1-10 of 45 references
MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text
TLDR
MCTest is presented, a freely available set of stories and associated questions intended for research on the machine comprehension of text that requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension.
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
TLDR
This work argues for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering, and classify these tasks into skill sets so that researchers can identify (and then rectify) the failings of their systems.
A Neural Network for Factoid Question Answering over Paragraphs
TLDR
This work introduces a recursive neural network model, qanta, that can reason over question text input by modeling textual compositionality and applies it to a dataset of questions from a trivia competition called quiz bowl.
Semantic Parsing on Freebase from Question-Answer Pairs
TLDR
This paper trains a semantic parser that scales up to Freebase and outperforms their state-of-the-art parser on the dataset of Cai and Yates (2013), despite not having annotated logical forms.
Question Answering Using Enhanced Lexical Semantic Models
TLDR
This work focuses on improving the performance using models of lexical semantic resources and shows that these systems can be consistently and significantly improved with rich lexical semantics information, regardless of the choice of learning algorithms.
Learning Question Classifiers
TLDR
A hierarchical classifier is learned that is guided by a layered semantic hierarchy of answer types, and eventually classifies questions into fine-grained classes.
A Phrase-Based Alignment Model for Natural Language Inference
TLDR
The MANLI system is presented, a new NLI aligner designed to address the alignment problem, which uses a phrase-based alignment representation, exploits external lexical resources, and capitalizes on a new set of supervised training data.
Performance Issues and Error Analysis in an Open-Domain Question Answering System
TLDR
The overall performance of a state-of-the-art Question Answering system depends on the depth of natural language processing resources and the tools used for answer finding.
Information Extraction over Structured Data: Question Answering with Freebase
TLDR
It is shown that relatively modest information extraction techniques, when paired with a webscale corpus, can outperform these sophisticated approaches by roughly 34% relative gain.
Factoid Question Answering over Unstructured and Structured Web Content
TLDR
Two new, built-from-scratch, web-based question answering systems applied to the TREC 2005 Main Question Answering task, which use complementary models for answering questions over both structured and unstructured content on the Web, are described.
...