Learning Answer-Entailing Structures for Machine Comprehension

@inproceedings{Sachan2015LearningAS,
title={Learning Answer-Entailing Structures for Machine Comprehension},
author={Mrinmaya Sachan and Kumar Avinava Dubey and Eric P. Xing and Matthew Richardson},
booktitle={ACL},
year={2015}
}
• Published in ACL, 1 July 2015
• Computer Science
Understanding open-domain text is one of the primary challenges in NLP. Machine comprehension evaluates the system’s ability to understand text through a series of question-answering tasks on short pieces of text such that the correct answer can be found only in the given text. For this task, we posit that there is a hidden (latent) structure that explains the relation between the question, correct answer, and text. We call this the answer-entailing structure; given the structure, the…
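The answer-entailing structure described in the abstract can be sketched as a latent-variable scoring problem: the hypothesis formed from the question and a candidate answer is aligned against snippets of the text, and the latent choice of alignment is maximized out when scoring. The sketch below is purely illustrative and is not the authors' model; the word-overlap feature, the single weight, and the helper names are all simplifying assumptions.

```python
# Illustrative sketch only (not the paper's actual model): score each candidate
# answer by maximizing over a latent choice of supporting text snippet, in the
# spirit of a latent structural model. The feature function is a toy example.

def features(question, answer, snippet):
    # Toy feature: word overlap between the hypothesis (question + answer)
    # and the snippet. The real model uses far richer alignment features.
    hypothesis = set(question.lower().split()) | set(answer.lower().split())
    return len(hypothesis & set(snippet.lower().split()))

def score(question, answer, text_snippets, w=1.0):
    # The latent variable is the choice of snippet; it is maximized out.
    return max(w * features(question, answer, s) for s in text_snippets)

def predict(question, candidates, text_snippets):
    # Pick the candidate answer whose best latent structure scores highest.
    return max(candidates, key=lambda a: score(question, a, text_snippets))

snippets = ["Alice went to the park .", "Bob stayed home ."]
print(predict("Who went to the park ?", ["Alice", "Bob"], snippets))  # Alice
```

In the paper's actual max-margin setting, the weight vector is learned so that the correct answer's best structure outscores every incorrect answer's best structure by a margin; the sketch above only fixes the weights to show the inference step.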

Machine Comprehension using Rich Semantic Representations
• Computer Science
ACL
• 2016
A unified max-margin framework is presented that learns to find a latent mapping of the question-answer meaning representation graph onto the text meaning representation graph that explains the answer, and uses what it learns to answer questions on novel texts.
Attention-Based Convolutional Neural Network for Machine Comprehension
• Computer Science
ArXiv
• 2016
This work comes up with a neural network framework, named hierarchical attention-based convolutional neural network (HABCNN), to address this task without any manually designed features, and shows that HABCNN outperforms prior deep learning approaches by a big margin.
• Computer Science
• 2018
This chapter briefly introduces recent advances in deep learning methods on two typical and popular question answering tasks: deep learning in question answering over knowledge bases (KBQA) and deep learning in machine comprehension (MC).
A Constituent-Centric Neural Architecture for Reading Comprehension
• Computer Science
ACL
• 2017
A constituent-centric neural architecture is designed where the generation of candidate answers and their representation learning are both based on constituents and guided by the parse tree, which contributes to better representation learning of the candidate answers.
SQuAD: 100,000+ Questions for Machine Comprehension of Text
• Computer Science
EMNLP
• 2016
A strong logistic regression model is built, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%).
A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data
• Computer Science
ACL
• 2016
The Parallel-Hierarchical model sets a new state of the art for MCTest, outperforming previous feature-engineered approaches slightly and previous neural approaches by a significant margin (over 15% absolute).
Evidence Sentence Extraction for Machine Reading Comprehension
• Computer Science
CoNLL
• 2019
This paper focuses on extracting evidence sentences that can explain or support the answers of multiple-choice MRC tasks, where the majority of answer options cannot be directly extracted from reference documents.
• Computer Science, Education
NAACL
• 2018
This work proposes a self-training method for jointly learning to ask as well as answer questions, leveraging unlabeled text along with labeled question answer pairs for learning, and demonstrates significant improvements over a number of established baselines.
Deep neural networks for identification of sentential relations
This dissertation presents deep neural networks (DNNs) developed to handle the sentential relation identification problem, and proposes "dynamic attentive pooling" to select phrase alignments of different intensities for different task categories.
• Computer Science
ACL
• 2016
A thorough examination of this new reading comprehension task by creating over a million training examples by pairing CNN and Daily Mail news articles with their summarized bullet points, and showing that a neural network can be trained to give good performance on this task.

References

SHOWING 1-10 OF 45 REFERENCES
MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text
• Computer Science
EMNLP
• 2013
MCTest is presented, a freely available set of stories and associated questions intended for research on the machine comprehension of text that requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension.
• Computer Science
ICLR
• 2016
This work argues for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering, and classify these tasks into skill sets so that researchers can identify (and then rectify) the failings of their systems.
A Neural Network for Factoid Question Answering over Paragraphs
• Computer Science
EMNLP
• 2014
This work introduces a recursive neural network model, qanta, that can reason over question text input by modeling textual compositionality and applies it to a dataset of questions from a trivia competition called quiz bowl.
Semantic Parsing on Freebase from Question-Answer Pairs
• Computer Science
EMNLP
• 2013
This paper trains a semantic parser that scales up to Freebase and outperforms their state-of-the-art parser on the dataset of Cai and Yates (2013), despite not having annotated logical forms.
Question Answering Using Enhanced Lexical Semantic Models
• Computer Science
ACL
• 2013
This work focuses on improving the performance using models of lexical semantic resources and shows that these systems can be consistently and significantly improved with rich lexical semantics information, regardless of the choice of learning algorithms.
Learning Question Classifiers
• Computer Science
COLING
• 2002
A hierarchical classifier is learned that is guided by a layered semantic hierarchy of answer types, and eventually classifies questions into fine-grained classes.
A Phrase-Based Alignment Model for Natural Language Inference
• Computer Science
EMNLP
• 2008
The MANLI system is presented, a new NLI aligner designed to address the alignment problem, which uses a phrase-based alignment representation, exploits external lexical resources, and capitalizes on a new set of supervised training data.
Information Extraction over Structured Data: Question Answering with Freebase
• Computer Science
ACL
• 2014
It is shown that relatively modest information extraction techniques, when paired with a web-scale corpus, can outperform these sophisticated approaches by roughly 34% relative gain.
Factoid Question Answering over Unstructured and Structured Web Content
• Computer Science
TREC
• 2005
Two new, built-from-scratch, web-based question answering systems applied to the TREC 2005 Main Question Answering task, which use complementary models of answering questions over both structured and unstructured content on the Web, are described.
Relation Alignment for Textual Entailment Recognition
• Computer Science
TAC
• 2009
An approach to textual entailment recognition is presented, in which inference is based on a shallow semantic representation of relations in the text and hypothesis of the entailment pair, and in which specialized knowledge is encapsulated in modular components with very simple interfaces.