Modeling Biological Processes for Reading Comprehension
@inproceedings{Berant2014ModelingBP,
  title={Modeling Biological Processes for Reading Comprehension},
  author={Jonathan Berant and Vivek Srikumar and Pei-Chun Chen and Abby Vander Linden and Brittany Harding and Brad Huang and Peter Clark and Christopher D. Manning},
  booktitle={Conference on Empirical Methods in Natural Language Processing},
  year={2014}
}
Machine reading calls for programs that read and understand text, but most current work only attempts to extract facts from redundant web-scale corpora. […] To answer the questions, we first predict a rich structure representing the process in the paragraph. Then, we map the question to a formal query, which is executed against the predicted structure. We demonstrate that answering questions via predicted structures substantially improves accuracy over baselines that use shallower representations.
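The predict-then-query idea in the abstract can be made concrete with a minimal sketch: represent the predicted process as a small graph of events and typed relations, translate a question into a simple formal query over that graph, and answer by executing the query. The `ProcessGraph` class, the relation labels, and the query format below are illustrative assumptions for exposition, not the paper's actual representation or code.

```python
# Illustrative sketch only: a toy process structure and a formal query executed
# against it, mirroring the "predict a structure, then query it" pipeline
# described in the abstract. All names and relation labels are hypothetical.

class ProcessGraph:
    """A tiny process structure: events (nodes) and typed relations (edges)."""

    def __init__(self):
        self.events = set()
        self.edges = set()  # (source_event, relation, target_event)

    def add_relation(self, src, relation, tgt):
        self.events.update([src, tgt])
        self.edges.add((src, relation, tgt))

    def holds(self, src, relation, tgt):
        """Check whether a given relation between two events is present."""
        return (src, relation, tgt) in self.edges


def answer(graph, query, choices):
    """Pick the answer choice whose asserted relation matches the structure."""
    src, relation = query
    for choice in choices:
        if graph.holds(src, relation, choice):
            return choice
    return None


# Hypothetical structure predicted from a paragraph about photosynthesis.
g = ProcessGraph()
g.add_relation("light absorption", "enables", "electron transport")
g.add_relation("electron transport", "causes", "ATP synthesis")

# Question mapped to a formal query: "What does electron transport cause?"
print(answer(g, ("electron transport", "causes"),
             ["ATP synthesis", "light absorption"]))  # -> "ATP synthesis"
```

The point of the sketch is only the division of labor: structure prediction handles the paragraph once, and each question reduces to a cheap query over that structure rather than a fresh pass over the text.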
184 Citations
Building Dynamic Knowledge Graphs from Text using Machine Reading Comprehension
- Computer Science, ICLR
- 2019
A neural machine-reading model that recurrently constructs dynamic knowledge graphs for each step of the described procedure and uses them to track the evolving states of participant entities; the authors present evidence that these knowledge graphs help the model impose commonsense constraints on its predictions.
Multi Document Reading Comprehension
- Computer Science, Education, ArXiv
- 2022
Describes RE3QA, a model for multi-document reading comprehension comprising retriever, reader, and re-ranker networks that together fetch the best possible answer from a given set of passages.
Question Answering as Global Reasoning Over Semantic Abstractions
- Computer Science, AAAI
- 2018
This work presents the first system that reasons over a wide range of semantic abstractions of the text, which are derived using off-the-shelf, general-purpose, pre-trained natural language modules such as semantic role labelers, coreference resolvers, and dependency parsers.
SQuAD Reading Comprehension
- Computer Science
- 2018
Using the Stanford Question Answering Dataset (SQuAD), a reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, this work builds a reading comprehension model that attains a 75.2% F1 score and 65.0% Exact Match on the test set.
Machine Comprehension with Discourse Relations
- Computer Science, ACL
- 2015
This approach enables the model to benefit from discourse information without relying on explicit annotations of discourse structure during training, and demonstrates that the discourse aware model outperforms state-of-the-art machine comprehension systems.
Reading Comprehension with Graph-based Temporal-Causal Reasoning
- Computer Science, COLING
- 2018
This work generates event graphs from text based on dependencies and ranks answers by aligning event graphs, constrained by graph-based reasoning to ensure temporal and causal agreement.
SQuAD: 100,000+ Questions for Machine Comprehension of Text
- Computer Science, EMNLP
- 2016
A strong logistic regression model is built, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%).
Learning Knowledge Graphs for Question Answering through Conversational Dialog
- Computer Science, NAACL
- 2015
This work is the first to acquire knowledge for question-answering from open, natural language dialogs without a fixed ontology or domain model that predetermines what users can say.
Recent Trends in Natural Language Understanding for Procedural Knowledge
- Computer Science, 2019 International Conference on Computational Science and Computational Intelligence (CSCI)
- 2019
This paper provides an overview of work on procedural knowledge understanding and on information extraction, acquisition, and representation for procedures, aiming to promote discussion and a better understanding of procedural knowledge applications and future challenges.
ListReader: Extracting List-form Answers for Opinion Questions
- Computer Science, ArXiv
- 2021
ListReader is proposed, a neural extractive QA model for list-form answers that adopts a co-extraction setting able to extract either span- or sentence-level answers, allowing better applicability. Experimental results show that the model considerably outperforms various strong baselines.
References
Showing 1-10 of 42 references
Deep Read: A Reading Comprehension System
- Education, ACL
- 1999
Describes initial work on Deep Read, an automated reading comprehension system that accepts arbitrary text input (a story) and answers questions about it, with a baseline system that retrieves the sentence containing the answer 30-40% of the time.
Semantic Parsing on Freebase from Question-Answer Pairs
- Computer Science, EMNLP
- 2013
This paper trains a semantic parser that scales up to Freebase and outperforms their state-of-the-art parser on the dataset of Cai and Yates (2013), despite not having annotated logical forms.
COGEX: A Logic Prover for Question Answering
- Computer Science, NAACL
- 2003
The idea of automated reasoning applied to question answering is introduced and the feasibility of integrating a logic prover into a Question Answering system is shown.
Driving Semantic Parsing from the World’s Response
- Computer Science, CoNLL
- 2010
This paper develops two novel learning algorithms capable of predicting complex structures that rely only on a binary feedback signal from the context of an external world, and reformulates the semantic parsing problem to reduce the model's dependency on syntactic patterns, allowing the parser to scale better with less supervision.
Paraphrase-Driven Learning for Open Question Answering
- Computer Science, ACL
- 2013
This work demonstrates that it is possible to learn a semantic lexicon and linear ranking function without manually annotating questions; it automatically generalizes a seed lexicon and includes a scalable, parallelized perceptron parameter estimation scheme.
Learning for Semantic Parsing with Statistical Machine Translation
- Computer Science, NAACL
- 2006
It is shown that WASP performs favorably in terms of both accuracy and coverage compared to existing learning methods requiring a similar amount of supervision, and is more robust to variations in task complexity and word order.
Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars
- Computer Science, UAI
- 2005
A learning algorithm is described that takes as input a training set of sentences labeled with expressions in the lambda calculus and induces a grammar for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence.
Learning to Automatically Solve Algebra Word Problems
- Computer Science, ACL
- 2014
An approach for automatically learning to solve algebra word problems that reasons across sentence boundaries to construct and solve a system of linear equations, while simultaneously recovering an alignment of the variables and numbers to the problem text.
Machine Reading
- Computer Science, AAAI
- 2006
This paper investigates how to leverage advances in machine learning and probabilistic reasoning to understand text.
Identifying Relations for Open Information Extraction
- Computer Science, EMNLP
- 2011
Two simple syntactic and lexical constraints on binary relations expressed by verbs are introduced in the ReVerb Open IE system, which more than doubles the area under the precision-recall curve relative to previous extractors such as TextRunner and WOE^pos.