Procedural Reading Comprehension with Attribute-Aware Context Flow
Aida Amini, Antoine Bosselut, Bhavana Dalvi, Yejin Choi, Hannaneh Hajishirzi
Procedural texts often describe processes (e.g., photosynthesis and cooking) that happen over entities (e.g., light, food). This paper introduces an algorithm for procedural reading comprehension that translates the text into a general formalism representing processes as a sequence of transitions over entity attributes (e.g., location, temperature). Leveraging pre-trained language models, the model obtains entity-aware and attribute-aware representations of the text by joint prediction…
Time-Stamped Language Model: Teaching Language Models to Understand The Flow of Events
A Time-Stamped Language Model (TSLM) is proposed to encode event information in the LM architecture: a timestamp encoding enables pre-trained transformer-based language models developed for other QA benchmarks to be adapted to procedural text understanding.
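The timestamp-encoding idea can be sketched minimally: each token receives an extra embedding marking whether its step lies in the past, present, or future relative to the queried timestep. All names and dimensions below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden = 100, 8
token_emb = rng.normal(size=(vocab_size, hidden))
time_emb = rng.normal(size=(3, hidden))  # rows: 0=past, 1=current, 2=future

def encode(token_ids, step_of_token, query_step):
    """Sum each token embedding with a past/current/future timestamp embedding."""
    tags = np.where(step_of_token < query_step, 0,
                    np.where(step_of_token == query_step, 1, 2))
    return token_emb[token_ids] + time_emb[tags]

# Three tokens from steps 0, 1, 2, queried at step 1.
x = encode(np.array([4, 7, 9]), np.array([0, 1, 2]), query_step=1)
print(x.shape)  # (3, 8)
```

In the actual model this augmented input would feed a transformer encoder; the sketch only shows how a relative-time signal can be injected at the embedding layer.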
Tiered Reasoning for Intuitive Physics: Toward Verifiable Commonsense Language Understanding
Shane Storks, Qiaozi Gao, Yichi Zhang, Joyce Chai. ArXiv, 2021.
Large-scale, pre-trained language models (LMs) have achieved human-level performance on a breadth of language understanding tasks. However, evaluations based only on end-task performance shed little…
Factoring Statutory Reasoning as Language Understanding Challenges
Models for statutory reasoning are shown to benefit from the additional structure found in Prolog programs, improving on prior baselines, and the decomposition into subtasks facilitates finer-grained model diagnostics and clearer incremental progress.
FaVIQ: FAct Verification from Information-seeking Questions
This paper constructs a challenging, realistic, and large-scale fact verification dataset called FAVIQ, using information-seeking questions posed by real users who do not know the answer, which will serve as a challenging benchmark for natural language understanding and support future progress in professional fact checking.
Tracking entities in technical procedures - a new dataset and baselines
The paper describes how TechTrack can be used to advance research on understanding procedures from temporal texts, evaluates the performance of state-of-the-art models on the entity-tracking task, and finds that they fall well below human annotation performance.
Knowledge-Aware Procedural Text Understanding with Multi-Stage Training
A novel KnOwledge-Aware proceduraL text understAnding (KoaLa) model is proposed, which effectively leverages multiple forms of external knowledge for procedural text understanding and achieves state-of-the-art performance compared to various baselines.


Building Dynamic Knowledge Graphs from Text using Machine Reading Comprehension
A neural machine-reading model is presented that constructs dynamic knowledge graphs recurrently for each step of the described procedure and uses them to track the evolving states of participant entities; the paper presents some evidence that the model's knowledge graphs help it impose commonsense constraints on its predictions.
Tracking State Changes in Procedural Text: a Challenge Dataset and Models for Process Paragraph Comprehension
A new dataset and models are presented for comprehending paragraphs about processes, an important genre of text describing a dynamic world, and two new neural models are introduced that exploit alternative mechanisms for state prediction, in particular LSTM input encoding and span prediction.
Everything Happens for a Reason: Discovering the Purpose of Actions in Procedural Text
This work presents a new model (XPAD) that biases effect predictions towards those that explain more of the actions in the paragraph and are more plausible with respect to background knowledge, and extends an existing benchmark dataset for procedural text comprehension, ProPara, by adding the new task of explaining actions by predicting their dependencies.
Reasoning about Actions and State Changes by Injecting Commonsense Knowledge
This paper shows how the predicted effects of actions in the context of a paragraph can be improved in two ways: by incorporating global, commonsense constraints (e.g., a non-existent entity cannot be destroyed), and by biasing reading with preferences from large-scale corpora.
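The kind of global constraint mentioned here (a non-existent entity cannot be destroyed) can be illustrated with a toy validity table; the state and action names below are assumptions for illustration only, not the paper's label set.

```python
# Toy sketch of a global commonsense constraint on predicted state
# transitions: only some actions are valid given an entity's current state.
VALID_ACTIONS = {
    "nonexistent": {"create", "none"},            # cannot destroy/move what doesn't exist
    "exists": {"destroy", "move", "none"},
}

def violates_constraint(state, action):
    """Return True if the predicted action is impossible in the given state."""
    return action not in VALID_ACTIONS[state]

print(violates_constraint("nonexistent", "destroy"))  # True
print(violates_constraint("exists", "move"))          # False
```

In a neural system such a check would typically be applied as a hard filter or soft penalty over the model's per-step transition predictions.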
Reasoning Over Paragraph Effects in Situations
This work presents ROPES, a challenging benchmark for reading comprehension targeting Reasoning Over Paragraph Effects in Situations, and targets expository language describing causes and effects, as they have clear implications for new situations.
Bidirectional Attention Flow for Machine Comprehension
The BiDAF network is introduced, a multi-stage hierarchical process that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization.
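As a rough NumPy sketch of the attention-flow computation: a trilinear similarity between each context/query token pair feeds both context-to-query and query-to-context attention, and the results are concatenated with the context. Shapes and the random inputs are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bidaf_attention(H, U, w):
    """H: context (T, d); U: query (J, d); w: (3d,) trainable similarity weights.
    Returns the query-aware context representation G of shape (T, 4d)."""
    T, J = H.shape[0], U.shape[0]
    # Trilinear similarity S[t, j] = w . [h; u; h*u]
    S = np.empty((T, J))
    for t in range(T):
        for j in range(J):
            S[t, j] = w @ np.concatenate([H[t], U[j], H[t] * U[j]])
    # Context-to-query: attended query vector for each context token
    U_tilde = softmax(S, axis=1) @ U                 # (T, d)
    # Query-to-context: one attended context vector, tiled across T
    b = softmax(S.max(axis=1))                       # (T,)
    h_tilde = np.tile(b @ H, (H.shape[0], 1))        # (T, d)
    return np.concatenate([H, U_tilde, H * U_tilde, H * h_tilde], axis=1)

rng = np.random.default_rng(0)
G = bidaf_attention(rng.normal(size=(5, 4)), rng.normal(size=(3, 4)),
                    rng.normal(size=12))
print(G.shape)  # (5, 16)
```

Note that no summarization happens before G: every context position keeps its own query-aware vector, which downstream layers consume.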
Effective Use of Transformer Networks for Entity Tracking
This paper tests standard lightweight approaches for prediction with pre-trained transformers, finds that these approaches underperform even simple baselines, and shows that much stronger results can be attained by restructuring the input to guide the model to focus on a particular entity.
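The input-restructuring idea can be illustrated with a minimal template that prepends the target entity to the passage so the transformer attends to it throughout; the `[ENT]`/`[SEP]` markers and the template are hypothetical, not the paper's exact format.

```python
def entity_focused_input(entity, sentences, step):
    """Build an entity-conditioned input string for a given process step.

    Prepending the tracked entity (rather than encoding the passage once
    generically) steers the model's attention toward that entity.
    """
    context = " ".join(sentences[: step + 1])
    return f"[ENT] {entity} [SEP] {context}"

out = entity_focused_input("water", ["Pour water.", "Boil it."], 1)
print(out)  # [ENT] water [SEP] Pour water. Boil it.
```

One such string would be built per (entity, step) pair and fed to the transformer independently.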
The NarrativeQA Reading Comprehension Challenge
A new dataset and set of tasks are presented in which the reader must answer questions about stories by reading entire books or movie scripts, designed so that successfully answering the questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience.
Tracking Discrete and Continuous Entity State for Process Understanding
A structured neural architecture is proposed that reflects the dual nature of entity evolution, updating its hidden continuous representation at each step to contain relevant state information; the model is evaluated on QA tasks over process paragraphs in the ProPara dataset.
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
This work argues for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering, and classifies these tasks into skill sets so that researchers can identify (and then rectify) the failings of their systems.