Everything Happens for a Reason: Discovering the Purpose of Actions in Procedural Text

@article{Dalvi2019EverythingHF,
  title={Everything Happens for a Reason: Discovering the Purpose of Actions in Procedural Text},
  author={Bhavana Dalvi and Niket Tandon and Antoine Bosselut and Wen-tau Yih and Peter Clark},
  journal={ArXiv},
  year={2019},
  volume={abs/1909.04745}
}
Our goal is to better comprehend procedural text, e.g., a paragraph about photosynthesis, by not only predicting what happens, but why some actions need to happen before others. [...] Key Method: We present our new model (XPAD) that biases effect predictions towards those that (1) explain more of the actions in the paragraph and (2) are more plausible with respect to background knowledge.
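The biasing idea can be pictured as reranking candidate effect predictions: a base model score is adjusted by how many later actions the predicted effect helps explain and by how plausible the effect is under background knowledge. A minimal sketch of that scoring scheme, where the proxy functions, weights, and data are illustrative placeholders rather than the paper's actual components:

```python
# Illustrative sketch only: rerank candidate effects the way XPAD's
# objective is described, favoring effects that (1) explain later actions
# and (2) are plausible under background knowledge. The scoring functions
# below are toy placeholders, not the paper's learned model.

def explains(effect, later_actions):
    # Toy proxy: an effect "explains" an action if the action mentions
    # the entity the effect changed.
    return sum(effect["entity"] in a for a in later_actions)

def plausibility(effect, kb):
    # Toy proxy: look the (entity, change) pair up in a background KB.
    return kb.get((effect["entity"], effect["change"]), 0.0)

def rerank(candidates, later_actions, kb, alpha=1.0, beta=0.5, gamma=0.5):
    def score(c):
        return (alpha * c["model_score"]
                + beta * explains(c, later_actions)
                + gamma * plausibility(c, kb))
    return max(candidates, key=score)

candidates = [
    {"entity": "water", "change": "moved_to_leaf", "model_score": 0.6},
    {"entity": "water", "change": "destroyed",     "model_score": 0.7},
]
kb = {("water", "moved_to_leaf"): 0.9, ("water", "destroyed"): 0.1}
later = ["the water is used in the leaf to make sugar"]
print(rerank(candidates, later, kb))  # prefers the explanatory, plausible effect
```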

Citations

Procedural Reading Comprehension with Attribute-Aware Context Flow
TLDR
An algorithm for procedural reading comprehension is introduced by translating the text into a general formalism that represents processes as a sequence of transitions over entity attributes (e.g., location, temperature).
Recent Trends in Natural Language Understanding for Procedural Knowledge
TLDR
This paper provides an overview of work on procedural knowledge understanding and on information extraction, acquisition, and representation for procedures, to promote discussion and a better understanding of procedural knowledge applications and future challenges.
Relational Gating for "What If" Reasoning
TLDR
A novel relational gating network that learns to filter the key entities and relationships, and learns contextual and cross representations of both procedure and question, to find the answer to “What if...” questions.
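Relational gating in this spirit can be sketched as a learned sigmoid gate, conditioned on the question, that scales each entity or relation representation before further reasoning. A minimal PyTorch sketch; the layer names and wiring are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class RelationalGate(nn.Module):
    """Sketch: question-conditioned gate that softly filters entity/relation
    vectors. Illustrative wiring only, not the paper's exact network."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, 1)

    def forward(self, items, question):
        # items: (n, dim) entity/relation vectors; question: (dim,)
        q = question.expand(items.size(0), -1)
        g = torch.sigmoid(self.gate(torch.cat([items, q], dim=-1)))  # (n, 1)
        return g * items  # keep question-relevant items, suppress the rest

gate = RelationalGate(16)
out = gate(torch.randn(4, 16), torch.randn(16))
print(out.shape)  # torch.Size([4, 16])
```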
Time-Stamped Language Model: Teaching Language Models to Understand The Flow of Events
TLDR
A Time-Stamped Language Model (TSLM) is proposed to encode event information in the LM architecture by introducing a timestamp encoding, enabling pre-trained transformer-based language models from other QA benchmarks to be adapted to procedural text understanding.
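The timestamp idea is roughly: tag each token as past, current, or future relative to the step being asked about, and add a learned embedding for that tag to the token embedding. A minimal PyTorch sketch under that assumption; the wiring is illustrative, not TSLM's exact implementation:

```python
import torch
import torch.nn as nn

class TimestampEmbedding(nn.Module):
    """Sketch: tag each token past/current/future relative to the queried
    step and add a learned timestamp embedding. Illustrative only."""
    def __init__(self, hidden_size):
        super().__init__()
        self.time_emb = nn.Embedding(3, hidden_size)  # 0=past, 1=current, 2=future

    def forward(self, token_embeddings, step_ids, query_step):
        # step_ids: (batch, seq_len) sentence index of each token
        time_tag = (step_ids >= query_step).long() + (step_ids > query_step).long()
        return token_embeddings + self.time_emb(time_tag)

emb = TimestampEmbedding(hidden_size=8)
tokens = torch.randn(1, 6, 8)
steps = torch.tensor([[0, 0, 1, 1, 2, 2]])
out = emb(tokens, steps, query_step=1)  # tags: [0, 0, 1, 1, 2, 2]
```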
Factoring Statutory Reasoning as Language Understanding Challenges
TLDR
Models for statutory reasoning are shown to benefit from the additional structure found in Prolog programs, improving on prior baselines, and the decomposition into subtasks facilitates finer-grained model diagnostics and clearer incremental progress.
Towards Modeling Revision Requirements in wikiHow Instructions
TLDR
This work extends an existing resource of textual edits with a complementary set of approx. [...]
EXAMS: A Multi-subject High School Examinations Dataset for Cross-lingual and Multilingual Question Answering
TLDR
This work proposes EXAMS -- a new benchmark dataset for cross-lingual and multilingual question answering for high school examinations and performs various experiments with existing top-performing multilingual pre-trained models to show that EXAMS offers multiple challenges that require multilingual knowledge and reasoning in multiple domains.
Knowledge-Aware Procedural Text Understanding with Multi-Stage Training
TLDR
A novel KnOwledge-Aware proceduraL text understAnding (KoaLa) model is proposed, which effectively leverages multiple forms of external knowledge for procedural text understanding and achieves state-of-the-art performance in comparison to various baselines.

References

Showing 1-10 of 35 references
What Happened? Leveraging VerbNet to Predict the Effects of Actions in Procedural Text
TLDR
This work leverages VerbNet to build a rulebase of the preconditions and effects of actions, and uses it along with commonsense knowledge of persistence to answer questions about change in paragraphs describing processes.
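The rulebase idea amounts to mapping verbs to precondition/effect templates and instantiating them against a sentence's arguments, with persistence (no change) as the fallback. A toy sketch; the rules below are hand-written examples, not extracted from VerbNet:

```python
# Toy sketch of a verb rulebase of preconditions and effects applied to a
# procedural sentence. Rules are hand-written illustrations, not the
# actual VerbNet-derived rulebase from the paper.

RULES = {
    "move":    {"precond": "exists(agent)",      "effect": "location(agent) = dest"},
    "create":  {"precond": "not exists(patient)", "effect": "exists(patient)"},
    "destroy": {"precond": "exists(patient)",     "effect": "not exists(patient)"},
}

def apply_rule(verb, args):
    rule = RULES.get(verb)
    if rule is None:
        return None  # unknown verb: fall back to persistence (no change)
    effect = rule["effect"]
    for role, filler in args.items():
        effect = effect.replace(role, filler)  # instantiate role slots
    return effect

print(apply_rule("move", {"agent": "water", "dest": "leaf"}))
# -> "location(water) = leaf"
```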
Reasoning about Actions and State Changes by Injecting Commonsense Knowledge
TLDR
This paper shows how the predicted effects of actions in the context of a paragraph can be improved in two ways: by incorporating global, commonsense constraints (e.g., a non-existent entity cannot be destroyed), and by biasing reading with preferences from large-scale corpora.
Tracking State Changes in Procedural Text: a Challenge Dataset and Models for Process Paragraph Comprehension
TLDR
A new dataset for comprehending paragraphs about processes, an important genre of text describing a dynamic world, is presented, along with two new neural models that exploit alternative mechanisms for state prediction, in particular LSTM input encoding and span prediction.
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
TLDR
This work argues for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering, and classifies these tasks into skill sets so that researchers can identify (and then rectify) the failings of their systems.
Building Dynamic Knowledge Graphs from Text using Machine Reading Comprehension
TLDR
A neural machine-reading model is presented that constructs dynamic knowledge graphs recurrently for each step of the described procedure and uses them to track the evolving states of participant entities; the paper presents some evidence that the model’s knowledge graphs help it to impose commonsense constraints on its predictions.
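The recurrent knowledge-graph idea can be caricatured as updating a small entity-state map after each step and snapshotting it, so later questions are answered from the graph rather than the raw text. A toy sketch in which the update rule is a hand-written placeholder, whereas the paper learns it neurally:

```python
# Toy sketch of step-by-step entity-state tracking, in the spirit of
# building a dynamic knowledge graph per procedure step. The hand-written
# string heuristics stand in for the paper's learned neural updates.

def track_states(steps, entities):
    graph = {e: {"exists": True, "location": "unknown"} for e in entities}
    history = []
    for step in steps:
        for e in entities:
            if e in step:
                if "destroyed" in step or "used" in step:
                    graph[e]["exists"] = False
                elif " to " in step:
                    graph[e]["location"] = step.split(" to ")[-1].strip(". ")
        history.append({e: dict(s) for e, s in graph.items()})  # per-step snapshot
    return history

steps = ["water moves to the leaf", "the water is destroyed to make sugar"]
print(track_states(steps, ["water"]))
# step 1: water exists at "the leaf"; step 2: water no longer exists
```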
Bidirectional Attention Flow for Machine Comprehension
TLDR
The BIDAF network is introduced, a multi-stage hierarchical process that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization.
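The core of the attention-flow layer is a similarity matrix between context and query tokens, from which both context-to-query and query-to-context attention are derived and concatenated with the context. A compact NumPy sketch of that computation; the shapes and the trilinear similarity follow the paper's description, but the learned weight vector is fixed to ones here for simplicity:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bidaf_attention(H, U):
    """H: (T, d) context vectors, U: (J, d) query vectors.
    Returns the query-aware context representation G of shape (T, 4d)."""
    T, d = H.shape
    J = U.shape[0]
    # Trilinear similarity S[t, j] = w . [h; u; h*u]; w is learned in BiDAF,
    # fixed to all-ones here, which reduces to a sum over the concatenation.
    S = np.array([[np.concatenate([H[t], U[j], H[t] * U[j]]).sum()
                   for j in range(J)] for t in range(T)])
    A = softmax(S, axis=1) @ U               # context-to-query: (T, d)
    b = softmax(S.max(axis=1))               # query-to-context weights: (T,)
    q2c = np.tile(b @ H, (T, 1))             # (T, d), tiled across time steps
    return np.concatenate([H, A, H * A, H * q2c], axis=1)  # (T, 4d)

G = bidaf_attention(np.random.randn(5, 8), np.random.randn(3, 8))
print(G.shape)  # (5, 32)
```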
Learning Procedures from Text: Codifying How-to Procedures in Deep Neural Networks
TLDR
To identify these relationships, this paper proposes an end-to-end neural network architecture that selectively learns important procedure-specific relationships and outperforms existing entity-relationship extraction algorithms.
Creating Causal Embeddings for Question Answering with Minimal Supervision
TLDR
This work argues that a better approach is to look for answers that are related to the question in a relevant way, according to the information need of the question, which may be determined through task-specific embeddings, and implements causality as a use case.
Inducing Neural Models of Script Knowledge
TLDR
This work induces commonsense knowledge about prototypical sequences of events in the form of graphs, based on distributed representations of predicates and their arguments; these representations are then used to predict prototypical event orderings.
Modeling Biological Processes for Reading Comprehension
TLDR
This paper focuses on a new reading comprehension task that requires complex reasoning over a single document, and demonstrates that answering questions via predicted structures substantially improves accuracy over baselines that use shallower representations.