Temporal Reasoning on Implicit Events from Distant Supervision

Ben Zhou, Kyle Richardson, Qiang Ning, Tushar Khot, Ashish Sabharwal, Dan Roth
We propose TRACIE, a novel temporal reasoning dataset that evaluates the degree to which systems understand implicit events: events that are not mentioned explicitly in natural language text but can be inferred from it. This introduces a new challenge in temporal reasoning research, where prior work has focused on explicitly mentioned events. Human readers can infer implicit events via commonsense reasoning, resulting in a more comprehensive understanding of the situation and, consequently…


Time-Aware Language Models as Temporal Knowledge Bases
A simple technique for jointly modeling text with its timestamp is proposed; it improves memorization of seen facts from the training time period, as well as calibration on predictions about unseen facts from future time periods, and shows that models trained with temporal context can be efficiently “refreshed” as new data arrives.
Salience-Aware Event Chain Modeling for Narrative Understanding
This work introduces methods for extracting a principal chain of events from natural language text, by filtering away non-salient events and supportive sentences, and demonstrates the effectiveness of these methods at isolating critical event chains by comparing their effect on downstream tasks.
SituatedQA: Incorporating Extra-Linguistic Contexts into QA
This study introduces SituatedQA, an open-retrieval QA dataset where systems must produce the correct answer to a question given the temporal or geographical context, and shows that existing models struggle with producing answers that are frequently updated or from uncommon locations.
Reasoning with Transformer-based Models: Deep Learning, but Shallow Reasoning
2021
Recent years have seen impressive performance of transformer-based models on different natural language processing tasks. However, it is not clear to what degree transformers can reason on…
Think you have Solved Direct-Answer Question Answering? Try ARC-DA, the Direct-Answer AI2 Reasoning Challenge
The ARC-DA dataset is presented, a direct-answer (“open response”, “freeform”) version of the ARC (AI2 Reasoning Challenge) multiple-choice dataset, one of the first DA datasets of natural questions that often require reasoning, and where appropriate question decompositions are not evident from the questions themselves.
RESIN: A Dockerized Schema-Guided Cross-document Cross-lingual Cross-media Information Extraction and Event Tracking System
A new information extraction system is presented that can automatically construct temporal event graphs from a collection of news documents from multiple sources, multiple languages, and multiple data modalities.
VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer
VidLanKD is presented, a video-language knowledge distillation method for improving language understanding that achieves consistent improvements over text-only language models and vokenization models on several downstream language understanding tasks, including GLUE, SQuAD, and SWAG.
Conditional Generation of Temporally-ordered Event Sequences
This work presents a BART-based conditional generation model capable of capturing event co-occurrence as well as the temporality of event sequences; it can address both temporal ordering and event infilling, predicting new events that fit into a temporally ordered sequence of existing ones.


Reasoning about Actions and State Changes by Injecting Commonsense Knowledge
This paper shows how the predicted effects of actions in the context of a paragraph can be improved in two ways: by incorporating global, commonsense constraints (e.g., a non-existent entity cannot be destroyed), and by biasing reading with preferences from large-scale corpora.
Neural Module Networks for Reasoning over Text
This work extends neural module networks by introducing modules that reason over a paragraph of text, performing symbolic reasoning over numbers and dates in a probabilistic and differentiable manner, and proposes an unsupervised auxiliary loss to help extract arguments associated with the events in text.
Joint Constrained Learning for Event-Event Relation Extraction
This work proposes a joint constrained learning framework for modeling event-event relations that enforces logical constraints within and across multiple temporal and subevent relations by converting these constraints into differentiable learning objectives.
Temporal Common Sense Acquisition with Minimal Supervision
This work proposes a novel sequence modeling approach that exploits explicit and implicit mentions of temporal common sense, extracted from a large corpus, to build TacoLM, a temporal commonsense language model.
Question Answering as Global Reasoning Over Semantic Abstractions
This work presents the first system that reasons over a wide range of semantic abstractions of the text, which are derived using off-the-shelf, general-purpose, pre-trained natural language modules such as semantic role labelers, coreference resolvers, and dependency parsers.
A Structured Learning Approach to Temporal Relation Extraction
It is suggested that it is important to take dependencies into account while learning to identify temporal relations between events, and a structured learning approach is proposed to address this challenge.
Temporal Reasoning in Natural Language Inference
Five new natural language inference (NLI) datasets focused on temporal reasoning are introduced, and four existing datasets annotated for event duration and event ordering are recast into more than one million NLI examples.
Using Query Patterns to Learn the Duration of Events
This work describes and improves a supervised baseline that relies on event duration annotations, and shows how web queries for linguistic patterns can help learn the duration of events without labeled data, producing fine-grained duration judgments that surpass the supervised system.
Context-Aware Neural Model for Temporal Information Extraction
The GCL model has long-term memory and attention mechanisms to resolve irregular long-distance dependencies that regular RNNs such as LSTMs cannot recognize, and is the first model to use an NTM-like architecture to process information from the global context in discourse-scale natural text processing.
Joint Inference for Event Timeline Construction
An algorithmic approach is presented that jointly optimizes the temporal structure of a news article based on time intervals, by coupling local classifiers that predict associations and temporal relations between pairs of temporal entities with global constraints.