Temporal Reasoning on Implicit Events from Distant Supervision

@article{Zhou2020TemporalRO,
  title={Temporal Reasoning on Implicit Events from Distant Supervision},
  author={Ben Zhou and Kyle Richardson and Qiang Ning and Tushar Khot and Ashish Sabharwal and Dan Roth},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.12753}
}
We propose TRACIE, a novel temporal reasoning dataset that evaluates the degree to which systems understand implicit events—events that are not mentioned explicitly in natural language text but can be inferred from it. This introduces a new challenge in temporal reasoning research, where prior work has focused on explicitly mentioned events. Human readers can infer implicit events via commonsense reasoning, resulting in a more comprehensive understanding of the situation and, consequently… 

Citations

Conditional Generation of Temporally-ordered Event Sequences

A single model is proposed that addresses both temporal ordering (sorting given events into the order in which they occurred) and event infilling (predicting new events that fit into an existing temporally ordered sequence).
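
As a rough illustration of this single text-to-text framing, the sketch below encodes both tasks as prefixed input strings; the `order:`/`infill:` prefixes and the `<e>`/`<blank>` markers are invented for illustration, not the paper's actual markup.

# Minimal sketch: both tasks become string-to-string generation problems for
# one seq2seq model. Separators and task prefixes here are illustrative.
def ordering_example(shuffled_events):
    # Input for the ordering task: shuffled events; target is the sorted list.
    return "order: " + " <e> ".join(shuffled_events)

def infilling_example(ordered_events, blank_idx):
    # Input for the infilling task: an ordered sequence with one event masked.
    events = list(ordered_events)
    events[blank_idx] = "<blank>"
    return "infill: " + " <e> ".join(events)

print(ordering_example(["she ate dinner", "she cooked pasta"]))
print(infilling_example(["she cooked pasta", "she ate dinner"], 1))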

Time-Aware Language Models as Temporal Knowledge Bases

This work proposes a simple technique for jointly modeling text with its timestamp, which improves memorization of seen facts from the training time period as well as calibration on predictions about unseen facts from future time periods, and shows that models trained with temporal context can be efficiently "refreshed" as new data arrives.
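
A minimal sketch of the timestamp-conditioning idea, assuming it amounts to prepending a time prefix to each training document; the `year: ... text: ...` template is an illustrative guess, not necessarily the paper's exact format.

# Sketch: prepend a timestamp prefix so the language model can condition on
# time. The template below is an assumption for illustration.
def add_time_prefix(text: str, year: int) -> str:
    return f"year: {year} text: {text}"

# At inference, the same prefix lets us query the model "as of" a given year.
examples = [
    ("The president of the US is Barack Obama.", 2012),
    ("The president of the US is Joe Biden.", 2021),
]
train_corpus = [add_time_prefix(t, y) for t, y in examples]
print(train_corpus[0])  # year: 2012 text: The president of the US is ...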

Salience-Aware Event Chain Modeling for Narrative Understanding

This work introduces methods for extracting a principal chain of events from natural language text, by filtering away non-salient events and supportive sentences, and demonstrates the effectiveness of these methods at isolating critical event chains.

ALICE++: Adversarial Training for Robust and Effective Temporal Reasoning

An enhanced adversarial training algorithm is proposed for fine-tuning transformer-based language models (i.e., RoBERTa) and applied to the temporal reasoning task; it generates perturbations and adds them to a combination of layers during adversarial training.
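
A sketch of embedding-space adversarial fine-tuning in this spirit, using a single FGM-style step on the embedding output only (ALICE++ itself perturbs a combination of layers); `model` is assumed to accept `inputs_embeds` in the HuggingFace style, and `loss_fn` is a hypothetical task loss.

import torch

# Sketch: craft a small gradient-direction perturbation of the input
# embeddings and train on the clean loss plus the adversarial loss.
def adversarial_step(model, embeds, labels, loss_fn, epsilon=1e-2):
    embeds = embeds.detach().requires_grad_(True)
    loss = loss_fn(model(inputs_embeds=embeds), labels)
    grad, = torch.autograd.grad(loss, embeds, retain_graph=True)
    # Normalized FGM-style step in the gradient direction.
    delta = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    adv_loss = loss_fn(model(inputs_embeds=embeds + delta), labels)
    return loss + adv_loss  # backpropagate the combined objective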

SituatedQA: Incorporating Extra-Linguistic Contexts into QA

This study introduces SituatedQA, an open-retrieval QA dataset where systems must produce the correct answer to a question given the temporal or geographical context, and shows that existing models struggle with producing answers that are frequently updated or from uncommon locations.

Reasoning with Transformer-based Models: Deep Learning, but Shallow Reasoning

This survey paper discusses the performance of transformers on different reasoning tasks, including mathematical reasoning, commonsense reasoning, and logical reasoning.

Think you have Solved Direct-Answer Question Answering? Try ARC-DA, the Direct-Answer AI2 Reasoning Challenge

The ARC-DA dataset is presented, a direct-answer (“open response”, “freeform”) version of the ARC (AI2 Reasoning Challenge) multiple-choice dataset, one of the first DA datasets of natural questions that often require reasoning, and where appropriate question decompositions are not evident from the questions themselves.

VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer

This work presents VidLanKD, a video-language knowledge distillation method for improving language understanding, which achieves consistent improvements over text-only language models and vokenization models on several downstream language understanding tasks, including GLUE, SQuAD, and SWAG.
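
For orientation, here is a generic Hinton-style soft-label distillation loss; it is a stand-in illustration of knowledge distillation in general, not VidLanKD's specific video-to-text distillation objectives.

import torch.nn.functional as F

# Standard soft-label knowledge distillation: KL divergence between
# temperature-softened teacher and student distributions.
def distill_loss(student_logits, teacher_logits, T=2.0):
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T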

RESIN: A Dockerized Schema-Guided Cross-document Cross-lingual Cross-media Information Extraction and Event Tracking System

A new information extraction system is presented that automatically constructs temporal event graphs from collections of news documents spanning multiple sources, languages, and data modalities.

Mitigating Reporting Bias in Semi-supervised Temporal Commonsense Inference with Probabilistic Soft Logic

A novel neural-logic based Soft Logic Enhanced Event Temporal Reasoning (SLEER) model is proposed for acquiring unbiased TCS knowledge, in which complementary relationships among dimensions are explicitly represented as logic rules and modeled by t-norm fuzzy logics.
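
A small worked example of turning one rule into a differentiable objective with a t-norm relaxation; the Łukasiewicz implication and the example probabilities below are illustrative, not SLEER's exact rules.

import torch

# Sketch: relax the rule "if A then B" with the Łukasiewicz implication
# truth(a -> b) = min(1, 1 - a + b), then penalize low truth values.
def implication_loss(p_a: torch.Tensor, p_b: torch.Tensor) -> torch.Tensor:
    truth = torch.clamp(1.0 - p_a + p_b, max=1.0)
    return -torch.log(truth + 1e-12).mean()  # large when the rule is violated

p_a = torch.tensor([0.9])  # e.g. P(event lasts only minutes)
p_b = torch.tensor([0.2])  # e.g. P(event can occur many times a day)
print(implication_loss(p_a, p_b))  # high loss: the rule is being violated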

References

SHOWING 1-10 OF 55 REFERENCES

Reasoning about Actions and State Changes by Injecting Commonsense Knowledge

This paper shows how the predicted effects of actions in the context of a paragraph can be improved in two ways: by incorporating global, commonsense constraints (e.g., a non-existent entity cannot be destroyed), and by biasing reading with preferences from large-scale corpora.
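
A toy sketch of enforcing such a hard commonsense constraint when decoding state changes; the state and action vocabulary below is invented for illustration and is not the paper's formalism.

# Sketch: admissible state transitions encode the constraint that, e.g.,
# a non-existent entity cannot be destroyed or moved.
VALID = {
    ("nonexistent", "create"): "exists",
    ("exists", "destroy"): "nonexistent",
    ("exists", "move"): "exists",
}

def apply_action(state: str, action: str):
    key = (state, action)
    if key not in VALID:   # e.g. ("nonexistent", "destroy")
        return None        # constraint violated: prune this prediction
    return VALID[key]

print(apply_action("nonexistent", "destroy"))  # None -> inadmissible
print(apply_action("nonexistent", "create"))   # exists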

Neural Module Networks for Reasoning over Text

This work extends Neural module networks by introducing modules that reason over a paragraph of text, performing symbolic reasoning over numbers and dates in a probabilistic and differentiable manner, and proposing an unsupervised auxiliary loss to help extract arguments associated with the events in text.

Joint Constrained Learning for Event-Event Relation Extraction

This work proposes a joint constrained learning framework for modeling event-event relations that enforces logical constraints within and across multiple temporal and subevent relations by converting these constraints into differentiable learning objectives.
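
A sketch of how one such logical constraint (transitivity of "before") can be relaxed into a differentiable loss with a product t-norm; the specific relaxation and scores are illustrative, not the paper's exact formulation.

import torch

# Sketch: before(a,b) AND before(b,c) -> before(a,c), relaxed with the
# product t-norm. Probabilities come from a hypothetical pairwise classifier.
def transitivity_loss(p_ab, p_bc, p_ac):
    premise = p_ab * p_bc                   # t-norm conjunction of premises
    violation = torch.relu(premise - p_ac)  # premise held but conclusion didn't
    return violation.mean()

p_ab, p_bc, p_ac = torch.tensor([0.9]), torch.tensor([0.8]), torch.tensor([0.1])
print(transitivity_loss(p_ab, p_bc, p_ac))  # positive: constraint violated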

Temporal Common Sense Acquisition with Minimal Supervision

This work proposes a novel sequence modeling approach that exploits explicit and implicit mentions of temporal common sense, extracted from a large corpus, to build TacoLM, a temporal commonsense language model.

Question Answering as Global Reasoning Over Semantic Abstractions

This work presents the first system that reasons over a wide range of semantic abstractions of the text, which are derived using off-the-shelf, general-purpose, pre-trained natural language modules such as semantic role labelers, coreference resolvers, and dependency parsers.

A Structured Learning Approach to Temporal Relation Extraction

This work suggests that it is important to take dependencies into account while learning to identify temporal relations between events, and proposes a structured learning approach to address this challenge.

Temporal Reasoning in Natural Language Inference

Five new natural language inference (NLI) datasets focused on temporal reasoning are introduced and four existing datasets annotated for event duration and event ordering are recast into more than one million NLI examples.
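
A minimal sketch of the recasting step, assuming a simple duration-threshold template; the recast datasets in the paper use richer templates and label schemes.

# Sketch: recast an event-duration annotation into an NLI example by
# templating a hypothesis and deriving the entailment label from the label.
def recast_duration(sentence: str, event: str, seconds: float):
    hypothesis = f"The {event} lasted longer than one day."
    label = "entailment" if seconds > 86400 else "contradiction"
    return {"premise": sentence, "hypothesis": hypothesis, "label": label}

ex = recast_duration("They vacationed in Rome.", "vacation", 7 * 86400)
print(ex["label"])  # entailment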

Using Query Patterns to Learn the Duration of Events

This work describes and improves a supervised baseline that relies on event duration annotations, and shows how web queries for linguistic patterns can help learn the duration of events without labeled data, producing fine-grained duration judgments that surpass the supervised system.
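
A toy version of the pattern idea, counting duration expressions in a small local corpus instead of issuing real web queries; the pattern and corpus are illustrative.

import re
from collections import Counter

# Sketch: count how often an event co-occurs with duration expressions such
# as "ran for 30 minutes", then read off the dominant duration unit.
PATTERN = re.compile(r"ran for (?:about )?(\d+|\w+) (second|minute|hour|day)s?")

corpus = [
    "She ran for 30 minutes before breakfast.",
    "He ran for two hours along the river.",
    "They ran for 45 minutes at the track.",
]

counts = Counter(m.group(2) for line in corpus for m in PATTERN.finditer(line))
print(counts.most_common(1))  # [('minute', 2)] -> most likely duration unit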

Context-Aware Neural Model for Temporal Information Extraction

The GCL model has long-term memory and attention mechanisms to resolve irregular long-distance dependencies that regular RNNs such as LSTMs cannot capture, and is the first model to use an NTM-like architecture to process information from the global context in discourse-scale natural text processing.

Joint Inference for Event Timeline Construction

An algorithmic approach is presented that jointly optimizes the temporal structure of a news article based on time intervals, coupling local classifiers, which predict associations and temporal relations between pairs of temporal entities, with global constraints.
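
A brute-force sketch of joint inference over three pairwise decisions under a transitivity constraint (real systems formulate this as an ILP); the local-classifier scores are hypothetical.

from itertools import product

# Sketch: pick the globally best assignment of pairwise temporal relations
# subject to transitivity, by exhaustive search over three entity pairs.
RELS = ("before", "after")
scores = {  # hypothetical local-classifier probabilities per pair
    "ab": {"before": 0.9, "after": 0.1},
    "bc": {"before": 0.8, "after": 0.2},
    "ac": {"before": 0.3, "after": 0.7},  # locally wrong; fixed globally
}

def consistent(r_ab, r_bc, r_ac):
    # before(a,b) and before(b,c) entail before(a,c); symmetrically for after.
    if r_ab == r_bc:
        return r_ac == r_ab
    return True

best = max(
    (a for a in product(RELS, repeat=3) if consistent(*a)),
    key=lambda a: scores["ab"][a[0]] * scores["bc"][a[1]] * scores["ac"][a[2]],
)
print(best)  # ('before', 'before', 'before')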
...