Temporal Common Sense Acquisition with Minimal Supervision

@article{Zhou2020TemporalCS,
  title={Temporal Common Sense Acquisition with Minimal Supervision},
  author={Ben Zhou and Qiang Ning and Daniel Khashabi and Dan Roth},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.04304}
}
Temporal common sense (e.g., duration and frequency of events) is crucial for understanding natural language. However, its acquisition is challenging, partly because such information is often not expressed explicitly in text, and human annotation on such concepts is costly. This work proposes a novel sequence modeling approach that exploits explicit and implicit mentions of temporal common sense, extracted from a large corpus, to build TacoLM, a temporal common sense language model. Our method… 
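The TacoLM recipe hinges on turning naturally occurring temporal cues into masked-LM training signal. As a rough illustration, here is a minimal Python sketch that masks explicit duration and frequency mentions so a model could be trained to recover them; the regex patterns and the [MASK] token are illustrative assumptions, not the paper's actual extraction rules or vocabulary.

```python
import re

# Illustrative patterns for explicit temporal-common-sense cues
# (durations and frequencies). The paper's actual extraction rules
# are richer; these regexes are assumptions for the sketch.
TEMPORAL_CUES = [
    r"\bfor (?:about |around )?\d+ (?:seconds?|minutes?|hours?|days?|weeks?|months?|years?)\b",
    r"\bevery \d+ (?:minutes?|hours?|days?|weeks?|months?|years?)\b",
    r"\b(?:hourly|daily|weekly|monthly|yearly|annually)\b",
]

def mask_temporal_cues(sentence: str, mask_token: str = "[MASK]"):
    """Replace explicit duration/frequency mentions with a mask token,
    yielding (masked_sentence, targets) for masked-LM-style training."""
    targets = []
    masked = sentence
    for pattern in TEMPORAL_CUES:
        targets += [m.group(0) for m in re.finditer(pattern, masked, re.IGNORECASE)]
        masked = re.sub(pattern, mask_token, masked, flags=re.IGNORECASE)
    return masked, targets

print(mask_temporal_cues("She jogged for 30 minutes and does so daily."))
# -> ('She jogged [MASK] and does so [MASK].', ['for 30 minutes', 'daily'])
```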

Citations

Extraction of Common-Sense Relations from Procedural Task Instructions using BERT
TLDR
This work investigates whether common-sense information can directly be extracted from semi-structured text with an acceptable annotation effort and proposes a scoring function, based on the WordNet taxonomy, to match specific terms to more general ones, enabling a rich evaluation against a set of ground-truth relations.
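As a concrete sketch of the WordNet-based matching described above, the snippet below scores how well a specific extracted term fits a more general ground-truth term, using Wu-Palmer similarity over noun synsets via NLTK; the choice of similarity measure is an assumption, not necessarily the paper's exact scoring function.

```python
# Requires: pip install nltk; then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def taxonomy_score(specific: str, general: str) -> float:
    """Score how well `specific` matches the more general term `general`,
    taking the best Wu-Palmer similarity across noun synset pairs."""
    pairs = [(s, g) for s in wn.synsets(specific, pos=wn.NOUN)
                    for g in wn.synsets(general, pos=wn.NOUN)]
    return max((s.wup_similarity(g) or 0.0 for s, g in pairs), default=0.0)

print(taxonomy_score("screwdriver", "tool"))    # high: screwdriver IS-A tool
print(taxonomy_score("screwdriver", "feeling")) # low: unrelated branches
```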
Modality and Negation in Event Extraction
TLDR
This work presents an open-domain, lexicon-based event extraction system that captures various types of modality, which are valuable for Question Answering, Knowledge Graph construction, and Fact-checking tasks; the evaluation shows that the system is sufficiently strong to be used in downstream applications.
Open Temporal Relation Extraction for Question Answering
TLDR
This paper decomposes each question into a question event and an open temporal relation (OTR) that is neither pre-defined nor tied to timestamps, grounding the former in the context while sharing the representation of the latter across contexts.
Joint Constrained Learning for Event-Event Relation Extraction
TLDR
This work proposes a joint constrained learning framework for modeling event-event relations that enforces logical constraints within and across multiple temporal and subevent relations by converting these constraints into differentiable learning objectives.
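To make the constraint-to-objective conversion concrete, here is a minimal PyTorch sketch that relaxes a transitivity rule (BEFORE(a,b) AND BEFORE(b,c) implies BEFORE(a,c)) into a differentiable penalty; the product t-norm relaxation and the names below are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def transitivity_loss(p_ab: torch.Tensor, p_bc: torch.Tensor,
                      p_ac: torch.Tensor) -> torch.Tensor:
    """p_xy is the model's probability that event x is BEFORE event y.
    Relax "BEFORE(a,b) AND BEFORE(b,c) -> BEFORE(a,c)" with the product
    t-norm and penalize -log of the implication's soft truth value."""
    premise = p_ab * p_bc                        # soft AND
    implication = 1.0 - premise * (1.0 - p_ac)   # soft premise -> conclusion
    return -torch.log(torch.clamp(implication, min=1e-8))

# Large loss when the premise is confident but the conclusion is not.
print(transitivity_loss(torch.tensor(0.9), torch.tensor(0.8), torch.tensor(0.1)))
```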
Event-Centric Natural Language Processing
TLDR
This tutorial will provide the audience with a systematic introduction to knowledge representations of events, various methods for automated extraction, conceptualization, and prediction of events and their relations, and a wide range of NLU and commonsense understanding tasks that benefit from the aforementioned techniques.
Towards a Language Model for Temporal Commonsense Reasoning
TLDR
This work proposes an ensemble model for temporal commonsense reasoning that greatly outperforms the standard fine-tuning approach and strong baselines on the MC-TACO dataset.
What do Large Language Models Learn about Scripts?
TLDR
A pipeline-based script induction framework (SIF) is proposed that can generate good-quality event sequence descriptions (ESDs) for unseen scenarios and is demonstrated to yield substantial improvements over a fine-tuned LM, offering a new research direction for inducing script knowledge.
ECONET: Effective Continual Pretraining of Language Models for Event Temporal Reasoning
TLDR
A continual pre-training approach is presented that equips PTLMs with targeted knowledge about event temporal relations, with self-supervised learning objectives designed to recover masked-out event and temporal indicators and to discriminate sentences from their corrupted counterparts.
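A minimal sketch of how such pre-training data could be constructed, assuming a hand-picked set of temporal indicators (ECONET's actual lexicon and corruption scheme are richer): one masked copy for the recovery objective and one swapped-indicator copy for the discrimination objective.

```python
import random

# Assumed indicator set; ECONET's actual lexicon is larger.
TEMPORAL_INDICATORS = {"before", "after", "during", "while", "until", "since"}

def make_training_pair(tokens: list[str], rng: random.Random):
    """Return (masked_tokens, gold_indicator, corrupted_tokens) or None."""
    hits = [i for i, t in enumerate(tokens) if t.lower() in TEMPORAL_INDICATORS]
    if not hits:
        return None
    i = rng.choice(hits)
    gold = tokens[i]
    masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]      # recovery target: gold
    wrong = rng.choice(sorted(TEMPORAL_INDICATORS - {gold.lower()}))
    corrupted = tokens[:i] + [wrong] + tokens[i + 1:]      # label: corrupted
    return masked, gold, corrupted

rng = random.Random(0)
print(make_training_pair("He ate before he slept".split(), rng))
```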
TIMEDIAL: Temporal Commonsense Reasoning in Dialog
TLDR
This paper presents the first study to investigate pre-trained LMs for their temporal reasoning capabilities in dialogs by introducing a new task and a crowd-sourced English challenge set, TimeDial, and reveals that the models fail to reason about dialog context correctly; instead, they rely on shallow cues based on existing temporal patterns in context.
Improving Event Duration Prediction via Time-aware Pre-training
TLDR
This work introduces two effective models for duration prediction, which incorporate external knowledge by reading temporally related news sentences (time-aware pre-training), and demonstrates that these models are capable of duration prediction in the unsupervised setting, outperforming the baselines.

References

Showing 1-10 of 46 references
Improving Temporal Relation Extraction with a Globally Acquired Statistical Resource
TLDR
It is shown that existing temporal extraction systems can be improved via this resource, and that interesting statistics can be retrieved from it, potentially benefiting other time-aware tasks.
Ordinal Common-sense Inference
TLDR
This work describes a framework for extracting common-sense knowledge from corpora, which is then used to construct a dataset for this ordinal entailment task, and annotates subsets of previously established datasets via the ordinal annotation protocol in order to analyze how they differ from the constructed dataset.
A Structured Learning Approach to Temporal Relation Extraction
TLDR
It is suggested that it is important to take dependencies into account while learning to identify temporal relations between events, and a structured learning approach is proposed to address this challenge.
CogCompTime: A Tool for Understanding Time in Natural Language
TLDR
This paper introduces CogCompTime, a system that provides the two key functionalities of temporal expression understanding and temporal relation extraction, incorporates the most recent progress, achieves state-of-the-art performance, and is publicly available at http://cogcomp.org/page/publication_view/844.
Using Query Patterns to Learn the Duration of Events
TLDR
This work describes and improves a supervised baseline that relies on event duration annotations, and shows how web queries for linguistic patterns can help learn the duration of events without labeled data, producing fine-grained duration judgments that surpass the supervised system.
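The pattern-query idea lends itself to a compact sketch: count matches of templates like "<event> for <number> <unit>" and read a duration distribution off the counts. The tiny in-memory corpus and the single template below stand in for the paper's web queries and are assumptions of this sketch.

```python
import re
from collections import Counter

UNIT_SECONDS = {"second": 1, "minute": 60, "hour": 3600, "day": 86400}

def duration_votes(event: str, corpus: list[str]) -> Counter:
    """Tally normalized durations (in seconds) matched by the
    '<event> for <n> <unit>' pattern across the corpus."""
    pattern = re.compile(
        rf"{re.escape(event)}\w* for (?:about |around )?(\d+) (second|minute|hour|day)s?",
        re.IGNORECASE,
    )
    votes = Counter()
    for sentence in corpus:
        for count, unit in pattern.findall(sentence):
            votes[int(count) * UNIT_SECONDS[unit.lower()]] += 1
    return votes

corpus = ["I jogged for 30 minutes.", "She jogs for about 1 hour every day."]
print(duration_votes("jog", corpus))  # Counter({1800: 1, 3600: 1})
```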
Temporal Information Extraction by Predicting Relative Time-lines
TLDR
This work proposes a novel paradigm that directly predicts start and end points for events from the text, constituting a time-line without going through the intermediate step of predicting temporal relations as in earlier work.
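A minimal PyTorch sketch of the time-line idea: predict a real-valued start point and a non-negative duration per event, and supervise the induced points with margin losses derived from pairwise annotations. The encoder dimensions, margin, and names are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TimelineHead(nn.Module):
    """Map an event representation to (start, end) on a relative time-line."""
    def __init__(self, hidden: int):
        super().__init__()
        self.proj = nn.Linear(hidden, 2)  # raw (start, duration) scores

    def forward(self, event_repr: torch.Tensor):
        start, dur = self.proj(event_repr).unbind(-1)
        return start, start + F.softplus(dur)  # softplus keeps end > start

def before_loss(end_a: torch.Tensor, start_b: torch.Tensor,
                margin: float = 0.1) -> torch.Tensor:
    """Hinge loss from a BEFORE(a, b) annotation: a should end before b starts."""
    return torch.relu(end_a - start_b + margin)

head = TimelineHead(hidden=8)
start_a, end_a = head(torch.randn(8))
start_b, end_b = head(torch.randn(8))
print(before_loss(end_a, start_b))
```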
KnowSemLM: A Knowledge Infused Semantic Language Model
TLDR
The proposed method, KnowSemLM, infuses this knowledge into a semantic LM by joint training and inference, and is shown to be effective on both the event cloze test and story/referent prediction tasks.
Deep Contextualized Word Representations
TLDR
A new type of deep contextualized word representation is introduced that models both complex characteristics of word use and how these uses vary across linguistic contexts, allowing downstream models to mix different types of semi-supervision signals.
Extracting implicit knowledge from text
TLDR
This work considers the extraction of knowledge that is conveyed implicitly, both within everyday texts and queries posed to internet search engines, and shows that a significant amount of general knowledge can be gleaned based on how people talk about the world.
Fine-Grained Temporal Relation Extraction
TLDR
A novel semantic framework for modeling temporal relations and event durations that maps pairs of events to real-valued scales is presented, and the efficacy of a transfer-learning approach for predicting categorical relations is shown.