Corpus ID: 233296195

ESTER: A Machine Reading Comprehension Dataset for Event Semantic Relation Reasoning

@article{Han2021ESTERAM,
  title={ESTER: A Machine Reading Comprehension Dataset for Event Semantic Relation Reasoning},
  author={Rujun Han and I-Hung Hsu and Jiao Sun and J.C.D. Bayl{\'o}n and Qiang Ning and Dan Roth and Nanyun Peng},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.08350}
}
Understanding how events are semantically related to each other is the essence of reading comprehension. Recent event-centric reading comprehension datasets focus mostly on event arguments or temporal relations. While these tasks partially evaluate machines’ ability of narrative understanding, human-like reading comprehension requires the capability to process event-based information beyond arguments and temporal reasoning. For example, to understand causality between events, we need to infer… 

Curriculum: A Broad-Coverage Benchmark for Linguistic Phenomena in Natural Language Understanding

TLDR
Curriculum is introduced as a new format of NLI benchmark for evaluation of broad-coverage linguistic phenomena and it is shown that this linguistic-phenomena-driven benchmark can serve as an effective tool for diagnosing model behavior and verifying model learning quality.

Learning Action Conditions from Instructional Manuals for Instruction Understanding

TLDR
This work proposes a weakly supervised approach to automatically construct large-scale training instances from online instructional manuals, and curates a densely human-annotated and validated dataset to study how well current NLP models can infer action-condition dependencies in instruction texts.

TVShowGuess: Character Comprehension in Stories as Speaker Guessing

TLDR
A new task for assessing machines’ skills of understanding fictional characters in narrative stories, TVShowGuess, takes the form of guessing the anonymous main characters based on the backgrounds of the scenes and the dialogues, and proposes new model architectures to support the contextualized encoding of long scene texts.

Learning Constraints and Descriptive Segmentation for Subevent Detection

TLDR
This work proposes an approach to learning and enforcing constraints that capture dependencies between subevent detection and EventSeg prediction, as well as guiding the model to make globally consistent inference using Rectifier Networks for constraint learning.

References


Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning

TLDR
This paper introduces Cosmos QA, a large-scale dataset of 35,600 problems that require commonsense-based reading comprehension, formulated as multiple-choice questions, and proposes a new architecture that improves over the competitive baselines.

CaTeRS: Causal and Temporal Relation Scheme for Semantic Annotation of Event Structures

TLDR
A novel semantic annotation framework, called the Causal and Temporal Relation Scheme (CaTeRS), is presented, which is unique in simultaneously capturing a comprehensive set of temporal and causal relations between events.

TORQUE: A Reading Comprehension Dataset of Temporal Ordering Questions

TLDR
TORQUE is introduced, a new English reading comprehension benchmark built on 3.2k news snippets with 21k human-generated questions querying temporal relationships, and results show that RoBERTa-large achieves an exact-match score of 51% on the test set of TORQUE, about 30% behind human performance.

Event Extraction as Machine Reading Comprehension

TLDR
This paper proposes a new learning paradigm for EE by explicitly casting it as a machine reading comprehension (MRC) problem, which includes an unsupervised question generation process that can transfer event schemas into a set of natural questions, followed by a BERT-based question-answering process to retrieve answers as EE results.

Richer Event Description: Integrating event coreference with temporal, causal and bridging annotation

TLDR
The annotation methodology for the Richer Event Descriptions corpus is described, which annotates entities, events, times, their coreference and partial coreference relations, and the temporal, causal and subevent relationships between the events.

Document-Level Event Argument Extraction by Conditional Generation

TLDR
A document-level neural event argument extraction model is proposed by formulating the task as conditional generation following event templates, yielding the first end-to-end zero-shot event extraction framework.

Event Extraction by Answering (Almost) Natural Questions

TLDR
This work introduces a new paradigm for event extraction by formulating it as a question answering (QA) task, which extracts the event arguments in an end-to-end manner and outperforms prior methods substantially.

Dense Event Ordering with a Multi-Pass Architecture

TLDR
New experiments on strongly connected event graphs that contain ∼10 times more relations per document than the TimeBank are presented and a shift away from the single learner to a sieve-based architecture that naturally blends multiple learners into a precision-ranked cascade of sieves is described.

A Corpus and Cloze Evaluation for Deeper Understanding of Commonsense Stories

TLDR
A new framework for evaluating story understanding and script learning: the 'Story Cloze Test', which requires a system to choose the correct ending to a four-sentence story, and a new corpus of 50k five-sentence commonsense stories, ROCStories, to enable this evaluation.

Weakly Supervised Subevent Knowledge Acquisition

TLDR
A weakly supervised approach to extract subevent relation tuples from text and build the first large-scale subevent knowledge base, which has been shown useful for discourse analysis and for identifying a range of event-event relations.