Corpus ID: 233289763

proScript: Partially Ordered Scripts Generation via Pre-trained Language Models

@article{Sakaguchi2021proScriptPO,
  title={proScript: Partially Ordered Scripts Generation via Pre-trained Language Models},
  author={Keisuke Sakaguchi and Chandra Bhagavatula and Ronan Joseph Le Bras and Niket Tandon and Peter Clark and Yejin Choi},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.08251}
}
Scripts (standardized event sequences describing typical everyday activities) have been shown to help understand narratives by providing expectations, resolving ambiguity, and filling in unstated information. However, to date they have proved hard to author or extract from text. In this work, we demonstrate for the first time that pre-trained neural language models (LMs) can be finetuned to generate high-quality scripts, at varying levels of granularity, for a wide range of everyday scenarios…
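
As a rough illustration of the setup the abstract describes, the sketch below shows how a fine-tuned seq2seq LM would be prompted to emit a partially ordered script. This is not the authors' released code: the checkpoint name, prompt format, and edge-list output convention are illustrative assumptions, and a vanilla checkpoint would need fine-tuning on scenario-to-script pairs first.

```python
# A minimal sketch, not the authors' released code: prompting a T5-style
# seq2seq LM for a partially ordered script. The checkpoint name, prompt
# format, and edge-list output convention are illustrative assumptions;
# a vanilla "t5-base" would need fine-tuning on scenario->script pairs
# before producing output like the comment below.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# Scenario plus desired number of steps (the granularity knob).
prompt = "scenario: bake a cake; steps: 7"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)

# After fine-tuning, one would hope for steps plus DAG edges, e.g.:
#   step0: preheat oven; step1: mix batter; ...; step0 -> step2; ...
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```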

Citations of this paper

Learning to repair: Repairing model output errors after deployment using a dynamic memory of feedback

TLDR
The goal is for an LM to continue to improve after deployment, without retraining, using feedback from the user; the approach pairs the LM with a corrector model trained to translate general feedback into specific edits that repair the model's output.
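
The TLDR describes the mechanism only at a high level; the following is a minimal sketch of the idea under loud assumptions: the memory is a plain list of (query, feedback) pairs, retrieval uses difflib string similarity, and the feedback-application step is a placeholder for the trained corrector model. All names here are hypothetical.

```python
# A minimal sketch of the growing-memory idea, under loud assumptions:
# the memory is a plain list of (query, feedback) pairs, retrieval uses
# difflib string similarity, and applying feedback is a stand-in for the
# trained corrector model described in the paper.
from difflib import SequenceMatcher

memory: list[tuple[str, str]] = []  # grows at deployment time, no retraining

def remember(query: str, feedback: str) -> None:
    memory.append((query, feedback))

def retrieve(query: str, threshold: float = 0.6) -> str | None:
    # Return feedback stored for the most similar past query, if close enough.
    def sim(entry: tuple[str, str]) -> float:
        return SequenceMatcher(None, query, entry[0]).ratio()
    best = max(memory, key=sim, default=None)
    return best[1] if best is not None and sim(best) >= threshold else None

def answer(query: str, base_model) -> str:
    output = base_model(query)
    feedback = retrieve(query)
    if feedback is not None:
        # The paper trains a corrector to edit the output given feedback;
        # appending the feedback is only a placeholder for that step.
        output = f"{output} [apply feedback: {feedback}]"
    return output
```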

What do Large Language Models Learn about Scripts?

TLDR
This work introduces the task of generating full event sequence descriptions (ESDs) given a scenario as a natural language prompt and proposes a pipeline-based script induction framework (SIF) which can generate good quality ESDs for unseen scenarios.

JARVIS: A Neuro-Symbolic Commonsense Reasoning Framework for Conversational Embodied Agents

TLDR
JARVIS, a neuro-symbolic commonsense reasoning framework for modular, generalizable, and interpretable conversational embodied agents, is proposed, which achieves state-of-the-art (SOTA) results on all three dialog-based embodied tasks.

Embodied Multi-Agent Task Planning from Ambiguous Instruction

TLDR
An embodied multi-agent task planning framework is proposed that uses external knowledge sources and dynamically perceived visual information to resolve ambiguous high-level instructions, dynamically allocates the decomposed tasks to multiple agents, and generates sub-goals to drive navigation.

Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning

TLDR
This work studies pre-trained language models that generate explanation graphs in an end-to-end manner and analyzes their ability to learn the structural constraints and semantics of such graphs and proposes simple yet effective ways of graph perturbations via node and edge edit operations that lead to structurally and semantically positive and negative graphs.
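
As a concrete (and deliberately simplified) illustration of the node and edge edit operations mentioned above, here is a sketch assuming networkx digraphs; the paper's actual rules for which edits yield structurally and semantically negative explanation graphs are richer than this random edit.

```python
# A deliberately simplified sketch of edge-edit perturbation, assuming
# networkx digraphs; the paper's rules for which edits yield semantically
# negative explanation graphs are richer than this random edit.
import random
import networkx as nx

def perturb_edges(graph: nx.DiGraph, rng: random.Random) -> nx.DiGraph:
    """Copy `graph`, delete one random edge, and add one random edge."""
    g = graph.copy()
    if g.number_of_edges() > 0:
        g.remove_edge(*rng.choice(list(g.edges)))
    nodes = list(g.nodes)
    if len(nodes) >= 2:
        u, v = rng.sample(nodes, 2)
        g.add_edge(u, v)  # may break structure, giving a negative example
    return g

g = nx.DiGraph([("rain", "wet ground"), ("wet ground", "slippery")])
negative = perturb_edges(g, random.Random(0))
print(sorted(negative.edges))
```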

Improving scripts with a memory of natural feedback

TLDR
A dynamic memory architecture with a growing memory of feedback about errors in the output is proposed, allowing users to correct errors directly through interaction, without retraining, by giving feedback on the model's output.

Interscript: A dataset for interactive learning of scripts through error feedback

TLDR
A new dataset, INTERSCRIPT, is presented, containing user feedback on a deployed model that generates complex everyday tasks, and two use-cases are posited that might significantly advance the state-of-the-art in interactive models.

Think about it! Improving defeasible reasoning by first modeling the question scenario.

TLDR
The CURIOUS system achieves a new state-of-the-art on three different defeasible reasoning datasets, illustrating that performance can be improved by guiding a system to “think about” a question and explicitly model the scenario, rather than answering reflexively.

Formulating Neural Sentence Ordering as the Asymmetric Traveling Salesman Problem

TLDR
This work proposes an alternate formulation of sentence ordering as the classic combinatorial optimization problem known as the Traveling Salesman Problem (TSP), which gracefully handles the presence of cycles and is more expressive since it takes into account real-valued constraint/edge scores rather than just the presence/absence of edges.
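
To make the formulation concrete: given real-valued pairwise scores where entry (i, j) says how strongly sentence i should precede sentence j, ordering reduces to finding the highest-scoring path visiting every sentence once. A brute-force sketch for small inputs follows; a real system would use a proper asymmetric TSP solver, and the score matrix here is made up.

```python
# Brute-force sketch for small n; a real system would use an asymmetric
# TSP solver. scores[i][j] is a made-up real-valued score for placing
# sentence i immediately before sentence j (asymmetric in general).
from itertools import permutations

def order_sentences(scores: list[list[float]]) -> tuple[int, ...]:
    n = len(scores)
    def path_score(perm: tuple[int, ...]) -> float:
        return sum(scores[perm[k]][perm[k + 1]] for k in range(n - 1))
    return max(permutations(range(n)), key=path_score)

scores = [
    [0.0, 0.9, 0.2],
    [0.1, 0.0, 0.8],
    [0.3, 0.2, 0.0],
]
print(order_sentences(scores))  # -> (0, 1, 2)
```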

References

SHOWING 1-10 OF 36 REFERENCES

RoBERTa: A Robustly Optimized BERT Pretraining Approach

TLDR
It is found that BERT was significantly undertrained and, with improved pretraining, can match or exceed the performance of every model published after it; the best model achieves state-of-the-art results on GLUE, RACE and SQuAD.

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks and achieves state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more.

Behind the Scenes of an Evolving Event Cloze Test

TLDR
It is argued that the narrative event cloze test has slowly and unknowingly been altered to accommodate LMs, and recommendations on how to return to the test's original intent are offered.

Machine-Assisted Script Curation

TLDR
Machine-Aided Script Curator automates portions of the script creation process with suggestions for event types, links to Wikidata, and sub-events that may have been forgotten.

A Crowdsourced Database of Event Sequence Descriptions for the Acquisition of High-quality Script Knowledge

TLDR
A large-scale crowdsourced collection of explicit linguistic descriptions of script-specific event sequences is presented, enriched with crowdsourced alignment annotation on a subset of the event descriptions, to be used in future work as seed data for automatic alignment of event descriptions (for example via clustering).

InScript: Narrative texts annotated with script information

TLDR
The InScript corpus consists of 1,000 stories centered around 10 different scenarios; it shows rich lexical variation and will serve as a unique resource for the study of the role of script knowledge in natural language processing.

A Corpus and Cloze Evaluation for Deeper Understanding of Commonsense Stories

TLDR
A new framework for evaluating story understanding and script learning is presented: the 'Story Cloze Test', which requires a system to choose the correct ending to a four-sentence story, along with a new corpus of 50k five-sentence commonsense stories, ROCStories, to enable this evaluation.
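
One common way to attempt this test (not necessarily the paper's own baseline) is to score each candidate ending by language-model loss and pick the more probable one; a minimal sketch with GPT-2, using a made-up story and endings:

```python
# A minimal sketch (not necessarily the paper's baseline): pick the
# candidate ending with the lower GPT-2 language-model loss when
# appended to the story. Story and endings here are made up.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def ending_loss(story: str, ending: str) -> float:
    ids = tokenizer(story + " " + ending, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

story = ("Tom bought flour and eggs. He mixed a batter. "
         "He preheated the oven. He poured the batter into a pan.")
endings = ["He baked a cake.", "He went scuba diving."]
print(min(endings, key=lambda e: ending_loss(story, e)))
```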

Learning Script Knowledge with Web Experiments

TLDR
A novel approach to unsupervised learning of the events that make up a script, along with constraints on their temporal ordering, is described; it builds a graph representation of the script's temporal structure using a multiple sequence alignment algorithm.
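
A stripped-down sketch of the temporal-ordering part of this idea: count how often event a precedes event b across collected sequences and keep majority directions as constraints. The paper additionally aligns paraphrased event descriptions via multiple sequence alignment, which this sketch skips (events are assumed already canonicalized), and the data and threshold below are illustrative.

```python
# A stripped-down sketch of inducing ordering constraints by counting:
# keep (a, b) when a precedes b in a clear majority of sequences. The
# sequences and the 0.8 threshold are illustrative; paraphrase clustering
# via multiple sequence alignment (as in the paper) is skipped here.
from collections import Counter
from itertools import combinations

def precedence_constraints(sequences, min_ratio=0.8):
    counts = Counter()
    for seq in sequences:
        for a, b in combinations(seq, 2):  # a occurs before b in seq
            counts[(a, b)] += 1
    constraints = set()
    for (a, b), n_ab in counts.items():
        n_ba = counts[(b, a)]
        if n_ab / (n_ab + n_ba) >= min_ratio:
            constraints.add((a, b))
    return constraints

seqs = [
    ["enter", "order", "eat", "pay"],
    ["enter", "order", "pay", "eat"],
    ["enter", "order", "eat", "pay"],
]
print(sorted(precedence_constraints(seqs)))  # eat/pay dropped as unordered
```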

Script Knowledge for Natural Language Understanding

Neural Language Modeling for Contextualized Temporal Graph Generation

TLDR
This paper uses existing IE/NLP tools to automatically generate a large quantity of system-produced document-graph pairs, and proposes a novel formulation of the contextualized graph generation problem as a sequence-to-sequence mapping task that outperforms the closest existing method by a large margin.