Corpus ID: 235898976

Wordcraft: a Human-AI Collaborative Editor for Story Writing

@article{Coenen2021WordcraftAH,
  title={Wordcraft: a Human-AI Collaborative Editor for Story Writing},
  author={Andy Coenen and Luke Davis and Daphne Ippolito and Emily Reif and Ann Yuan},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.07430}
}
As neural language models grow in effectiveness, they are increasingly being applied in real-world settings. However, these applications tend to be limited in the modes of interaction they support. In this extended abstract, we propose Wordcraft, an AI-assisted editor for story writing in which a writer and a dialog system collaborate to write a story. Our novel interface uses few-shot learning and the natural affordances of conversation to support a variety of interactions. Our editor provides…
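The interface described here relies on prompting rather than task-specific training. As a minimal sketch of how a conversational, few-shot story editor might assemble its requests (the example exchange and the `generate` function are hypothetical stand-ins, not Wordcraft's actual implementation):

```python
# Hypothetical sketch: few-shot, dialog-style prompting for a story editor.
# `generate` stands in for any text-in/text-out language model API.

FEW_SHOT_EXAMPLES = [
    # Invented example exchange demonstrating the desired behavior.
    ("Continue the story.",
     "The lighthouse keeper climbed the spiral stairs.",
     "At the top she found the lamp already lit, though she lived alone."),
]

def build_prompt(instruction: str, story_so_far: str) -> str:
    """Assemble a conversational prompt from example exchanges plus the user's request."""
    parts = [
        f"Writer: {inst}\nStory: {story}\nAI: {reply}"
        for inst, story, reply in FEW_SHOT_EXAMPLES
    ]
    parts.append(f"Writer: {instruction}\nStory: {story_so_far}\nAI:")
    return "\n\n".join(parts)

def suggest(instruction: str, story_so_far: str, generate) -> str:
    """Return the model's suggestion for the writer's current request."""
    return generate(build_prompt(instruction, story_so_far))
```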

Citations

CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities

This work argues that by curating and analyzing large interaction datasets, the HCI community can foster more incisive examinations of LMs’ generative capabilities, and presents CoAuthor, a dataset designed to reveal GPT-3’s capabilities in assisting creative and argumentative writing.

Machine-in-the-Loop Rewriting for Creative Image Captioning

A rewriting model is trained that modifies specified spans of text within the user’s original draft, introducing descriptive and figurative elements while letting the user retain control over the content.
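The span-level control described here can be pictured with a small, hypothetical sketch (`rewrite` stands in for the trained rewriting model; the interface is invented for illustration):

```python
# Hypothetical sketch: machine-in-the-loop span rewriting. Only the span the
# user marks is replaced; the rest of the draft stays untouched, so the user
# retains control over content. `rewrite` stands in for the trained model.

def rewrite_span(draft: str, start: int, end: int, rewrite) -> str:
    span = draft[start:end]
    new_span = rewrite(span, draft)  # model sees the span plus full-draft context
    return draft[:start] + new_span + draft[end:]
```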

The Detection of Machine Generated Text

This work describes a research program focused on advancing state-of-the-art NLG alongside thoughtful analysis of its limitations and ramifications; it finds that minor changes to the algorithms used to generate text can significantly impact how the output deviates from genuine human text.

Read, Revise, Repeat: A System Demonstration for Human-in-the-loop Iterative Text Revision

This work presents Read, Revise, Repeat (R3), a human-in-the-loop iterative text revision system that aims to achieve high-quality revisions with minimal human effort by reading model-generated revisions and user feedback, revising the document, and repeating the human-machine interaction.
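A rough sketch of such a read-revise-repeat cycle, with `propose_revisions`, `accepts`, and `apply_edit` as hypothetical stand-ins for R3's revision model and user interface:

```python
# Hypothetical sketch of a human-in-the-loop iterative revision cycle.
# `propose_revisions(doc)` stands in for the revision model; `accepts(edit)`
# stands in for the user's accept/reject feedback.

def read_revise_repeat(document: str, propose_revisions, accepts, apply_edit,
                       max_rounds: int = 5) -> str:
    for _ in range(max_rounds):
        edits = propose_revisions(document)          # read: model suggests edits
        accepted = [e for e in edits if accepts(e)]  # user feedback gates each edit
        if not accepted:
            return document                          # converged: nothing accepted
        for edit in accepted:
            document = apply_edit(document, edit)    # revise, then repeat
    return document
```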

Help me write a poem: Instruction Tuning as a Vehicle for Collaborative Poetry Writing

Recent work in training large language models (LLMs) to follow natural language instructions has opened up exciting opportunities for natural language interface design. Building on the prior success…

AI as an Active Writer: Interaction Strategies with Generated Text in Human-AI Collaborative Fiction Writing

This work presents a web-based human-AI collaborative writing tool that lets writers shorten, edit, summarize, and regenerate text produced by the AI; it finds that users took inspiration from unexpected machine-generated text and expected reduced fluency and coherence in the machine text when allowed to edit the output.

From Tool to Companion: Storywriters Want AI Writers to Respect Their Personal Values and Writing Strategies

Modern large-scale language models approach the quality of human-level writing. This promises the advent of AI writing companions performing AI-led writing under human control, surpassing traditional…

TaleBrush: Visual Sketching of Story Generation with Pretrained Language Models

Advances in text generation algorithms (e.g., GPT-3) have led to new kinds of human-AI story co-creation tools. However, it is difficult for authors to guide this generation and understand the…

LaMPost: Design and Evaluation of an AI-assisted Email Writing Prototype for Adults with Dyslexia

Surprisingly, it is found that participants’ awareness of the AI had no impact on their perception of the system, nor on their feelings of autonomy, expression, and self-efficacy when writing emails.

User or Labor: An Interaction Framework for Human-Machine Relationships in NLP

Through a systematic literature review and thematic analysis, this work presents an interaction framework for understanding human-machine relationships in NLP, and shows that the type of interaction is not fixed but can change across tasks as the relationship between the human and the machine develops.

References

Showing 1-10 of 12 references

Hierarchical Neural Story Generation

This work collects a large dataset of 300K human-written stories paired with writing prompts from an online forum, enabling hierarchical story generation in which the model first generates a premise and then transforms it into a passage of text.
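The premise-then-passage decomposition is straightforward to picture; a minimal sketch, assuming a generic `generate(prompt)` text-completion function rather than the paper's actual fusion model:

```python
# Minimal sketch of two-stage hierarchical story generation, assuming a
# generic `generate(prompt)` completion function (not the paper's model).

def hierarchical_story(writing_prompt: str, generate) -> str:
    # Stage 1: compress the writing prompt into a short premise.
    premise = generate(f"Prompt: {writing_prompt}\nPremise:")
    # Stage 2: expand the premise into a passage of story text.
    return generate(f"Premise: {premise}\nStory:")
```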

STORIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation

A dataset and evaluation platform built from STORIUM, an online collaborative storytelling community that contains 6K lengthy stories with fine-grained natural language annotations interspersed throughout each narrative, forming a robust source for guiding models.

CTRL: A Conditional Transformer Language Model for Controllable Generation

CTRL is released, a 1.63 billion-parameter conditional transformer language model, trained to condition on control codes that govern style, content, and task-specific behavior, providing more explicit control over text generation.
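The control-code mechanism amounts to conditioning generation on a special leading token; an illustrative (not verbatim) example:

```python
# Illustrative sketch of control-code conditioning as in CTRL: a leading code
# steers style and content. The exact code strings belong to the model's own
# vocabulary; these are shown for illustration only.

prompt_a = "Horror A knife rested on the kitchen table."
prompt_b = "Reviews A knife rested on the kitchen table."
# A model trained this way learns p(text | control_code), so the two prompts
# yield continuations in very different registers from identical story text.
```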

Story Realization: Expanding Plot Events into Sentences

An ensemble-based model that generates natural language guided by events is presented that generates more coherent and plausible stories than baseline approaches 1.

Language Models are Few-Shot Learners

GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic.
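"Few-shot" here means conditioning on worked examples placed in the prompt, with no gradient updates; a minimal illustration for the 3-digit arithmetic task mentioned above (the prompt format is an assumption, not the paper's exact template):

```python
# Few-shot prompting: demonstrations are placed in the context window and the
# model is asked to continue the pattern; its weights are never updated.
prompt = (
    "Q: What is 123 + 456?\nA: 579\n\n"
    "Q: What is 214 + 390?\nA: 604\n\n"
    "Q: What is 512 + 249?\nA:"
)
# A sufficiently capable model is expected to complete this with "761".
```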

Unsupervised Hierarchical Story Infilling

This work proposes a hierarchical model that first selects a set of rare words and then generates text conditioned on that set; the high-entropy task of picking rare words is relegated to a word-sampling model, so the second-stage model can achieve high fluency and coherence by searching for likely sentences without sacrificing diversity.
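A rough sketch of the two-stage idea, with `sample_rare_words` and `generate` as assumed stand-ins for the paper's word-sampling and sentence-generation models:

```python
# Rough sketch of two-stage infilling: pick rare words first, then search for
# a fluent sentence that uses them. Both callables are hypothetical stand-ins.

def infill(context: str, sample_rare_words, generate) -> str:
    rare_words = sample_rare_words(context)   # stage 1: high-entropy word choice
    anchors = ", ".join(rare_words)
    # Stage 2: low-entropy search for a fluent sentence using those words.
    return generate(f"Context: {context}\nUse the words: {anchors}\nSentence:")
```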

Prefix-Tuning: Optimizing Continuous Prompts for Generation

Prefix-tuning is proposed as a lightweight alternative to fine-tuning for natural language generation tasks; it keeps language model parameters frozen and instead optimizes a sequence of continuous task-specific vectors, called the prefix.
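A minimal PyTorch sketch of the core idea; shapes are illustrative, and note the paper injects prefixes into each layer's attention as activations rather than only at the input:

```python
import torch
import torch.nn as nn

# Minimal sketch of prefix-tuning: all language model weights are frozen and
# only a short sequence of continuous prefix vectors receives gradients.

class PrefixTuned(nn.Module):
    def __init__(self, lm: nn.Module, prefix_len: int, hidden_dim: int):
        super().__init__()
        self.lm = lm
        for p in self.lm.parameters():
            p.requires_grad = False                  # freeze the LM entirely
        # The only trainable parameters: prefix_len continuous vectors.
        self.prefix = nn.Parameter(torch.randn(prefix_len, hidden_dim) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, hidden_dim); the assumed `lm`
        # consumes embeddings directly.
        batch = token_embeddings.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        return self.lm(torch.cat([prefix, token_embeddings], dim=1))
```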

BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation

To systematically study and benchmark social biases in open-ended language generation, the Bias in Open-Ended Language Generation Dataset (BOLD) is introduced, a large-scale dataset that consists of 23,679 English text generation prompts for bias benchmarking across five domains: profession, gender, race, religion, and political ideology.

Towards a Human-like Open-Domain Chatbot

This work presents Meena, a multi-turn open-domain chatbot trained end-to-end on data mined and filtered from public domain social media conversations, and proposes a human evaluation metric called Sensibleness and Specificity Average (SSA), which captures key elements of a human-like multi-turn conversation.
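As described, the SSA metric is simply the mean of two per-response human ratings; a trivial illustration, with binary labels invented for the example:

```python
# SSA (Sensibleness and Specificity Average): the mean of the fraction of
# responses rated sensible and the fraction rated specific.

def ssa(sensible: list[int], specific: list[int]) -> float:
    return (sum(sensible) / len(sensible) + sum(specific) / len(specific)) / 2

print(ssa([1, 1, 0, 1], [1, 0, 0, 1]))  # -> 0.625
```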

Learning to Speak and Act in a Fantasy Text Adventure Game

This work introduces a large-scale crowdsourced text adventure game as a research platform for studying grounded dialogue, and describes the results of training state-of-the-art generative and retrieval models in this setting.