IGA: An Intent-Guided Authoring Assistant

@article{Sun2021IGAAI,
  title={IGA: An Intent-Guided Authoring Assistant},
  author={Simeng Sun and Wenlong Zhao and Varun Manjunatha and Rajiv Jain and Vlad I. Morariu and Franck Dernoncourt and Balaji Vasan Srinivasan and Mohit Iyyer},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.07000}
}
While large-scale pretrained language models have significantly improved writing assistance functionalities such as autocomplete, more complex and controllable writing assistants have yet to be explored. We leverage advances in language modeling to build an interactive writing assistant that generates and rephrases text according to fine-grained author specifications. Users provide input to our Intent-Guided Assistant (IGA) in the form of text interspersed with tags that correspond to specific… 

TaleBrush: Sketching Stories with Generative Pretrained Language Models

TaleBrush is introduced: a generative story ideation tool that uses line-sketching interactions with a GPT-based language model to control and make sense of a protagonist’s fortune in co-created stories, along with a reflection on how sketching interactions can facilitate the iterative human-AI co-creation process.

FRUIT: Faithfully Reflecting Updated Information in Text

The novel generation task of *faithfully reflecting updated information in text* (FRUIT) is introduced, in which the goal is to update an existing article given new evidence; the work shows that updating articles faithfully requires new capabilities of neural generation models and opens the door to many new applications.

Read, Revise, Repeat: A System Demonstration for Human-in-the-loop Iterative Text Revision

This work presents Read, Revise, Repeat (R3), a human-in-the-loop iterative text revision system that aims to achieve high-quality text revisions with minimal human effort by reading model-generated revisions and user feedback, revising documents, and repeating human-machine interactions.

Is This Abstract Generated by AI? A Research for the Gap between AI-generated Scientific Text and Human-written Scientific Text

There exists a “writing style” gap between AI-generated scientific text and human-written scientific text, which suggests that while AI has the potential to generate scientific content as accurate as human-written content, a gap remains in depth and overall quality.

References

Showing 1-10 of 51 references

Controllable Story Generation with External Knowledge Using Large-Scale Language Models

MEGATRON-CNTRL is a novel framework that uses large-scale language models and adds control to text generation by incorporating an external knowledge base and showcases the controllability of the model by replacing the keywords used to generate stories and re-running the generation process.

STORIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation

A dataset and evaluation platform are built from STORIUM, an online collaborative storytelling community containing 6K lengthy stories with fine-grained natural language annotations interspersed throughout each narrative, forming a robust source for guiding models.

Enabling Language Models to Fill in the Blanks

An infilling approach is presented that enables LMs to fill in entire sentences effectively in three different domains (short stories, scientific abstracts, and lyrics), and humans are shown to have difficulty identifying sentences infilled by the approach.
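
To make the infilling setup concrete, here is a minimal sketch, assuming the paper’s ILM-style data format in which masked spans are replaced with [blank] tokens and the model learns to emit the missing spans after a [sep] marker, each terminated by [answer]; the helper function name is hypothetical.

```python
# A minimal sketch of building an ILM-style infilling training example
# (assumed format: "context with [blank]s [sep] span1 [answer] span2 [answer]").

def make_infilling_example(text: str, spans: list[tuple[int, int]]) -> str:
    """Mask the given character spans with [blank] and append the
    masked spans after [sep], each followed by [answer]."""
    masked, answers, prev_end = [], [], 0
    for start, end in sorted(spans):
        masked.append(text[prev_end:start])
        masked.append("[blank]")
        answers.append(text[start:end] + " [answer]")
        prev_end = end
    masked.append(text[prev_end:])
    return "".join(masked) + " [sep] " + " ".join(answers)

print(make_infilling_example(
    "She ate leftover pasta for lunch.",
    [(8, 22), (27, 32)],  # mask "leftover pasta" and "lunch"
))
# -> "She ate [blank] for [blank]. [sep] leftover pasta [answer] lunch [answer]"
```

A plain language model trained on strings like this learns to generate the missing spans conditioned on bidirectional context, without any architectural changes.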

Plug and Play Language Models: A Simple Approach to Controlled Text Generation

The Plug and Play Language Model (PPLM) for controllable language generation is proposed, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM.
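
As a rough illustration of the PPLM mechanism, the sketch below runs gradient ascent on a toy attribute classifier’s log-probability to perturb a single hidden state before decoding. The real method perturbs the transformer’s key/value history across layers and adds KL and fluency terms, so the modules and step sizes here are stand-in assumptions, not the paper’s implementation.

```python
# A minimal sketch of a PPLM-style perturbation step (toy modules).
import torch

hidden_dim, vocab_size, num_attrs = 64, 100, 2
lm_head = torch.nn.Linear(hidden_dim, vocab_size)   # stand-in for the frozen LM's output layer
attr_clf = torch.nn.Linear(hidden_dim, num_attrs)   # tiny attribute classifier p(a | h)

h = torch.randn(hidden_dim)        # hidden state the LM produced at this decoding step
target_attr = 1                    # index of the attribute we want to steer toward
delta = torch.zeros_like(h, requires_grad=True)
step_size = 0.05

for _ in range(10):                # a few gradient steps per decoding step
    log_p_attr = torch.log_softmax(attr_clf(h + delta), dim=-1)[target_attr]
    loss = -log_p_attr + 0.01 * delta.norm() ** 2   # raise p(a | h+delta), keep delta small
    loss.backward()
    with torch.no_grad():
        delta -= step_size * delta.grad
        delta.grad.zero_()

next_token_logits = lm_head(h + delta.detach())     # decode from the perturbed state
```

The key design point is that only the small perturbation is optimized; the pretrained LM itself stays frozen, which is what makes the approach "plug and play".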

Reformulating Unsupervised Style Transfer as Paraphrase Generation

This paper reformulates unsupervised style transfer as a paraphrase generation problem, and presents a simple methodology based on fine-tuning pretrained language models on automatically generated paraphrase data that significantly outperforms state-of-the-art style transfer systems on both human and automatic evaluations.

POINTER: Constrained Progressive Text Generation via Insertion-based Generative Pre-training

POINTER (PrOgressive INsertion-based TransformER) is proposed: a simple yet novel insertion-based approach for hard-constrained text generation that achieves state-of-the-art performance on constrained text generation.
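
To show what progressive insertion looks like, here is a hand-written toy trace; the stages are invented for illustration, and the actual model predicts which token to insert into each gap at every round, stopping once it predicts no further insertions.

```python
# Hand-written toy trace of insertion-based progressive generation:
# start from hard keyword constraints and repeatedly insert tokens
# between existing ones until the sentence is complete.
stages = [
    "sources error measurement",                      # hard keyword constraints
    "sources of error measurement",                   # round 1: insert "of"
    "sources of error in measurement",                # round 2: insert "in"
    "sources of error in the measurement",            # round 3: insert "the"
    "sources of error in the measurement process .",  # round 4: finish the sentence
]
for stage in stages:
    print(stage)
```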

GeDi: Generative Discriminator Guided Sequence Generation

GeDi is proposed as an efficient method for using smaller LMs as generative discriminators to guide generation from large LMs, making them safer and more controllable; GeDi is found to give stronger controllability than the state-of-the-art method while achieving generation speeds more than 30 times faster.
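
The core reweighting step can be sketched in a few lines. This is a toy illustration with random distributions, assuming the paper’s Bayes-rule formulation in which two class-conditional LMs yield a per-token posterior over the attribute and the base LM’s distribution is reweighted by that posterior raised to a guidance exponent; the class prior and the full method’s sequence-level normalization are omitted.

```python
# A minimal sketch of GeDi-style guided next-token reweighting (toy data).
import numpy as np

vocab_size = 5
rng = np.random.default_rng(0)

p_lm = rng.dirichlet(np.ones(vocab_size))          # base LM's next-token distribution
p_desired = rng.dirichlet(np.ones(vocab_size))     # class-conditional LM: desired attribute
p_undesired = rng.dirichlet(np.ones(vocab_size))   # class-conditional LM: undesired attribute

# Bayes' rule over the two classes (uniform prior): per-token posterior
# that the continuation carries the desired attribute.
p_attr_given_tok = p_desired / (p_desired + p_undesired)

omega = 5.0                                        # guidance strength
weighted = p_lm * p_attr_given_tok ** omega
p_guided = weighted / weighted.sum()               # renormalized guided distribution
print(p_guided.round(3))
```

Because the class-conditional LMs only need one forward pass per step to score the whole vocabulary, this is far cheaper than rescoring candidate continuations with a discriminator, which is where the reported speedup comes from.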

Language Models are Unsupervised Multitask Learners

It is demonstrated that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.

PoMo: Generating Entity-Specific Post-Modifiers in Context

PoMo is built: a post-modifier dataset created automatically from news articles, reflecting a journalistic need to incorporate entity information relevant to a particular news event.

Hierarchical Neural Story Generation

This work collects a large dataset of 300K human-written stories paired with writing prompts from an online forum that enables hierarchical story generation, where the model first generates a premise, and then transforms it into a passage of text.
...