Context-Driven Satirical News Generation

@inproceedings{Horvitz2020ContextDrivenSN,
  title={Context-Driven Satirical News Generation},
  author={Zachary Horvitz and Nam Do and Michael L. Littman},
  booktitle={FIGLANG},
  year={2020}
}
While mysterious, humor likely hinges on an interplay of entities, their relationships, and cultural connotations. Motivated by the importance of context in humor, we consider methods for constructing and leveraging contextual representations in generating humorous text. Specifically, we study the capacity of transformer-based architectures to generate funny satirical headlines, and show that both language models and summarization models can be fine-tuned to regularly generate headlines that… 
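The approach described here amounts to fine-tuning pretrained transformers on corpora of satirical headlines. As a rough illustration only, not the authors' released code, the following is a minimal sketch of fine-tuning GPT-2 with the Hugging Face Trainer; the one-headline-per-line file "headlines.txt" is hypothetical.

```python
# Minimal sketch: fine-tune GPT-2 on satirical headlines (one per line).
# "headlines.txt" is a hypothetical data file, not a released resource.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Each line of the file becomes one training example.
dataset = load_dataset("text", data_files={"train": "headlines.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=64),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="satire-gpt2", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()
```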

Citations

Laughing Heads: Can Transformers Detect What Makes a Sentence Funny?

Clear evidence is found that a single attention head learns to recognize the words that make a test sentence humorous, even without access to this information at training time, yielding important insights into the mechanisms by which transformers recognize humor.
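The kind of probe this suggests can be sketched in a few lines: extract per-head attention weights from a transformer and inspect which input tokens a given head concentrates on. The model and the layer/head indices below are illustrative assumptions, not the head identified in the paper.

```python
# Minimal sketch: inspect which tokens a single attention head focuses on.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

sentence = "President vows to cut hair"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    attentions = model(**inputs).attentions  # tuple: one tensor per layer

layer, head = 9, 3                           # hypothetical indices
weights = attentions[layer][0, head]         # (seq_len, seq_len)
received = weights.sum(dim=0)                # attention each token receives
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, w in sorted(zip(tokens, received.tolist()), key=lambda x: -x[1]):
    print(f"{tok:>12s}  {w:.3f}")
```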

Survival of the Wittiest: Evolving Satire with Language Models

GALMET is a model that evolves text with a genetic algorithm, using BERT-like language models to propose edits that are guided by scores from another language model; while humans generally outperform the model, GALMET's generations are often preferred over human-edited headlines.
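GALMET's exact pipeline is not reproduced on this page; the following is a minimal sketch of the general idea it describes: mutate a headline by masking a word, let a BERT-like model propose replacements, and keep the candidates scored highest by another language model. Using GPT-2 loss as the guiding score is an assumption for illustration.

```python
# Minimal sketch of evolve-by-masked-edits, not GALMET's actual code.
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
lm_tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def score(text):
    """Negative LM loss: higher means the scorer prefers the text."""
    ids = lm_tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return -lm(ids, labels=ids).loss.item()

def mutate(headline):
    """Mask one random word and return the top fill-mask substitutions."""
    words = headline.split()
    i = random.randrange(len(words))
    masked = " ".join(words[:i] + [fill.tokenizer.mask_token] + words[i+1:])
    return [" ".join(words[:i] + [c["token_str"].strip()] + words[i+1:])
            for c in fill(masked, top_k=5)]

population = ["president vows to cut taxes"]
for _ in range(10):                       # a few generations of evolution
    candidates = population + [c for h in population for c in mutate(h)]
    population = sorted(candidates, key=score, reverse=True)[:3]
print(population[0])
```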

References

Showing 1–10 of 26 references

“President Vows to Cut Hair”: Dataset and Analysis of Creative Text Editing for Humorous Headlines

We introduce, release, and analyze a new dataset, called Humicroedit, for research in computational humor. Our publicly available data consists of regular English news headlines paired with versions of the same headlines that contain simple replacement edits designed to make them funny.
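A minimal sketch of reading Humicroedit-style rows and applying an edit follows, assuming the CSV layout of the public SemEval-2020 "Assessing Humor in Edited News Headlines" release (original headline with the editable span tagged as <word/>, an "edit" column, and a "meanGrade" funniness score); the file name is hypothetical.

```python
# Minimal sketch under an assumed CSV layout; column names may differ.
import csv
import re

def apply_edit(original, edit):
    """Replace the <.../>-tagged span with the replacement word."""
    return re.sub(r"<.+?/>", edit, original)

with open("humicroedit_train.csv", newline="") as f:   # hypothetical path
    for row in csv.DictReader(f):
        funny = apply_edit(row["original"], row["edit"])
        print(f'{row["meanGrade"]}: {funny}')
```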

Reverse-Engineering Satire, or "Paper on Computational Humor Accepted Despite Making Serious Advances"

Overall, this paper deepens the understanding of the syntactic and semantic structure of satirical news headlines and provides insights for building humor-producing systems.

Pun Generation with Surprise

An unsupervised approach to pun generation based on large amounts of raw (unhumorous) text and a surprisal principle is proposed, which posits that in a pun sentence there is a strong association between the pun word and the distant context, but a strong association between the alternative word and the immediate context.
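One way to make the surprisal principle concrete is to compare a candidate word's log-probability under the immediate context alone with its log-probability when the distant context is prepended. The sketch below uses GPT-2 as an off-the-shelf estimator, which is an assumption; the paper derives its statistics from raw text rather than from this model.

```python
# Minimal sketch: local vs. distant association via LM log-probabilities.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def logprob_of_next_word(context, word):
    """Sum of token log-probs of `word` continuing `context`."""
    ctx = tok(context, return_tensors="pt").input_ids
    tgt = tok(" " + word, return_tensors="pt").input_ids
    ids = torch.cat([ctx, tgt], dim=1)
    with torch.no_grad():
        logits = lm(ids).logits.log_softmax(dim=-1)
    # Each target token is predicted by the position just before it.
    return sum(logits[0, ctx.shape[1] + i - 1, t].item()
               for i, t in enumerate(tgt[0]))

distant = "Yesterday I accidentally swallowed some food coloring."
local = "The doctor says I'm fine, but I feel like I've"
for word in ["dyed", "died"]:   # pun word vs. alternative word
    print(word,
          logprob_of_next_word(local, word),                  # immediate
          logprob_of_next_word(distant + " " + local, word))  # with distant
```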

A Modular Architecture for Unsupervised Sarcasm Generation

Qualitative and quantitative performance analyses on the data reveal the system’s superiority over baselines built using known unsupervised statistical and neural machine translation and style transfer techniques.

Towards a General Framework for Humor Generation from Rated Examples

GOOFER, a general framework for computational humor that learns joke structures and parameterizations from rated example jokes, is proposed, and it is shown that this framework can not only generate this type of joke well, but also reveals the importance of specific humor metrics for template values.

Inside Jokes: Identifying Humorous Cartoon Captions

This work describes how judgments about the humorousness of different captions are acquired, builds a classifier to identify funnier captions automatically, uses it to find the best captions, and studies how its predictions could be used to significantly reduce the load on the cartoon contest's judges.
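A toy sketch, not the paper's system, of the rank-by-classifier idea: fit a simple funniness classifier on judged captions and use its scores to order new submissions. The captions and labels below are placeholders for real crowd judgments.

```python
# Minimal sketch: rank captions by a classifier's funniness probability.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

captions = ["I said a DRY martini", "Nice weather today",
            "The vet said it's just a phase", "Please be seated"]
labels = [1, 0, 1, 0]                # 1 = judged funnier (toy data)

ranker = make_pipeline(TfidfVectorizer(), LogisticRegression())
ranker.fit(captions, labels)

new = ["He followed me home, can we keep him?", "This is a chair"]
scores = ranker.predict_proba(new)[:, 1]
for caption, s in sorted(zip(new, scores), key=lambda x: -x[1]):
    print(f"{s:.2f}  {caption}")
```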

A Computational Model of Linguistic Humor in Puns

This work is the first, to our knowledge, to integrate a computational model of general language understanding with humor theory to quantitatively predict humor at a fine-grained level, and is presented as an example of a framework for applying models of language processing to understand higher-level linguistic and cognitive phenomena.

Text Summarization with Pretrained Encoders

This paper introduces a novel document-level encoder based on BERT, which is able to express the semantics of a document and obtain representations for its sentences, and proposes a new fine-tuning schedule that adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two.
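The two-optimizer schedule can be sketched directly: give the pretrained encoder a smaller learning rate than the randomly initialized decoder so the encoder is not disrupted while the decoder catches up. The tiny stand-in model and dummy data below are assumptions for illustration, not the paper's BertSum code, and the warmup schedules are omitted.

```python
# Minimal sketch: separate optimizers for encoder and decoder halves.
import torch

class Summarizer(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = torch.nn.Linear(768, 768)    # stands in for BERT
        self.decoder = torch.nn.Linear(768, 30522)  # randomly initialized

    def forward(self, x):
        return self.decoder(torch.relu(self.encoder(x)))

model = Summarizer()
# Smaller learning rate for the pretrained half, larger for the new half.
opt_enc = torch.optim.Adam(model.encoder.parameters(), lr=2e-5)
opt_dec = torch.optim.Adam(model.decoder.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

for step in range(3):                        # dummy training steps
    x = torch.randn(8, 768)
    y = torch.randint(0, 30522, (8,))
    loss = loss_fn(model(x), y)
    loss.backward()
    opt_enc.step(); opt_dec.step()
    opt_enc.zero_grad(); opt_dec.zero_grad()
```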

A Neural Approach to Pun Generation

This paper proposes neural network models for homographic pun generation that can generate puns without requiring any pun data for training, and shows they are able to generate homographic puns of good readability and quality.

Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond

This work proposes several novel models that address critical problems in summarization not adequately modeled by the basic architecture, such as modeling keywords, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time.
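The rare-word problem is typically handled with a switching generator-pointer: at each decoding step the model mixes a vocabulary distribution with a copy distribution over source positions, gated by a learned switch probability. A minimal sketch with illustrative shapes and random tensors standing in for real decoder states:

```python
# Minimal sketch of the generator-pointer switch; random stand-in tensors.
import torch

vocab_size, src_len = 10000, 12
p_vocab = torch.softmax(torch.randn(vocab_size), dim=0)   # generator dist
attn = torch.softmax(torch.randn(src_len), dim=0)         # copy dist
p_gen = torch.sigmoid(torch.randn(()))                    # learned switch

# Scatter the copy mass onto the vocabulary via the source token ids.
src_ids = torch.randint(0, vocab_size, (src_len,))
p_final = p_gen * p_vocab
p_final = p_final.index_add(0, src_ids, (1 - p_gen) * attn)
print(p_final.sum())   # ~1.0: still a valid probability distribution
```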