Hooks in the Headline: Learning to Generate Headlines with Controlled Styles

@inproceedings{jin-etal-2020-hooks,
  title={Hooks in the Headline: Learning to Generate Headlines with Controlled Styles},
  author={Di Jin and Zhijing Jin and Joey Tianyi Zhou and Lisa Orii and Peter Szolovits},
  booktitle={Annual Meeting of the Association for Computational Linguistics},
  year={2020}
}
Current summarization systems produce only plain, factual headlines, which fall short of practical needs for article exposure and memorability. We propose a new task, Stylistic Headline Generation (SHG), to enrich headlines with three style options (humor, romance, and clickbait), thus attracting more readers. With no style-specific article-headline pairs (only a standard headline summarization dataset and mono-style corpora), our method TitleStylist generates stylistic headlines by…

The Style-Content Duality of Attractiveness: Learning to Write Eye-Catching Headlines via Disentanglement

A Disentanglement-based Attractive Headline Generator (DAHG) that takes the polished document as input and generates a headline capturing the attractive content under the guidance of the attractive style.

News Headline Grouping as a Challenging NLU Task

A novel unsupervised Headline Generator Swap model is proposed for the task of Headline Grouping, achieving within 3 F1 points of the best supervised model; consistency tests on high-performing models find that they are not consistent in their predictions, revealing modeling limits of current architectures.

Text Style Transfer: A Review and Experiment Evaluation

A taxonomy is created to organize TST models, and a comprehensive summary of the state of the art is provided, which expands on current trends and offers new perspectives on exciting developments in the TST field.

Deep Learning for Text Attribute Transfer: A Survey.

This article collects all related academic works since the task first appeared in 2017, and shows that existing methods are based on combinations of several loss functions, each serving a certain goal.

Neural Language Generation: Formulation, Methods, and Evaluation

There is no standard way to assess the quality of text produced by these generative models, which is a serious bottleneck for progress in the field; this survey therefore provides an informative overview of the formulations, methods, and evaluation of neural natural language generation.

Stage-wise Stylistic Headline Generation: Style Generation and Summarized Content Insertion

This work proposes an end-to-end, stage-wise SHG model containing a style generation component and a content insertion component: the former generates style-relevant intermediate outputs, and the latter receives these outputs and inserts the summarized content.

Contrastive Learning enhanced Author-Style Headline Generation

Experimental results show that historical headlines of the same user can improve the headline generation significantly, and both the contrastive learning module and the two style features fusion methods can further boost the performance.

Noisy Pairing and Partial Supervision for Opinion Summarization

Experimental results show consistent improvements in automatic evaluation metrics, and qualitative analysis shows that the weakly supervised opinion summarization system can generate summaries that look more like those written by professional reviewers.

StylePTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer

A large-scale benchmark, StylePTB, is introduced with paired sentences undergoing 21 fine-grained stylistic changes spanning atomic lexical, syntactic, semantic, and thematic transfers of text, as well as compositions of multiple transfers, which allow modeling of fine-grained style changes as building blocks for more complex, high-level transfers.

Deep Learning for Text Style Transfer: A Survey

A systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017, is presented.

Extractive Headline Generation Based on Learning to Rank for Community Question Answering

An extractive headline generation method for CQA based on learning to rank is proposed, which extracts the most informative substring from each question as its headline.

Review Headline Generation with User Embedding

A review headline generation task that produces a short headline from a review post by a user is conducted, and it is argued that this task is more challenging than document summarization, because the headlines generated by users vary from person to person.

Zero-Shot Cross-Lingual Neural Headline Generation

This work proposes to deal with the cross-lingual neural headline generation (CNHG) under the zero-shot scenario, and lets a parameterized CNHG model (student model) mimic the output of a pretrained translation or headline generation model (teacher model).

From Neural Sentence Summarization to Headline Generation: A Coarse-to-Fine Approach

A coarse-to-fine approach is proposed, which first identifies the important sentences of a document using document summarization techniques, and then exploits a multi-sentence summarization model with hierarchical attention to leverage the important sentences for headline generation.
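The coarse (extractive) stage can be sketched as follows. This is a toy stand-in, not the paper's trained extractor: sentences are ranked by a crude corpus-frequency score and the top-k are kept, which would then feed the abstractive fine stage; the `rank_sentences` helper and the sample document are hypothetical.

```python
from collections import Counter

def rank_sentences(sentences, k=2):
    """Coarse stage (toy): score each sentence by the average corpus
    frequency of its words and keep the top-k candidates."""
    freq = Counter(w.lower() for s in sentences for w in s.split())
    def score(sent):
        toks = sent.split()
        return sum(freq[w.lower()] for w in toks) / max(len(toks), 1)
    return sorted(sentences, key=score, reverse=True)[:k]

doc = [
    "The team released a new model for headline generation.",
    "Training took two weeks on a large news corpus.",
    "The model for headline generation beats prior systems.",
]
# The fine stage would run a multi-sentence abstractive summarizer with
# hierarchical attention over these selected sentences to produce the headline.
important = rank_sentences(doc, k=2)
```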

Multiple News Headlines Generation using Page Metadata

This work uses a single model, based on the Pointer-Generator Network, to generate multiple headlines, using newspaper page metadata that can change headline generation behavior.

Question Headline Generation for News Articles

A novel dual-attention sequence-to-sequence model (DASeq2Seq) that employs a vocabulary gate over both generic and question vocabularies to better capture question patterns, and significantly outperforms state-of-the-art question generation and headline generation models.

Deep Headline Generation for Clickbait Detection

A novel solution named Stylized Headline Generation (SHG) is proposed that not only generates readable and realistic headlines to enlarge the original training data, but also helps improve the classification capacity of supervised learning.

Faithful to the Original: Fact Aware Neural Abstractive Summarization

This work argues that faithfulness is also a vital prerequisite for a practical abstractive summarization system and proposes a dual-attention sequence-to-sequence framework to force the generation conditioned on both the source text and the extracted fact descriptions.

Neural Headline Generation with Sentence-wise Optimization

This paper employs a minimum risk training strategy, which directly optimizes model parameters at the sentence level with respect to evaluation metrics and leads to significant improvements for headline generation.
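A minimal sketch of the minimum-risk objective: weight each sampled headline's loss (here 1 minus unigram overlap with the reference, a crude stand-in for 1 - ROUGE) by its renormalized model probability, and minimize the resulting expectation. The candidate headlines and log-probabilities below are invented for illustration.

```python
import math

def expected_risk(candidates, reference):
    """candidates: list of (headline, log_prob) pairs sampled from the
    model. Returns the probability-weighted risk that MRT minimizes."""
    m = max(lp for _, lp in candidates)
    weights = [math.exp(lp - m) for _, lp in candidates]  # renormalize over the sample
    z = sum(weights)

    ref = set(reference.lower().split())
    def risk(hyp):
        toks = hyp.lower().split()
        overlap = sum(1 for w in toks if w in ref) / max(len(toks), 1)
        return 1.0 - overlap  # 0 for a perfect headline

    return sum((w / z) * risk(h) for (h, _), w in zip(candidates, weights))

samples = [
    ("new model beats baselines", -0.5),   # likely and perfect: low risk
    ("old model fails", -2.0),             # unlikely and poor: small contribution
]
er = expected_risk(samples, "new model beats baselines")
```

Gradients of this expectation with respect to the model parameters push probability mass toward low-risk headlines, which is what makes the objective sentence-level rather than token-level.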

Style Transfer in Text: Exploration and Evaluation

This work proposes two novel evaluation metrics that measure two aspects of style transfer, transfer strength and content preservation, and shows that the proposed content preservation metric is highly correlated with human judgments.