From Neural Sentence Summarization to Headline Generation: A Coarse-to-Fine Approach

@inproceedings{Tan2017FromNS,
  title={From Neural Sentence Summarization to Headline Generation: A Coarse-to-Fine Approach},
  author={Jiwei Tan and Xiaojun Wan and J. Xiao},
  booktitle={IJCAI},
  year={2017}
}
Headline generation is a task of abstractive text summarization that previously suffered from the immaturity of natural language generation techniques. The recent success of neural sentence summarization models shows their capacity to generate informative, fluent headlines conditioned on selected recapitulative sentences. In this paper, we investigate the extension of sentence summarization models to the document headline generation task. The challenge is that extending the sentence summarization…
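
For readers new to the area, the coarse-to-fine idea from the abstract can be sketched in a few lines of Python. The function names are hypothetical, and a trivial overlap scorer and truncation stand in for the trained saliency model and neural summarizer of the actual paper:

# Hypothetical sketch of a coarse-to-fine headline pipeline: a coarse
# step selects a recapitulative sentence, then a fine step compresses
# it into a headline.
from collections import Counter

def coarse_select(document_sentences):
    """Pick the sentence sharing the most vocabulary with the document
    (a crude stand-in for a learned saliency model)."""
    doc_counts = Counter(w for s in document_sentences for w in s.lower().split())
    def score(sentence):
        words = set(sentence.lower().split())
        return sum(doc_counts[w] for w in words) / (len(words) or 1)
    return max(document_sentences, key=score)

def fine_summarize(sentence):
    """Placeholder for a trained neural sentence-summarization model."""
    return " ".join(sentence.split()[:8])  # naive truncation stand-in

def generate_headline(document_sentences):
    return fine_summarize(coarse_select(document_sentences))

doc = [
    "The city council approved a new transit budget on Monday.",
    "Officials said the budget expands bus service to the suburbs.",
    "Critics argued the plan ignores cycling infrastructure.",
]
print(generate_headline(doc))
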
Experiment on Using Topic Sentence for Neural News Headline Generation
News headline generation is one variant of summarization tasks, introduced in DUC-2003 and DUC-2004 (Task 1) [15]. In this study, we are interested in news headline generation using…
Unleashing the Potential of Attention Model for News Headline Generation
This paper proposes a novel model, Transformer(XL)-CC, that generates headlines from the perspective of understanding the whole text; its segment-level recurrence mechanism and relative positional encoding let the model learn ultra-long-term dependencies.
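
To make the segment-level recurrence idea concrete, here is a minimal sketch under simplifying assumptions (hypothetical names and sizes; relative positional encoding omitted): hidden states cached from the previous segment are prepended as extra keys, so attention can reach beyond the current segment without recomputing it.

# Hedged sketch of segment-level recurrence (Transformer-XL style,
# heavily simplified): cached states from the previous segment extend
# the attention span of the current one.
import torch

def attend_with_memory(query, current, memory):
    """query: (d,), current: (n, d), memory: (m, d) cached, no grad."""
    keys = torch.cat([memory.detach(), current], dim=0)  # (m + n, d)
    attn = torch.softmax(keys @ query, dim=0)            # span both segments
    return attn @ keys                                   # context vector

d = 16
memory = torch.randn(4, d)   # states cached from the previous segment
current = torch.randn(4, d)  # states of the current segment
print(attend_with_memory(torch.randn(d), current, memory).shape)
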
Incorporating Topic Sentence on Neural News Headline Generation
Most past studies on neural news headline generation trained the encoder-decoder model using the first sentence of a document aligned with a headline. However, it is found that the first sentence…
Improving Pointer-Generator Network with Keywords Information for Chinese Abstractive Summarization
This work proposes a novel approach to improve a summary's informativeness by explicitly incorporating topical keyword information from the original document into a pointer-generator network via a new attention mechanism, so that a topic-oriented summary can be generated in a context-aware manner with guidance.
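
What such keyword-guided attention could look like, as a hedged sketch (the names and the exact formulation are assumptions, not the paper's code): source positions holding topical keywords receive an additive bonus before the softmax.

# Hypothetical keyword-aware attention: keyword positions get an
# additive score bonus, steering the context vector toward topics.
import torch

def keyword_attention(dec_state, enc_states, keyword_mask, w_k=1.0):
    """dec_state: (d,), enc_states: (n, d), keyword_mask: (n,) in {0, 1}."""
    energies = enc_states @ dec_state          # (n,) dot-product scores
    energies = energies + w_k * keyword_mask   # boost keyword positions
    attn = torch.softmax(energies, dim=0)      # normalized attention
    context = attn @ enc_states                # (d,) context vector
    return attn, context

enc = torch.randn(5, 8)
dec = torch.randn(8)
mask = torch.tensor([0., 1., 0., 0., 1.])      # tokens 1 and 4 are keywords
attn, ctx = keyword_attention(dec, enc, mask)
print(attn)
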
SHEG: summarization and headline generation of news articles using deep learning
This paper proposes a novel methodology known as SHEG, which integrates both extractive and abstractive mechanisms in a pipelined approach to produce a concise summary that is then used for headline generation.
Keeping Consistency of Sentence Generation and Document Classification with Multi-Task Learning
A multi-task learning model with a shared encoder and a separate decoder for each task is introduced, and a novel loss function called hierarchical consistency loss is proposed to maintain consistency among the attention weights of the decoders.
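
A minimal sketch of the shared-encoder setup under stated assumptions (sizes and names are hypothetical, and the simple squared-difference penalty below merely stands in for the paper's hierarchical consistency loss):

# One shared encoder feeds a generation head and a classification
# head; a consistency term penalizes disagreement between the
# attention each task places on the input.
import torch
import torch.nn as nn

class SharedEncoderMultiTask(nn.Module):
    def __init__(self, vocab=1000, dim=64, n_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)  # shared encoder
        self.gen_query = nn.Parameter(torch.randn(dim))    # generation query
        self.cls_query = nn.Parameter(torch.randn(dim))    # classifier query
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, tokens):                   # tokens: (B, T)
        h, _ = self.encoder(self.embed(tokens))  # (B, T, dim)
        gen_attn = torch.softmax(h @ self.gen_query, dim=1)  # (B, T)
        cls_attn = torch.softmax(h @ self.cls_query, dim=1)  # (B, T)
        cls_logits = self.classifier((cls_attn.unsqueeze(1) @ h).squeeze(1))
        # Crude stand-in for the paper's hierarchical consistency loss:
        consistency = ((gen_attn - cls_attn) ** 2).sum(dim=1).mean()
        return cls_logits, gen_attn, consistency

model = SharedEncoderMultiTask()
logits, attn, cons = model(torch.randint(0, 1000, (2, 12)))
print(logits.shape, cons.item())
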
Improving Neural Abstractive Document Summarization with Structural Regularization
This paper proposes to leverage the structural information of both documents and multi-sentence summaries to improve document summarization performance, importing both structural-compression and structural-coverage regularization into the summarization process in order to capture the information compression and information coverage properties.
Incorporating word attention with convolutional neural networks for abstractive summarization
The combined word-attention and multilayer CNN modules provide a better-learned representation of the input document, which helps the model generate interpretable, coherent and informative summaries in an abstractive summarization task.
Structure Learning for Headline Generation
This paper proposes to incorporate structure learning into graph-based neural models for headline generation, learning the sentence graph in a data-driven way so that document structure can be modeled flexibly without prior heuristics or rules.
To what extent does content selection affect surface realization in the context of headline generation?
This paper analyzes and evaluates the effectiveness of NLG techniques for generating headlines, taking HanaNLG (a hybrid surface realization approach) as a basis, and analyzes the effect on the generated text when different content selection strategies are integrated at the macroplanning stage.

References

Showing 1-10 of 52 references
A Neural Attention Model for Abstractive Sentence Summarization
This work proposes a fully data-driven approach to abstractive sentence summarization, utilizing a local attention-based model that generates each word of the summary conditioned on the input sentence.
Abstractive Sentence Summarization with Attentive Recurrent Neural Networks
A conditional recurrent neural network (RNN) generates a summary of an input sentence; it significantly outperforms the recently proposed state-of-the-art method on the Gigaword corpus while performing competitively on the DUC-2004 shared task.
HEADS: Headline Generation as Sequence Prediction Using an Abstract Feature-Rich Space
This study presents a sequence-prediction technique for learning how editors title their news stories; the model is trained and tested on an extensive corpus of financial news and compared against a number of baselines.
Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
This work proposes several novel models that address critical problems in summarization not adequately handled by the basic architecture, such as modeling keywords, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time.
Distraction-Based Neural Networks for Document Summarization
This paper proposes neural models that train computers not just to pay attention to specific regions and content of input documents with attention models, but also to distract them to traverse between different content of a document, so as to better grasp the overall meaning for summarization.
A Hierarchical Neural Autoencoder for Paragraphs and Documents
This paper introduces an LSTM model that hierarchically builds an embedding for a paragraph from embeddings for sentences and words, then decodes this embedding to reconstruct the original paragraph; evaluation with standard metrics shows that neural models are able to encode texts in a way that preserves syntactic, semantic, and discourse coherence.
Headline Generation Based on Statistical Translation
This paper presents experimental results for this approach, in which statistical models of term selection and term ordering are jointly applied to produce summaries in a style learned from a training corpus.
Abstractive headline generation for spoken content by attentive recurrent neural networks with ASR error modeling
This paper proposes an ASR error modeling approach that learns the underlying structure of ASR error patterns and incorporates this model into an attentive recurrent neural network (ARNN) architecture, so that a model for abstractive headline generation for spoken content can be learned from abundant text data together with the ASR data of particular recognizers.
Event-Driven Headline Generation
This work proposes an event-driven model for headline generation that can be viewed as a novel combination of extractive and abstractive headline generation, combining the advantages of both methods through event structures.
Distraction-based neural networks for modeling documents
This paper proposes neural models that train computers not just to pay attention to specific regions and content of input documents with attention models, but also to distract them to traverse between different content of a document, so as to better grasp the overall meaning for summarization.