Corpus ID: 121326909

Headline Generation: Learning from Decomposed Document Titles

Oleg V. Vasilyev, Tom Grek, John Bohannon
We propose a novel method for generating titles for unstructured text documents. [...] Key Method: To train the model we use a corpus of millions of publicly available document-title pairs: news articles and headlines. We present the results of a randomized double-blind trial in which subjects were unaware of which titles were human- or machine-generated. When trained on approximately 1.5 million news articles, the model generates headlines that humans judge to be as good as or better than the original human-written…
Zero-shot topic generation
The results show that the zero-shot model generates topic labels for news documents that are on average equal to or higher quality than those written by humans, as judged by humans.
Using Pre-Trained Transformer for Better Lay Summarization
This paper presents an approach that uses Pre-training with Extracted Gap-sentences for Abstractive Summarization to produce the lay summary, combines it with an extractive summarization model based on Bidirectional Encoder Representations from Transformers, and applies readability metrics that measure sentence readability to further improve the quality of the summary.
Sensitivity of BLANC to human-scored qualities of text summaries
The case is made for optimal BLANC parameters, at which the BLANC sensitivity to almost all of summary qualities is about as good as the sensitivity of a human annotator.
Artificial Intelligence Strategies for National Security and Safety Standards
This paper explores how the application of standards during each stage of the development of an AI system deployed and used in a national security environment would help enable trust and focuses on the standards outlined in Intelligence Community Directive 203 (Analytic Standards) to subject machine outputs to the same rigorous standards as analysis performed by humans.
Play the Shannon Game With Language Models: A Human-Free Approach to Summary Evaluation
New reference-free summary evaluation metrics that use a pretrained language model to estimate the information shared between a document and its summary are introduced, a modern take on the Shannon Game.
Fill in the BLANC: Human-free quality estimation of document summaries
Evidence is presented that BLANC scores have as good correlation with human evaluations as do the ROUGE family of summary quality measurements, and the method does not require human-written reference summaries, allowing for fully human-free summary quality estimation.
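The core idea behind BLANC can be sketched in a few lines: mask words in the document one at a time and count how often a fill-in model recovers them correctly with the summary prepended versus with meaningless filler of equal length. The sketch below assumes a caller-supplied `guess(context, masked_sentence)` function standing in for the pretrained masked language model; the function name and the simple space-tokenization are illustrative assumptions, not the paper's implementation.

```python
def blanc_help(doc_sentences, summary, guess, mask="___"):
    """BLANC-help sketch: for each masked word in the document, check
    whether a fill-in model guesses it correctly with the summary as
    context but not with filler text of equal length (a "win"), or
    vice versa (a "loss"). The score is (wins - losses) / total words.
    `guess(context, masked_sentence)` stands in for a pretrained
    masked language model."""
    filler = " ".join(["."] * len(summary.split()))
    wins = losses = 0
    for sent in doc_sentences:
        words = sent.split()
        for i, w in enumerate(words):
            masked = " ".join(words[:i] + [mask] + words[i + 1:])
            with_summary = guess(summary, masked) == w
            without = guess(filler, masked) == w
            wins += with_summary and not without
            losses += without and not with_summary
    total = sum(len(s.split()) for s in doc_sentences)
    return (wins - losses) / total if total else 0.0
```

A summary that helps the model recover more masked words scores higher, with no human-written reference required.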
TLDR: Extreme Summarization of Scientific Documents
This work introduces SCITLDR, a new multi-target dataset of 5.4K TLDRs over 3.2K papers, and proposes CATTS, a simple yet effective learning strategy for generatingTLDRs that exploits titles as an auxiliary training signal.


Source-side Prediction for Neural Headline Generation
The experiments show that the proposed model outperforms the current state-of-the-art method in the headline generation task and has an ability to learn a reasonable token-wise correspondence without knowing any true alignments.
Get To The Point: Summarization with Pointer-Generator Networks
A novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways: a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information while retaining the ability to produce novel words through the generator, and a coverage mechanism that keeps track of what has been summarized, discouraging repetition.
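The copy mechanism described above can be sketched as a mixture of two distributions: with probability `p_gen` the model generates from its vocabulary softmax, and with probability `1 - p_gen` it copies a source token in proportion to the attention weights. The sketch below works on plain Python dicts; the function name and inputs are illustrative assumptions, not the paper's interface.

```python
def pointer_generator_dist(p_gen, vocab_dist, attention, src_tokens, vocab):
    """Mix a generator's vocabulary distribution with a copy
    distribution derived from attention over the source tokens.
    Out-of-vocabulary source words (e.g. rare names) receive
    probability mass only through the copy term."""
    # generation term: p_gen * P_vocab(w)
    final = {w: p_gen * p for w, p in zip(vocab, vocab_dist)}
    # copy term: (1 - p_gen) * sum of attention on occurrences of w
    for tok, a in zip(src_tokens, attention):
        final[tok] = final.get(tok, 0.0) + (1 - p_gen) * a
    return final
```

Because both input distributions sum to one, the mixture does too, so the result is a valid distribution over the union of the vocabulary and the source tokens.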
Neural Abstractive Text Summarization and Fake News Detection
The authors' text summarization model is applied as a feature extractor for a fake news detection task where the news articles prior to classification will be summarized and the results are compared against the classification using only the original news text.
Neural Headline Generation with Sentence-wise Optimization
This paper employs a minimum risk training strategy, which directly optimizes model parameters at the sentence level with respect to evaluation metrics and leads to significant improvements for headline generation.
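Minimum risk training replaces token-level likelihood with an expected risk over sampled outputs: each candidate's loss is weighted by its (renormalized) model probability, so the objective directly reflects the evaluation metric. A minimal sketch, assuming `samples` maps candidate headlines to unnormalized model probabilities and `eval_metric` returns a score in [0, 1]:

```python
def expected_risk(samples, eval_metric, reference):
    """Sentence-level expected risk: sum over sampled candidates y of
    P(y) * (1 - metric(y, reference)). MRT minimizes this quantity
    with respect to the model parameters; here we only compute it
    for fixed sample probabilities."""
    z = sum(samples.values())  # renormalize over the sample set
    return sum((p / z) * (1.0 - eval_metric(y, reference))
               for y, p in samples.items())
```

Lowering the risk requires shifting probability mass toward candidates the metric scores highly, which is how the training signal reaches the parameters.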
Abstractive and Extractive Text Summarization using Document Context Vector and Recurrent Neural Networks
It is proposed that Seq2Seq models should be initialized with contextual information at the first time-step of the input to obtain better summaries; the output summaries are more document-centric than generic, overcoming one of the major hurdles of using generative models.
Generating News Headlines with Recurrent Neural Networks
We describe an application of an encoder-decoder recurrent neural network with LSTM units and attention to generating headlines from the text of news articles. We find that the model is quite […]
Von Mises-Fisher Loss for Training Sequence to Sequence Models with Continuous Outputs
This work proposes a general technique for replacing the softmax layer with a continuous embedding layer, and introduces a novel probabilistic loss, and a training and inference procedure in which it generates a probability distribution over pre-trained word embeddings, instead of a multinomial distribution over the vocabulary obtained via softmax.
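The continuous-output idea can be illustrated with a simplified stand-in loss: the model emits a vector, and training pulls that vector toward the target word's pretrained embedding; inference picks the nearest vocabulary embedding. The sketch below uses 1 - cosine similarity as an assumption in place of the paper's von Mises-Fisher negative log-likelihood, and the function names are illustrative.

```python
import math

def cosine_embedding_loss(predicted, target):
    """Continuous-output training signal: penalize the angle between
    the predicted vector and the target word's pretrained embedding
    (a simplified proxy for the vMF loss)."""
    dot = sum(p * t for p, t in zip(predicted, target))
    norm_p = math.sqrt(sum(p * p for p in predicted))
    norm_t = math.sqrt(sum(t * t for t in target))
    return 1.0 - dot / (norm_p * norm_t)

def nearest_word(predicted, embeddings):
    """Inference: output the vocabulary word whose embedding is
    closest to the predicted vector under the same loss."""
    return min(embeddings,
               key=lambda w: cosine_embedding_loss(predicted, embeddings[w]))
```

The appeal is that both training and inference cost scale with the embedding dimension rather than the vocabulary-sized softmax.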
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
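BERT's masked-LM pre-training objective starts by corrupting the input: roughly 15% of token positions are chosen as prediction targets and hidden from the model. A minimal sketch of that input-corruption step, assuming simple per-position sampling (the paper's 80/10/10 replace/random/keep split is omitted here):

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", p=0.15, seed=0):
    """BERT-style masked-LM input sketch: independently select each
    position with probability p as a prediction target and replace
    it with the mask token. Returns the corrupted sequence and a
    map from masked position to the original token (the labels the
    bidirectional encoder must recover)."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < p:
            targets[i] = tok
            masked.append(mask_token)
        else:
            masked.append(tok)
    return masked, targets
```

Because the targets are hidden, the encoder can safely attend to both left and right context at every layer, which is the bidirectionality the abstract refers to.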
ROUGE: A Package for Automatic Evaluation of Summaries
Four different ROUGE measures are introduced: ROUGE-N, ROUGE-L, ROUGE-W, and ROUGE-S, included in the ROUGE summarization evaluation package, along with their evaluations.
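The simplest of these, ROUGE-N, is recall-oriented n-gram overlap between a candidate and a reference summary. A minimal single-reference sketch (whitespace tokenization and lowercasing are simplifying assumptions; the official package adds stemming and multi-reference handling):

```python
from collections import Counter

def rouge_n(candidate, reference, n=1):
    """Single-reference ROUGE-N sketch: clipped n-gram overlap
    divided by the number of n-grams in the reference (recall)."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n])
                       for i in range(len(tokens) - n + 1))
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    if not ref:
        return 0.0
    # Counter & Counter keeps the minimum count per n-gram (clipping)
    overlap = sum((cand & ref).values())
    return overlap / sum(ref.values())
```

For example, `rouge_n("the cat sat on the mat", "the cat was on the mat", 1)` counts 5 of the reference's 6 unigram occurrences as matched.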
LexRank: Graph-based Lexical Centrality as Salience in Text Summarization
A new approach, LexRank, for computing sentence importance based on eigenvector centrality in a graph representation of sentences is considered; LexRank with a similarity threshold outperforms other degree-based techniques, including continuous LexRank.
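The thresholded variant can be sketched compactly: build a graph with an edge wherever sentence cosine similarity exceeds a threshold, then run PageRank-style power iteration to get centrality scores. The sketch below uses raw term-count cosine similarity rather than the paper's IDF-modified version, and the parameter defaults are illustrative assumptions.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two token lists via term counts."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def lexrank(sentences, threshold=0.1, damping=0.85, iters=50):
    """Thresholded LexRank sketch: unweighted edges where similarity
    exceeds the threshold, then damped power iteration (eigenvector
    centrality) over the degree-normalized adjacency matrix."""
    toks = [s.lower().split() for s in sentences]
    n = len(toks)
    adj = [[1.0 if i != j and cosine(toks[i], toks[j]) > threshold else 0.0
            for j in range(n)] for i in range(n)]
    deg = [sum(row) or 1.0 for row in adj]
    scores = [1.0 / n] * n
    for _ in range(iters):
        scores = [(1 - damping) / n + damping *
                  sum(adj[j][i] / deg[j] * scores[j] for j in range(n))
                  for i in range(n)]
    return scores
```

Sentences similar to many other sentences accumulate score through their neighbors, so the top-scoring ones serve as an extractive summary.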