Corpus ID: 150373844

Headline Generation: Learning from Decomposable Document Titles

@inproceedings{Vasilyev2019HeadlineGL,
  title={Headline Generation: Learning from Decomposable Document Titles},
  author={Oleg V. Vasilyev and Tom Grek and John Bohannon},
  year={2019}
}
We propose a novel method for generating titles for unstructured text documents. We reframe the problem as a sequential question-answering task. A deep neural network is trained on document-title pairs with decomposable titles, meaning that the vocabulary of the title is a subset of the vocabulary of the document. To train the model we use a corpus of millions of publicly available document-title pairs: news articles and headlines. We present the results of a randomized double-blind trial in… 
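The decomposability constraint stated above (the title's vocabulary is a subset of the document's vocabulary) implies a simple corpus filter. Below is a minimal sketch under that definition; the tokenizer and function names are illustrative assumptions, not the paper's implementation.

```python
import re

def vocab(text: str) -> set[str]:
    # Illustrative tokenizer: lowercase, keep alphanumeric runs.
    # The paper does not specify its tokenization; this is a stand-in.
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def is_decomposable(title: str, document: str) -> bool:
    # A title is decomposable if every title token also occurs in the
    # document, i.e. vocab(title) is a subset of vocab(document).
    return vocab(title) <= vocab(document)

doc = "The central bank raised interest rates by a quarter point on Tuesday."
print(is_decomposable("Bank raised interest rates", doc))         # True
print(is_decomposable("Surprise rate hike shocks markets", doc))  # False
```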
Zero-shot topic generation
TLDR
The results show that the zero-shot model generates topic labels for news documents that are, on average, of equal or higher quality than those written by humans, as judged by human evaluators.
DeepTitle - Leveraging BERT to generate Search Engine Optimized Headlines
TLDR
This paper showcases how a pre-trained language model can be leveraged to create an abstractive news headline generator for the German language, incorporating state-of-the-art fine-tuning techniques for abstractive text summarization.
Using Pre-Trained Transformer for Better Lay Summarization
TLDR
This paper presents an approach that uses Pre-training with Extracted Gap-sentences for Abstractive Summarization (PEGASUS) to produce the lay summary, combined with a BERT-based extractive summarization model and sentence-level readability metrics to further improve summary quality.
Sensitivity of BLANC to human-scored qualities of text summaries
TLDR
The case is made for optimal BLANC parameters, at which BLANC's sensitivity to almost all summary qualities is about as good as that of a human annotator.
Artificial Intelligence Strategies for National Security and Safety Standards
TLDR
This paper explores how applying standards at each stage of developing an AI system deployed in a national security environment would help enable trust, focusing on the standards outlined in Intelligence Community Directive 203 (Analytic Standards) to hold machine outputs to the same rigor as analysis performed by humans.
Play the Shannon Game With Language Models: A Human-Free Approach to Summary Evaluation
TLDR
New reference-free summary evaluation metrics are introduced that use a pretrained language model to estimate the information shared between a document and its summary, a modern take on the Shannon Game.
Fill in the BLANC: Human-free quality estimation of document summaries
TLDR
Evidence is presented that BLANC scores correlate with human evaluations as well as the ROUGE family of summary quality measures do; the method does not require human-written reference summaries, allowing fully human-free summary quality estimation.
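The underlying measurement is concrete: mask tokens in the document and test whether a pretrained masked language model recovers them better when the summary is prepended. The sketch below checks a single masked position with a standard public checkpoint; real BLANC masks many positions and averages the difference, and the helper here is a hypothetical illustration, not the authors' code.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Standard public checkpoint, used here purely for illustration.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def recovers(prefix: str, sentence: str, mask_pos: int) -> bool:
    """Mask the mask_pos-th token of `sentence` (optionally prefixed by
    a summary) and check whether the model's top guess recovers it."""
    pre = tok(prefix, add_special_tokens=False)["input_ids"] if prefix else []
    sent = tok(sentence, add_special_tokens=False)["input_ids"]
    ids = [tok.cls_token_id] + pre + sent + [tok.sep_token_id]
    pos = 1 + len(pre) + mask_pos        # account for [CLS] and the prefix
    target, ids[pos] = ids[pos], tok.mask_token_id
    with torch.no_grad():
        logits = mlm(input_ids=torch.tensor([ids])).logits
    return logits[0, pos].argmax().item() == target

sentence = "The central bank raised interest rates on Tuesday."
summary = "Rates were raised."
# BLANC-style signal for one position: +1 if the summary helped.
delta = int(recovers(summary, sentence, 3)) - int(recovers("", sentence, 3))
```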
TLDR: Extreme Summarization of Scientific Documents
TLDR
This work introduces SCITLDR, a new multi-target dataset of 5.4K TLDRs over 3.2K papers, and proposes CATTS, a simple yet effective learning strategy for generating TLDRs that exploits titles as an auxiliary training signal.

References

Showing 10 of 14 references
Neural Abstractive Text Summarization and Fake News Detection
TLDR
The authors' text summarization model is applied as a feature extractor for a fake news detection task in which news articles are summarized prior to classification, and the results are compared against classification using only the original news text.
Source-side Prediction for Neural Headline Generation
TLDR
The experiments show that the proposed model outperforms the current state-of-the-art method on the headline generation task and can learn a reasonable token-wise correspondence without knowing any true alignments.
Get To The Point: Summarization with Pointer-Generator Networks
TLDR
A novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways, using a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator.
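Concretely, the final output distribution mixes the generator's softmax over the vocabulary with attention-weighted copy probabilities scattered onto the source tokens. A minimal NumPy sketch of that mixture (the extended-vocabulary bookkeeping for out-of-vocabulary source words is simplified, and the function name is an assumption):

```python
import numpy as np

def pointer_generator_dist(p_vocab, attention, src_ids, p_gen, extended_size):
    """P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention on
    source positions holding w.

    p_vocab:       (V,) softmax over the fixed vocabulary
    attention:     (T,) attention weights over T source positions
    src_ids:       (T,) ids of source tokens in an "extended" vocabulary
                   where in-document OOV words get temporary ids >= V
    p_gen:         scalar in [0, 1], generate-vs-copy gate
    extended_size: V plus the number of in-document OOV words
    """
    final = np.zeros(extended_size)
    final[: len(p_vocab)] = p_gen * p_vocab
    # Scatter-add copy mass onto the source token ids (repeats accumulate).
    np.add.at(final, src_ids, (1.0 - p_gen) * attention)
    return final
```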
Neural Headline Generation with Sentence-wise Optimization
TLDR
This paper employs a minimum risk training strategy that directly optimizes model parameters at the sentence level with respect to evaluation metrics, leading to significant improvements for headline generation.
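Minimum risk training replaces the token-level likelihood objective with the expected risk over a pool of sampled headlines, weighted by a renormalized model distribution. A small PyTorch sketch of the objective (the smoothing exponent `alpha` is an assumed hyperparameter from the MRT literature, not a value taken from this paper):

```python
import torch

def mrt_loss(log_probs: torch.Tensor, rewards: torch.Tensor,
             alpha: float = 0.005) -> torch.Tensor:
    """Expected risk over K sampled candidates.

    log_probs: (K,) model log-probabilities of the sampled headlines
    rewards:   (K,) sentence-level metric scores (e.g., ROUGE) in [0, 1]
    """
    # Q(y) is proportional to P(y)^alpha: a sharpened, renormalized
    # distribution over the sampled candidates.
    q = torch.softmax(alpha * log_probs, dim=0)
    risk = 1.0 - rewards                 # higher metric -> lower risk
    return (q * risk).sum()
```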
Abstractive and Extractive Text Summarization using Document Context Vector and Recurrent Neural Networks
TLDR
It is proposed that Seq2Seq models be given contextual information at the first time-step of the input to obtain better summaries; the output summaries are more document-centric than generic, overcoming one of the major hurdles of using generative models.
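One way to realize "contextual information at the first time-step" is to prepend a document-level context vector to the embedded encoder input; the sketch below makes that assumption explicit (an interpretation, not the paper's code):

```python
import torch

def prepend_context(token_embeds: torch.Tensor,
                    doc_context: torch.Tensor) -> torch.Tensor:
    """token_embeds: (B, T, d) embedded input tokens
    doc_context:  (B, d) document context vector, e.g. an average of
                  the document's word embeddings
    Returns (B, T+1, d) with the context as the first time-step."""
    return torch.cat([doc_context.unsqueeze(1), token_embeds], dim=1)
```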
Generating News Headlines with Recurrent Neural Networks
We describe an application of an encoder-decoder recurrent neural network with LSTM units and attention to generating headlines from the text of news articles. We find that the model is quite effective at concisely paraphrasing news articles.
Von Mises-Fisher Loss for Training Sequence to Sequence Models with Continuous Outputs
TLDR
This work proposes a general technique for replacing the softmax layer with a continuous embedding layer, introducing a novel probabilistic loss and a training and inference procedure in which the model generates a probability distribution over pre-trained word embeddings instead of a multinomial distribution over the vocabulary obtained via softmax.
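In this scheme the decoder emits a vector that is scored against pretrained target embeddings rather than a softmax over the vocabulary. The sketch below keeps only the directional (cosine) part; the actual von Mises-Fisher negative log-likelihood also includes a concentration-dependent normalization term:

```python
import torch
import torch.nn.functional as F

def directional_loss(predicted: torch.Tensor,
                     target_embedding: torch.Tensor) -> torch.Tensor:
    # Push the predicted vector toward the target word's pretrained
    # embedding; a simplified stand-in for the vMF likelihood.
    return 1.0 - F.cosine_similarity(predicted, target_embedding, dim=-1).mean()

def decode_nearest(predicted: torch.Tensor,
                   embedding_table: torch.Tensor) -> torch.Tensor:
    # Inference: pick the vocabulary word whose embedding is closest
    # in cosine distance to the predicted vector.
    sims = F.normalize(predicted, dim=-1) @ F.normalize(embedding_table, dim=-1).T
    return sims.argmax(dim=-1)
```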
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
TLDR
A new language representation model, BERT, pre-trains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers; it can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
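That "one additional output layer" recipe is what the Hugging Face transformers library packages directly: a classification head on top of the pretrained encoder. A minimal sketch (the checkpoint name is the standard public one, used for illustration):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # pretrained encoder + new head

inputs = tokenizer("A candidate headline to score", return_tensors="pt")
logits = model(**inputs).logits  # (1, 2) task-specific class scores
```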
ROUGE: A Package for Automatic Evaluation of Summaries
TLDR
Four different ROUGE measures are introduced: ROUGE-N, ROUGE-L, ROUGE-W, and ROUGE-S, all included in the ROUGE summarization evaluation package, along with their evaluations.
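At its core, ROUGE-N is clipped n-gram overlap between a candidate and a reference. A compact recall-oriented sketch (whitespace tokenization stands in for the package's preprocessing):

```python
from collections import Counter

def rouge_n_recall(candidate: str, reference: str, n: int = 2) -> float:
    def ngrams(text: str) -> Counter:
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum((cand & ref).values())       # clipped n-gram matches
    return overlap / max(sum(ref.values()), 1)
```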
LexRank: Graph-based Lexical Centrality as Salience in Text Summarization
TLDR
A new approach, LexRank, computes sentence importance based on eigenvector centrality in a graph representation of sentences; the thresholded LexRank method outperforms other degree-based techniques, including continuous LexRank.
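LexRank computes those centrality scores by power iteration on a (thresholded) sentence-similarity graph, with PageRank-style damping. A sketch under those assumptions:

```python
import numpy as np

def lexrank(similarity: np.ndarray, threshold: float = 0.1,
            damping: float = 0.85, iters: int = 50) -> np.ndarray:
    """similarity: (n, n) pairwise sentence similarities (e.g., TF-IDF
    cosine). Returns a centrality score per sentence."""
    adj = (similarity >= threshold).astype(float)
    np.fill_diagonal(adj, 0.0)
    row_sums = adj.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix; isolated sentences fall back
    # to a uniform jump.
    trans = np.where(row_sums > 0,
                     adj / np.maximum(row_sums, 1e-12),
                     1.0 / len(adj))
    n = len(adj)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):                    # power iteration
        scores = (1 - damping) / n + damping * (trans.T @ scores)
    return scores                             # higher = more central
```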