Corpus ID: 202541012

On Extractive and Abstractive Neural Document Summarization with Transformer Language Models

@article{Subramanian2019OnEA,
  title={On Extractive and Abstractive Neural Document Summarization with Transformer Language Models},
  author={Sandeep Subramanian and Raymond Li and Jonathan Pilault and C. Pal},
  journal={ArXiv},
  year={2019},
  volume={abs/1909.03186}
}
  • Sandeep Subramanian, Raymond Li, Jonathan Pilault, C. Pal
  • Published 2019
  • Computer Science
  • ArXiv
  • We present a method to produce abstractive summaries of long documents that exceed several thousand words via neural abstractive summarization. [...] We perform a simple extractive step before generating a summary, which is then used to condition the transformer language model on relevant information before being tasked with generating a summary. We show that this extractive step significantly improves summarization results. We also show that this approach produces more abstractive summaries compared…
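
The abstract describes a two-stage pipeline: an extractive pass first selects salient content, and a transformer language model conditioned on that extraction then generates the abstractive summary. The snippet below is a minimal, illustrative sketch of that idea only; the TF-IDF centroid extractor, the off-the-shelf GPT-2 checkpoint, and the "TL;DR:" prompt are stand-in assumptions, not the paper's trained extractor or its domain-trained transformer language model.

```python
# Minimal sketch of the extract-then-abstract idea from the abstract above.
# The extractor and the GPT-2 checkpoint are illustrative stand-ins only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from transformers import GPT2LMHeadModel, GPT2TokenizerFast


def extract_salient_sentences(document: str, k: int = 5) -> list[str]:
    """Pick the k sentences closest to the TF-IDF centroid of the document.

    This is a crude stand-in for the paper's learned extractive model; it only
    illustrates that some extractive step precedes generation.
    """
    sentences = [s.strip() for s in document.split(".") if s.strip()]  # crude split
    if len(sentences) <= k:
        return sentences
    tfidf = TfidfVectorizer().fit_transform(sentences)        # (n_sentences, vocab)
    centroid = np.asarray(tfidf.mean(axis=0))                 # (1, vocab)
    scores = np.asarray(tfidf @ centroid.T).ravel()           # similarity to centroid
    top = sorted(np.argsort(scores)[-k:])                     # keep document order
    return [sentences[i] for i in top]


def summarize(document: str, max_new_tokens: int = 120) -> str:
    """Condition a language model on the extracted sentences and generate."""
    extracted = " . ".join(extract_salient_sentences(document))
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    # "TL;DR:" is only a generic prompting convention for the stand-in model,
    # not the input formatting used in the paper.
    prompt = extracted + "\nTL;DR:"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=900)
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=False,
        num_beams=4,
        pad_token_id=tokenizer.eos_token_id,
    )
    generated = output[0][inputs["input_ids"].shape[1]:]      # drop the prompt tokens
    return tokenizer.decode(generated, skip_special_tokens=True)
```

Beam search and the prompt format here are generic decoding choices for the sketch; the paper's own input formatting and decoding setup differ.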
