Corpus ID: 173990766

Efficient Adaptation of Pretrained Transformers for Abstractive Summarization

@article{Hoang2019EfficientAO,
  title={Efficient Adaptation of Pretrained Transformers for Abstractive Summarization},
  author={Andrew Pau Hoang and Antoine Bosselut and Asli Çelikyilmaz and Yejin Choi},
  journal={ArXiv},
  year={2019},
  volume={abs/1906.00138}
}
  • Andrew Pau Hoang, Antoine Bosselut, Asli Çelikyilmaz, Yejin Choi
  • Published 2019
  • Computer Science
  • ArXiv
  • Large-scale learning of transformer language models has yielded improvements on a variety of natural language understanding tasks. [...] Finally, we show that these improvements are achieved by producing more focused summaries with fewer superfluous and that performance improvements are more pronounced on more abstractive datasets.


    Citations

    Publications citing this paper.
    SHOWING 1-9 OF 9 CITATIONS

    Cooperative Generator-Discriminator Networks for Abstractive Summarization with Narrative Flow

    VIEW 8 EXCERPTS
    CITES BACKGROUND, METHODS & RESULTS

    Few-Shot Learning for Abstractive Multi-Document Opinion Summarization

    VIEW 2 EXCERPTS
    CITES BACKGROUND & METHODS

    Extremely Low Resource Text Simplification with Pre-trained Transformer Language Model

    VIEW 2 EXCERPTS
    CITES METHODS

    Abstractive Dialog Summarization with Semantic Scaffolds

    VIEW 1 EXCERPT
    CITES METHODS

    Keyphrase Generation with Cross-Document Attention

    VIEW 4 EXCERPTS
    CITES BACKGROUND, RESULTS & METHODS
    HIGHLY INFLUENCED

    Summarization Corpora of Wikipedia Articles

    VIEW 2 EXCERPTS
    CITES METHODS

    References

    Publications referenced by this paper.
    SHOWING 1-10 OF 28 REFERENCES

    Bottom-Up Abstractive Summarization

    VIEW 4 EXCERPTS
    HIGHLY INFLUENTIAL