Corpus ID: 173990766

Efficient Adaptation of Pretrained Transformers for Abstractive Summarization

@article{Hoang2019EfficientAO,
  title={Efficient Adaptation of Pretrained Transformers for Abstractive Summarization},
  author={Andrew Hoang and Antoine Bosselut and Asli Çelikyilmaz and Yejin Choi},
  journal={ArXiv},
  year={2019},
  volume={abs/1906.00138}
}
Large-scale learning of transformer language models has yielded improvements on a variety of natural language understanding tasks. [...] Finally, we show that these improvements are achieved by producing more focused summaries with fewer superfluous and that performance improvements are more pronounced on more abstractive datasets.
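
The key result above ties the reported gains to more focused summaries and to how abstractive a dataset is. A common proxy for abstractiveness is the share of summary n-grams that never appear in the source article; the short Python sketch below illustrates only that generic proxy, not the paper's own evaluation code, and the function name novel_ngram_rate along with the toy article/summary strings are assumptions made for the example.

def ngrams(tokens, n):
    # All contiguous n-grams in a token list, as a set of tuples.
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novel_ngram_rate(source, summary, n=2):
    # Fraction of summary n-grams that do not occur in the source article.
    # Higher values suggest a more abstractive (rewritten) summary; lower
    # values suggest a more extractive (copied) one. Illustrative proxy only.
    src = ngrams(source.lower().split(), n)
    summ = ngrams(summary.lower().split(), n)
    if not summ:
        return 0.0
    return len(summ - src) / len(summ)

if __name__ == "__main__":
    # Hypothetical article/summary pair for demonstration.
    article = "The committee approved the new budget after a lengthy debate on Tuesday."
    summary = "Lawmakers passed the budget following extended discussion."
    print(f"novel bigram rate: {novel_ngram_rate(article, summary, n=2):.2f}")
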

Citations

Summary Level Training of Sentence Rewriting for Abstractive Summarization
Understanding Neural Abstractive Summarization Models via Uncertainty
Few-Shot Learning for Abstractive Multi-Document Opinion Summarization
Extremely Low Resource Text simplification with Pre-trained Transformer Language Model
Abstractive Dialog Summarization with Semantic Scaffolds
Keyphrase Generation with Cross-Document Attention
Nutri-bullets Hybrid: Consensual Multi-document Summarization
Summarization Corpora of Wikipedia Articles
Truth or Error? Towards systematic analysis of factual errors in abstractive summaries

References

Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting
Bottom-Up Abstractive Summarization
Neural Summarization by Extracting Sentences and Words
Neural Abstractive Text Summarization with Sequence-to-Sequence Models
A Deep Reinforced Model for Abstractive Summarization
Abstractive Document Summarization with a Graph-Based Attentional Neural Model
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Abstractive Summarization of Reddit Posts with Multi-level Memory Networks