Planning with Learned Entity Prompts for Abstractive Summarization

@article{Narayan2021PlanningWL,
  title={Planning with Learned Entity Prompts for Abstractive Summarization},
  author={Shashi Narayan and Yao Zhao and Joshua Maynez and Gonçalo Simões and Vitaly Nikolaev and Ryan T. McDonald},
  journal={Transactions of the Association for Computational Linguistics},
  year={2021},
  volume={9},
  pages={1475-1492}
}
Abstract
We introduce a simple but flexible mechanism to learn an intermediate plan to ground the generation of abstractive summaries. Specifically, we prepend (or prompt) target summaries with entity chains—ordered sequences of entities mentioned in the summary. Transformer-based sequence-to-sequence models are then trained to generate the entity chain and then continue generating the summary conditioned on the entity chain and the input. We experimented with both pretraining and finetuning…
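The abstract describes the mechanism only at a high level: the gold summary is prefixed with the ordered chain of entities it mentions, so the model learns to emit the plan first and then the summary conditioned on it. A minimal sketch of how such training targets could be built is given below, assuming spaCy NER for entity extraction and illustrative marker tokens; the exact markers and separators used by the paper are not reproduced here.

# Sketch: construct entity-chain "planning" targets for seq2seq training.
# Assumptions: spaCy NER for entity extraction; PLAN_TOKEN/SUMMARY_TOKEN and
# the "|" separator are illustrative, not necessarily the paper's format.
import spacy

nlp = spacy.load("en_core_web_sm")

PLAN_TOKEN = "[ENTITYCHAIN]"   # assumed marker for the plan segment
SUMMARY_TOKEN = "[SUMMARY]"    # assumed marker for the summary segment

def build_target(summary: str) -> str:
    """Prefix the gold summary with the ordered chain of entity mentions."""
    doc = nlp(summary)
    # Entities in order of appearance, de-duplicated while preserving order.
    seen, chain = set(), []
    for ent in doc.ents:
        if ent.text not in seen:
            seen.add(ent.text)
            chain.append(ent.text)
    return f"{PLAN_TOKEN} {' | '.join(chain)} {SUMMARY_TOKEN} {summary}"

if __name__ == "__main__":
    gold = "Jane Smith joined Acme Corp in London in 2019."
    print(build_target(gold))
    # e.g. "[ENTITYCHAIN] Jane Smith | Acme Corp | London | 2019 [SUMMARY] Jane Smith joined ..."

At inference time, the same model can either generate the plan itself or be prompted with a user-supplied entity chain, which is what makes the plan a controllable interface to the summary.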
Question-Based Salient Span Selection for More Controllable Text Summarization
TLDR
A method for incorporating question-answering (QA) signals into a summarization model that identifies salient noun phrases in the input document by automatically generating wh-questions that are answered by the NPs and automatically determining whether those questions are answered in the gold summaries.
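The TLDR above describes generating wh-questions about salient noun phrases and checking whether the gold summary answers them. Below is a rough sketch of the checking step only, using a generic extractive QA model; the hard-coded question, the model choice, and the confidence threshold are assumptions for illustration rather than the cited paper's setup.

# Sketch: check whether a wh-question about a source noun phrase is answered
# by the gold summary. The question is hard-coded here; in the cited work it
# would be generated automatically.
from transformers import pipeline

# Any SQuAD-style extractive QA checkpoint would do for this sketch.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

gold_summary = "Acme Corp appointed Jane Smith as CEO on Monday."
question = "Who was appointed as CEO?"  # stands in for an auto-generated wh-question

result = qa(question=question, context=gold_summary)
# Treat the noun phrase as covered by the summary if the QA model is confident.
is_answered = result["score"] > 0.5  # illustrative threshold
print(result["answer"], is_answered)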

References

Showing 1-10 of 57 references
Constrained Abstractive Summarization: Preserving Factual Consistency with Constrained Generation
Summaries generated by abstractive summarization models are supposed to contain only statements entailed by the source documents. However, state-of-the-art abstractive methods are still prone to hallucinating content that is not entailed by the source.
CTRLsum: Towards Generic Controllable Text Summarization
TLDR
CTRLsum enables users to control multiple aspects of generated summaries by interacting with the summarization system through textual input in the form of a set of keywords or descriptive prompts, and achieves state-of-the-art results on the CNN/DailyMail dataset.
Get To The Point: Summarization with Pointer-Generator Networks
TLDR
A novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways, using a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator.
PAIR: Planning and Iterative Refinement in Pre-trained Transformers for Long Text Generation
TLDR
This work presents PAIR, a content-controlled text generation framework with planning and iterative refinement, built upon the large pre-trained model BART, and proposes a refinement algorithm that gradually improves generation quality within the sequence-to-sequence framework.
On Faithfulness and Factuality in Abstractive Summarization
TLDR
It is found that neural abstractive summarization models are highly prone to hallucinating content that is unfaithful to the input document, and that textual entailment measures correlate better with faithfulness than standard metrics, potentially leading the way to improved automatic evaluation metrics as well as training and decoding criteria.
Multi-Reward Reinforced Summarization with Saliency and Entailment
TLDR
This work addresses three important aspects of a good summary via a reinforcement learning approach with two novel reward functions, ROUGESal and Entail, on top of a coverage-based baseline, and shows strong improvements when these rewards are combined with traditional ROUGE-based rewards.
Data-to-Text Generation with Content Selection and Planning
TLDR
This work presents a neural network architecture which incorporates content selection and planning without sacrificing end-to-end training and shows that this model outperforms strong baselines improving the state-of-the-art on the recently released RotoWire dataset.
Controllable Abstractive Summarization
TLDR
A neural summarization model with a simple but effective mechanism that enables users to specify high-level attributes in order to control the shape of the final summaries to better suit their needs.
Evaluating the Factual Consistency of Abstractive Text Summarization
TLDR
A weakly-supervised, model-based approach for verifying factual consistency and identifying conflicts between source documents and a generated summary substantially outperforms previous models, including those trained with strong supervision using standard datasets for natural language inference and fact checking.
Sample Efficient Text Summarization Using a Single Pre-Trained Transformer
TLDR
This work uses a pre-trained decoder-only network, where the same Transformer LM both encodes the source and generates the summary, and substantially improves over pre-trained Transformer encoder-decoder networks in limited-data settings.