The Summary Loop: Learning to Write Abstractive Summaries Without Examples

@article{Laban2020TheSL,
  title={The Summary Loop: Learning to Write Abstractive Summaries Without Examples},
  author={Philippe Laban and Andrew Hsi and John F. Canny and Marti A. Hearst},
  journal={ArXiv},
  year={2020},
  volume={abs/2105.05361}
}
This work presents a new approach to unsupervised abstractive summarization based on maximizing a combination of coverage and fluency for a given length constraint. It introduces a novel method that encourages the inclusion of key terms from the original document into the summary: key terms are masked out of the original document and must be filled in by a coverage model using the current generated summary. A novel unsupervised training procedure leverages this coverage model along with a…
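The masking-and-fill-in idea can be illustrated with a toy sketch. The real system uses a trained coverage model to fill in the masks; here, keyword presence in the candidate summary stands in for that model, and `mask_keywords` / `coverage_score` are illustrative names, not the paper's API:

```python
import re

def mask_keywords(document, keywords, mask="<MASK>"):
    """Replace each keyword occurrence in the document with a mask token."""
    masked = document
    for kw in keywords:
        masked = re.sub(rf"\b{re.escape(kw)}\b", mask, masked, flags=re.IGNORECASE)
    return masked

def coverage_score(summary, keywords):
    """Toy proxy for coverage: fraction of masked keywords recoverable
    from the candidate summary (standing in for a learned fill-in model)."""
    if not keywords:
        return 0.0
    hits = sum(1 for kw in keywords if kw.lower() in summary.lower())
    return hits / len(keywords)

doc = "A coverage model restores masked key terms using the summary."
kws = ["coverage", "masked", "summary"]
masked_doc = mask_keywords(doc, kws)
score = coverage_score("The summary guides a coverage model.", kws)
```

A summary that mentions more of the document's key terms lets the filler recover more masks, so this score rewards informative summaries without needing reference summaries.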
Better Highlighting: Creating Sub-Sentence Summary Highlights
This paper presents a new method to produce self-contained highlights that are understandable on their own to avoid confusion, and combines determinantal point processes and deep contextualized representations to identify an optimal set of sub-sentence segments that are both important and non-redundant to form summary highlights.
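The determinantal-point-process selection step can be sketched with greedy MAP inference over a small kernel matrix, where the diagonal encodes importance and off-diagonal entries encode redundancy. This is a generic DPP sketch under assumed toy values, not the paper's implementation:

```python
import numpy as np

def greedy_dpp_map(kernel, k):
    """Greedy MAP inference for a DPP: repeatedly add the item that most
    increases log-det of the selected submatrix, trading importance
    (diagonal) against redundancy (off-diagonal similarity)."""
    selected = []
    for _ in range(k):
        best_item, best_logdet = None, -np.inf
        for i in range(kernel.shape[0]):
            if i in selected:
                continue
            idx = np.ix_(selected + [i], selected + [i])
            sign, logdet = np.linalg.slogdet(kernel[idx])
            if sign > 0 and logdet > best_logdet:
                best_item, best_logdet = i, logdet
        selected.append(best_item)
    return selected

# Items 0 and 1 are near-duplicates; item 2 is distinct.
K = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
picked = greedy_dpp_map(K, 2)
```

Because items 0 and 1 are highly similar, the determinant of their joint submatrix is small, so the greedy step prefers the distinct item 2 — exactly the important-but-non-redundant behavior the summary describes.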
Transductive Learning for Abstractive News Summarization
This work proposes the first application of transductive learning to summarization, utilizing the input document's summarizing sentences to construct references for learning at test time, and shows that its summaries become more abstractive and coherent.
Keep it Simple: Unsupervised Simplification of Multi-Paragraph Text
A new approach to unsupervised text simplification that learns to balance a reward across three properties: fluency, salience, and simplicity. The resulting system can help people complete a comprehension task an average of 18% faster while retaining accuracy.
Unsupervised Class-Specific Abstractive Summarization of Customer Reviews
Large-scale unsupervised abstractive summarization is sorely needed to automatically scan millions of customer reviews in today’s fast-paced e-commerce landscape. We address a key challenge in…
Controllable Summarization with Constrained Markov Decision Process
This work proposes a novel training framework based on Constrained Markov Decision Process (CMDP), which conveniently includes a reward function along with a set of constraints, to facilitate better summarization control.
ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive Summarization with Argument Mining
Annotation protocols motivated by an issues–viewpoints–assertions framework are designed to crowdsource four new datasets on diverse online conversation forms of news comments, discussion forums, community question answering forums, and email threads, and benchmark state-of-the-art models on these datasets and analyze characteristics associated with the data.
Multi-Perspective Abstractive Answer Summarization
This work introduces a novel dataset creation method to automatically create multi-perspective, bullet-point abstractive summaries from an existing CQA forum and proposes a multi-reward optimization technique coupled with a sentence-relevance prediction multi-task loss.
Few-Shot Text Generation with Natural Language Instructions
Providing pretrained language models with simple task descriptions in natural language enables them to solve some tasks in a fully unsupervised fashion. Moreover, when combined with regular learning…
Reinforcement Learning for Abstractive Question Summarization with Question-aware Semantic Rewards
A reinforcement learning-based framework for abstractive question summarization is introduced, and two novel rewards obtained from the downstream tasks of question-type identification and question-focus recognition are proposed to regularize the question generation model.
Learning Opinion Summarizers by Selecting Informative Reviews
The task is formulated as jointly learning to select informative subsets of reviews and to summarize the opinions expressed in those subsets; selecting informative reviews results in improved summary quality and reduced hallucination.

References

Showing 1–10 of 32 references.
Reinforced Extractive Summarization with Question-Focused Rewards
This paper converts human abstracts to a set of Cloze-style comprehension questions and introduces a question-focused reward function to promote concise, fluent, and informative summaries.
Bottom-Up Abstractive Summarization
This work explores the use of data-efficient content selectors to over-determine phrases in a source document that should be part of the summary, and shows that this approach improves the ability to compress text, while still generating fluent summaries.
Concept Pointer Network for Abstractive Summarization
A concept pointer network that leverages knowledge-based, context-aware conceptualizations to derive an extended set of candidate concepts and points to the most appropriate choice using both the concept set and original source text.
Abstractive Document Summarization without Parallel Data
This work develops an abstractive summarization system that relies only on large collections of example summaries and non-matching articles, consisting of an unsupervised sentence extractor that selects salient sentences to include in the final summary, as well as a sentence abstractor that is trained on pseudo-parallel and synthetic data.
Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting
An accurate and fast summarization model that first selects salient sentences and then rewrites them abstractively to generate a concise overall summary is proposed, which achieves the new state-of-the-art on all metrics on the CNN/Daily Mail dataset, as well as significantly higher abstractiveness scores.
Answers Unite! Unsupervised Metrics for Reinforced Summarization Models
This work explores and proposes alternative evaluation measures; the reported human-evaluation analysis shows that the proposed metrics, based on Question Answering, compare favorably to ROUGE, with the additional property of not requiring reference summaries.
A Deep Reinforced Model for Abstractive Summarization
A neural network model with a novel intra-attention that attends over the input and the continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL) to produce higher-quality summaries.
Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
This work proposes several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling keywords, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time.
Get To The Point: Summarization with Pointer-Generator Networks
A novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways, using a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator.
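The copy mechanism described above mixes two distributions at each decoding step. A minimal numpy sketch of that mixture, with toy values assumed for the generation distribution, attention weights, and vocabulary ids:

```python
import numpy as np

def pointer_generator_dist(p_vocab, attention, source_ids, p_gen):
    """Final word distribution of a pointer-generator decoder step:
    p_gen * P_vocab(w) + (1 - p_gen) * (attention mass on source
    positions where w appears)."""
    copy = np.zeros_like(p_vocab)
    np.add.at(copy, source_ids, attention)  # accumulate repeated source tokens
    return p_gen * p_vocab + (1.0 - p_gen) * copy

p_vocab = np.full(5, 0.2)          # uniform generation distribution
attention = np.array([0.5, 0.5])   # attention over two source tokens
source_ids = np.array([3, 3])      # both source tokens are vocab id 3
final = pointer_generator_dist(p_vocab, attention, source_ids, p_gen=0.6)
```

Because both distributions sum to 1, the mixture does too; a source word that attracts attention gets extra probability even if the generator alone would rarely produce it.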
BottleSum: Unsupervised and Self-supervised Sentence Summarization using the Information Bottleneck Principle
A novel approach to unsupervised sentence summarization is proposed by mapping the Information Bottleneck principle to a conditional language modelling objective: given a sentence, the approach seeks a compressed sentence that can best predict the next sentence.