A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents

@inproceedings{Cohan2018ADA,
  title={A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents},
  author={Arman Cohan and Franck Dernoncourt and Doo Soon Kim and Trung Bui and Seokhwan Kim and W. Chang and Nazli Goharian},
  booktitle={NAACL},
  year={2018}
}
Neural abstractive summarization models have led to promising results in summarizing relatively short documents. We propose a model for effectively summarizing longer documents and evaluate it on two new datasets of scientific papers. Our approach consists of a new hierarchical encoder that models the discourse structure of a document and an attentive discourse-aware decoder to generate the summary. Empirical results on two large-scale datasets of scientific papers show that our model significantly outperforms state-of-the-art models.
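
As a concrete, purely illustrative sketch of this idea, the NumPy snippet below rescales word-level attention weights by a section-level ("discourse") weight before forming the decoder context vector, which is the core of discourse-aware attention as the abstract describes it. The dot-product scoring, shapes, and variable names are assumptions, not the authors' implementation.

```python
# Minimal sketch of discourse-aware attention: word-level attention is
# rescaled by section-level ("discourse") attention, then renormalized
# over all words to form the decoder's context vector.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

n_sections, words_per_section, hidden = 3, 5, 8

# Hypothetical encoder outputs: one state per word, grouped by section,
# plus one state per section (e.g. from a section-level RNN).
word_states = rng.normal(size=(n_sections, words_per_section, hidden))
section_states = word_states.mean(axis=1)   # stand-in for a section encoder
decoder_state = rng.normal(size=(hidden,))  # current decoder hidden state

beta = softmax(section_states @ decoder_state)         # section weights (3,)
alpha = softmax(word_states @ decoder_state, axis=-1)  # word weights (3, 5)

# Discourse-aware weights: each word's score is scaled by its section's
# weight, then renormalized over the whole document.
weights = beta[:, None] * alpha
weights /= weights.sum()

context = np.einsum('sw,swh->h', weights, word_states)  # decoder context
print(weights.sum(), context.shape)  # ~1.0 (8,)
```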

Citations

Abstractive Summarization: A Survey of the State of the Art

TLDR
This paper surveys existing approaches to abstractive summarization, focusing on the recently developed neural approaches.

Discourse-Aware Unsupervised Summarization for Long Scientific Documents

TLDR
This work proposes an unsupervised graph-based ranking model for extractive summarization of long scientific documents, and suggests that patterns in the discourse structure are a strong signal for determining importance in scientific articles.
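
As a toy illustration of graph-based ranking with a discourse signal, the sketch below scores sentences by similarity-graph centrality and down-weights edges that cross section boundaries. The embeddings, the degree-centrality scoring, and the cross-section discount are assumptions made for illustration, not the paper's method.

```python
# Toy graph-based ranking for extractive summarization: sentences are
# nodes, cosine similarities are edge weights, and degree centrality
# scores importance. Edges crossing section boundaries are discounted
# as a stand-in for a discourse-structure signal (an assumption here).
import numpy as np

def rank_sentences(emb, section_of, cross_section_discount=0.5):
    """emb: (n, d) sentence embeddings; section_of: (n,) section ids."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T
    np.fill_diagonal(sim, 0.0)
    same = section_of[:, None] == section_of[None, :]
    sim = np.where(same, sim, cross_section_discount * sim)
    return sim.sum(axis=1)  # degree centrality as an importance score

rng = np.random.default_rng(1)
emb = rng.normal(size=(6, 16))           # 6 hypothetical sentence embeddings
sections = np.array([0, 0, 1, 1, 2, 2])  # which section each sentence is in
scores = rank_sentences(emb, sections)
print(np.argsort(-scores))  # sentence indices, most central first
```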

StructSum: Summarization via Structured Representations

TLDR
This work proposes a framework based on document-level structure induction to address challenges of abstractive summarization; it improves coverage of content in the source documents, generates more abstractive summaries containing more novel n-grams, and incorporates interpretable sentence-level structures, while performing on par with standard baselines.

Extractive Summarization of Long Documents by Combining Global and Local Context

TLDR
A novel neural single-document extractive summarization model for long documents incorporates both the global context of the whole document and the local context within the current topic, and outperforms previous work, including both extractive and abstractive models.

Neural Abstractive Summarization with Structural Attention

TLDR
This work presents a hierarchical encoder based on structural attention to model inter-sentence and inter-document dependencies, and shows that the proposed model achieves significant improvements over the baseline in both single- and multi-document summarization settings.

Predicting Discourse Trees from Transformer-based Neural Summarizers

TLDR
Experiments across models and datasets reveal that the summarizer learns both dependency- and constituency-style discourse information, which is typically encoded in a single head, covering long- and short-distance discourse dependencies.

Extreme Summarization with Topic-Aware Convolutional Neural Networks

TLDR
This work introduces extreme summarization, a new single-document summarization task which aims at creating a short, one-sentence news summary answering the question “What is the article about?” and proposes a novel abstractive model which is conditioned on the article’s topics and based entirely on convolutional neural networks.

Enriching and Controlling Global Semantics for Text Summarization

TLDR
This paper introduces a neural topic model empowered with normalizing flow to capture the global semantics of the document, which are then integrated into the summarization model, and introduces a mechanism to control the amount of global semantics supplied to the text generation module.

Multi-Granularity Interaction Network for Extractive and Abstractive Multi-Document Summarization

TLDR
This paper employs attention mechanisms to let semantic representations of different granularities interact, which helps capture multi-granularity key information and improves the performance of both abstractive and extractive summarization.

BASS: Boosting Abstractive Summarization with Unified Semantic Graph

TLDR
BASS is presented, a novel framework for Boosting Abstractive Summarization based on a unified Semantic graph, which aggregates co-referent phrases distributed across a long range of context and conveys rich relations between phrases.
...

References

Showing 1-10 of 31 references

A Neural Attention Model for Abstractive Sentence Summarization

TLDR
This work proposes a fully data-driven approach to abstractive sentence summarization by utilizing a local attention-based model that generates each word of the summary conditioned on the input sentence.

Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond

TLDR
This work proposes several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling key-words, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time.

Generating Wikipedia by Summarizing Long Sequences

TLDR
It is shown that generating English Wikipedia articles can be approached as multi-document summarization of source documents, and a neural abstractive model is introduced that can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles.

A Deep Reinforced Model for Abstractive Summarization

TLDR
A neural network model with a novel intra-attention that attends over the input and the continuously generated output separately, combined with a new training method that mixes standard supervised word prediction with reinforcement learning (RL), produces higher-quality summaries.
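
The combined objective can be sketched in a few lines: a self-critical policy-gradient term (the reward of a sampled summary minus a greedy baseline's reward, weighting the sample's log probability) is blended with the usual maximum-likelihood loss. The mixing weight gamma and the toy reward values below are illustrative placeholders standing in for the paper's exact setup.

```python
# Schematic of a mixed ML + RL training objective of the kind the TLDR
# describes. Rewards would come from e.g. ROUGE; here they are toy numbers.
def mixed_loss(ml_loss, sampled_reward, baseline_reward,
               sampled_logprob, gamma=0.98):
    # Self-critical term: push up samples that beat the greedy baseline.
    rl_loss = -(sampled_reward - baseline_reward) * sampled_logprob
    return gamma * rl_loss + (1.0 - gamma) * ml_loss

# Toy numbers: the sampled summary scores slightly above the baseline,
# so its log probability is pushed up by the RL term.
print(mixed_loss(ml_loss=2.3, sampled_reward=0.41,
                 baseline_reward=0.38, sampled_logprob=-12.7))
```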

Cascaded Attention based Unsupervised Information Distillation for Compressive Summarization

TLDR
A cascaded attention-based unsupervised model to estimate salience information from the text for compressive multi-document summarization, which achieves better results than state-of-the-art methods.

Coarse-to-Fine Attention Models for Document Summarization

TLDR
A novel coarse-to-fine attention model that hierarchically reads a document, using coarse attention to select top-level chunks of text and fine attention to read the words of the chosen chunks, which achieves the desired behavior of sparsely attending to subsets of the document for generation.
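
The selection behavior can be illustrated in a few lines of NumPy: coarse attention scores whole chunks, a hard top-k keeps the most relevant ones, and fine attention then reads only their words. The hard top-k used here is a simplification; the paper trains the selection itself, which this sketch omits.

```python
# Toy coarse-to-fine attention: select top-k chunks with coarse attention,
# then attend over the words of only those chunks with fine attention.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(2)
n_chunks, chunk_len, hidden, k = 8, 10, 16, 2

word_states = rng.normal(size=(n_chunks, chunk_len, hidden))
chunk_states = word_states.mean(axis=1)  # stand-in chunk representations
query = rng.normal(size=(hidden,))       # decoder state

# Coarse attention: keep only the k most relevant chunks.
coarse = softmax(chunk_states @ query)
top = np.argsort(-coarse)[:k]

# Fine attention: read only the words inside the selected chunks.
selected = word_states[top].reshape(-1, hidden)
fine = softmax(selected @ query)
context = fine @ selected
print(top, context.shape)  # e.g. [3 5] (16,)
```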

Abstractive Sentence Summarization with Attentive Recurrent Neural Networks

TLDR
A conditional recurrent neural network (RNN) that generates a summary of an input sentence and significantly outperforms the recently proposed state-of-the-art method on the Gigaword corpus while performing competitively on the DUC-2004 shared task.

Beyond SumBasic: Task-focused summarization with sentence simplification and lexical expansion

Challenges in Data-to-Document Generation

TLDR
A new, large-scale corpus of data records paired with descriptive documents is introduced, a series of extractive evaluation methods for analyzing performance are proposed, and baseline results are obtained using current neural generation methods.

Using Hidden Markov Modeling to Decompose Human-Written Summaries

TLDR
A hidden Markov model solution to the problem of summary sentence decomposition is proposed, which can lead to better text generation techniques for summarization.
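
A toy version of that HMM view: summary words are the observations, hidden states are document positions, and Viterbi decoding recovers the most likely source position for each summary word, i.e. where it was cut from. The emission and transition scores below are illustrative stand-ins, not the paper's estimates.

```python
# Toy HMM decomposition of a summary: align each summary word to a
# document position via Viterbi decoding over match/transition scores.
import numpy as np

def viterbi(log_emit, log_trans):
    """log_emit: (T, S) per-step state scores; log_trans: (S, S) prev->cur."""
    T, S = log_emit.shape
    score = log_emit[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans  # (prev, cur)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

doc = "the cat sat on the mat near the door".split()
summary = "cat sat near door".split()

# Emission: 0 if the document word matches the summary word, else a
# penalty; transition: prefer moving forward by a short distance.
S = len(doc)
log_emit = np.array([[0.0 if d == w else -5.0 for d in doc]
                     for w in summary])
pos = np.arange(S)
log_trans = -0.5 * np.abs(pos[None, :] - pos[:, None] - 1.0)

print(viterbi(log_emit, log_trans))  # [1, 2, 6, 8]
```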