Neural Abstractive Text Summarization with Sequence-to-Sequence Models

@article{Shi2021NeuralAT,
  title={Neural Abstractive Text Summarization with Sequence-to-Sequence Models},
  author={Tian Shi and Yaser Keneshloo and Naren Ramakrishnan and Chandan K. Reddy},
  journal={ACM Transactions on Data Science},
  year={2021},
  volume={2},
  pages={1--37}
}
In the past few years, neural abstractive text summarization with sequence-to-sequence (seq2seq) models has gained a lot of popularity. Many of these models were first proposed for language modeling and generation tasks, such as machine translation, and were later applied to abstractive text summarization. Therefore, we also provide a brief review of these models.

Bidirectional LSTM Networks for Abstractive Text Summarization

This paper focuses on implementing a variant of abstractive summarization using Long Short-Term Memory (LSTM) networks by modeling the summarization task as a Sequence-to-Sequence (Seq2Seq) problem.
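As a rough illustration of this standard recipe, here is a minimal bidirectional-LSTM encoder paired with an LSTM decoder in PyTorch; all class, parameter, and dimension names are my own illustrative choices, not the paper's implementation.

```python
import torch
import torch.nn as nn

class Seq2SeqSummarizer(nn.Module):
    """Minimal LSTM encoder-decoder sketch; vocab and dimensions are illustrative."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional encoder reads the source article.
        self.encoder = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        # Unidirectional decoder generates the summary token by token.
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.bridge_h = nn.Linear(2 * hid_dim, hid_dim)   # merge fwd/bwd states
        self.bridge_c = nn.Linear(2 * hid_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src_ids, tgt_ids):
        _, (h, c) = self.encoder(self.embed(src_ids))
        # h, c: (2, batch, hid) -> concatenate directions and project down.
        h = torch.tanh(self.bridge_h(torch.cat([h[0], h[1]], dim=-1))).unsqueeze(0)
        c = torch.tanh(self.bridge_c(torch.cat([c[0], c[1]], dim=-1))).unsqueeze(0)
        dec_out, _ = self.decoder(self.embed(tgt_ids), (h, c))
        return self.out(dec_out)   # (batch, tgt_len, vocab) logits
```

Training such a model would minimize token-level cross-entropy between these logits and the reference summary shifted by one position (teacher forcing).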

Abstractive Text Summarization: Enhancing Sequence-to-Sequence Models Using Word Sense Disambiguation and Semantic Content Generalization

A novel framework is presented that combines sequence-to-sequence neural text summarization with structure- and semantics-based methodologies, capable of dealing with the problem of out-of-vocabulary or rare words and improving the performance of the deep learning models.

Faster Transformers for Document Summarization

This paper introduces two novel architectural changes to attention in the encoder: a multi-head compressed attention module that groups words using convolutions, and a strided neighborhood attention that relaxes and reduces long-term dependencies.
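The exact attention pattern is specific to that paper, but the general idea of a strided local neighborhood can be illustrated with a simple attention mask. The sketch below (PyTorch; the window and stride values and the function name are illustrative assumptions) builds a boolean mask in which each token attends only to nearby positions sampled at a fixed stride.

```python
import torch

def strided_neighborhood_mask(seq_len, window=64, stride=4):
    """Boolean attention mask (True = attend): each position sees a local
    window of neighbors sampled every `stride` tokens instead of the full
    sequence, cutting attention cost on long documents."""
    pos = torch.arange(seq_len)
    dist = (pos[None, :] - pos[:, None]).abs()   # pairwise token distances
    in_window = dist <= window                   # restrict to local neighborhood
    on_stride = dist % stride == 0               # keep every stride-th neighbor
    return in_window & on_stride
```

Such a mask would typically be applied to the attention scores with `masked_fill` before the softmax.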

Abstractive Text Summarization based on Language Model Conditioning and Locality Modeling

This work presents a new method of BERT-windowing, which allows chunk-wise processing of texts longer than the BERT window size, and studies how locality modeling, i.e., the explicit restriction of calculations to the local context, affects the summarization ability of the Transformer.
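Chunk-wise processing of long inputs can be sketched without committing to the paper's exact windowing scheme. The function below (PyTorch; `encode_fn`, the window size, and the overlap are assumptions) encodes overlapping windows with a fixed-size encoder and averages the overlapping hidden states when stitching them back together.

```python
import torch

def windowed_encode(token_ids, encode_fn, window=512, overlap=64):
    """Encode a long token sequence chunk-wise with a fixed-size encoder.
    `encode_fn(ids)` is assumed to map (1, <=window) ids to (1, len, hid)
    hidden states; overlapping regions are averaged when windows are merged."""
    n = token_ids.size(0)
    hid, counts = None, None
    step = window - overlap
    for start in range(0, n, step):
        chunk = token_ids[start:start + window].unsqueeze(0)
        states = encode_fn(chunk).squeeze(0)                 # (chunk_len, hid)
        if hid is None:
            hid = token_ids.new_zeros(n, states.size(-1), dtype=states.dtype)
            counts = token_ids.new_zeros(n, 1, dtype=states.dtype)
        hid[start:start + states.size(0)] += states
        counts[start:start + states.size(0)] += 1
        if start + window >= n:
            break
    return hid / counts                                      # (n, hid) document encoding
```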

Abstractive Text Summarization Using Attention-based Stacked LSTM

  • Mimansha Singh, Vrinda Yadav
  • Computer Science
    2022 Fifth International Conference on Computational Intelligence and Communication Technologies (CCICT)
  • 2022
An attention-based stacked LSTM sequence-to-sequence model is proposed to generate summaries with an abstractive approach on the Amazon Fine Food Reviews dataset, with the goal of producing a short, understandable, and fluent abstractive summary of any given text.

Deep Reinforcement Learning for Sequence-to-Sequence Models

This work provides the source code for implementing most of the RL models discussed in this paper to support the complex task of abstractive text summarization and provides some targeted experiments for these RL models, both in terms of performance and training time.
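Many of these RL approaches rest on a self-critical policy-gradient loss in which the model's own greedy output serves as the reward baseline. A hedged sketch of that loss (PyTorch; `model.sample`, `model.greedy`, and `rouge_reward` are assumed helper functions, not the released code):

```python
import torch

def self_critical_loss(model, src_ids, ref_summary, rouge_reward):
    """Self-critical policy gradient: the reward of a sampled summary is
    baselined by the reward of the greedy summary from the same model."""
    sampled_ids, log_probs = model.sample(src_ids)   # stochastic decode + per-token log-probs
    with torch.no_grad():
        greedy_ids = model.greedy(src_ids)           # baseline decode, no gradients
    advantage = rouge_reward(sampled_ids, ref_summary) - \
                rouge_reward(greedy_ids, ref_summary)
    # Maximize expected reward => minimize -(advantage * sequence log-likelihood).
    return -(advantage * log_probs.sum(dim=-1)).mean()
```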

Using Question Answering Rewards to Improve Abstractive Summarization

Results show that question-answering rewards can be used as a general framework to improve neural abstractive summarization, and that the resulting summaries are preferred over those produced by general abstractive summarization models more than 30% of the time.

Multi-level shared-weight encoding for abstractive sentence summarization

The proposed model encapsulates the idea of re-examining a piece of text multiple times to grasp the underlying theme and aspects of English grammar before formulating a summary; it generates a more readable (fluent) summary, as measured by ROUGE-L, compared to multiple benchmark models while preserving similar levels of informativeness.

Seq2seq Deep Learning Method for Summary Generation by LSTM with Two-way Encoder and Beam Search Decoder

  • G. Szücs, Dorottya Huszti
  • Computer Science
    2019 IEEE 17th International Symposium on Intelligent Systems and Informatics (SISY)
  • 2019
A deep neural network architecture is proposed for abstractive summarization, aiming to generate brief summaries from long text documents using a two-way (bidirectional) encoder built from a state-of-the-art recurrent neural network type, LSTM (Long Short-Term Memory), together with a beam search decoder.
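The beam search decoder referenced in the title can be sketched independently of the encoder. Below is a minimal, generic version (PyTorch; `step_fn`, the token ids, and the beam size are illustrative assumptions) that keeps the `beam_size` best partial hypotheses at each step.

```python
import torch

def beam_search(step_fn, bos_id, eos_id, beam_size=4, max_len=60):
    """Generic beam search. `step_fn(prefix_ids)` is an assumed callable
    returning next-token log-probabilities of shape (vocab,) given the
    decoded prefix; the model itself is abstracted away."""
    beams = [([bos_id], 0.0)]                              # (tokens, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            if tokens[-1] == eos_id:                       # finished hypothesis
                candidates.append((tokens, score))
                continue
            log_probs = step_fn(torch.tensor(tokens))      # (vocab,)
            top_lp, top_ids = log_probs.topk(beam_size)
            for lp, tok in zip(top_lp.tolist(), top_ids.tolist()):
                candidates.append((tokens + [tok], score + lp))
        # Keep only the `beam_size` highest-scoring hypotheses.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
        if all(t[-1] == eos_id for t, _ in beams):
            break
    return beams[0][0]                                     # best hypothesis
```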

Improving the readability and saliency of abstractive text summarization using combination of deep neural networks equipped with auxiliary attention mechanism

A novel abstractive summarization model is proposed in this paper which utilizes a combination of convolutional neural networks and long short-term memory networks, integrated with auxiliary attention in its encoder, to increase the saliency and coherency of generated summaries.
...

References

Showing 1-10 of 164 references

Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond

This work proposes several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling keywords, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time.

Diverse Beam Search for Increased Novelty in Abstractive Summarization

A novel method is presented, that relies on a diversity factor in computing the neural network loss, to improve the diversity of the summaries generated by any neural abstractive model implementing beam search.
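The cited work folds a diversity factor into the training loss; the more common decode-time variant of the same idea, a group-wise diversity penalty during beam search, is easier to sketch and gives the intuition. Below is a hedged illustration (PyTorch; the penalty strength and function name are my own choices, not the paper's method).

```python
import torch

def diverse_scores(log_probs, prev_group_tokens, diversity_strength=0.5):
    """Decode-time diversity: penalize tokens already chosen by earlier beam
    groups at this step, pushing later groups toward different wordings.
    `log_probs` is a (vocab,) tensor for the current group's beam."""
    penalty = torch.zeros_like(log_probs)
    for tok in prev_group_tokens:        # tokens picked by earlier groups
        penalty[tok] += 1.0
    return log_probs - diversity_strength * penalty
```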

Bottom-Up Abstractive Summarization

This work explores the use of data-efficient content selectors to over-determine phrases in a source document that should be part of the summary, and shows that this approach improves the ability to compress text, while still generating fluent summaries.

A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization

A deep learning approach to tackle the automatic summarization tasks by incorporating topic information into the convolutional sequence-to-sequence (ConvS2S) model and using self-critical sequence training (SCST) for optimization, which demonstrates the superiority of the proposed method in abstractive summarization.

Text Summarization with Pretrained Encoders

This paper introduces a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences and proposes a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two.
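The split-optimizer fine-tuning schedule mentioned above can be made concrete. Below is a minimal sketch (PyTorch; the model structure, learning rates, and training loop are my own assumptions, not the paper's released code) in which the pretrained encoder and the freshly initialized decoder get separate Adam optimizers with different learning rates.

```python
import torch

def train_with_split_optimizers(model, loader, enc_lr=2e-5, dec_lr=1e-3):
    """Fine-tune a summarizer whose encoder is pretrained (e.g. BERT) and
    whose decoder is randomly initialized, using separate optimizers so the
    decoder can take larger steps. `model` is assumed to expose `.encoder`
    and `.decoder` submodules and to return a scalar loss for a batch."""
    enc_opt = torch.optim.Adam(model.encoder.parameters(), lr=enc_lr)
    dec_opt = torch.optim.Adam(model.decoder.parameters(), lr=dec_lr)
    for batch in loader:
        loss = model(batch)              # token-level cross-entropy, assumed
        enc_opt.zero_grad()
        dec_opt.zero_grad()
        loss.backward()
        enc_opt.step()
        dec_opt.step()
```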

Towards Improving Abstractive Summarization via Entailment Generation

The domain mismatch between the entailment (captions) and summarization (news) datasets suggests that the model is learning some domain-agnostic inference skills.

Get To The Point: Summarization with Pointer-Generator Networks

A novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways, using a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator.
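The pointer-generator copy mechanism reduces to mixing two distributions with a learned gate. A minimal sketch of that final-distribution computation (PyTorch; tensor names and shapes are assumptions, and the extended vocabulary for out-of-vocabulary source words is omitted for brevity):

```python
import torch

def pointer_generator_dist(vocab_dist, attn_dist, src_ids, p_gen, vocab_size):
    """Blend generator and copy distributions, pointer-generator style:
    P(w) = p_gen * P_vocab(w) + (1 - p_gen) * (attention mass on source
    positions holding w). Shapes: vocab_dist (batch, vocab), attn_dist and
    src_ids (batch, src_len), p_gen (batch, 1)."""
    copy = torch.zeros(src_ids.size(0), vocab_size, device=vocab_dist.device)
    # scatter_add_ accumulates attention mass onto each source token's id.
    copy.scatter_add_(1, src_ids, attn_dist)
    return p_gen * vocab_dist + (1.0 - p_gen) * copy
```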

Deep Reinforcement Learning for Sequence-to-Sequence Models

This work provides the source code for implementing most of the RL models discussed in this paper to support the complex task of abstractive text summarization and provides some targeted experiments for these RL models, both in terms of performance and training time.

PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization

This work proposes pre-training large Transformer-based encoder-decoder models on massive text corpora with a new self-supervised objective, PEGASUS, and demonstrates it achieves state-of-the-art performance on all 12 downstream datasets measured by ROUGE scores.
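The PEGASUS objective, gap-sentence generation, masks whole sentences in the input and asks the model to generate them as a pseudo-summary. A toy sketch of how one such pre-training example might be constructed (plain Python; sentence selection by ROUGE importance is abstracted into the `select` argument, which is an assumption on my part):

```python
def gap_sentence_example(sentences, select, mask_token="<mask_1>"):
    """Build a PEGASUS-style pre-training example: sentences at indices in
    `select` are removed from the input (replaced by a mask token) and
    concatenated to form the pseudo-summary target."""
    source = [mask_token if i in select else s for i, s in enumerate(sentences)]
    target = [sentences[i] for i in sorted(select)]
    return " ".join(source), " ".join(target)

# Example: mask the 2nd sentence of a 3-sentence document.
src, tgt = gap_sentence_example(
    ["Pegasus is a seq2seq model.", "It is pre-trained on web text.",
     "It is fine-tuned for summarization."], select={1})
```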

A Neural Attention Model for Abstractive Sentence Summarization

This work proposes a fully data-driven approach to abstractive sentence summarization by utilizing a local attention-based model that generates each word of the summary conditioned on the input sentence.
...