• Corpus ID: 195776396

Cooperative Generator-Discriminator Networks for Abstractive Summarization with Narrative Flow

@article{Gabriel2019CooperativeGN,
  title={Cooperative Generator-Discriminator Networks for Abstractive Summarization with Narrative Flow},
  author={Saadia Gabriel and Antoine Bosselut and Ari Holtzman and Kyle Lo and Asli Celikyilmaz and Yejin Choi},
  journal={ArXiv},
  year={2019},
  volume={abs/1907.01272}
}
We introduce Cooperative Generator-Discriminator Networks (Co-opNet), a general framework for abstractive summarization with distinct modeling of the narrative flow in the output summary. To promote research toward abstractive summarization with narrative flow, we first introduce a new dataset, Scientific Abstract SummarieS (SASS), where the abstracts are used as proxy gold summaries for scientific articles.
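As a rough sketch of how such a cooperative generator-discriminator setup can be wired together (a minimal illustration only: the `generator.sample` and `flow_discriminator.score` interfaces and the log-linear combination are assumptions, not the paper's exact formulation):

```python
def coop_rerank(source, generator, flow_discriminator, n_candidates=8, weight=1.0):
    """Return the candidate summary that maximizes a weighted mix of the
    generator's log-likelihood and the discriminator's narrative-flow score."""
    # generator.sample is assumed to return (summary_text, log_prob) pairs
    candidates = generator.sample(source, n=n_candidates)

    def score(candidate):
        text, log_prob = candidate
        return log_prob + weight * flow_discriminator.score(source, text)

    return max(candidates, key=score)[0]
```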
Discriminative Adversarial Search for Abstractive Summarization
TLDR
The results show that even a naive application of DAS improves over state-of-the-art methods, with further gains from discriminator retraining, and that DAS can also be effective for cross-domain adaptation.
To Beam Or Not To Beam: That is a Question of Cooperation for Language GANs
TLDR
This paper shows that the SelfGAN framework, built on this cooperative principle, outperforms Teacher Forcing and obtains state-of-the-art results on two challenging tasks, Summarization and Question Generation.
Don’t Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training
TLDR
This work shows how a range of failure modes in generative dialogue models, including inconsistency, can be addressed by extending the recently introduced unlikelihood loss to these cases, and demonstrates the efficacy of this approach across several dialogue tasks.
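For reference, the token-level unlikelihood objective introduced by Welleck et al., which this work extends to dialogue, penalizes a set of negative candidate tokens C_t at each decoding step and is mixed with the standard likelihood loss:

```latex
\mathcal{L}^{t}_{\text{UL}} = -\sum_{c \in \mathcal{C}_t} \log\bigl(1 - p_\theta(c \mid x_{<t})\bigr),
\qquad
\mathcal{L} = \mathcal{L}_{\text{MLE}} + \alpha\,\mathcal{L}_{\text{UL}}
```

Here C_t contains the tokens to be made unlikely (e.g., tokens that would introduce a repetition or contradiction), and α trades off the two terms.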
Mark-Evaluate: Assessing Language Generation using Population Estimation Methods
TLDR
A family of metrics to assess language generation, derived from population estimation methods widely used in ecology; these rely on mark-recapture and maximum-likelihood techniques that have been applied over the past several decades to estimate the size of closed populations in the wild.
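As a concrete anchor for the mark-recapture idea, the classic Lincoln-Petersen estimator infers the size of a closed population from two sampling passes (how Mark-Evaluate maps generated and reference text onto these samples is not shown here):

```latex
\hat{N} = \frac{n_1 \, n_2}{m_2}
```

where n_1 is the number of individuals marked in the first sample, n_2 the size of the second sample, and m_2 the number of marked individuals recaptured in the second.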
BERTScore: Evaluating Text Generation with BERT
TLDR
This work proposes BERTScore, an automatic evaluation metric for text generation that correlates better with human judgments and provides stronger model selection performance than existing metrics.
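The core of the metric is simple enough to sketch: greedy soft matching of contextual token embeddings under cosine similarity (a minimal version; the full metric adds idf importance weighting and baseline rescaling):

```python
import numpy as np

def bertscore_f1(cand_emb: np.ndarray, ref_emb: np.ndarray) -> float:
    """Greedy-matching F1 over L2-normalized token embeddings.

    cand_emb: [m, d] embeddings of the candidate's tokens.
    ref_emb:  [n, d] embeddings of the reference's tokens.
    """
    sim = cand_emb @ ref_emb.T          # [m, n] pairwise cosine similarities
    precision = sim.max(axis=1).mean()  # best reference match per candidate token
    recall = sim.max(axis=0).mean()     # best candidate match per reference token
    return 2 * precision * recall / (precision + recall)
```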
IIITBH-IITP@CL-SciSumm20, CL-LaySumm20, LongSumm20
TLDR
This paper presents the IIIT Bhagalpur and IIT Patna team's effort to solve three shared tasks, namely CL-SciSumm 2020, CL-LaySumm 2020, and LongSumm 2020, at SDP 2020, and develops a supervised system for the first two tasks.
Social Bias Frames: Reasoning about Social and Power Implications of Language
TLDR
It is found that while state-of-the-art neural models are effective at high-level categorization of whether a given statement projects unwanted social bias, they are not effective at spelling out more detailed explanations in terms of Social Bias Frames.
Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences
TLDR
This work introduces Moral Stories, a crowd-sourced dataset of structured, branching narratives for the study of grounded, goal-oriented social reasoning, and proposes decoding strategies that combine multiple expert models to significantly improve the quality of generated actions, consequences, and norms over strong baselines.
Evaluation of Text Generation: A Survey
TLDR
This paper surveys evaluation methods of natural language generation (NLG) systems that have been developed in the last few years, with a focus on the evaluation of recently proposed NLG tasks and neural NLG models.
...

References

SHOWING 1-10 OF 40 REFERENCES
Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting
TLDR
An accurate and fast summarization model is proposed that first selects salient sentences and then rewrites them abstractively to generate a concise overall summary; it achieves a new state of the art on all metrics on the CNN/Daily Mail dataset, as well as significantly higher abstractiveness scores.
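A minimal sketch of the two-stage pipeline (the `extractor` and `abstractor` objects are hypothetical stand-ins for the paper's reinforcement-learning-trained extractor and sentence-level abstractor):

```python
def summarize(sentences, extractor, abstractor):
    """Two-stage pipeline: pick salient sentences, then rewrite each one."""
    selected = extractor.select(sentences)  # indices of salient sentences
    rewritten = [abstractor.rewrite(sentences[i]) for i in selected]
    return " ".join(rewritten)
```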
A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents
TLDR
This work proposes the first model for abstractive summarization of single, longer-form documents (e.g., research papers), consisting of a new hierarchical encoder that models the discourse structure of a document, and an attentive discourse-aware decoder to generate the summary.
Abstractive Sentence Summarization with Attentive Recurrent Neural Networks
TLDR
A conditional recurrent neural network (RNN) that generates a summary of an input sentence; it significantly outperforms the recently proposed state-of-the-art method on the Gigaword corpus while performing competitively on the DUC-2004 shared task.
Get To The Point: Summarization with Pointer-Generator Networks
TLDR
A novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways: a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information while retaining the ability to produce novel words through the generator, and a coverage mechanism that keeps track of what has been summarized to discourage repetition.
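The pointing mechanism mixes the decoder's vocabulary distribution with the attention distribution over source positions, gated by a learned generation probability p_gen; for source tokens w_i with attention weights a_i^t at decoding step t:

```latex
P(w) = p_{\text{gen}}\, P_{\text{vocab}}(w) + (1 - p_{\text{gen}}) \sum_{i:\, w_i = w} a_i^{t}
```

Out-of-vocabulary source words receive probability mass only through the attention term, which is what enables copying.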
Efficient Adaptation of Pretrained Transformers for Abstractive Summarization
TLDR
This work proposes two solutions for efficiently adapting pretrained transformer language models as text summarizers: source embeddings and domain-adaptive training, and tests them on three abstractive summarization datasets.
Bottom-Up Abstractive Summarization
TLDR
This work explores the use of data-efficient content selectors to over-determine phrases in a source document that should be part of the summary, and shows that this approach improves the ability to compress text, while still generating fluent summaries.
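One simple way to realize such content selection, in the spirit of this approach (the threshold value is an illustrative assumption), is to mask the copy-attention distribution with the selector's token-level probabilities:

```python
import numpy as np

def mask_copy_attention(attn: np.ndarray, select_prob: np.ndarray, eps: float = 0.5):
    """Zero out copy attention on source tokens scored below a selection
    threshold, then renormalize to a valid distribution."""
    masked = np.where(select_prob >= eps, attn, 0.0)
    total = masked.sum()
    return masked / total if total > 0 else attn  # fall back if everything is masked
```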
A Neural Attention Model for Abstractive Sentence Summarization
TLDR
This work proposes a fully data-driven approach to abstractive sentence summarization by utilizing a local attention-based model that generates each word of the summary conditioned on the input sentence.
Data-driven Summarization of Scientific Articles
TLDR
This work generates two novel multi-sentence summarization datasets from scientific articles and tests the suitability of a wide range of existing extractive and abstractive neural network-based summarization approaches, demonstrating that scientific papers are suitable for data-driven text summarization.
Learning to Write with Cooperative Discriminators
TLDR
Human evaluation demonstrates that text generated by the unified learning framework is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.
Controllable Abstractive Summarization
TLDR
A neural summarization model with a simple but effective mechanism that enables users to specify high-level attributes in order to control the shape of the final summaries to better suit their needs.
...