Discourse Understanding and Factual Consistency in Abstractive Summarization

@inproceedings{Gabriel2021DiscourseUA,
  title={Discourse Understanding and Factual Consistency in Abstractive Summarization},
  author={Saadia Gabriel and Antoine Bosselut and Jeff Da and Ari Holtzman and Jan Buys and Asli Celikyilmaz and Yejin Choi},
  booktitle={EACL},
  year={2021}
}
We introduce a general framework for abstractive summarization with factual consistency and distinct modeling of the narrative flow in an output summary. Our work addresses current limitations of models for abstractive summarization that often hallucinate information or generate summaries with coherence issues. To generate abstractive summaries with factual consistency and narrative flow, we propose Cooperative Generator-Discriminator Networks (Co-opNet), a novel transformer-based framework… 
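
The abstract describes a cooperative setup in which a generator proposes summaries and a discriminator scores them for factual consistency and discourse coherence. Below is a minimal, hedged sketch of this kind of discriminator-guided reranking; the function names, the overlap-based scorer, and the linear weighting are illustrative assumptions, not Co-opNet's actual architecture.

# Toy sketch of discriminator-guided reranking for abstractive summarization.
# A generator proposes candidate summaries; a discriminator re-scores them,
# and the final choice mixes generator likelihood with the discriminator
# score. All scoring functions below are stand-ins, not Co-opNet components.

from typing import Callable, List, Tuple

def cooperative_rerank(
    candidates: List[Tuple[str, float]],          # (summary, generator log-prob)
    discriminator: Callable[[str, str], float],   # scores (source, summary) factuality/coherence
    source: str,
    alpha: float = 0.5,                           # weight on the discriminator (assumed form)
) -> str:
    """Pick the candidate maximizing a mix of generator likelihood and discriminator score."""
    def combined(candidate: Tuple[str, float]) -> float:
        summary, gen_logprob = candidate
        return (1 - alpha) * gen_logprob + alpha * discriminator(source, summary)
    return max(candidates, key=combined)[0]

# Stand-in discriminator: rewards summaries that reuse source vocabulary,
# a crude proxy for factual consistency used only to make the sketch runnable.
def overlap_discriminator(source: str, summary: str) -> float:
    src, summ = set(source.lower().split()), set(summary.lower().split())
    return len(src & summ) / max(len(summ), 1)

if __name__ == "__main__":
    source = "The storm closed the airport for two days and delayed hundreds of flights."
    candidates = [
        ("The storm delayed hundreds of flights and closed the airport.", -4.1),
        ("Officials praised the new airport terminal opening.", -3.8),  # fluent but unfaithful
    ]
    print(cooperative_rerank(candidates, overlap_discriminator, source))

In this toy example the second candidate is more fluent (higher generator log-probability) but unfaithful, so the discriminator term flips the choice to the faithful summary, which is the intuition behind the cooperative setup.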

DialogSum Challenge: Results of the Dialogue Summarization Shared Task

We report the results of DialogSum Challenge, the shared task on summarizing real-life scenario dialogues at INLG 2022. Four teams participate in this shared task and three submit their system…

Choisir le bon co-équipier pour la génération coopérative de texte (Choosing The Right Teammate For Cooperative Text Generation)

Language models generate text by successively predicting probability distributions for the next tokens conditioned on the preceding tokens. To generate texts with…

Generating Scientific Definitions with Controllable Complexity

TLDR
A novel reranking approach is introduced and it is found in human evaluations that it offers superior fluency while also controlling complexity, compared to several controllable generation baselines.

Which Discriminator for Cooperative Text Generation?

TLDR
This paper examines three families of (transformer-based) discriminators for the specific task of cooperative decoding: bidirectional, left-to-right and generative ones, exploring their respective accuracy on classification tasks along with their impact on the resulting sample quality and computational performance.

Faithfulness in Natural Language Generation: A Systematic Survey of Analysis, Evaluation and Optimization Methods

TLDR
This survey provides a systematic overview of the research progress on the faithfulness problem of NLG, including problem analysis, evaluation metrics and optimization methods, and organizes the evaluation and optimized methods for different tasks into a unified taxonomy to facilitate comparison and learning across tasks.

Paper Plain: Making Medical Research Papers Approachable to Healthcare Consumers with Natural Language Processing

TLDR
The study results suggest that guiding readers to relevant passages and providing plain language summaries, or “gists,” alongside the original paper content can make reading medical papers easier and give readers more confidence to approach these papers.

It’s not Rocket Science: Interpreting Figurative Language in Narratives

TLDR
This paper studies the interpretation of two types of non-compositional figurative language (idioms and similes), and proposes knowledge-enhanced models that adopt human strategies for interpreting figurative language: inferring meaning from the context and relying on the constituent words’ literal meanings.

References

Showing 1-10 of 46 references

Faithful to the Original: Fact Aware Neural Abstractive Summarization

TLDR
This work argues that faithfulness is also a vital prerequisite for a practical abstractive summarization system and proposes a dual-attention sequence-to-sequence framework to force the generation conditioned on both the source text and the extracted fact descriptions.

Evaluating the Factual Consistency of Abstractive Text Summarization

TLDR
A weakly-supervised, model-based approach for verifying factual consistency and identifying conflicts between source documents and a generated summary substantially outperforms previous models, including those trained with strong supervision using standard datasets for natural language inference and fact checking.
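
The TLDR describes a model-based verifier that scores a (source document, summary) pair for factual consistency. As a rough illustration of that scoring interface only, the sketch below uses an off-the-shelf NLI checkpoint (roberta-large-mnli) as a stand-in classifier and reads off the entailment probability; the paper itself trains its own weakly supervised model, so the checkpoint and thresholding here are assumptions, not the authors' system.

# Illustrative consistency scoring: treat the source document as premise and the
# summary as hypothesis, then take the entailment probability from an NLI
# classifier. A generic NLI checkpoint substitutes for the paper's weakly
# supervised model, purely to show the (document, summary) -> score shape.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # off-the-shelf stand-in, not the paper's model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

def consistency_score(document: str, summary: str) -> float:
    inputs = tokenizer(document, summary, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    # Look up the entailment index from the model config instead of hard-coding it.
    entail_idx = {v.lower(): k for k, v in model.config.id2label.items()}["entailment"]
    return probs[entail_idx].item()

print(consistency_score(
    "The company reported a loss of $2 million in the third quarter.",
    "The company reported a third-quarter loss."))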

On Faithfulness and Factuality in Abstractive Summarization

TLDR
It is found that neural abstractive summarization models are highly prone to hallucinate content that is unfaithful to the input document and textual entailment measures better correlate with faithfulness than standard metrics, potentially leading the way to automatic evaluation metrics as well as training and decoding criteria.

A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents

TLDR
This work proposes the first model for abstractive summarization of single, longer-form documents (e.g., research papers), consisting of a new hierarchical encoder that models the discourse structure of a document, and an attentive discourse-aware decoder to generate the summary.

How to Write Summaries with Patterns? Learning towards Abstractive Summarization through Prototype Editing

TLDR
Extensive experiments conducted on a large-scale real-world text summarization dataset show that PESG achieves state-of-the-art performance in terms of both automatic metrics and human evaluations.

Discourse-Aware Neural Extractive Text Summarization

TLDR
DiscoBERT extracts sub-sentential discourse units (instead of sentences) as candidates for extractive selection at a finer granularity and, among BERT-base models, outperforms state-of-the-art methods by a significant margin on popular summarization benchmarks.

Paragraph-Level Commonsense Transformers with Recurrent Memory

TLDR
PARA-COMET, a discourse-aware model that incorporates paragraph-level information to generate coherent commonsense inferences from narratives, outperforms the sentence-level baselines, particularly in generating inferences that are both coherent and novel.

Bottom-Up Abstractive Summarization

TLDR
This work explores the use of data-efficient content selectors to over-determine phrases in a source document that should be part of the summary, and shows that this approach improves the ability to compress text, while still generating fluent summaries.
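
The content-selection idea can be pictured as a mask over source tokens that the copy mechanism is allowed to attend to: a selector assigns each source token a probability of appearing in the summary, and copy attention is restricted to tokens above a threshold. The sketch below is a simplification under assumed shapes; the thresholding scheme and the 0.5 cut-off are illustrative, not the paper's exact configuration.

# Sketch of bottom-up content selection: mask copy attention to tokens the
# selector considers summary-worthy. Shapes and the threshold are assumptions.

import torch

def mask_copy_attention(copy_logits: torch.Tensor,      # (batch, src_len) raw copy scores
                        selector_probs: torch.Tensor,   # (batch, src_len) P(token in summary)
                        threshold: float = 0.5) -> torch.Tensor:
    """Zero out copy attention on tokens the selector considers irrelevant."""
    keep = selector_probs >= threshold
    masked_logits = copy_logits.masked_fill(~keep, float("-inf"))
    return torch.softmax(masked_logits, dim=-1)

copy_logits = torch.tensor([[2.0, 0.5, 1.5, 0.1]])
selector_probs = torch.tensor([[0.9, 0.2, 0.8, 0.1]])   # only tokens 0 and 2 selected
print(mask_copy_attention(copy_logits, selector_probs))  # mass only on tokens 0 and 2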

Get To The Point: Summarization with Pointer-Generator Networks

TLDR
A novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways, using a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator.
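
The hybrid copy mechanism summarized here has a standard formulation: at each decoding step a soft switch p_gen interpolates between generating a word from the vocabulary distribution and copying it from the source via the attention distribution a over input tokens,

P(w) = p_{gen} \, P_{vocab}(w) + (1 - p_{gen}) \sum_{i : w_i = w} a_i .

When w does not appear in the source the copy term is zero, and when w is out of vocabulary the generation term is zero, which is what lets the model reproduce rare or unseen source words exactly while still generating novel words.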

A Neural Attention Model for Abstractive Sentence Summarization

TLDR
This work proposes a fully data-driven approach to abstractive sentence summarization by utilizing a local attention-based model that generates each word of the summary conditioned on the input sentence.
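
Generating each word of the summary conditioned on the input sentence corresponds to the standard autoregressive factorization, with the attention-based encoder supplying a representation of the input x at every step:

p(y_1, \ldots, y_T \mid x) = \prod_{t=1}^{T} p(y_t \mid y_1, \ldots, y_{t-1}, x) .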