Corpus ID: 235358779

Global-aware Beam Search for Neural Abstractive Summarization

Ye Ma, Zixun Lan, Lu Zong, Kaizhu Huang
This study develops a calibrated beam-based algorithm with awareness of the global attention distribution for neural abstractive summarization, aiming to alleviate the local optimality problem of the original beam search in a rigorous way. Specifically, a novel global protocol is proposed, based on the attention distribution, to stipulate how a globally optimal hypothesis should attend to the source. A global scoring mechanism is then developed to regulate beam search to generate summaries in a near… 
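The abstract above describes rescoring beam hypotheses with a global, attention-based term rather than local log-likelihood alone. The paper's exact protocol is more involved; the following is a minimal sketch with a made-up toy model (`step`) and an illustrative penalty that favors hypotheses whose accumulated source attention stays close to uniform coverage — both are assumptions for demonstration, not the authors' formulation.

```python
import math

# Toy next-step model: given a prefix, return (log-probabilities over a small
# vocabulary, attention weights over a 3-token "source"). A stand-in for a
# real seq2seq decoder; the numbers are arbitrary.
VOCAB = ["good", "news", "today", "</s>"]

def step(prefix):
    n = len(prefix)
    logits = [1.0 - 0.3 * ((n + i) % 4) for i in range(len(VOCAB))]
    z = math.log(sum(math.exp(l) for l in logits))
    logprobs = [l - z for l in logits]
    attn = [0.2, 0.5, 0.3] if n % 2 else [0.6, 0.2, 0.2]
    return logprobs, attn

def global_penalty(attn_sum, steps):
    """Illustrative 'global awareness' term: penalize hypotheses whose
    accumulated source attention drifts far from uniform coverage."""
    mean = [a / steps for a in attn_sum]
    uniform = 1.0 / len(attn_sum)
    return sum(abs(m - uniform) for m in mean)

def global_aware_beam_search(beam_size=2, max_len=4, alpha=1.0):
    # Each hypothesis: (tokens, log-likelihood, per-source-token attention sums)
    beams = [([], 0.0, [0.0, 0.0, 0.0])]
    for _ in range(max_len):
        candidates = []
        for toks, ll, attn_sum in beams:
            if toks and toks[-1] == "</s>":
                candidates.append((toks, ll, attn_sum))
                continue
            logprobs, attn = step(toks)
            for w, lp in zip(VOCAB, logprobs):
                new_sum = [s + a for s, a in zip(attn_sum, attn)]
                candidates.append((toks + [w], ll + lp, new_sum))
        # Rescore with the global term, not just the local log-likelihood.
        candidates.sort(
            key=lambda c: c[1] - alpha * global_penalty(c[2], len(c[0])),
            reverse=True,
        )
        beams = candidates[:beam_size]
    return beams[0][0]

print(global_aware_beam_search())
```

Setting `alpha=0` recovers ordinary likelihood-only beam search, which is exactly the locally optimal behavior the global term is meant to correct.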

More Interpretable Graph Similarity Computation via Maximum Common Subgraph Inference

A more interpretable end-to-end paradigm for graph similarity learning, named Similarity Computation via Maximum Common Subgraph Inference (INFMCS), that consistently outperforms state-of-the-art baselines for graph-graph classification and regression tasks.



Abstractive Document Summarization with a Graph-Based Attentional Neural Model

A novel graph-based attention mechanism in the sequence-to-sequence framework to address the saliency factor of summarization, which has been overlooked by prior works; the resulting abstractive model is competitive with state-of-the-art extractive methods.

Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting

An accurate and fast summarization model is proposed that first selects salient sentences and then rewrites them abstractively to generate a concise overall summary; it achieves a new state of the art on all metrics on the CNN/Daily Mail dataset, as well as significantly higher abstractiveness scores.

Get To The Point: Summarization with Pointer-Generator Networks

A novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways, using a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator.
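The pointer-generator entry above mixes a vocabulary distribution with a copy distribution derived from attention over source positions. A minimal sketch of that mixing, with hypothetical toy numbers (the real model computes `p_gen` and the attention from learned networks):

```python
def pointer_generator_dist(p_gen, vocab_probs, attention, src_tokens):
    """Final distribution of a pointer-generator network:
    P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention on
    source positions holding w. Out-of-vocabulary source words can
    thus still be produced through the copy branch."""
    final = {w: p_gen * p for w, p in vocab_probs.items()}
    for a, w in zip(attention, src_tokens):
        final[w] = final.get(w, 0.0) + (1 - p_gen) * a
    return final

# The source contains the out-of-vocabulary token "zong"; the generator alone
# assigns it zero probability, but the copy branch can still emit it.
vocab_probs = {"the": 0.5, "summary": 0.3, "is": 0.2}  # no mass on "zong"
attention   = [0.1, 0.7, 0.2]
src_tokens  = ["the", "zong", "summary"]

dist = pointer_generator_dist(0.6, vocab_probs, attention, src_tokens)
print(round(dist["zong"], 4))  # copy mass: (1 - 0.6) * 0.7 = 0.28
```

Because both input distributions sum to one and the mixture weights are `p_gen` and `1 - p_gen`, the output is itself a valid probability distribution.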

A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents

This work proposes the first model for abstractive summarization of single, longer-form documents (e.g., research papers), consisting of a new hierarchical encoder that models the discourse structure of a document, and an attentive discourse-aware decoder to generate the summary.

A Neural Attention Model for Abstractive Sentence Summarization

This work proposes a fully data-driven approach to abstractive sentence summarization by utilizing a local attention-based model that generates each word of the summary conditioned on the input sentence.

Improving Abstractive Document Summarization with Salient Information Modeling

A Transformer-based encoder-decoder framework with two novel extensions for abstractive document summarization that outperforms state-of-the-art baselines on the ROUGE metrics.

Improving Neural Abstractive Document Summarization with Structural Regularization

This paper proposes to leverage the structural information of both documents and multi-sentence summaries to improve summarization performance, importing structural-compression and structural-coverage regularization into the summarization process to capture the information-compression and information-coverage properties of summaries.

Guiding Generation for Abstractive Text Summarization Based on Key Information Guide Network

A guiding generation model that combines the extractive and abstractive methods: a Key Information Guide Network (KIGN) encodes extracted keywords into a key-information representation, which then guides the generation process.

Adapting the Neural Encoder-Decoder Framework from Single to Multi-Document Summarization

An initial investigation into a novel adaptation method that exploits maximal marginal relevance to select representative sentences from multi-document input, and leverages an abstractive encoder-decoder model to fuse the disparate sentences into an abstractive summary.
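The adaptation above relies on maximal marginal relevance (MMR): greedily picking sentences that are relevant to a query but dissimilar to sentences already selected. A self-contained sketch, using Jaccard word overlap as a stand-in similarity (an assumption; the paper's adaptation may use other representations):

```python
def overlap(a, b):
    """Jaccard word overlap as a cheap similarity proxy."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def mmr(sentences, query, k=2, lam=0.7):
    """Maximal Marginal Relevance: score = lam * relevance-to-query
    minus (1 - lam) * redundancy with already-selected sentences."""
    selected, candidates = [], list(range(len(sentences)))
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max(
                (overlap(sentences[i], sentences[j]) for j in selected),
                default=0.0,
            )
            return lam * overlap(sentences[i], query) - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return [sentences[i] for i in selected]

docs = [
    "the storm hit the coast on monday",
    "the storm hit the coast monday morning",   # near-duplicate of the first
    "residents were evacuated before the storm",
]
query = "storm coast evacuation"
print(mmr(docs, query, k=2))  # the near-duplicate second sentence is skipped
```

The redundancy term is what makes this suitable for multi-document input, where the same fact often appears in several source articles.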

What to talk about and how? Selective Generation using LSTMs with Coarse-to-Fine Alignment

An end-to-end, domain-independent neural encoder-aligner-decoder model for selective generation, i.e., the joint task of content selection and surface realization, achieves the best selection and generation results reported to date on the benchmark WeatherGov dataset, despite using no specialized features or linguistic resources.