Learning Summary Prior Representation for Extractive Summarization

@inproceedings{Cao2015LearningSP,
  title={Learning Summary Prior Representation for Extractive Summarization},
  author={Ziqiang Cao and Furu Wei and Sujian Li and Wenjie Li and M. Zhou and Houfeng Wang},
  booktitle={Annual Meeting of the Association for Computational Linguistics},
  year={2015}
}
In this paper, we propose the concept of summary prior to define how appropriate a sentence is for selection into a summary without consideration of its context. Different from previous work using manually compiled document-independent features, we develop a novel summary system called PriorSum, which applies enhanced convolutional neural networks to capture the summary prior features derived from length-variable phrases. Under a regression framework, the learned prior features are… 
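
The abstract describes convolutional filters applied over length-variable phrases to produce document-independent prior features used under a regression framework. The sketch below is not the authors' PriorSum implementation; it is a minimal illustration of that idea with hypothetical names, using several filter widths to cover phrases of different lengths and a linear head that regresses a saliency score.

```python
# Minimal sketch (not the authors' code): CNN filters of several widths act as
# length-variable phrase detectors; max-pooling yields document-independent
# prior features; a linear head regresses a sentence saliency score.
import torch
import torch.nn as nn

class CNNSentenceScorer(nn.Module):
    def __init__(self, vocab_size, emb_dim=50, num_filters=50, widths=(1, 2, 3)):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # One convolution per phrase width; each filter spans `w` consecutive words.
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, num_filters, kernel_size=w) for w in widths]
        )
        self.regressor = nn.Linear(num_filters * len(widths), 1)

    def forward(self, token_ids):                    # token_ids: (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)      # (batch, emb_dim, seq_len)
        # Max-pool each filter's responses over all phrase positions.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        features = torch.cat(pooled, dim=1)          # prior feature vector
        return self.regressor(features).squeeze(-1)  # predicted saliency score
```

In a setup like this, the regression target would typically be a ROUGE-derived saliency label, and document-dependent features (e.g., sentence position) could be concatenated before the final layer; the paper's exact combination of prior and contextual features is not reproduced here.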

Citations

Learning to Estimate the Importance of Sentences for Multi-Document Summarization

A model is presented for improving the quality of the scoring step, which benefits sentence selection for extracting high-quality summaries and achieves improvements over traditional methods as well as results competitive with state-of-the-art deep learning models.

Unity in Diversity: Learning Distributed Heterogeneous Sentence Representation for Extractive Summarization

This work develops a novel data-driven summary system called HNet, which exploits the various semantic and compositional aspects latent in a sentence to capture document-independent features.

Neural Document Summarization by Jointly Learning to Score and Select Sentences

This paper presents a novel end-to-end neural network framework for extractive document summarization by jointly learning to score and select sentences, which significantly outperforms the state-of-the-art extractive summarization models.

A Redundancy-Aware Sentence Regression Framework for Extractive Summarization

A new framework is presented that performs regression with respect to the relative gain of a candidate sentence s given the already-selected set S, calculated with the ROUGE metric, and it outperforms state-of-the-art extractive summarization approaches.
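
The regression target described here, the relative gain of a candidate sentence s given an already-selected set S, can be written down directly. The sketch below uses a simplified ROUGE-1 recall and hypothetical helper names; the paper's exact ROUGE configuration may differ.

```python
# Hedged sketch of a relative-gain regression label: how much a candidate
# sentence improves the summary's ROUGE-1 recall against a reference.
from collections import Counter

def rouge1_recall(candidate_tokens, reference_tokens):
    ref, cand = Counter(reference_tokens), Counter(candidate_tokens)
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / max(sum(ref.values()), 1)

def relative_gain(sentence, selected, reference):
    """gain(s | S) = ROUGE(S + s) - ROUGE(S), used as the regression label."""
    with_s = [w for sent in selected + [sentence] for w in sent]
    without_s = [w for sent in selected for w in sent]
    return rouge1_recall(with_s, reference) - rouge1_recall(without_s, reference)
```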

A Supervised Approach to Extractive Summarisation of Scientific Papers

This paper introduces a new dataset for summarisation of computer science publications by exploiting a large resource of author-provided summaries, and develops models on the dataset making use of both neural sentence encoding and traditionally used summarisation features.

System Combination for Multi-document Summarization

This work presents a novel framework of system combination for multi-document summarization, which generates candidate summaries by combining whole sentences from the summaries generated by different systems, and presents a supervised model to select among the candidates.

Does Supervised Learning of Sentence Candidates Produce the Best Extractive Summaries?

It is found that classifier performance is not related to summary quality, mainly because the classifier's goal is not aligned with the summarizer's goal: the classifier selects whole sentences, while summaries are evaluated on n-grams. This misalignment is relevant when comparing performance across different works in the state of the art.

Multi-document Summarization via Deep Learning Techniques: A Survey

This survey, the first of its kind, systematically reviews recent deep-learning-based MDS models, proposes a novel taxonomy to summarize the design strategies of the neural networks used, and provides a comprehensive summary of the state of the art.

Recurrent Neural Networks with Attention Mechanism

Based on a recurrent neural network equipped with the attention mechanism, a general framework consisting of a hierarchical sentence encoder and an attention-based sentence extractor is set up to establish and explore various extractive summarization models.
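
As a rough illustration of the two components named above, the sketch below pairs a word-level GRU sentence encoder with a sentence-level GRU and an attention layer that scores sentences for extraction; all names, dimensions, and the single-document batching are assumptions rather than the framework's actual design.

```python
# Sketch only: hierarchical sentence encoder + attention-based sentence extractor.
import torch
import torch.nn as nn

class AttentiveExtractor(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hid=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.word_rnn = nn.GRU(emb_dim, hid, batch_first=True)  # words -> sentence vector
        self.sent_rnn = nn.GRU(hid, hid, batch_first=True)      # sentences -> document context
        self.attn = nn.Linear(hid, 1)                           # attention / extraction score

    def forward(self, doc_tokens):                # doc_tokens: (num_sents, max_words)
        words = self.emb(doc_tokens)              # (num_sents, max_words, emb_dim)
        _, h = self.word_rnn(words)               # h: (1, num_sents, hid), one vector per sentence
        sents = h.squeeze(0).unsqueeze(0)         # treat the document as one sequence of sentences
        context, _ = self.sent_rnn(sents)         # contextualized sentence states
        scores = self.attn(context).squeeze(-1)   # (1, num_sents) attention logits
        return scores.softmax(dim=-1).squeeze(0)  # per-sentence extraction weights
```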

Improving Neural Abstractive Document Summarization with Explicit Information Selection Modeling

Experimental results demonstrate that the explicit modeling and optimizing of the information selection process improves document summarization performance significantly, which enables the model to generate more informative and concise summaries, and thus significantly outperform state-of-the-art neural abstractive methods.
...

References

Showing 1-10 of 20 references

Ranking with Recursive Neural Networks and Its Application to Multi-Document Summarization

We develop a Ranking framework upon Recursive Neural Networks (R2N2) to rank sentences for multi-document summarization. It formulates the sentence ranking task as a hierarchical regression process, which simultaneously measures the salience of a sentence and its constituents (e.g., phrases) in the parse tree.

Multi-document Summarization Using Support Vector Regression

A Support Vector Regression (SVR) model is used to automatically combine features and score sentences in multi-document summarization systems, where various features are selected and combined into different feature sets for evaluation.
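
A minimal sketch of SVR-based sentence scoring is shown below, assuming hand-crafted features (position, length, a TF-IDF weight) and ROUGE-derived training labels; the specific feature sets tested in the paper are not reproduced.

```python
# Illustrative only: combine sentence features with Support Vector Regression
# and rank sentences by the predicted score. Feature choices are assumptions.
import numpy as np
from sklearn.svm import SVR

# Each row: [relative position in document, sentence length, summed TF-IDF weight]
X_train = np.array([[0.0, 25, 3.1], [0.5, 12, 1.4], [0.9, 30, 2.7]])
y_train = np.array([0.8, 0.2, 0.5])        # e.g. ROUGE-based saliency labels

scorer = SVR(kernel="rbf", C=1.0).fit(X_train, y_train)
scores = scorer.predict(X_train)           # higher score = more summary-worthy
```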

A Skip-Chain Conditional Random Field for Ranking Meeting Utterances by Importance

This work uses skip-chain Conditional Random Fields to model non-local pragmatic dependencies between paired utterances, such as question-answer pairs, that typically appear together in summaries, and shows that these models outperform linear-chain CRFs and Bayesian models on the task.

Left-Brain / Right-Brain Multi-Document Summarization

This paper discusses the design of CLASSY, the variants adapted to each task, and new linguistic endeavors, and analyzes the results of these efforts using both ROUGE and SEE evaluations.

The use of MMR, diversity-based reranking for reordering documents and producing summaries

This paper presents a method for combining query-relevance with information-novelty in the context of text retrieval and summarization, and preliminary results indicate some benefits for MMR diversity ranking in document retrieval and in single document summarization.
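
MMR trades off relevance to the query against redundancy with respect to sentences already selected. The function below is a compact sketch under assumed inputs (precomputed relevance scores and a pairwise similarity table); parameter names are illustrative.

```python
# Sketch of Maximal Marginal Relevance (MMR) selection:
#   score(c) = lam * relevance(c) - (1 - lam) * max similarity to selected items.
def mmr_select(candidates, query_sim, pairwise_sim, k, lam=0.7):
    selected, remaining = [], list(candidates)
    while remaining and len(selected) < k:
        def mmr_score(c):
            redundancy = max((pairwise_sim[c][s] for s in selected), default=0.0)
            return lam * query_sim[c] - (1 - lam) * redundancy
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected
```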

Improving the Estimation of Word Importance for News Multi-Document Summarization

A supervised model for ranking word importance that incorporates a rich set of features is proposed; it is superior to prior approaches for identifying words used in human summaries, and an extractive summarizer that includes this estimate of word importance produces summaries comparable to the state of the art under automatic evaluation.

CTSUM: extracting more certain summaries for news articles

This paper proposes a novel system called CTSUM to incorporate the new factor of information certainty into the summarization task, which automatically predicts the certainty levels of sentences in news articles by using the support vector regression method with a few useful features.

Convolutional Neural Networks for Sentence Classification

The CNN models discussed improve upon the state of the art on 4 out of 7 tasks, including sentiment analysis and question classification, and a simple modification to the architecture is proposed to allow the use of both task-specific and static vectors.

Learning semantic representations using convolutional neural networks for web search

This paper presents a series of new latent semantic models based on a convolutional neural network that learn low-dimensional semantic vectors for search queries and Web documents and significantly outperform other semantic models in retrieval performance.