DUC in context

@article{Over2007DUCIC,
  title={DUC in context},
  author={Paul Over and Hoa Trang Dang and Donna K. Harman},
  journal={Inf. Process. Manag.},
  year={2007},
  volume={43},
  pages={1506--1520}
}
Recent years have seen increased interest in text summarization with emphasis on evaluation of prototype systems. Many factors can affect the design of such evaluations, requiring choices among competing alternatives. This paper examines several major themes running through three evaluations: SUMMAC, NTCIR, and DUC, with a concentration on DUC. The themes are extrinsic and intrinsic evaluation, evaluation procedures and methods, generic versus focused summaries, single- and multi-document…
Citations

Multilingual Summarization Evaluation without Human Models
This work applies a new content-based evaluation framework called Fresa to compute a variety of divergences among probability distributions in text summarization tasks, including generic and focus-based multi-document summarization in English and generic single-document summarization in French and Spanish.
Summary Evaluation with and without References
A new content-based method for the evaluation of text summarization systems without human models, which is used to produce system rankings, is studied, and a variety of divergences among probability distributions are computed.
Evaluating Multiple System Summary Lengths: A Case Study
This paper analyzes a couple of datasets as a case study and concludes that the evaluation protocol in question is indeed competitive, paving the way to practically evaluating varying-length summaries with simple, possibly existing, summarization benchmarks.
Creating Summarization Systems with SUMMA
A new version of SUMMA, a text summarization toolkit for the development of adaptive summarization applications, is presented, which includes algorithms for computation of various sentence relevance features and functionality for single- and multi-document summarization in various languages.
Summarization Techniques: A Brief Survey
In this review, the main approaches to automatic text summarization are described, along with the effectiveness and shortcomings of the different methods.
Multilingual Summarization Approaches
The various state-of-the-art multilingual summarization approaches have been grouped based on their characteristics and presented in this chapter.
Automatic Text Summarization: Past, Present and Future
This paper gives a short overview of summarization methods and evaluation, and of the many interesting summarization topics being proposed in different contexts by end users.
Text Summarization Techniques: A Brief Survey
The main approaches to automatic text summarization are described and the effectiveness and shortcomings of the different methods are described.
Automatic Evaluation of Linguistic Quality in Multi-Document Summarization
This work presents the first systematic assessment of several diverse classes of metrics designed to capture various aspects of well-written text, and trains and tests linguistic quality models on consecutive years of NIST evaluation data to show the generality of results.
Detecting (Un)Important Content for Single-Document News Summarization
This work presents a robust approach for detecting intrinsic sentence importance in news, by training on two corpora of document-summary pairs, combined with the “beginning of document” heuristic, which outperforms a state-of-the-art summarizer and the beginning-of-article baseline in both automatic and manual evaluations.

References

Showing 1–10 of 40 references.
Overview of DUC 2005
The focus of DUC 2005 was on developing new evaluation methods that take into account variation in content in human-authored summaries. Therefore, DUC 2005 had a single user-oriented…
The Effects of Human Variation in DUC Summarization Evaluation
Examines how variation in human judgments does and does not affect the results, and their interpretation, of evaluations of automatic text summarization systems’ output.
Evaluating Content Selection in Summarization: The Pyramid Method
It is argued that the method presented is reliable, predictive and diagnostic, and thus improves considerably on the human evaluation method currently used in the Document Understanding Conference.
A Relevance-Based Language Modeling approach to DUC 2005
A sentence-extraction-based summarization system that scores sentences using Relevance-Based Language Modeling, Latent Semantic Indexing, and the number of special words to generate a summary of the required granularity.
CATS a topic-oriented multi-document summarization system at DUC 2005
CATS is a multi-document summarization system, developed at the Université de Montréal for DUC 2005, that produces an integrated summary for an information need at a given level of granularity from a set of topic-related documents.
Vocabulary Agreement Among Model Summaries And Source Documents
Analysis of 9000 manually-written summaries of newswire stories provided to participants in four Document Understanding Conferences indicates that no more than 55% of the vocabulary items they employ…
From Definitions to Complex Topics: Columbia University at DUC 2005
We describe our approach for the DUC 2005 topic-focused summarization task by adapting a system initially designed to answer only definitional and biographical questions (i.e., “What/Who is X?”). We…
The TIPSTER SUMMAC Text Summarization Evaluation
The TIPSTER Text Summarization Evaluation (SUMMAC) has established definitively that automatic text summarization is very effective in relevance assessment tasks. Summaries as short as 17% of full…
BBN/UMD at DUC-2004: Topiary
It is shown that the combination of linguistically motivated sentence compression with statistically selected topic terms performs better than either alone, according to some automatic summary evaluation measures.
Extrinsic Evaluation of Automatic Metrics for Summarization
It is shown that it is possible to save time using summaries for relevance assessment without adversely impacting the degree of accuracy that would be possible with full documents, and a small yet statistically significant correlation between some of the intrinsic measures and a user’s performance in an extrinsic task is found.