Corpus ID: 26768540

Overview of the TAC 2008 Update Summarization Task

@article{Dang2008OverviewOT,
  title={Overview of the TAC 2008 Update Summarization Task},
  author={H. Dang and Karolina Owczarzak},
  journal={Proceedings of the Text Analysis Conference (TAC 2008)},
  year={2008}
}
The summarization track at the Text Analysis Conference (TAC) is a direct continuation of the Document Understanding Conference (DUC) series of workshops, focused on providing a common data and evaluation framework for research in automatic summarization. In the TAC 2008 summarization track, the main task was to produce two 100-word summaries from two related sets of 10 documents, where the second summary was an update summary. While all of the 71 submitted runs were automatically scored with the…
TAC 2009 Update Summarization of ICL
TLDR
For the update summarization task of TAC 2009, two runs were submitted using two different methods. One is a clustering method that divides all sentences in a topic into several clusters; the more significant a cluster appears, the more likely sentences are to be chosen from it.
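A minimal sketch of the clustering idea described in this snippet, assuming TF-IDF sentence vectors, k-means, and a rule that treats larger clusters as more significant; the vectorization, cluster count, and selection heuristic are illustrative assumptions, not the ICL system itself.

```python
# Illustrative sketch (not the ICL system): cluster the sentences of a topic and
# draw more sentences from "more significant" (here: larger) clusters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_summary(sentences, n_clusters=5, budget_words=100):
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(sentences)
    k = min(n_clusters, len(sentences))
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

    summary, used = [], 0
    # Visit clusters from largest to smallest; inside a cluster, prefer sentences
    # closest to the cluster centroid.
    for c in sorted(set(labels), key=lambda c: -(labels == c).sum()):
        idx = np.where(labels == c)[0]
        centroid = np.asarray(X[idx].mean(axis=0)).ravel()
        sims = X[idx] @ centroid
        for i in idx[np.argsort(-sims)]:
            n_words = len(sentences[i].split())
            if used + n_words <= budget_words:
                summary.append(sentences[i])
                used += n_words
    return " ".join(summary)
```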
Re-evaluating Evaluation in Text Summarization
TLDR
Assessing the reliability of automatic metrics using top-scoring system outputs on recently popular datasets, for both system-level and summary-level evaluation settings, finds that conclusions about evaluation metrics drawn on older datasets do not necessarily hold for modern datasets and systems.
Automatic Summarization from Multiple Documents
TLDR
This work formalizes the n-gram graph representation and its use in NLP tasks, and presents a set of algorithmic constructs and methodologies that aim to support meaning extraction and textual quality quantification.
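A rough sketch of a character n-gram graph of the kind this line of work builds on: nodes are n-grams, edges link n-grams that co-occur within a small window, and graphs can be compared by edge overlap. The n-gram size, window, and similarity measure here are assumptions for illustration and may differ from the formalization in the cited work.

```python
# Illustrative character n-gram graph: nodes are n-grams, weighted edges connect
# n-grams co-occurring within a small window. Sizes and similarity are assumptions.
from collections import Counter

def ngram_graph(text, n=3, window=3):
    grams = [text[i:i + n] for i in range(len(text) - n + 1)]
    edges = Counter()
    for i, g in enumerate(grams):
        for h in grams[i + 1:i + 1 + window]:
            edges[(g, h)] += 1
    return edges  # (ngram, ngram) -> co-occurrence weight

g1 = ngram_graph("update summarization")
g2 = ngram_graph("update summaries")
# One crude graph similarity: shared edges over the larger edge set.
similarity = len(set(g1) & set(g2)) / max(len(g1), len(g2))
print(round(similarity, 3))
```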
Generating Update Summaries: Using an Unsupervized Clustering Algorithm to Cluster Sentences
  • A. Bossard
  • Computer Science
  • Multi-source, Multilingual Information Extraction and Summarization
  • 2013
TLDR
This article presents a summarization system dedicated to update summarization, based on CBSEAS, and describes the TAC 2009 “Update Task” used to evaluate the system.
TAC2011 MultiLing Pilot Overview
The Text Analysis Conference MultiLing Pilot of 2011 posed a multi-lingual summarization task to the summarization community, aiming to quantify and measure the performance of multi-lingual, multi-document summarization systems.
Re-ranking Summaries Based on Cross-Document Information Extraction
TLDR
A method is described for automatically incorporating cross-document information extraction (IE) results into sentence ranking, which can significantly improve a high-performing multi-document summarization system.
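A hedged sketch of the general idea of folding IE output into sentence ranking: sentences that mention facts or entities which cross-document IE found salient receive a score boost. The scoring weights and the shape of the IE output are assumptions, not the method of the cited paper.

```python
# Illustrative re-ranking: boost sentences that mention facts/entities which
# cross-document IE found salient. Weights and the IE output format are assumptions.
def rerank(sentences, base_scores, ie_facts, boost=0.5):
    """ie_facts: dict mapping an extracted entity/fact string to its cross-document frequency."""
    scored = []
    for sent, score in zip(sentences, base_scores):
        bonus = sum(freq for fact, freq in ie_facts.items() if fact.lower() in sent.lower())
        scored.append((score + boost * bonus, sent))
    return [sent for _, sent in sorted(scored, reverse=True)]
```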
Toward a Gold Standard for Extractive Text Summarization
TLDR
This work employed a corpus partially labelled with Summary Content Units (SCUs), snippets which convey the main ideas in the document collection, and created SCU-optimal summaries for extractive summarization, which support the claim of optimality.
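A hedged sketch of one way to build an SCU-"optimal" extract under a length budget: greedily add the sentence that covers the most additional SCU weight. Greedy selection only approximates the optimum, the cited work's construction may differ, and the sentence-to-SCU annotations are assumed as input.

```python
# Illustrative SCU-"optimal" extract: greedily add the sentence covering the most
# additional SCU weight until the word budget is spent (greedy only approximates
# the true optimum; the cited construction may differ).
def scu_oracle(sentences, sent_scus, scu_weight, budget_words=100):
    """sent_scus[i]: set of SCU ids expressed by sentences[i]; scu_weight: SCU id -> weight."""
    covered, chosen, used = set(), [], 0
    while True:
        best, best_gain = None, 0
        for i, sent in enumerate(sentences):
            if i in chosen or used + len(sent.split()) > budget_words:
                continue
            gain = sum(scu_weight[s] for s in sent_scus[i] - covered)
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break
        chosen.append(best)
        covered |= sent_scus[best]
        used += len(sentences[best].split())
    return [sentences[i] for i in chosen]
```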
HLTCOE Submission at TREC 2013: Temporal Summarization
Our team submitted runs for the first running of the TREC Temporal Summarization track. We focused on the Sequential Update Summarization task. This task involves simulating processing a temporally ordered stream of documents to identify sentences that are relevant to specific breaking news stories and contain new and important content.
Multi-document multilingual summarization and evaluation tracks in ACL 2013 MultiLing Workshop
The MultiLing 2013 Workshop of ACL 2013 posed a multi-lingual, multi-document summarization task to the summarization community, aiming to quantify and measure the performance of multi-lingual, multi-document summarization systems.
HLTCOE at TREC 2013: Temporal Summarization
TLDR
This task involves simulating the processing of a temporally ordered stream of over 1 billion documents to identify sentences that are relevant to specific breaking news stories and contain new and important content.
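A minimal sketch of the shape of the Sequential Update Summarization task as described here: scan a time-ordered stream and emit sentences that are relevant to the event query and not redundant with earlier updates. The word-overlap relevance and novelty tests and their thresholds are illustrative assumptions, not the HLTCOE system.

```python
# Illustrative sequential update summarization: emit sentences from a time-ordered
# stream that look relevant to the event query and are not redundant with earlier
# updates. Thresholds and word-overlap tests are assumptions.
def jaccard(a, b):
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def sequential_updates(stream, query, rel_thresh=0.15, novelty_thresh=0.5):
    """stream yields (timestamp, sentence) pairs in time order."""
    updates = []
    for ts, sentence in stream:
        if jaccard(sentence, query) < rel_thresh:
            continue  # not relevant enough to the tracked event
        if any(jaccard(sentence, prev) > novelty_thresh for _, prev in updates):
            continue  # redundant with an already emitted update
        updates.append((ts, sentence))
    return updates
```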

References

ROUGE: A Package for Automatic Evaluation of Summaries
TLDR
Four ROUGE measures included in the ROUGE summarization evaluation package are introduced and evaluated: ROUGE-N, ROUGE-L, ROUGE-W, and ROUGE-S.
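A minimal sketch of the core of ROUGE-N, which is essentially n-gram recall of a candidate summary against a reference; the official ROUGE package additionally handles stemming, stopword options, multiple references, and jackknifing, none of which are reproduced here.

```python
# Illustrative ROUGE-N: n-gram recall against a single reference. The official ROUGE
# package also handles stemming, stopword removal, multiple references, and jackknifing.
from collections import Counter

def rouge_n(candidate, reference, n=2):
    def ngrams(text):
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum(min(cand[g], ref[g]) for g in ref)
    return overlap / max(sum(ref.values()), 1)

print(rouge_n("the cat sat on the mat", "the cat was on the mat", n=1))  # 5/6 ≈ 0.833
```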
Evaluating DUC 2005 using Basic Elements
TLDR
It is shown that this method correlates better with human judgments than any other automated procedure to date, and overcomes the subjectivity/variability problems of manual methods that require humans to preprocess summaries to be evaluated.
Applying the Pyramid Method in DUC 2005
TLDR
It is found that a modified pyramid score gave good results and would simplify peer annotation in the future; high score correlations between sets from different annotators, together with good inter-annotator agreement, indicate that participants can perform annotation reliably.
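A hedged sketch of a modified pyramid-style score of the kind described above: the weight of SCUs observed in a peer summary, normalized by the best weight achievable with the average number of SCUs found in the model summaries. The exact normalization in the official scorer may differ.

```python
# Illustrative modified pyramid score: weight of SCUs observed in the peer summary,
# normalized by the best weight achievable with X SCUs, where X is the average
# number of SCUs in the model summaries. The official scorer's details may differ.
def modified_pyramid(peer_scus, scu_weight, avg_model_scu_count):
    observed = sum(scu_weight[s] for s in peer_scus)
    top = sorted(scu_weight.values(), reverse=True)[:avg_model_scu_count]
    return observed / sum(top) if top else 0.0

weights = {"scu1": 4, "scu2": 3, "scu3": 2, "scu4": 1}  # SCU id -> number of model summaries expressing it
print(modified_pyramid({"scu1", "scu3"}, weights, avg_model_scu_count=3))  # 6/9 ≈ 0.667
```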