MUC-4 evaluation metrics
TLDR
The scoring algorithms used to arrive at the metrics, as well as the improvements made to the MUC-3 methods, are described, showing that the MUC-4 systems' scores represent a larger improvement over MUC-3 performance than the numbers themselves suggest.
Overview of MUC-7/MET-2
Abstract: The tasks performed by the systems participating in the seventh Message Understanding Conference and the Second Multilingual Entity Task are described here in general terms with examples.
Overview of MUC-7
The task of Coreference (CO) had its origins in Semeval, an attempt after MUC-5 to define semantic research tasks that needed to be solved to be successful at generating scenario templates.
Evaluating Message Understanding Systems: An Analysis of the Third Message Understanding Conference (MUC-3)
TLDR
The purpose, history, and methodology of the conference are reviewed, the participating systems are summarized, issues of measuring system effectiveness are discussed, the linguistic phenomena tests are described, and a critical look at the evaluation in terms of the lessons learned is provided.
MUC-5 evaluation metrics
TLDR
The metrics used for the Fifth Message Understanding Conference (MUC-5) evaluation are a major update to those used for MUC-4 in 1992, and the reasons for their adoption are discussed.
The Multilingual Entity Task (MET) Overview
TLDR
Preliminary results indicate that MET systems in all three languages performed comparably to those of the MUC-6 evaluation in English.
Message Understanding Conference (MUC) Tests of Discourse Processing
TLDR
An early attempt to use the results from an information extraction evaluation to provide insight into the relationship between the difficulty of discourse processing and performance on the information extraction task, along with an upcoming noun phrase coreference evaluation, is described.