The Automatic Content Extraction (ACE) Program - Tasks, Data, and Evaluation
The objective of the ACE program is to develop technology to automatically infer from human language data the entities being mentioned, the relations among these entities that are directly expressed, and the events in which these entities participate.
The DET curve in assessment of detection task performance
We introduce the DET Curve as a means of representing performance on detection tasks that involve a tradeoff of error types.
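A DET curve plots miss rate against false-alarm rate as a decision threshold sweeps over system scores. A minimal illustrative sketch (not the paper's code; function and variable names are assumptions) of computing those operating points:

```python
# Illustrative sketch: the (miss, false-alarm) operating points a DET curve plots.
def det_points(target_scores, nontarget_scores):
    """Sweep a decision threshold over all observed scores and return
    (miss_rate, false_alarm_rate) pairs, the coordinates of a DET curve."""
    thresholds = sorted(set(target_scores) | set(nontarget_scores))
    points = []
    for t in thresholds:
        # A target trial is missed when its score falls below the threshold;
        # a nontarget trial is a false alarm when its score meets the threshold.
        miss = sum(s < t for s in target_scores) / len(target_scores)
        false_alarm = sum(s >= t for s in nontarget_scores) / len(nontarget_scores)
        points.append((miss, false_alarm))
    return points

# Toy scores: higher means "more likely a target trial".
pts = det_points([2.0, 3.1, 4.5], [0.5, 1.2, 2.4])
```

The distinguishing feature of the DET representation is that both axes are then warped to a normal-deviate (probit) scale, which makes curves for well-behaved systems appear close to straight lines; the sketch above covers only the underlying error tradeoff.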
SHEEP, GOATS, LAMBS and WOLVES: a statistical analysis of speaker performance in the NIST 1998 speaker recognition evaluation
We propose statistical tests for the existence of distinct speaker types, labeled sheep, goats, lambs, and wolves according to their behavior with respect to automatic recognition systems.
Findings of the 2010 Joint Workshop on Statistical Machine Translation and Metrics for Machine Translation
This paper presents the results of the WMT10 and MetricsMATR10 shared tasks, which included a translation task, a system combination task, and an evaluation task.
The NIST speaker recognition evaluation - Overview, methodology, systems, results, perspective
This paper, based on three presentations made at the 1998 RLA2C Workshop in Avignon, discusses the evaluation of speaker recognition systems from several perspectives.
The 2011 NIST Language Recognition Evaluation
In 2011, the U.S. National Institute of Standards and Technology conducted the most recent in an ongoing series of Language Recognition Evaluations (LRE) meant to foster research in robust text- and speaker-independent language recognition as well as measure performance of current state-of-the-art systems.
An Introduction to Evaluating Biometric Systems
This article provides enough information to know what questions to ask when evaluating a biometric system and to help determine whether performance levels meet the requirements of an application.
The NIST 1999 Speaker Recognition Evaluation - An Overview
Martin, Alvin, and Przybocki, Mark, The NIST 1999 Speaker Recognition Evaluation – An Overview, Digital Signal Processing 10 (2000), 1–18. This article summarizes the 1999 NIST Speaker Recognition Evaluation.
NIST speaker recognition evaluation chronicles
NIST has coordinated annual evaluations of text-independent speaker recognition since 1996.
1993 Benchmark Tests for the ARPA Spoken Language Program
This paper reports results obtained in benchmark tests conducted within the ARPA Spoken Language program in November and December of 1993.