SemEval-2007 Task 02: Evaluating Word Sense Induction and Discrimination Systems

Abstract

Word Sense Disambiguation (WSD) is a key enabling technology. Supervised WSD techniques are the best performing in public evaluations, but need large amounts of hand-tagged data. Existing hand-annotated corpora like SemCor (Miller et al., 1993), which is annotated with WordNet senses (Fellbaum, 1998), allow for a small improvement over the simple most frequent sense heuristic, as attested in the all-words track of the last Senseval competition (Snyder and Palmer, 2004). In theory, larger amounts of training data (SemCor has approx. 500K words) would improve the performance of supervised WSD, but no current project exists to provide such an expensive resource. Another problem of the supervised approach is that the inventory and distribution of senses change dramatically from one domain to another, requiring additional hand-tagging of corpora (Martínez and Agirre, 2000; Koeling et al., 2005).

Supervised WSD is based on the "fixed-list of senses" paradigm, where the senses for a target word are a closed list coming from a dictionary or lexicon. Lexicographers and semanticists have long warned about the problems of such an approach, where senses are listed separately as discrete entities, and have argued in favor of more complex representations in which, for instance, senses are dense regions in a continuum (Cruse, 2000). Unsupervised Word Sense Induction and Discrimination (WSID, also known as corpus-based unsupervised systems) follows this line of thinking, and tries to induce word senses directly from the corpus. Typical WSID systems involve clustering techniques, which group together similar examples. Given a set of induced clusters (which represent word uses or senses), each new occurrence of the target word is compared to the clusters, and the most similar cluster is selected as its sense.

One of the problems of unsupervised systems is that of ensuring a fair evaluation.
Most current unsupervised systems are evaluated in-house, with a brief comparison to a re-implementation of a former system, leading to a proliferation of unsupervised systems with little common ground for comparing them. The goal of this task is to allow comparison across sense-induction and discrimination systems, and also to compare these systems to supervised and knowledge-based systems. The paper is organized as follows. Section 2 presents the evaluation framework used in this task. Section 3 presents the systems that participated in the task, along with the official results. Finally, Section 4 draws the conclusions.
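The induce-then-assign scheme described above (cluster the contexts in which a target word appears, then label each new occurrence with its most similar cluster) can be sketched as follows. This is a minimal illustration, not any participating system: the bag-of-words context representation, the cosine similarity measure, the centroid-based assignment, and the toy "bank" clusters are all assumptions made for the example.

```python
import math
from collections import Counter

def context_vector(tokens, target):
    """Bag-of-words vector of the words surrounding the target word."""
    return Counter(w for w in tokens if w != target)

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(vectors):
    """Sum the context vectors of one induced cluster into a centroid."""
    c = Counter()
    for v in vectors:
        c.update(v)
    return c

def assign(occurrence, clusters):
    """Select the induced cluster (sense) most similar to a new occurrence."""
    return max(clusters, key=lambda cid: cosine(occurrence, clusters[cid]))

# Toy induced clusters for the target word "bank" (hypothetical data).
finance = centroid([
    context_vector("deposit money at the bank".split(), "bank"),
    context_vector("the bank raised interest rates".split(), "bank"),
])
river = centroid([
    context_vector("the grassy bank of the river".split(), "bank"),
    context_vector("fish near the river bank".split(), "bank"),
])
clusters = {"finance": finance, "river": river}

new_occurrence = context_vector("sat on the bank of the river".split(), "bank")
print(assign(new_occurrence, clusters))  # → river
```

In a real WSID system the clusters themselves would be induced automatically (e.g. by k-means or graph clustering over the context vectors), rather than built by hand as in this toy example.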


Cite this paper

@inproceedings{Agirre2007SemEval2007T0,
  title     = {SemEval-2007 Task 02: Evaluating Word Sense Induction and Discrimination Systems},
  author    = {Eneko Agirre and Aitor Soroa},
  booktitle = {SemEval@ACL},
  year      = {2007}
}