Corpus ID: 618047

Stanford’s Multi-Pass Sieve Coreference Resolution System at the CoNLL-2011 Shared Task

@inproceedings{lee2011stanford,
  title={Stanford’s Multi-Pass Sieve Coreference Resolution System at the CoNLL-2011 Shared Task},
  author={Heeyoung Lee and Yves Peirsman and Angel X. Chang and Nathanael Chambers and Mihai Surdeanu and Dan Jurafsky},
  booktitle={CoNLL Shared Task},
  year={2011}
}
This paper details the coreference resolution system submitted by Stanford at the CoNLL-2011 shared task. Our system was ranked first in both tracks, with a score of 57.8 in the closed track and 58.3 in the open track.


System paper for CoNLL-2012 shared task: Hybrid Rule-based Algorithm for Coreference Resolution.

This paper describes our coreference resolution system for the CoNLL-2012 shared task. Our system is based on Stanford’s dcoref deterministic system, which applies multiple sieves in decreasing order of precision.

A Mixed Deterministic Model for Coreference Resolution

A mixed deterministic model for coreference resolution in the CoNLL-2012 shared task is presented; several sub-tasks are solved by machine-learning methods and by deterministic rules based on multiple filters, such as lexical, syntactic, semantic, gender, and number information.

A Multi-Pass Sieve Coreference Resolution for Indonesian

This work examines the portability of the multi-pass sieve coreference resolution model to the Indonesian language and finds that the system yields an F-measure of 72.74% under MUC and 52.18% under B-CUBED.

Entity-Centric Coreference Resolution with Model Stacking

This work trains an entity-centric coreference system that learns an effective policy for building up coreference chains incrementally, aggregating the scores produced by mention-pair models to define powerful entity-level features between clusters of mentions.
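
The aggregation idea described above, turning mention-pair scores into cluster-pair (entity-level) features, can be illustrated with a toy sketch. The min/max/average feature set and the string-overlap scorer below are illustrative assumptions, not the paper's actual features:

```python
# Toy sketch: entity-level features from mention-pair scores
# (illustrative assumptions; not the paper's actual feature set).

def entity_pair_features(cluster_a, cluster_b, pair_score):
    """Aggregate mention-pair scores between two clusters into
    entity-level features (min / max / average over all pairs)."""
    scores = [pair_score(m1, m2) for m1 in cluster_a for m2 in cluster_b]
    return {
        "min": min(scores),
        "max": max(scores),
        "avg": sum(scores) / len(scores),
    }

def overlap_score(m1, m2):
    """Hypothetical mention-pair scorer: Jaccard overlap of mention tokens."""
    a, b = set(m1.lower().split()), set(m2.lower().split())
    return len(a & b) / max(len(a | b), 1)

feats = entity_pair_features(["Barack Obama", "Obama"],
                             ["the president", "President Obama"],
                             overlap_score)
```

In a stacking setup, the `pair_score` function would be a trained mention-pair model, and the aggregated features would feed a second, entity-level classifier.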

Joint Anaphoricity Detection and Coreference Resolution with Constrained Latent Structures

A latent tree is used to represent the full coreference and anaphoric structure of a document at a global level, and the parameters of the two models are jointly learned using a version of the structured perceptron algorithm.

Unsupervised Ranking Model for Entity Coreference Resolution

This paper proposes a generative, unsupervised ranking model for entity coreference resolution by introducing resolution mode variables, and achieves a higher F1 score than the Stanford deterministic system.

Multi-pass Sieve Coreference Resolution System for Polish

This paper examines the portability of Stanford's multi-pass rule-based sieve coreference resolution system to an inflectional language (Polish) with a different annotation scheme, and shows that the results for Polish are higher than those seen on the CoNLL-2011/2012 data.

Coreference Resolution for Russian: Taking Stock and Moving Forward

Thorough manual feature engineering allows us to significantly improve on current state-of-the-art results and to compare the applicability of various clustering algorithms for building coreference groups.

Relational Structures and Models for Coreference Resolution

Two methods for incorporating relational information into a coreference resolver are discussed: a filtering algorithm that reranks coreference hypotheses, and a joint model enriched with a set of relational features derived from the semantic relations of each mention.

Coreference Resolution in a Modular, Entity-Centered Model

A generative, model-based approach is presented in which each of these factors is modularly encapsulated and learned in a primarily unsupervised manner, yielding the best results to date on the complete end-to-end coreference task.

SUCRE: A Modular System for Coreference Resolution

SUCRE is a new software tool that can separately perform noun, pronoun, and full coreference resolution; its feature engineering is based on a relational database model and a regular feature definition language.

SemEval-2010 Task 1: Coreference Resolution in Multiple Languages

An insight is provided into (i) the portability of coreference resolution systems across languages, and (ii) the effect of different scoring metrics on ranking the output of the participant systems.

A Multi-Pass Sieve for Coreference Resolution

This work proposes a simple coreference architecture based on a sieve that applies tiers of deterministic coreference models one at a time from highest to lowest precision, and outperforms many state-of-the-art supervised and unsupervised models on several standard corpora.
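
The tiered architecture described above can be sketched as a simple loop over precision-ordered passes. The two sieves and the mention representation below are illustrative assumptions, not the actual Stanford dcoref implementation:

```python
# Minimal sketch of a multi-pass sieve coreference architecture
# (illustrative; not the actual Stanford dcoref implementation).

def exact_match_sieve(mention, antecedents):
    """Highest-precision pass: link mentions with identical text."""
    for ant in antecedents:
        if ant["text"].lower() == mention["text"].lower():
            return ant
    return None

def pronoun_sieve(mention, antecedents):
    """Lower-precision pass: link a pronoun to the nearest antecedent."""
    pronouns = {"he", "she", "it", "they", "him", "her", "them"}
    if mention["text"].lower() in pronouns and antecedents:
        return antecedents[-1]  # nearest preceding mention
    return None

def resolve(mentions, sieves):
    """Apply sieves one tier at a time, from highest to lowest precision.
    A mention keeps the link made by the earliest (most precise) sieve."""
    cluster = {i: i for i in range(len(mentions))}  # mention index -> cluster id
    for sieve in sieves:
        for i, m in enumerate(mentions):
            if cluster[i] != i:
                continue  # already linked by a more precise sieve
            ant = sieve(m, mentions[:i])
            if ant is not None:
                cluster[i] = cluster[ant["id"]]
    return cluster

mentions = [
    {"id": 0, "text": "Barack Obama"},
    {"id": 1, "text": "the president"},
    {"id": 2, "text": "He"},
    {"id": 3, "text": "Barack Obama"},
]
chains = resolve(mentions, [exact_match_sieve, pronoun_sieve])
```

The ordering matters: because the exact-match tier runs first, the repeated "Barack Obama" is linked by the high-precision pass before the low-precision pronoun pass ever considers it.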

Accurate Semantic Class Classifier for Coreference Resolution

New ways to extract WordNet features, along with other features such as named-entity features, are proposed for building an accurate semantic class (SC) classifier, together with a relaxation of SC agreement features in the ACE2 coreference evaluation.

UBIU: A Language-Independent System for Coreference Resolution

We present UBIU, a language-independent system for detecting full coreference chains composed of named entities, pronouns, and full noun phrases, which makes use of memory-based learning.

Simple Coreference Resolution with Rich Syntactic and Semantic Features

This work presents a simple approach which completely modularizes these three aspects of coreference, which is deterministic and is driven entirely by syntactic and semantic compatibility as learned from a large, unlabeled corpus.

Conundrums in Noun Phrase Coreference Resolution: Making Sense of the State-of-the-Art

This work examines three subproblems that play a role in coreference resolution: named entity recognition, anaphoricity determination, and coreference element detection, and measures the performance of a state-of-the-art coreference resolver on several classes of anaphora.

Improving Machine Learning Approaches to Coreference Resolution

A noun phrase coreference system is presented that extends the work of Soon et al. (2001) and produces the best results to date on the MUC-6 and MUC-7 coreference resolution data sets: F-measures of 70.4 and 63.4, respectively.

Antecedent Selection Techniques for High-Recall Coreference Resolution

We investigate methods to improve recall in coreference resolution by also trying to resolve those definite descriptions where no earlier mention of the referent shares the same lexical head.