Coreference Resolution through a seq2seq Transition-Based System

Bernd Bohnet, Chris Alberti, and Michael Collins. Transactions of the Association for Computational Linguistics.
Abstract: Most recent coreference resolution systems use search algorithms over possible spans to identify mentions and resolve coreference. We instead present a coreference resolution system that uses a text-to-text (seq2seq) paradigm to predict mentions and links jointly. We implement the coreference system as a transition system and use multilingual T5 as the underlying language model. We obtain state-of-the-art accuracy on the CoNLL-2012 datasets with an 83.3 F1-score for English (a 2.3 higher…
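The transition-system view described above can be sketched minimally: the model emits one action per predicted mention, either linking it to an existing cluster or starting a new one. This is an illustrative assumption, not the paper's actual action inventory; the `resolve` driver and action names are hypothetical.

```python
def resolve(mentions, actions):
    """Apply a sequence of (action, arg) transitions to build clusters.

    Each action is ("NEW", None) to open a fresh entity cluster, or
    ("LINK", k) to attach the mention to the k-th existing cluster.
    """
    clusters = []
    for mention, (action, arg) in zip(mentions, actions):
        if action == "NEW":
            clusters.append([mention])
        elif action == "LINK":
            clusters[arg].append(mention)
    return clusters

if __name__ == "__main__":
    mentions = ["Mary", "her", "the dog", "it"]
    actions = [("NEW", None), ("LINK", 0), ("NEW", None), ("LINK", 1)]
    print(resolve(mentions, actions))
    # [['Mary', 'her'], ['the dog', 'it']]
```

In the seq2seq setting, such actions would be generated as output text conditioned on the document, so mention detection and linking are decided jointly rather than by a separate search over spans.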

A Cluster Ranking Model for Full Anaphora Resolution

This paper introduces an architecture that simultaneously identifies non-referring expressions (including expletives, predicatives, and other types) and builds coreference chains, including singletons; its cluster-ranking system achieves a score equivalent to that of the state-of-the-art system of Kantor and Globerson (2019) on that dataset.

CorefQA: Coreference Resolution as Query-based Span Prediction

CorefQA is presented, an accurate and extensible approach to coreference resolution that formulates the task as span prediction, as in question answering, providing the flexibility to retrieve mentions left out at the mention-proposal stage.

End-to-end Neural Coreference Resolution

This work introduces the first end-to-end neural coreference resolution model, which is trained to maximize the marginal likelihood of gold antecedent spans from coreference clusters and is factored to enable aggressive pruning of potential mentions.
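The "aggressive pruning" idea can be sketched as follows: enumerate all spans up to a maximum width, score them, and keep only the top-k as candidate mentions. This is a simplified illustration under assumed names (`candidate_spans`, a caller-supplied `score`), not the model's learned scoring.

```python
def candidate_spans(tokens, max_len, score, keep):
    """Enumerate spans (i, j) of width <= max_len, then prune to the
    `keep` highest-scoring ones. `score` maps a span to a float; in the
    real model it would be a learned mention score."""
    spans = [(i, j)
             for i in range(len(tokens))
             for j in range(i, min(i + max_len, len(tokens)))]
    spans.sort(key=score, reverse=True)
    return sorted(spans[:keep])  # surviving candidates in document order

if __name__ == "__main__":
    # Toy scorer that prefers longer spans.
    print(candidate_spans(["a", "b", "c"], 2, lambda s: s[1] - s[0], 2))
    # [(0, 1), (1, 2)]
```

Pruning keeps the downstream antecedent search tractable, since scoring all pairs of all possible spans is quadratic in the (already quadratic) number of spans.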

CoNLL-2012 Shared Task: Modeling Multilingual Unrestricted Coreference in OntoNotes

This paper describes the OntoNotes annotation (coreference and other layers) and the parameters of the shared task, including the data format, pre-processing information, and evaluation criteria, and presents and discusses the results achieved by the participating systems.

Revealing the Myth of Higher-Order Inference in Coreference Resolution

This paper implements an end-to-end coreference system together with four higher-order inference (HOI) approaches: attended antecedents, entity equalization, span clustering, and cluster merging, the latter two being newly proposed methods.

Limited memory incremental coreference resolution

This paper proposes a coreference resolution algorithm based on an analogy with shift-reduce parsing, which achieves CoNLL scores competitive with the best reported research systems despite having low memory requirements and a simpler model.
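The bounded-memory idea can be sketched as a greedy incremental loop: each incoming mention is attached to the best-scoring recent cluster, or starts a new one, and only the most recent few clusters remain eligible as antecedents. The function name, threshold, and string-matching scorer below are illustrative assumptions, not the paper's method.

```python
from collections import deque

def incremental_resolve(mentions, score, max_clusters=2):
    """Greedy incremental coreference with bounded memory.

    Each mention attaches to the highest-scoring active cluster if the
    score is positive, else it opens a new cluster; only the most recent
    `max_clusters` clusters stay active as candidate antecedents."""
    all_clusters = []
    active = deque(maxlen=max_clusters)  # eviction = forgetting old entities
    for m in mentions:
        best = max(active, key=lambda c: score(m, c), default=None)
        if best is not None and score(m, best) > 0:
            best.append(m)
        else:
            cluster = [m]
            all_clusters.append(cluster)
            active.append(cluster)
    return all_clusters

if __name__ == "__main__":
    # Toy scorer: match against the cluster's first mention string.
    match_first = lambda m, c: 1 if m == c[0] else -1
    print(incremental_resolve(["A", "A", "B", "B", "A"], match_first,
                              max_clusters=1))
    # [['A', 'A'], ['B', 'B'], ['A']]  -- entity A is split once evicted
```

With a larger memory (`max_clusters=2`), the final "A" would instead link back to the first cluster, which illustrates the accuracy/memory trade-off such systems navigate.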

Lazy Low-Resource Coreference Resolution: a Study on Leveraging Black-Box Translation Tools

This work empirically explores the appealing idea of leveraging machine translation tools for bootstrapping coreference resolution in languages with limited resources, and finds no improvement over monolingual baseline models.

Neural End-to-end Coreference Resolution for German in Different Domains

In an effort to support datasets representing the domains of both news and literature, this work makes use of two distinct model architectures: a mention-linking approach and an incremental entity-based approach that should scale to very long documents such as literary works.

Neural Coreference Resolution for Arabic

This paper introduces a coreference resolution system for Arabic based on Lee et al.'s end-to-end architecture, combined with an Arabic version of BERT and an external mention detector, which substantially outperforms the existing state of the art on OntoNotes 5.0.

Coreference Resolution without Span Representations

This work presents a lightweight end-to-end coreference model that removes the dependency on span representations, handcrafted features, and heuristics, yet performs competitively with the current standard model while being simpler and more efficient.