Understanding Mention Detector-Linker Interaction in Neural Coreference Resolution

@article{Wu2021UnderstandingMD,
  title={Understanding Mention Detector-Linker Interaction in Neural Coreference Resolution},
  author={Zhaofeng Wu and Matt Gardner},
  journal={ArXiv},
  year={2021},
  volume={abs/2009.09363}
}
Despite significant recent progress in coreference resolution, the quality of current state-of-the-art systems still considerably trails behind human-level performance. Using the CoNLL-2012 and PreCo datasets, we dissect the best instantiation of the mainstream end-to-end coreference resolution model that underlies most current best-performing coreference systems, and empirically analyze the behavior of its two components: mention detector and mention linker. While the detector traditionally… 
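The two components the paper analyzes correspond to the factored scoring of the end-to-end model (Lee et al., 2017): the score of linking span j to candidate antecedent i decomposes as s(i, j) = s_m(i) + s_m(j) + s_c(i, j), where s_m is the mention-detector score and s_c the mention-linker score, with a dummy antecedent scored zero. A minimal illustrative sketch of this decomposition, with hypothetical scores and not the authors' code:

```python
# Sketch of the factored antecedent scoring behind end-to-end coreference
# models: detector scores s_m, linker scores s_c, dummy antecedent = 0.
import math

def link_score(s_m, s_c, i, j):
    """s(i, j) = s_m(i) + s_m(j) + s_c(i, j); None is the dummy antecedent."""
    if i is None:  # epsilon: span j has no antecedent / is not a mention
        return 0.0
    return s_m[i] + s_m[j] + s_c[(i, j)]

def antecedent_distribution(s_m, s_c, j, candidates):
    """Softmax over the dummy antecedent and all candidate spans i < j."""
    options = [None] + list(candidates)
    scores = [link_score(s_m, s_c, i, j) for i in options]
    z = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - z) for s in scores]
    total = sum(exps)
    return {i: e / total for i, e in zip(options, exps)}

# Toy example: span 2 with candidate antecedents 0 and 1 (hypothetical scores).
s_m = {0: 1.0, 1: -2.0, 2: 0.5}           # mention-detector scores
s_c = {(0, 2): 2.0, (1, 2): -1.0}         # mention-linker pairwise scores
dist = antecedent_distribution(s_m, s_c, 2, [0, 1])
```

Training maximizes the marginal likelihood of the gold antecedents under this distribution; pruning spans by s_m before linking is what makes the model tractable, and it is exactly this detector/linker division of labor that the paper probes.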


Citations

Incorporating Constituent Syntax for Coreference Resolution
TLDR: This work proposes a simple yet effective graph-based method to incorporate constituent syntactic structures and explores utilising higher-order neighbourhood information to encode rich structures in constituent trees.

Adapting Coreference Resolution Models through Active Learning
TLDR: This paper explores how to actively label coreference, examining sources of model uncertainty and document reading costs, and compares uncertainty sampling strategies and their advantages through thorough error analysis.

Moving on from OntoNotes: Coreference Resolution Model Transfer
TLDR: This work examines eleven target datasets and finds that continued training is consistently effective and especially beneficial when there are few target documents, and establishes new benchmarks across several datasets, including state-of-the-art results on PreCo.

References

Showing 1-10 of 34 references
End-to-end Neural Coreference Resolution
TLDR: This work introduces the first end-to-end coreference resolution model, trained to maximize the marginal likelihood of gold antecedent spans from coreference clusters and factored to enable aggressive pruning of potential mentions.
Revealing the Myth of Higher-Order Inference in Coreference Resolution
TLDR: This paper implements an end-to-end coreference system as well as four HOI approaches: attended antecedent, entity equalization, span clustering, and cluster merging, the latter two of which are newly proposed.
Neural Coreference Resolution with Deep Biaffine Attention by Joint Mention Detection and Mention Clustering
TLDR: This paper proposes to improve the end-to-end coreference resolution system by using a biaffine attention model to get antecedent scores for each possible mention, jointly optimizing mention detection accuracy and mention clustering accuracy given the mention cluster labels.
Solving Hard Coreference Problems
TLDR: This paper presents a general coreference resolution system that significantly improves state-of-the-art performance on hard, Winograd-style pronoun resolution cases, while still performing at the state-of-the-art level on standard coreference resolution datasets.
PreCo: A Large-scale Dataset in Preschool Vocabulary for Coreference Resolution
TLDR: Experiments show that, thanks to higher training-test overlap, error analysis on PreCo is more efficient than on OntoNotes, a popular existing dataset; PreCo also annotates singleton mentions, making it possible for the first time to quantify the influence a mention detector has on coreference resolution performance.
Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning
TLDR: This work presents a new crowdsourced dataset containing more than 24K span-selection questions that require resolving coreference among entities in over 4.7K English paragraphs from Wikipedia, and shows that state-of-the-art reading comprehension models perform significantly worse than humans on this benchmark.
Learning Global Features for Coreference Resolution
TLDR: This work proposes using RNNs to learn latent, global representations of entity clusters directly from their mentions; these representations are especially useful for predicting pronominal mentions and can be incorporated into an end-to-end coreference system that outperforms the state of the art without requiring any additional search.
CorefQA: Coreference Resolution as Query-based Span Prediction
TLDR: CorefQA is presented, an accurate and extensible approach that formulates coreference resolution as a span prediction task, as in question answering, which provides the flexibility of retrieving mentions left out at the mention proposal stage.
Latent Structures for Coreference Resolution
TLDR: A unified representation of different approaches to coreference resolution is proposed in terms of the structure they operate on, and a systematic analysis of the output of these approaches is conducted, highlighting differences and similarities.
Stanford’s Multi-Pass Sieve Coreference Resolution System at the CoNLL-2011 Shared Task
TLDR: The coreference resolution system submitted by Stanford at the CoNLL-2011 shared task was ranked first in both tracks, with a score of 57.8 in the closed track and 58.3 in the open track.
...