Paraphrasing vs Coreferring: Two Sides of the Same Coin

@article{Meged2020ParaphrasingVC,
  title={Paraphrasing vs Coreferring: Two Sides of the Same Coin},
  author={Y. Meged and Avi Caciularu and Vered Shwartz and Ido Dagan},
  journal={ArXiv},
  year={2020},
  volume={abs/2004.14979}
}
We study the potential synergy between two different NLP tasks, both confronting predicate lexical variability: identifying predicate paraphrases, and event coreference resolution. First, we used annotations from an event coreference dataset as distant supervision to re-score heuristically-extracted predicate paraphrases. The new scoring gained more than 18 points in average precision over the ranking produced by the original scoring method. Then, we used the same re-ranking features as additional…
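As a rough illustration of the re-scoring step described in the abstract, the sketch below trains a simple classifier on coreference-derived labels to re-rank heuristically-extracted paraphrase pairs and compares average precision before and after. The pair list, feature values, and choice of logistic regression are all assumptions for illustration, not the paper's actual setup.

```python
# Minimal sketch: re-score predicate paraphrase pairs using event-coreference
# annotations as distant supervision. All data and features are invented.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score

# Each candidate pair: (predicate_1, predicate_2, original_score, features, label)
# Distant label: 1 if the two predicates appear in coreferring event mentions,
# 0 if they appear in the same topic but never corefer.
train_pairs = [
    ("acquire", "buy",      0.71, [0.71, 0.32, 1.0], 1),
    ("acquire", "sell",     0.40, [0.40, 0.05, 0.0], 0),
    ("say",     "announce", 0.65, [0.65, 0.48, 1.0], 1),
    ("say",     "deny",     0.55, [0.55, 0.02, 0.0], 0),
]

X = [p[3] for p in train_pairs]
y = [p[4] for p in train_pairs]
orig_scores = [p[2] for p in train_pairs]

reranker = LogisticRegression().fit(X, y)

# New score = probability that a pair is coreference-compatible.
new_scores = reranker.predict_proba(X)[:, 1]

print("AP (original scores):", average_precision_score(y, orig_scores))
print("AP (re-scored):      ", average_precision_score(y, new_scores))
```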

Citations

Event Coreference Resolution with their Paraphrases and Argument-aware Embeddings

TLDR
EPASE recognizes deep paraphrase relations in the event-specific context of a sentence and can cover event paraphrases in more situations, yielding better generalization; in addition, the embeddings of argument roles are encoded into the event embedding without relying on a fixed number and type of arguments, which gives EPASE better scalability.
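The argument-agnostic encoding mentioned in this summary can be illustrated generically: pool over however many argument-role embeddings an event happens to have before combining them with the trigger embedding. This is an invented sketch, not EPASE's actual architecture.

```python
import numpy as np

# Generic sketch of an argument-aware event embedding that does not depend on
# a fixed number of arguments: mean-pool the available argument-role vectors.
def event_embedding(trigger_emb, arg_role_embs):
    trigger_emb = np.asarray(trigger_emb, dtype=float)
    if arg_role_embs:
        arg_pool = np.mean(np.asarray(arg_role_embs, dtype=float), axis=0)
    else:
        arg_pool = np.zeros_like(trigger_emb)
    return np.concatenate([trigger_emb, arg_pool])

# Works the same whether the event has two arguments or three.
print(event_embedding([1.0, 0.0], [[0.2, 0.4], [0.6, 0.0]]).shape)  # (4,)
```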

Sequential Cross-Document Coreference Resolution

TLDR
A new model is proposed that extends the efficient sequential prediction paradigm for coreference resolution to cross-document settings and achieves competitive results for both entity and event coreference, providing strong evidence of the efficacy of both sequential models and higher-order inference in cross-document settings.
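A toy version of the sequential prediction paradigm referenced above: mentions are processed one at a time and either attached to the best-scoring existing cluster or used to start a new one. The `score` function below is a hypothetical placeholder for the paper's learned model.

```python
# Toy sketch of sequential coreference prediction: process mentions in order
# and attach each to the best-scoring existing cluster, or open a new cluster.

def score(mention, cluster):
    # Placeholder compatibility: word overlap with any mention in the cluster.
    return max(len(set(mention.split()) & set(m.split())) for m in cluster)

def sequential_cluster(mentions, threshold=1):
    clusters = []
    for mention in mentions:
        best, best_score = None, -1
        for cluster in clusters:
            s = score(mention, cluster)
            if s > best_score:
                best, best_score = cluster, s
        if best is not None and best_score >= threshold:
            best.append(mention)
        else:
            clusters.append([mention])
    return clusters

print(sequential_cluster(["earthquake hits Japan",
                          "Japan earthquake",
                          "election results announced"]))
# [['earthquake hits Japan', 'Japan earthquake'], ['election results announced']]
```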

Generalizing Cross-Document Event Coreference Resolution Across Multiple Corpora

TLDR
It is found that the importance of event actions, event time, and so forth for resolving coreference in practice varies greatly between the corpora, and that several systems overfit to the structure of the ECB+ corpus.

Hierarchical Graph Convolutional Networks for Jointly Resolving Cross-document Coreference of Entity and Event Mentions

TLDR
A novel deep learning model is proposed for CDECR that introduces hierarchical graph convolutional networks (GCNs), including a document-level GCN, to jointly resolve entity and event mentions.

Cross-Document Event Coreference Resolution Beyond Corpus-Tailored Systems

TLDR
This work defines a uniform evaluation setup involving three CDCR corpora: ECB+, the Gun Violence Corpus and the Football Coreference Corpus, and compares a corpus-independent, feature-based system against a recent neural system developed for ECB+.

Enhancing Structure Preservation in Coreference Resolution by Constrained Graph Encoding

TLDR
A general graph schema derived from diverse knowledge sources is proposed to directly link mentions, so that rich information can be exchanged via the relevant connections, and two adaptive constraints are imposed during graph encoding to regularize the embedding space.

Event and Entity Coreference using Trees to Encode Uncertainty in Joint Decisions

TLDR
An alternating optimization method for inference is proposed that, when clustering event mentions, considers the uncertainty of the entity-mention clustering and vice versa; the proposed joint model is shown to provide empirical advantages over state-of-the-art independent and joint models.
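Schematically, the alternating optimization described here can be written as a loop that re-clusters one mention type while conditioning on the current clustering of the other; `cluster_events` and `cluster_entities` below are hypothetical stand-ins for the actual inference steps.

```python
# Schematic sketch of alternating optimization for joint entity/event
# coreference. The two clustering functions are passed in as placeholders.

def alternate(event_mentions, entity_mentions, cluster_events, cluster_entities,
              n_rounds=5):
    event_clusters = [[m] for m in event_mentions]    # start from singletons
    entity_clusters = [[m] for m in entity_mentions]
    for _ in range(n_rounds):
        # Re-cluster events conditioning on current entity clusters (e.g. via
        # argument-overlap features), then do the symmetric update for entities.
        event_clusters = cluster_events(event_mentions, entity_clusters)
        entity_clusters = cluster_entities(entity_mentions, event_clusters)
    return event_clusters, entity_clusters
```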

Focus on what matters: Applying Discourse Coherence Theory to Cross Document Coreference

TLDR
This work models the entities/events in a reader's focus as a neighborhood within a learned latent embedding space which minimizes the distance between mentions and the centroids of their gold coreference clusters, leading to a robust coreference resolution model that is feasible to apply to downstream tasks.
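The neighborhood objective summarized above amounts to pulling each mention embedding toward the centroid of its gold coreference cluster; a generic NumPy rendering of such a centroid loss follows (not the paper's actual training code).

```python
import numpy as np

def centroid_loss(mention_embs, cluster_ids):
    """Mean squared distance between each mention embedding and the centroid
    of its gold coreference cluster (a generic stand-in for the paper's loss)."""
    mention_embs = np.asarray(mention_embs, dtype=float)
    cluster_ids = np.asarray(cluster_ids)
    loss = 0.0
    for cid in np.unique(cluster_ids):
        members = mention_embs[cluster_ids == cid]
        centroid = members.mean(axis=0)
        loss += ((members - centroid) ** 2).sum()
    return loss / len(mention_embs)

# Two tight clusters -> small loss; mixing their members would increase it.
embs = [[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]]
print(centroid_loss(embs, [0, 0, 1, 1]))
```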

iFacetSum: Coreference-based Interactive Faceted Summarization for Multi-Document Exploration

TLDR
iFacetSum, a web application for exploring topical document collections, integrates interactive summarization together with faceted search by providing a novel faceted navigation scheme that yields abstractive summaries for the user's selections.

Continuous Entailment Patterns for Lexical Inference in Context

TLDR
In a direct comparison with discrete patterns, CONAN consistently leads to improved performance, setting a new state of the art in lexical inference in context and raising important questions regarding the understanding of PLMs using text patterns.

References

Showing 1-10 of 41 references

Extracting Lexically Divergent Paraphrases from Twitter

TLDR
A new model suited to identifying paraphrases within short messages on Twitter is presented, along with a novel annotation methodology that enabled crowdsourcing a paraphrase corpus from Twitter.

Scoring Coreference Partitions of Predicted Mentions: A Reference Implementation

TLDR
It is argued that mention manipulation for scoring predicted mentions is unnecessary and potentially harmful, as it could produce unintuitive results; an open-source, thoroughly tested reference implementation of the main coreference evaluation measures is made available.

A Continuously Growing Dataset of Sentential Paraphrases

TLDR
A new method to collect large-scale sentential paraphrases from Twitter by linking tweets through shared URLs is presented, yielding the largest human-labeled paraphrase corpus to date (51,524 sentence pairs) and the first cross-domain benchmark for automatic paraphrase identification.
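The URL-linking idea is straightforward to sketch: tweets sharing a URL become candidate sentential paraphrases. The data below is invented for illustration.

```python
from collections import defaultdict
from itertools import combinations

# Toy sketch: tweets that share the same URL become candidate paraphrase pairs.
tweets = [
    {"text": "Apple unveils new iPhone", "url": "http://example.com/a"},
    {"text": "New iPhone announced by Apple", "url": "http://example.com/a"},
    {"text": "Stocks fall after rate hike", "url": "http://example.com/b"},
]

by_url = defaultdict(list)
for t in tweets:
    by_url[t["url"]].append(t["text"])

candidate_pairs = [pair
                   for texts in by_url.values()
                   for pair in combinations(texts, 2)]
print(candidate_pairs)
# [('Apple unveils new iPhone', 'New iPhone announced by Apple')]
```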

Using a sledgehammer to crack a nut? Lexical diversity and event coreference resolution

TLDR
This paper augments the EventCorefBank (ECB) with a new corpus component consisting of 502 texts that describe different instances of the event types already captured by the 43 ECB topics, making it more representative of news articles on the web.

Same Referent, Different Words: Unsupervised Mining of Opaque Coreferent Mentions

TLDR
A new unsupervised method for mining opaque pairs is presented, along with a dictionary of opaque coreferent mentions that can be integrated into any coreference system and is easily extendable using news aggregators.

Paraphrasing Revisited with Neural Machine Translation

TLDR
This paper revisits bilingual pivoting in the context of neural machine translation and presents a paraphrasing model based purely on neural networks, which represents paraphrases in a continuous space, estimates the degree of semantic relatedness between text segments of arbitrary length, and generates candidate paraphrases for any source input.
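For context, the classic bilingual pivoting formulation that this work revisits obtains a paraphrase probability by marginalizing over foreign-language translations, p(e2 | e1) = Σ_f p(e2 | f) p(f | e1). The toy probability tables below are invented.

```python
# Toy bilingual pivoting: p(e2 | e1) = sum_f p(e2 | f) * p(f | e1).
p_f_given_e = {"buy":  {"acheter": 0.7, "acquérir": 0.3}}
p_e_given_f = {"acheter":  {"buy": 0.6, "purchase": 0.4},
               "acquérir": {"buy": 0.3, "acquire": 0.7}}

def pivot_prob(e1, e2):
    return sum(p_e_given_f[f].get(e2, 0.0) * p
               for f, p in p_f_given_e[e1].items())

print(pivot_prob("buy", "purchase"))  # 0.7*0.4 + 0.3*0.0 = 0.28
print(pivot_prob("buy", "acquire"))   # 0.7*0.0 + 0.3*0.7 = 0.21
```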

Learning to Paraphrase: An Unsupervised Approach Using Multiple-Sequence Alignment

TLDR
This work applies multiple-sequence alignment to sentences gathered from unannotated comparable corpora: it learns a set of paraphrasing patterns represented by word lattice pairs and automatically determines how to apply these patterns to rewrite new sentences.

Revisiting Joint Modeling of Cross-document Entity and Event Coreference Resolution

TLDR
This work jointly models entity and event coreference, proposing a neural architecture for cross-document coreference resolution that represents each mention by its lexical span, surrounding context, and relation to entity (event) mentions via predicate-argument structures.

A model-theoretic coreference scoring scheme

This note describes a scoring scheme for the coreference task in MUC6. It improves on the original approach by: (1) grounding the scoring scheme in terms of a model; (2) producing more intuitive recall and precision scores; and (3) not requiring explicit computation of the transitive closure of coreference.
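For reference, the model-theoretic (MUC) recall defined by this scheme counts, for each gold cluster, the coreference links retained once the cluster is partitioned by the system output; precision is the same computation with the gold and system clusterings swapped. A small sketch:

```python
# Sketch of MUC recall: for each gold cluster, count the links kept after
# intersecting it with the system clusters (unclustered mentions count as
# singletons).
def muc_recall(gold_clusters, sys_clusters):
    num, den = 0, 0
    for gold in gold_clusters:
        gold = set(gold)
        parts = [gold & set(s) for s in sys_clusters if gold & set(s)]
        covered = set().union(*parts) if parts else set()
        n_parts = len(parts) + len(gold - covered)
        num += len(gold) - n_parts
        den += len(gold) - 1
    return num / den if den else 0.0

gold = [["a", "b", "c"]]
sys = [["a", "b"], ["c"]]
print(muc_recall(gold, sys))  # (3 - 2) / (3 - 1) = 0.5
```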

Automatic paraphrase acquisition from news articles

TLDR
This is the initial attempt at automatically extracting paraphrases from a corpus, and the results are promising.