ContraCAT: Contrastive Coreference Analytical Templates for Machine Translation

@inproceedings{Stojanovski2020ContraCATCC,
  title={ContraCAT: Contrastive Coreference Analytical Templates for Machine Translation},
  author={Dario Stojanovski and Benno Krojer and Denis Peskov and Alexander M. Fraser},
  booktitle={COLING},
  year={2020}
}
Recent high scores on pronoun translation using context-aware neural machine translation have suggested that current approaches work well. ContraPro is a notable example of a contrastive challenge set for English→German pronoun translation. The high scores achieved by transformer models may suggest that they are able to effectively model the complicated set of inferences required to carry out pronoun translation. This entails the ability to determine which entities could be referred to… 
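For illustration, a minimal sketch of how ContraPro-style contrastive evaluation works: the model scores the reference translation and a contrastive variant that differs only in the pronoun, and passes the example if it prefers the correct one. This is an assumed setup, not the paper's exact code; score(src, tgt) is a hypothetical hook returning the model's log-probability of tgt given src, which any NMT toolkit with forced decoding could supply.

# Minimal sketch of contrastive pronoun evaluation (assumed setup).
# score(src, tgt): hypothetical hook returning log p(tgt | src) under the NMT model.
def contrastive_accuracy(examples, score):
    """examples: iterable of (source, correct_target, contrastive_target) triples."""
    correct = 0
    for src, ref, contrastive in examples:
        # The example counts as passed if the model prefers the correct pronoun choice.
        if score(src, ref) > score(src, contrastive):
            correct += 1
    return correct / len(examples)

examples = [
    ("I saw the cat. It was asleep.",
     "Ich sah die Katze. Sie schlief.",   # correct: "Katze" is feminine, so "sie"
     "Ich sah die Katze. Er schlief."),   # contrastive: wrong pronoun gender
]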

On the Limits of Minimal Pairs in Contrastive Evaluation

It is argued that two conditions must hold for the assumption that model behavior on contrastive pairs is predictive of model behavior at large: the tested hypothesis should be well-motivated, and the test data should be chosen so as to minimize the distributional discrepancy between evaluation time and deployment time.

Wino-X: Multilingual Winograd Schemas for Commonsense Reasoning and Coreference Resolution

This work presents Wino-X, a parallel dataset of German, French, and Russian schemas aligned with their English counterparts, to investigate whether neural machine translation (NMT) models can perform coreference resolution that requires commonsense knowledge, and whether multilingual language models (MLLMs) are capable of commonsense reasoning across multiple languages.

Divide and Rule: Effective Pre-Training for Context-Aware Multi-Encoder Translation Models

This work proposes to pre-train the contextual parameters over split sentence pairs, which makes efficient use of the available data and increases the contextual training signal by breaking intra-sentential syntactic relations, thus pushing the model to search the context for disambiguating clues more frequently.
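As a rough illustration of the splitting idea, the sketch below cuts each sentence pair in half and treats the first half as synthetic preceding context, so that disambiguating material is moved out of the current segment. The mid-point split is a simplification; the paper's actual splitting criterion may differ.

# Hedged sketch: turn one sentence pair into a (context, current) pair by splitting it.
def split_pair(src_tokens, tgt_tokens):
    s_mid, t_mid = len(src_tokens) // 2, len(tgt_tokens) // 2
    return {
        "src_context": src_tokens[:s_mid], "src_current": src_tokens[s_mid:],
        "tgt_context": tgt_tokens[:t_mid], "tgt_current": tgt_tokens[t_mid:],
    }

example = split_pair("She left early because the meeting ran late".split(),
                     "Sie ging früh weil die Besprechung zu lange dauerte".split())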

Measuring and Increasing Context Usage in Context-Aware Machine Translation

A new metric, conditional cross-mutual information, is introduced to quantify how much context-aware models use context; it is found that target context is referenced more than source context, and that including more context has a diminishing effect on results.
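Conditional cross-mutual information can be read as the average gain in target log-probability when context is supplied. A hedged sketch, assuming per-sentence log-probabilities from a context-aware and a context-agnostic run of the same model (not the authors' code):

# Hedged sketch of the CXMI estimate described above.
def cxmi(logprobs_with_context, logprobs_without_context):
    assert len(logprobs_with_context) == len(logprobs_without_context)
    gains = [w - wo for w, wo in zip(logprobs_with_context, logprobs_without_context)]
    # Positive values indicate that the model actually makes use of the context.
    return sum(gains) / len(gains)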

Hypergraph Contrastive Collaborative Filtering

A new self-supervised recommendation framework, Hypergraph Contrastive Collaborative Filtering (HCCF), is proposed to jointly capture local and global collaborative relations with a hypergraph-enhanced cross-view contrastive learning architecture, enhancing the discrimination ability of the GNN-based collaborative filtering paradigm and comprehensively capturing the complex high-order dependencies among users.

Divide and Rule: Training Context-Aware Multi-Encoder Translation Models with Little Resources

This work proposes an efficient alternative, based on splitting sentence pairs, that enriches the training signal of a set of parallel sentences by breaking intra-sentential syntactic links, thus frequently pushing the model to search the context for disambiguating clues.

References

Showing 1-10 of 38 references

Evaluating Pronominal Anaphora in Machine Translation: An Evaluation Measure and a Test Suite

An extensive, targeted dataset is contributed that can be used as a test suite for pronoun translation into English, covering multiple source languages and different pronoun errors drawn from real system translations, and an evaluation measure is proposed to differentiate good and bad pronoun translations.

Coreference and Coherence in Neural Machine Translation: A Study Using Oracle Experiments

It is shown that NMT models taking advantage of context oracle signals can achieve considerable gains in BLEU, of up to 7.02 BLEU for coreference and 1.89 BLEU for coherence on subtitles translation.

Context-Aware Neural Machine Translation Learns Anaphora Resolution

A context-aware neural machine translation model is introduced, designed in such a way that the flow of information from the extended context to the translation model can be controlled and analyzed.
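One common way to make the flow of context information controllable, in the spirit of this model, is a gated combination of the source and context representations. The PyTorch sketch below is an assumed simplification, not the paper's exact architecture.

import torch
import torch.nn as nn

class ContextGate(nn.Module):
    """Hedged sketch: gate how much of the context representation reaches the decoder."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, source_repr, context_repr):
        # g close to 1 keeps the source representation; g close to 0 lets context in.
        g = torch.sigmoid(self.gate(torch.cat([source_repr, context_repr], dim=-1)))
        return g * source_repr + (1 - g) * context_repr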

Evaluating Discourse Phenomena in Neural Machine Translation

This article presents hand-crafted discourse test sets designed to test the recently proposed multi-encoder NMT models’ ability to exploit previous source and target sentences, and explores a novel way of exploiting context from the previous sentence.

Improving Anaphora Resolution in Neural Machine Translation Using Curriculum Learning

This work proposes a carefully designed training curriculum that facilitates better anaphora resolution in context-aware NMT, and trains context-aware models that are improved with respect to coreference resolution, even though both the baseline and the improved system have access to exactly the same information at test time.

How Grammatical is Character-level Neural Machine Translation? Assessing MT Quality with Contrastive Translation Pairs

This work presents LingEval97, a large-scale data set of 97,000 contrastive translation pairs based on the WMT English→German translation task, with errors automatically created with simple rules, and finds that recently introduced character-level NMT systems perform better at transliteration than models with byte-pair encoding segmentation, but perform more poorly at morphosyntactic agreement and at translating discontiguous units of meaning.

When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion

This work performs a human study on an English-Russian subtitles dataset and identifies deixis, ellipsis, and lexical cohesion as three main sources of inconsistency; it also introduces a model suitable for this scenario and demonstrates major gains over a context-agnostic baseline on new benchmarks without sacrificing performance as measured with BLEU.

PROTEST: A Test Suite for Evaluating Pronouns in Machine Translation

The proposed test suite comprises 250 hand-selected pronoun tokens and an automatic evaluation method which compares the translations of pronouns in MT output with those in the reference translation, designed to support analysis of system performance at the level of individual pronoun groups.
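A hedged sketch of the reference-based comparison: for each annotated source pronoun, the tokens aligned to it in the MT output are compared with those aligned to it in the reference. The alignment itself is assumed to come from an external word aligner; this is an illustrative setup, not the suite's actual implementation.

# Hedged sketch (assumed setup): a pronoun counts as matched if the MT tokens aligned
# to the source pronoun equal the reference tokens aligned to it, ignoring case.
def pronoun_match_rate(cases):
    """cases: iterable of (mt_aligned_tokens, ref_aligned_tokens) per source pronoun."""
    matches = sum(
        1 for mt_toks, ref_toks in cases
        if [t.lower() for t in mt_toks] == [t.lower() for t in ref_toks]
    )
    return matches / len(cases)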

Evaluating Gender Bias in Machine Translation

An automatic gender bias evaluation method for eight target languages with grammatical gender, based on morphological analysis, is devised; it shows that four popular industrial MT systems and two recent state-of-the-art academic MT models are significantly prone to gender-biased translation errors for all tested target languages.
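A hedged sketch of the morphology-based evaluation: the grammatical gender the MT system assigned to each entity is recovered with a morphological analyzer (stubbed out here as a hypothetical predicted_gender hook) and compared against the gold gender of the source entity. This is an assumed simplification, not the paper's exact pipeline.

# Hedged sketch (assumed setup).
def gender_accuracy(cases, predicted_gender):
    """cases: iterable of (translated_sentence, entity, gold_gender);
    predicted_gender: hypothetical hook wrapping a morphological analyzer."""
    hits = sum(1 for sent, entity, gold in cases
               if predicted_gender(sent, entity) == gold)
    return hits / len(cases)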

Modelling pronominal anaphora in statistical machine translation

A word dependency model is presented for SMT, which can represent links between word pairs in the same or in different sentences, and is used to integrate the output of a coreference resolution system into English-German SMT with a view to improving the translation of anaphoric pronouns.