Corpus ID: 232478801

Detecting over/under-translation errors for determining adequacy in human translations

@article{Gupta2021DetectingOE,
  title={Detecting over/under-translation errors for determining adequacy in human translations},
  author={Prabhakar Gupta and Ridha Juneja and Anil Kumar Nelakanti and Tamojit Chatterjee},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.00267}
}
We present a novel approach to detecting over- and under-translations (OT/UT) as part of adequacy error checks in translation evaluation. We do not restrict ourselves to machine translation (MT) outputs and specifically target applications with a human-generated translation pipeline. The goal of our system is to identify OT/UT errors in human-translated video subtitles with high error recall. We achieve this without reference translations by learning a model on synthesized training data. We…
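The abstract does not detail how the synthetic training data is produced. A minimal sketch, assuming one plausible scheme (not necessarily the paper's): given an aligned (source, target) subtitle pair, drop a span of the target to simulate an under-translation, or duplicate a span to simulate an over-translation. The function name and span-size heuristic below are illustrative assumptions.

```python
import random

def synthesize_ot_ut(source, target, mode):
    """Create a synthetic over-/under-translation example from an aligned
    (source, target) pair. Illustrative only; the paper's actual synthesis
    procedure is not described in the abstract."""
    words = target.split()
    k = max(1, len(words) // 3)  # size of the perturbed span (assumed heuristic)
    start = random.randrange(len(words) - k + 1)
    if mode == "under":
        # Under-translation: drop a contiguous span, leaving part of the
        # source content untranslated.
        words = words[:start] + words[start + k:]
    elif mode == "over":
        # Over-translation: duplicate a span, adding target content with
        # no counterpart in the source.
        words = words[:start + k] + words[start:start + k] + words[start + k:]
    return source, " ".join(words), mode
```

A model trained to separate such perturbed pairs from untouched ones needs no reference translation at test time, matching the reference-free setup described in the abstract.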

Tables from this paper

References

Showing 1–10 of 23 references
Are we Estimating or Guesstimating Translation Quality?
It is suggested that although QE models might capture fluency of translated sentences and complexity of source sentences, they cannot model adequacy of translations effectively.
Otem&Utem: Over- and Under-Translation Evaluation Metric for NMT
Two quantitative metrics are proposed, the Otem and Utem, to automatically evaluate the system performance in terms of over- and under-translation respectively, based on the proportion of mismatched n-grams between gold reference and system translation.
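The mismatched-n-gram idea behind Otem and Utem can be sketched roughly as follows. This is a simplified illustration, not the official formulation: it scores over-translation as the proportion of system n-grams absent from (or over-counted relative to) the reference, and under-translation as the proportion of reference n-grams the system misses.

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def otem_utem(hypothesis, reference, n=1):
    """Rough sketch of over-/under-translation scores via n-gram mismatch.
    Counter subtraction keeps only the surplus counts on each side."""
    hyp = ngrams(hypothesis.split(), n)
    ref = ngrams(reference.split(), n)
    extra = sum((hyp - ref).values())    # system n-grams beyond the reference
    missing = sum((ref - hyp).values())  # reference n-grams the system dropped
    otem = extra / max(sum(hyp.values()), 1)
    utem = missing / max(sum(ref.values()), 1)
    return otem, utem
```

Higher `otem` suggests hallucinated or repeated content; higher `utem` suggests omitted content. Note this sketch, like the metric it approximates, requires a gold reference, which is exactly the dependency the paper above avoids.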
Estimating the Sentence-Level Quality of Machine Translation Systems
Results show that the proposed method allows obtaining good estimates and that identifying a reduced set of relevant features plays an important role in predicting the quality of sentences produced by machine translation systems when reference translations are not available.
DeepSubQE: Quality estimation for subtitle translations
This work shows how existing QE methods are inadequate and proposes the method DeepSubQE as a system to estimate quality of translation given subtitles data for a pair of languages and creates a hybrid network which learns semantic and syntactic features of bilingual data and compares it with only-LSTM and only-CNN networks.
Modeling Coverage for Neural Machine Translation
This paper proposes coverage-based NMT, which maintains a coverage vector to keep track of the attention history and improves both translation quality and alignment quality over standard attention-based NMT.
A Study of Translation Edit Rate with Targeted Human Annotation
A new, intuitive measure for evaluating machine-translation output is examined that avoids the knowledge-intensiveness of more meaning-based approaches and the labor-intensiveness of human judgments; results indicate that HTER correlates with human judgments better than HMETEOR, and that the four-reference variants of TER and HTER correlate with human judgments as well as, or better than, a second human judgment does.
Bleu: a Method for Automatic Evaluation of Machine Translation
This work proposes a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run.
Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond
An architecture to learn joint multilingual sentence representations for 93 languages, belonging to more than 30 different families and written in 28 different scripts using a single BiLSTM encoder with a shared byte-pair encoding vocabulary for all languages, coupled with an auxiliary decoder and trained on publicly available parallel corpora.
Towards Automatic Error Analysis of Machine Translation Output
A framework for automatic error analysis and classification is proposed, based on identifying actual erroneous words using algorithms for computing Word Error Rate and Position-independent Word Error Rate; this is a first step towards developing automatic evaluation measures that provide more specific information about particular translation problems.
GECToR – Grammatical Error Correction: Tag, Not Rewrite
In this paper, we present a simple and efficient GEC sequence tagger using a Transformer encoder. Our system is pre-trained on synthetic data and then fine-tuned in two stages: first on errorful…