Corpus ID: 17848183

Perception vs. reality: measuring machine translation post-editing productivity

@inproceedings{gaspari-etal-perception,
  title={Perception vs. reality: measuring machine translation post-editing productivity},
  author={Federico Gaspari and Antonio Toral and Sudip Kumar Naskar and Declan Groves and Andy Way},
  booktitle={Conference of the Association for Machine Translation in the Americas}
}
This paper presents a study of user-perceived versus actual machine translation (MT) post-editing effort and productivity gains, focusing on two bidirectional language pairs: English-German and English-Dutch. Twenty experienced media professionals post-edited statistical MT output and also manually translated comparative texts within a production environment. The paper compares the actual post-editing time against the users’ perception of the effort and time required to post-edit the MT output to…

Cognitive Effort in Post-Editing of Machine Translation

This thesis presents an empirical study that examines MT post-editing by contrasting the cognitive effort required by this activity with a number of its key elements, including characteristics of the source text and of the MT output, post-editors’ individual traits, and the quality of the post-edited text as assessed by human evaluators.

Translation vs Post-editing of NMT Output: Measuring effort in the English-Greek language pair

Machine Translation (MT) has been increasingly used in industrial translation production scenarios thanks to the development of Neural Machine Translation (NMT) models and the improvement of MT…

Translation Quality and Effort: Options versus Post-editing

This paper directly compares two common assistance types – selection from lists of translation options, and post-editing of machine translation output produced by Google Translate – across two significantly different subject domains for Chinese-to-English translation.

Translators’ perceptions of literary post-editing using statistical and neural machine translation

In the context of recent improvements in the quality of machine translation (MT) output and new use cases being found for that output, this article reports on an experiment using statistical and…

Machine translation and Welsh: analysing free statistical machine translation for the professional translation of an under-researched language pair

A key-logging study carried out to test the benefits of post-editing Machine Translation for the professional translator within a hypothetico-deductive framework finds little difference in quality between the translated and post-edited texts, and that both sets of texts were acceptable according to accuracy and fidelity.

A mixed-methods study with experienced and novice translators in the English-Greek language pair

In recent years, Post-Editing (PE) has been increasingly gaining ground, especially following the advent of neural machine translation (NMT) models. However, translators still approach PE with…

MMPE: A Multi-Modal Interface for Post-Editing Machine Translation

MMPE is presented, the first prototype to combine traditional input modes with pen, touch, and speech modalities for PE of MT, and the results of an evaluation suggest that pen and touch interaction are suitable for deletion and reordering tasks, while they are of limited use for longer insertions.

Post-editing for Professional Translators: Cheer or Fear?

Attitudes of translators post-editing for the first time are studied and related to their productivity rates, and are compared with a survey in which professional post-editors assess their perception of the task in the current marketplace.

A Satisfaction Survey on the Human Translation Outcomes and Machine Translation Post-Editing Outcomes

This cross-sectional survey research was carried out to investigate satisfaction with translation outcomes as produced by human translation and by machine translation post-editing…

Enhancement of post-editing performance: introducing machine translation post-editing in translator training

The dissertation contributes to the definition of the scope of post-editors’ professional expertise, offers a scalable training model and describes to what extent such a model may enhance post-editing performance in undergraduate translation students.



Productivity and quality in MT post-editing

Results suggest that translators have higher productivity and quality when using machine-translated output than when processing fuzzy matches from translation memories, and technical experience seems to have an impact on productivity but not on quality.

Post-editing time as a measure of cognitive effort

This paper presents two experiments investigating the connection between post-editing time and cognitive effort, and examines whether sentences with long and short post-editing times involve edits of different levels of difficulty.

Machine Translation Infrastructure and Post-editing Performance at Autodesk

The Moses-based infrastructure developed and used as a productivity tool for the localisation of software documentation and user interface strings into twelve languages at Autodesk is presented; a strong correlation is indicated between the amount of editing translators apply to the raw MT output and their productivity gain.

A Productivity Test of Statistical Machine Translation Post-Editing in a Typical Localisation Context

Results of this productivity test of statistical machine translation post-editing in a typical localisation context show a productivity increase for each participant, with significant variance across individuals.

Post-editing of Machine Translation: Processes and Applications

This volume is a compilation of work by researchers, developers and practitioners of post-editing, presented at two recent events on the topic. Interest in post-editing is partly due to the increasing quality of machine translation output, but also to the availability of free, reliable software for both machine translation and post-editing.

To post-edit or not to post-edit? Estimating the benefits of MT post-editing for a European organization

An ongoing large-scale machine translation post-editing evaluation campaign is described, the purpose of which is to estimate the business benefits of using machine translation for the European Parliament.

A Study of Translation Edit Rate with Targeted Human Annotation

A new, intuitive measure for evaluating machine translation output is examined that avoids the knowledge-intensiveness of more meaning-based approaches and the labor-intensiveness of human judgments; results indicate that HTER correlates with human judgments better than HMETEOR, and that the four-reference variants of TER and HTER correlate with human judgments as well as, or better than, a second human judgment does.
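The core TER computation can be sketched as a word-level edit distance normalized by reference length. The sketch below is a hypothetical helper, not the authors' implementation: it omits the block-shift edit that distinguishes full TER from plain word error rate, keeping only insertions, deletions, and substitutions.

```python
def ter_no_shifts(hypothesis, reference):
    """Simplified TER: word-level edit distance (insert/delete/substitute,
    no block shifts) divided by the reference length."""
    hyp, ref = hypothesis.split(), reference.split()
    # dp[i][j] = minimum edits to turn hyp[:i] into ref[:j]
    dp = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        dp[i][0] = i  # delete all remaining hypothesis words
    for j in range(len(ref) + 1):
        dp[0][j] = j  # insert all remaining reference words
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            substitute = dp[i - 1][j - 1] + (hyp[i - 1] != ref[j - 1])
            dp[i][j] = min(substitute, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(hyp)][len(ref)] / max(len(ref), 1)
```

For example, a hypothesis missing one word of a six-word reference needs one insertion, giving a score of 1/6; full TER would additionally allow moving a contiguous phrase as a single edit.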

Assessing post-editing efficiency in a realistic translation environment

It is found that post-editing reduces translation time significantly, although considerably less than reported in isolated experiments, and it is argued that overall assessments of post-editing efficiency should be based on a realistic translation environment.

Source Text Characteristics and Technical and Temporal Post-Editing Effort: What is Their Relationship

This paper focuses on the relationship between source text characteristics (ambiguity, complexity and style compliance) and machine-translation post-editing effort (both temporal and technical).

METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments

METEOR is described, an automatic metric for machine translation evaluation that is based on a generalized concept of unigram matching between the machine-produced translation and human-produced reference translations, and can be easily extended to include more advanced matching strategies.
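The unigram-matching idea can be illustrated with a rough sketch, using exact string matches only (none of METEOR's stemming or synonym modules) and the commonly cited default weights; the function name and simplifications are illustrative, not the metric's reference implementation.

```python
def meteor_exact(hypothesis, reference):
    """Simplified METEOR: exact unigram matches, recall-weighted harmonic
    mean of precision and recall, and a fragmentation penalty based on
    the number of contiguously matched chunks."""
    hyp, ref = hypothesis.split(), reference.split()
    # Greedy one-to-one alignment of identical unigrams.
    ref_used = [False] * len(ref)
    alignment = []  # (hypothesis index, reference index) pairs
    for i, word in enumerate(hyp):
        for j, ref_word in enumerate(ref):
            if not ref_used[j] and word == ref_word:
                ref_used[j] = True
                alignment.append((i, j))
                break
    m = len(alignment)
    if m == 0:
        return 0.0
    precision, recall = m / len(hyp), m / len(ref)
    fmean = 10 * precision * recall / (recall + 9 * precision)
    # A chunk is a maximal run of matches contiguous in both strings.
    chunks = 1
    for (i1, j1), (i2, j2) in zip(alignment, alignment[1:]):
        if not (i2 == i1 + 1 and j2 == j1 + 1):
            chunks += 1
    penalty = 0.5 * (chunks / m) ** 3
    return fmean * (1 - penalty)
```

The fragmentation penalty rewards translations whose matched words appear in the same order as the reference: fewer, longer chunks mean a smaller penalty.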