Corpus ID: 17848183

Perception vs Reality: Measuring Machine Translation Post-Editing Productivity

@inproceedings{Gaspari2014PerceptionVR,
  title={Perception vs Reality: Measuring Machine Translation Post-Editing Productivity},
  author={F. Gaspari},
  year={2014}
}
  • F. Gaspari
  • Published 2014
  • Computer Science
  • This paper presents a study of user-perceived vs. real machine translation (MT) post-editing effort and productivity gains, focusing on two bidirectional language pairs: English–German and English–Dutch. Twenty experienced media professionals post-edited statistical MT output and also manually translated comparative texts within a production environment. The paper compares the actual post-editing time against the users' perception of the effort and time required to post-edit the MT output to…
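
The abstract describes comparing measured post-editing time against translators' perceived effort and time. As a rough illustration of how such a productivity comparison is typically computed, the sketch below contrasts translation-from-scratch throughput with post-editing throughput; the metric (source words per hour), the variable names, and the sample figures are assumptions for illustration, not taken from the paper.

```python
# Hypothetical sketch of a post-editing productivity comparison.
# All field names and sample numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class Task:
    words: int             # number of source words in the text
    minutes: float         # time spent translating or post-editing
    perceived_effort: int  # self-reported effort, e.g. on a 1-5 scale

def words_per_hour(task: Task) -> float:
    """Throughput in source words per hour."""
    return task.words / (task.minutes / 60.0)

def productivity_gain(translation: Task, post_editing: Task) -> float:
    """Relative gain of post-editing over translation from scratch, in percent."""
    base = words_per_hour(translation)
    pe = words_per_hour(post_editing)
    return (pe - base) / base * 100.0

# Made-up example: 500 words translated from scratch in 60 min
# vs. the same amount of text post-edited in 40 min.
scratch = Task(words=500, minutes=60, perceived_effort=3)
post_edit = Task(words=500, minutes=40, perceived_effort=4)
print(f"Productivity gain: {productivity_gain(scratch, post_edit):.1f}%")  # 50.0%
```

Comparing the measured gain with the self-reported effort scores is what lets a study like this contrast perception with reality: the two can diverge, for example when post-editing is faster yet felt to be more effortful.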
    22 Citations
    • Cognitive Effort in Post-Editing of Machine Translation (9 citations)
    • Translation Quality and Effort: Options versus Post-editing (1 citation)
    • MMPE: A Multi-Modal Interface for Post-Editing Machine Translation (3 citations)
    • Correlations of perceived post-editing effort with measurements of actual effort (48 citations)
    • Comparing Translator Acceptability of TM and SMT Outputs (12 citations)

    References

    Showing 1-10 of 16 references
    • Productivity and quality in MT post-editing (77 citations)
    • Post-editing Time as a Measure of Cognitive Effort (76 citations)
    • Comparing human perceptions of post-editing effort with post-editing operations (92 citations)
    • Machine Translation Infrastructure and Post-editing Performance at Autodesk (35 citations)
    • A Productivity Test of Statistical Machine Translation Post-Editing in a Typical Localisation Context (218 citations; highly influential)
    • Assessing Post-Editing Efficiency in a Realistic Translation Environment (54 citations)
    • METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments (2,155 citations)
    • Bleu: a Method for Automatic Evaluation of Machine Translation (12,838 citations; highly influential)