Bleu: a Method for Automatic Evaluation of Machine Translation

@inproceedings{Papineni2002BleuAM,
  title={Bleu: a Method for Automatic Evaluation of Machine Translation},
  author={Kishore Papineni and S. Roukos and T. Ward and Wei-Jing Zhu},
  booktitle={ACL},
  year={2002}
}
Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
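The method the abstract refers to is BLEU, which scores a candidate translation by combining clipped (modified) n-gram precisions with a brevity penalty: BLEU = BP * exp(sum_n w_n log p_n). As a rough illustration only, here is a minimal Python sketch of that idea at the sentence level and without smoothing (the paper aggregates n-gram counts over a whole test corpus); the function names bleu and ngrams and the toy sentences are my own, not from the paper.

import math
from collections import Counter


def ngrams(tokens, n):
    # Count all n-grams (as tuples) in a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def bleu(candidate, references, max_n=4):
    # Sentence-level BLEU for one tokenized candidate against tokenized references.
    weights = [1.0 / max_n] * max_n            # uniform weights w_n, as in the paper
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = ngrams(candidate, n)
        # Clip each candidate n-gram count by its maximum count in any single reference.
        max_ref_counts = Counter()
        for ref in references:
            for gram, count in ngrams(ref, n).items():
                max_ref_counts[gram] = max(max_ref_counts[gram], count)
        clipped = sum(min(count, max_ref_counts[gram])
                      for gram, count in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        if clipped == 0:                       # any zero precision zeroes the geometric mean
            return 0.0
        log_precisions.append(weights[n - 1] * math.log(clipped / total))

    # Brevity penalty: compare candidate length c to the closest reference length r.
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else math.exp(1.0 - r / c)
    return bp * math.exp(sum(log_precisions))


if __name__ == "__main__":
    cand = "the cat is on the mat".split()
    refs = ["the cat sat on the mat".split(),
            "there is a cat on the mat".split()]
    print(round(bleu(cand, refs, max_n=2), 4))   # 0.7746 with unigrams and bigrams

Running the sketch with max_n=2 gives about 0.77 on the toy example; with the paper's default of max_n=4 a single short sentence often scores 0, which is one reason the paper computes BLEU over an entire test corpus rather than per sentence.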
13,477 Citations (selection)
  • Survey of Machine Translation Evaluation (8 citations)
  • Correlating Automated and Human Assessments of Machine Translation Quality (133 citations, highly influenced)
  • A Comparative Study and Analysis of Evaluation Matrices in Machine Translation (1 citation)
  • Human Evaluation of Machine Translation Through Binary System Comparisons (31 citations)
  • (Meta-) Evaluation of Machine Translation (397 citations)
  • Evaluation of Machine Translation and its Evaluation (311 citations)
