Bleu: a Method for Automatic Evaluation of Machine Translation

@inproceedings{Papineni2002BleuAM,
  title={Bleu: a Method for Automatic Evaluation of Machine Translation},
  author={Kishore Papineni and Salim Roukos and Todd Ward and Wei-Jing Zhu},
  booktitle={ACL},
  year={2002}
}
Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that cannot be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.
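The metric the paper proposes, BLEU, scores a candidate translation by its modified (clipped) n-gram precision against one or more references, combined with a brevity penalty for candidates shorter than the closest reference. Those details come from the full paper, not this abstract; the sketch below is a minimal sentence-level illustration in Python, with illustrative names and scope chosen here, not the authors' implementation (the paper aggregates these statistics over a whole test corpus).

# Minimal sketch of the BLEU idea: clipped n-gram precision plus a
# brevity penalty. Sentence-level for readability; names illustrative.
from collections import Counter
import math

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def modified_precision(candidate, references, n):
    # Each candidate n-gram is credited at most as many times as it
    # occurs in the single reference that contains it most often.
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    max_ref = Counter()
    for ref in references:
        for gram, count in ngrams(ref, n).items():
            max_ref[gram] = max(max_ref[gram], count)
    clipped = sum(min(count, max_ref[gram]) for gram, count in cand.items())
    return clipped / sum(cand.values())

def bleu(candidate, references, max_n=4):
    # Geometric mean of the 1..max_n-gram precisions, times the brevity
    # penalty BP = 1 if c > r else exp(1 - r/c), where c is the candidate
    # length and r the closest reference length.
    precisions = [modified_precision(candidate, references, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0.0:
        return 0.0  # unsmoothed geometric mean is zero if any p_n is zero
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

cand = "the cat sat on the mat".split()
refs = ["the cat is on the mat".split(), "there is a cat on the mat".split()]
print(bleu(cand, refs, max_n=2))  # ~0.707: sqrt((5/6) * (3/5)), BP = 1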

Citations

This paper has highly influenced 1,977 other papers. Showing 1 of 6,673 extracted citations:

Approximate Computing for Long Short Term Memory (LSTM) Neural Networks. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2018.

[Figure: Citations per Year, 2000-2019.]
Semantic Scholar estimates that this publication has 9,634 citations based on the available data.

References

Publications referenced by this paper (showing 2 of 4 extracted references):

Florence Reeder. Additional mt-eval references. Technical report, International Standards for Language Engineering, Evaluation Working Group, 2001. http://isscowww.unige.ch/projects/isle/taxonomy2/

E. H. Hovy. Toward finely differentiated evaluation metrics for machine translation. In Proceedings of the EAGLES Workshop on Standards and Evaluation, Pisa, Italy, 1999.
