BLEU

Known as: Bilingual Evaluation Understudy, Bleu score 
BLEU (bilingual evaluation understudy) is an algorithm for evaluating the quality of text which has been machine-translated from one natural language… (Wikipedia)
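As a rough illustration of what the metric computes, here is a minimal single-reference, sentence-level BLEU sketch: a geometric mean of modified n-gram precisions multiplied by a brevity penalty, with no smoothing. This is a simplification for exposition; the standard metric is corpus-level, supports multiple references, and is usually computed with an established toolkit rather than hand-rolled.

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Minimal sentence-level BLEU: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty. Single reference,
    no smoothing, so any zero n-gram precision drives the score to 0."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # clip candidate counts by reference counts ("modified" precision),
        # so repeating a reference word does not inflate the score
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # brevity penalty: penalize candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(bleu("the cat sat on the mat", "the cat sat on the mat"))  # → 1.0
```

An identical candidate scores 1.0, while a candidate sharing no 2-grams with the reference scores 0.0; this brittleness on short or divergent outputs is why smoothed variants exist for sentence-level use.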

Topic mentions per year (1978–2017): [chart]

Papers overview

Semantic Scholar uses AI to extract papers important to this topic.
Highly Cited
2016
Neural machine translation (NMT) models typically operate with a fixed vocabulary, but translation is an open-vocabulary problem…
Figures and tables: figure 1, figure 2; table 1, table 2, table 3
Highly Cited
2016
Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome…
Figures and tables: figure 1, figure 2, figure 3, figure 4; table 2
Highly Cited
2015
An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the…
Highly Cited
2014
Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although…
Figures and tables: figure 1, figure 2; table 1, table 2, table 3
Highly Cited
2006
We argue that the machine translation community is overly reliant on the Bleu machine translation evaluation metric. We show that…
Figures and tables: figure 1, figure 2, figure 3; table 1, table 3
Highly Cited
2006
We examine a new, intuitive measure for evaluating machine-translation output that avoids the knowledge intensiveness of more…
Tables: table 1, table 2, table 3, table 4, table 5
Highly Cited
2004
Automatic evaluation metrics for Machine Translation (MT) systems, such as BLEU and the related NIST metric, are becoming…
Figures and tables: figure 1; table 1, table 2, table 4, table 6
2003
In this paper we attempt to apply the IBM algorithm, BLEU, to the output of four different summarizers in order to perform an…
Tables: table 1, table 2, table 3, table 4
Highly Cited
2002
Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve…
Figures and tables: figure 1, figure 2, figure 3; table 1, table 2