
Institute of Formal and Applied Linguistics Wiki



written by Kishore Papineni, Salim Roukos, Todd Ward and Wei-Jing Zhu (IBM T. J. Watson Research Center)
  
spoken by Petr Jankovský
  
reported by Petra Galuščáková
===== Notes =====
  
The BLEU score is based on comparing an automatic (candidate) translation against reference human translations. Basically, the counts of n-grams shared by the candidate translation and the reference translations are computed and divided by the total number of n-grams in the candidate. This n-gram precision is further modified: if the count of a particular n-gram is higher in the candidate translation than in the references, it is clipped to the maximum count of that n-gram in any single reference translation. The BLEU score is then calculated as the arithmetic average of the logarithms of these modified precisions, in other words as their **geometric** average (see Section 2.1.3). A brevity penalty is added to penalize translations that are shorter than the reference translations.
  
Petr noticed a mistake in Section 2, where it is written that the phrase "of the party" is shared only with Reference 2, although it is also shared with Reference 3.
  
Another problem that was discussed was found in Section 2.2.2. For example, if we have three reference translations of lengths 12, 15 and 17 words and our translation is 14 words long, then, according to the article, our translation is penalized, because the closest reference has length 15, despite the fact that a shorter reference translation also exists. This seemed a bit suspicious.
> Yes. It's suspicious, but that is the official definition of BLEU. The kind-of official implementation [[ftp://jaguar.ncsl.nist.gov/mt/resources/mteval-v13a.pl|mteval.pl]] has an option to choose the **shortest** reference instead of the **closest** (actually, the option to choose the closest length was added quite recently -- in version 13). In practice, quite often there is just one reference translation available, so it doesn't matter. --- Martin Popel
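The difference between the two conventions can be made concrete with a small helper (the function name is hypothetical, and breaking ties toward the shorter reference is my assumption; implementations may differ):

```python
def effective_reference_length(cand_len, ref_lens, method="closest"):
    """Reference length used by the brevity penalty.
    'closest': length nearest to the candidate's (as in the paper);
    'shortest': shortest reference (as in older mteval versions)."""
    if method == "shortest":
        return min(ref_lens)
    # 'closest', with ties broken toward the shorter reference (assumption)
    return min(sorted(ref_lens), key=lambda r: abs(r - cand_len))

# The discussed example: references of 12, 15 and 17 words, candidate of 14.
# 'closest' picks 15, so the 14-word candidate is penalized even though a
# 12-word reference exists; 'shortest' picks 12 and applies no penalty.
```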
  
The performed experiments show a high correlation between the manual and automatic rankings of translation systems. BLEU is able to distinguish between good and bad translations, as well as between translations produced by a human and by an automatic system.
