courses:rg:2012:meant [2012/11/12] rosa (created)
Presented by Petr Jankovský

Report by Rudolf Rosa
+ | |||
+ | |||
+ | The paper was widely discussed throughout the whole session. The report is mainly chronological, | ||
===== 1 Introduction =====
The paper proposes a semi-automatic translation evaluation metric that is claimed to correlate well with human judgement (especially in comparison to BLEU) while being less labour-intensive than HTER (which is claimed to be much more expensive).
MEANT assumes that a good translation is one where the reader correctly understands "Who did what to whom, when, where and why" - which, as Martin noted, is rather adequacy than fluency, and therefore a comparison with BLEU, which is more fluency-oriented, may not be entirely fair.
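The adequacy-oriented scoring can be pictured as an F-measure over semantic role fillers ("who", "what", "whom", ...) matched between the MT output and the reference. The sketch below is a hypothetical illustration, not the paper's exact formula; the function name `f_score` and the raw counts passed to it are assumptions.

```python
# Hedged sketch: an F-measure over matched semantic role fillers,
# in the spirit of MEANT-style adequacy scoring (not the paper's exact formula).
def f_score(matched, mt_total, ref_total, beta=1.0):
    # precision: fraction of role fillers in the MT output that match the reference
    precision = matched / mt_total if mt_total else 0.0
    # recall: fraction of role fillers in the reference that are covered
    recall = matched / ref_total if ref_total else 0.0
    if precision + recall == 0:
        return 0.0
    # standard weighted harmonic mean of precision and recall
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
```

For example, with 3 matched fillers out of 4 in the output and 5 in the reference, precision is 0.75, recall is 0.6, and the balanced F-score is about 0.667.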
Martin further explained that HTER is a metric where humans post-edit the MT output to transform it into a correct translation; the edit distance between the original output and the post-edited version then gives the score.
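The scoring step of HTER can be sketched as a word-level edit distance between the MT output and its human post-edited version, normalized by the length of the post-edit. This is a simplification: real TER also allows block shifts, which are omitted here, and the helper name `ter` is an assumption.

```python
# Hedged sketch of the TER part of HTER: word-level Levenshtein distance
# between the MT output and the post-edited reference, divided by the
# reference length. Real TER additionally counts block shifts (omitted).
def ter(hyp, ref):
    h, r = hyp.split(), ref.split()
    # d[i][j] = minimal edits to turn the first i hyp words into the first j ref words
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(h)][len(r)] / len(r)
```

So an output needing one word substituted in a three-word post-edit scores 1/3; a perfect output scores 0.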
Matěj Korvas then pointed to an important difference between MEANT and HTER: MEANT uses reference translations, while in HTER the annotators work directly with the MT output.
The group then discussed whether HMEANT evaluations are really faster than HTER annotations, as the paper claims.
Section **2 Related work** was skipped.