They ran the grid search optimization on the 40 sentences they have, but then they evaluated HMEANT on the same data.
The group agreed that such an evaluation is completely flawed, and it is not clear why it was performed and included in the paper.
Karel Bílek also notes that it is quite ridiculous to state the results to 4 decimal digits when only 40 sentences are used.
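A rough back-of-the-envelope sketch of Karel's point (our own illustration, assuming the reported figures are correlation scores): with only 40 items, the sampling uncertainty of a correlation estimate is on the order of 0.16, so anything past the first decimal digit is essentially noise.

  # Back-of-the-envelope check (our illustration, not from the paper): the
  # sampling uncertainty of a correlation estimated from n = 40 sentences,
  # using the Fisher z approximation SE ~ 1/sqrt(n - 3).
  import math

  n = 40
  se = 1 / math.sqrt(n - 3)
  print(round(se, 2))  # ~0.16, so reporting 4 decimal digits carries no real information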
  
In Table 4, the authors probably try to compensate for this flaw by performing cross-validation; note, however, that each fold then contains only 10 sentences. Petr thinks that the table is supposed to show that the parameter weights are stable. However, Martin thinks that with only 40 sentences it is probably easy to find 12 parameter values that achieve good performance. Moreover, Aleš Tamchyna suspects that even the formulas used might be fitted to those 40 sentences.
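To illustrate Martin's concern, here is a hypothetical sketch with made-up data (not the authors' setup): with 12 free weights and only 40 sentences, even purely random features can be tuned to correlate with the target in-sample, while cross-validation with folds of 10 shows that the fit does not generalize.

  # Hypothetical illustration (made-up data, not the authors' setup): fit 12
  # free weights on 40 "sentences" whose features and target are pure noise.
  # In-sample the tuned weights look useful; 4-fold cross-validation
  # (folds of 10 sentences) shows the fit does not generalize.
  import numpy as np
  from sklearn.linear_model import LinearRegression
  from sklearn.model_selection import KFold, cross_val_predict

  rng = np.random.default_rng(0)
  X = rng.normal(size=(40, 12))   # 12 made-up feature values per sentence
  y = rng.normal(size=40)         # made-up "human judgement" scores

  in_sample = np.corrcoef(LinearRegression().fit(X, y).predict(X), y)[0, 1]
  cv_pred = cross_val_predict(LinearRegression(), X, y,
                              cv=KFold(n_splits=4, shuffle=True, random_state=0))
  held_out = np.corrcoef(cv_pred, y)[0, 1]
  print(f"in-sample r = {in_sample:.2f}, cross-validated r = {held_out:.2f}")
  # in-sample r comes out clearly positive, cross-validated r hovers around zero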
  
Martin then informed the group that Dekai Wu has still not given us the data from the annotations done at ÚFAL (which was already several months ago), which makes us even more suspicious about whether the experiments were fair.
  
Martin also notes that the authors claim that all other existing evaluation metrics require lexical matches to consider a translation to be correct, which is not true, as the Meteor metric can also use paraphrases.
  
The group generally agreed that, although the ideas behind HMEANT seem reasonable, the paper itself is misleading and is not to be believed much (or probably at all). The proposed metric possibly correlates better with human judgement than automatic metrics, but it does not really seem to reach HTER.
