  * Why does the paper talk about "decoding"?
    * It seems they actually perform forced decoding.
  * The paper describes three ways of using the alignment (best alignment, n-best alignments, all alignments), … (see the sketch right after this list).
  * The authors claim to have avoided using "…".
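
To make the three variants concrete, here is a rough sketch of how posterior-weighted phrase-pair counts could be accumulated from the best alignment, the n-best alignments, or all alignments. This is our own illustration, not the paper's procedure; the data layout, the posterior weights, and the extract_phrases placeholder are all assumptions (one possible implementation of the extraction itself is sketched at the end of the page).

<code python>
# Rough sketch (ours, not the paper's) of collecting phrase-pair counts
# from the 1-best, n-best, or all alignments of each sentence pair.
from collections import Counter

def extract_phrases(src, tgt, alignment):
    """Placeholder: should yield phrase pairs consistent with `alignment`
    (one possible implementation is sketched at the end of the page)."""
    return []

def collect_counts(sentence_pairs, mode="best", n=10):
    counts = Counter()
    for src, tgt, alignments in sentence_pairs:
        # `alignments` is assumed to be a list of (alignment, posterior)
        # pairs, sorted from most to least probable.
        if mode == "best":
            chosen = alignments[:1]   # the single Viterbi alignment
        elif mode == "n-best":
            chosen = alignments[:n]   # the n most probable alignments
        else:
            chosen = alignments       # every alignment produced
        total = sum(p for _, p in chosen) or 1.0
        for alignment, posterior in chosen:
            for pair in extract_phrases(src, tgt, alignment):
                # Renormalised posterior weight: each sentence pair
                # contributes a total count mass of one.
                counts[pair] += posterior / total
    return counts
</code>

Renormalising over the chosen alignments is just one plausible weighting that makes the three modes comparable; the paper may weight the counts differently.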
  * The whole sentence is by definition always consistent with the word alignment (a sketch of the extraction procedure follows at the end of this list).
  * There may be a hard limit for maximum phrase length, but this is not mentioned in the paper.
  * We discussed whether even singleton phrases should be extracted or whether it would be better to skip them.
    * Apparently, practice shows that there is little reason to skip them, as it is usually better to have more data.
  * We found the meaning of "cross-validation" in the paper unclear.
    * It seems to us that the authors simply used a misleading term here, as they use the procedure not for validation but for training (probably they just perform "…").
  * We discussed whether N in Table 4 stands for the number of different alignments for a sentence pair, but we remained rather unsure about that.
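
As a reference point for the consistency and phrase-length discussion above, here is a minimal sketch of the standard Och/Ney-style extraction of phrase pairs consistent with a word alignment. The paper may differ in details (e.g. unaligned boundary words are not expanded here), and the example sentences and the max_len value are made up. With no length limit, the whole sentence pair always passes the consistency test, and singleton pairs fall out of the same loop.

<code python>
# Sketch of consistency-based phrase extraction; max_len is the kind of
# hard limit discussed above (the paper does not state one).
def extract_phrases(src, tgt, alignment, max_len=7):
    """Yield (src_phrase, tgt_phrase) pairs consistent with `alignment`,
    given as a set of (i, j) word links."""
    for i1 in range(len(src)):
        for i2 in range(i1, min(i1 + max_len, len(src))):
            # Target positions linked to the source span [i1, i2].
            js = [j for (i, j) in alignment if i1 <= i <= i2]
            if not js:
                continue              # skip completely unaligned spans
            j1, j2 = min(js), max(js)
            if j2 - j1 + 1 > max_len:
                continue              # target side exceeds the hard limit
            # Consistent iff no link reaches into [j1, j2] from outside
            # [i1, i2]; without a length limit the full sentence pair
            # always satisfies this.
            if any(j1 <= j <= j2 and not (i1 <= i <= i2)
                   for (i, j) in alignment):
                continue
            yield tuple(src[i1:i2 + 1]), tuple(tgt[j1:j2 + 1])

# Tiny made-up example; singleton pairs such as (("wir",), ("we",))
# are extracted like any other phrase.
src = "wir müssen das tun".split()
tgt = "we have to do that".split()
links = {(0, 0), (1, 1), (1, 2), (2, 4), (3, 3)}
for pair in extract_phrases(src, tgt, links):
    print(pair)
</code>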