* but rules are very similar, so we also need something less fine-grained
* C2 is a target-side feature: it just counts the target-side tokens (only the "most important" ones)
* It may be compared to Language Model features, but it is trained only on the target part of the bilingual data
* C3 counts occurrences of source-target token pairs (again using only the "most important" ones); a counting sketch follows below
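The following minimal Python sketch makes the C2/C3 distinction concrete. It is not the paper's exact formulation: the "most important" token selection is abstracted into a caller-supplied predicate (e.g. an IDF threshold), and all names are illustrative.

<code python>
from collections import Counter

def c2_count(hyp_tokens, doc_target_counts, is_important):
    """C2-style feature: how often the hypothesis's important target
    tokens already occur in the rest of the document's translation."""
    return sum(doc_target_counts[t] for t in hyp_tokens if is_important(t))

def c3_count(aligned_pairs, doc_pair_counts, is_important):
    """C3-style feature: the same count over aligned (source, target) pairs."""
    return sum(doc_pair_counts[(s, t)]
               for (s, t) in aligned_pairs if is_important(t))

# Usage with illustrative data:
doc_target_counts = Counter({"bank": 3, "river": 1})
print(c2_count(["bank", "of", "the", "river"], doc_target_counts,
               is_important=lambda t: t not in {"of", "the"}))  # -> 4
</code>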
* They need two passes through the data (see the sketch below)
* You need to have document segmentation
* Since the frequencies are trained on the tuning set, …
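A sketch of why both requirements arise, under the same assumptions as above; `decode` and `rescore` stand in for the actual decoder and are hypothetical, and the input must already be grouped into documents, which is where document segmentation comes in.

<code python>
from collections import Counter

def two_pass_translate(document_sentences, decode, rescore, is_important):
    # Pass 1: plain decoding of the whole document, collecting the
    # document-level target-token frequencies that feed the C2/C3 features
    doc_target_counts = Counter()
    for src in document_sentences:
        doc_target_counts.update(t for t in decode(src) if is_important(t))
    # Pass 2: decode again, now with the consistency counts available as
    # extra features -- hence the two passes through the data
    return [rescore(src, doc_target_counts) for src in document_sentences]
</code>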
+ | |||
+ | ==== Sec. 5. Evaluation and Discussion ==== | ||
+ | **Choice of baseline** | ||
+ | * Baselines are quite nice and competitive, | ||
+ | * MIRA is very cutting-edge | ||
+ | |||
+ | **Tuning the feature weights** | ||
+ | * For the 1st phase, " | ||
+ | * This is in order to speed up the experiment, they don't want to wait for MIRA twice. | ||
+ | |||
+ | **Different evaluation metrics** | ||
+ | * The BLEU variants do not differ that much, only in Brevity Penalty for multiple references | ||
+ | * IBM BLEU uses the reference that is closest to the MT output (in terms of length), NIST BLEU uses the shortest one | ||
+ | * This was probably just due to some technical reasons, e.g. they had their optimization software designed for one metric and not the other |
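The difference is small enough to show in a few lines. A minimal sketch of the two brevity-penalty conventions; the tie-breaking toward the shorter reference in the IBM variant is an assumption of this sketch, as is the function name.

<code python>
import math

def brevity_penalty(hyp_len, ref_lens, variant="ibm"):
    if variant == "ibm":
        # IBM BLEU: effective reference length is the one closest to the
        # hypothesis length (ties broken toward the shorter reference here)
        eff_ref = min(ref_lens, key=lambda r: (abs(r - hyp_len), r))
    else:
        # NIST BLEU: effective reference length is the shortest reference
        eff_ref = min(ref_lens)
    return 1.0 if hyp_len >= eff_ref else math.exp(1.0 - eff_ref / hyp_len)

# Example where the two variants disagree: a 14-token hypothesis with
# references of 10 and 15 tokens
print(brevity_penalty(14, [10, 15], "ibm"))   # exp(1 - 15/14) ~ 0.93
print(brevity_penalty(14, [10, 15], "nist"))  # 1.0 (since 14 >= 10)
</code>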