====Related Work:====
  - A paper on a similar approach by Carpuat (2009) was found to differ from this work. They have used the "one translation per discourse" …
  - Without giving any proper evidence, the authors speculate that modeling "…

====Analysis:====
  - Forced decoding is a decoding method in which, for a given pair of source and target sentences, the decoder searches for the translation rules that fit the target sentence for the given source sentence (a toy sketch of the idea follows this list).
  - The term "…
  - After selecting sample cases, a few filtering techniques were applied to discard the irrelevant samples. The filtering steps are well documented in the paper, on page 419 in the first paragraph of the 2nd column.

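To make the forced-decoding idea above concrete, here is our own toy sketch, not the authors' cdec implementation: a monotone, phrase-based simplification (no hierarchical rules, no reordering, no scores) that keeps only phrase pairs fitting both the source sentence and the reference, and then checks whether the reference can be tiled by those target sides. All names and data are hypothetical.

<code python>
def forced_decodable(phrase_table, source, reference):
    """Toy check: can `reference` be tiled, left to right, by target phrases
    whose source sides occur in `source`? (Monotone, no reordering, no scores.)"""
    src_words = source.split()
    ref_words = reference.split()

    def occurs(phrase_words, sentence_words):
        n = len(phrase_words)
        return any(sentence_words[i:i + n] == phrase_words
                   for i in range(len(sentence_words) - n + 1))

    # Rule filtering: keep only phrase pairs that fit both the source sentence
    # and the reference translation.
    usable_targets = {
        tuple(tgt.split())
        for src_side, tgt in phrase_table
        if occurs(src_side.split(), src_words) and occurs(tgt.split(), ref_words)
    }

    # Simple dynamic program: reachable[i] is True if the first i reference
    # words can be covered by the usable target phrases.
    reachable = [False] * (len(ref_words) + 1)
    reachable[0] = True
    for i in range(len(ref_words)):
        if not reachable[i]:
            continue
        for tgt in usable_targets:
            if tuple(ref_words[i:i + len(tgt)]) == tgt:
                reachable[i + len(tgt)] = True
    return reachable[len(ref_words)]


# Hypothetical toy data, not taken from the paper:
table = [("das haus", "the house"), ("ist", "is"), ("klein", "small")]
print(forced_decodable(table, "das haus ist klein", "the house is small"))  # True
</code>
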
====Approach:====
  - The core idea of maintaining translation consistency (TC) is implemented by introducing …
  - BM25, which is used as the term weighting function, is a well-known ranking function in the field of information retrieval and a refined version of TF-IDF (another ranking function …); a small sketch of the BM25 weight follows this list.
  - Description of consistency features:
    - C<sub>1</sub>: …
    - C<sub>2</sub>: …
    - C<sub>3</sub>: …

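As a reminder of what the BM25 weight mentioned above looks like, here is a small sketch of the textbook Okapi BM25 term weight; the function name, parameters and example numbers are ours and need not match the exact variant used in the paper.

<code python>
import math

def bm25_weight(tf, df, doc_len, avg_doc_len, num_docs, k1=1.2, b=0.75):
    """Standard Okapi BM25 weight of one term in one document.

    tf          -- term frequency of the term in the document
    df          -- number of documents containing the term
    doc_len     -- length of the document (in tokens)
    avg_doc_len -- average document length in the collection
    num_docs    -- number of documents in the collection
    k1, b       -- the usual BM25 free parameters (typical defaults shown)
    """
    idf = math.log((num_docs - df + 0.5) / (df + 0.5) + 1.0)
    norm = tf + k1 * (1.0 - b + b * doc_len / avg_doc_len)
    return idf * (tf * (k1 + 1.0)) / norm

# Made-up example: a term occurring 3 times in a 100-token document,
# appearing in 10 of 1000 documents with average length 120 tokens.
print(bm25_weight(tf=3, df=10, doc_len=100, avg_doc_len=120, num_docs=1000))
</code>
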
====Evaluation:====
  - Cdec's implementation of hierarchical MT is used in this work. As we know, hierarchical decoding is also implemented in other MT systems such as Moses, Joshua etc. The selection of cdec over the other MT systems is the authors' …
  - MIRA is used for tuning the feature weights.
  - The authors don't tune the decoder in the first pass, i.e. they don't calculate feature weights (lambdas), and they probably use weights from their previous experiments or setups. They don't clearly state the reason for this decision, but our hypothesis is that they might have skipped the tuning step just to speed up the translation process.
  - NIST-BLEU (which prefers shorter sentences) is used to compare results with the official NIST evaluation, whereas IBM-BLEU (which prefers longer sentences) is used for evaluating the rest of the experiments. We don't fully understand the use of two different BLEU variants and why they didn't stick with NIST-BLEU only. (MP: that's not exact, see [[courses:…]]) A sketch of the brevity-penalty difference between the two variants follows this list.
  - They gain a maximum increase of 1.0 BLEU point after combining all three features.
  - The authors call BLEU a "…
    - They could have supported their argument by manually evaluating the test set.
    - Instead of spending half a page criticizing BLEU, they could have evaluated their system with another metric such as METEOR.
  - We believe that significance testing … (see the paired-bootstrap sketch after this list).

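For the NIST-BLEU vs. IBM-BLEU point above: as far as we know, the two variants differ (among other details) in the reference length used by the brevity penalty, with IBM-style BLEU taking the reference closest in length to the candidate and the NIST mteval script taking the shortest reference. A minimal sketch of just that difference, with made-up numbers:

<code python>
import math

def brevity_penalty(candidate_len, ref_lens, variant="ibm"):
    """Brevity penalty BP = exp(1 - r / c) if the candidate is shorter than r, else 1.

    variant='ibm'  -- r is the reference length closest to the candidate length
                      (ties broken towards the shorter reference)
    variant='nist' -- r is the shortest reference length (mteval-style)
    """
    if variant == "ibm":
        r = min(ref_lens, key=lambda length: (abs(length - candidate_len), length))
    else:
        r = min(ref_lens)
    if candidate_len >= r:
        return 1.0
    return math.exp(1.0 - r / candidate_len)

# Made-up example: a 12-token candidate against 10- and 13-token references.
print(brevity_penalty(12, [10, 13], variant="ibm"))   # r = 13 -> penalised
print(brevity_penalty(12, [10, 13], variant="nist"))  # r = 10 -> no penalty
</code>
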
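For the significance-testing remark above, the usual tool in MT is paired bootstrap resampling (Koehn, 2004). A minimal sketch over hypothetical per-sentence scores; with corpus-level BLEU one would recompute BLEU on each resampled set instead of summing sentence scores.

<code python>
import random

def paired_bootstrap(scores_a, scores_b, num_samples=1000, seed=0):
    """Paired bootstrap resampling over sentence-level scores (Koehn 2004, simplified).

    Returns the fraction of resampled test sets on which system A beats system B;
    values close to 1.0 suggest A's improvement is significant.
    """
    assert len(scores_a) == len(scores_b)
    rng = random.Random(seed)
    n = len(scores_a)
    a_wins = 0
    for _ in range(num_samples):
        idx = [rng.randrange(n) for _ in range(n)]   # sample sentences with replacement
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            a_wins += 1
    return a_wins / num_samples

# Hypothetical per-sentence scores for a baseline and the consistency-feature system:
baseline = [0.31, 0.28, 0.40, 0.22, 0.35]
consistent = [0.33, 0.27, 0.42, 0.25, 0.36]
print(paired_bootstrap(consistent, baseline))
</code>
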
====Conclusion:====
The paper is nicely written and all experiments are well documented. We believe that the consistent-translation-choices system is well suited only for translating from a morphologically rich to a morphologically poor language and not the other way round. For translating …