====== Overcoming Vocabulary Sparsity in MT Using Lattices ======
Steve DeNeefe and Ulf Hermjakob and Kevin Knight
  
===== Overview of the article =====
1. Introduction
2. Related work
...
6. Experiment
7. Conclusion

===== Introduction =====
  
The article is about overcoming the problem of vocabulary sparsity in SMT. Sparsity occurs because many words can be inflected or take different affixes, and the vocabulary might not contain all of those forms.
To solve these problems the authors modify the aligned bilingual training and test data. The strong side of the proposed approaches is that the techniques work even for large training data.
  
===== Challenge (1) =====

For the first challenge, vocabulary sparsity, the authors do not perform complex morphological analysis; instead, they apply a lightweight technique.
They split off the Arabic prefix w- ("and") when this is motivated by the aligned English words, and remove the sentence-initial w- prefix based on corpus statistics.
Two lists are used:

...

They build two systems, one with lattices and one without lattices, and compare the results.
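
The lattice idea can be illustrated with a short sketch. This is a minimal confusion-network-style illustration, not the authors' implementation; the function name and the Buckwalter-style example tokens are assumptions:

<code python>
def sentence_lattice(tokens, w_splittable):
    """For each source position, list the alternative labels the
    decoder may choose between (a confusion-network-style lattice)."""
    columns = []
    for tok in tokens:
        alts = [tok]                      # the original surface form always stays
        if tok in w_splittable and tok.startswith("w"):
            alts.append("w- " + tok[1:])  # extra path with the w- prefix split off
        columns.append(alts)
    return columns

# 'wktb' (illustrative token) gets an extra de-prefixed path
print(sentence_lattice(["wktb", "ktAb"], w_splittable={"wktb"}))
# [['wktb', 'w- ktb'], ['ktAb']]
</code>

The decoder then scores both paths, so the translation and language models decide which segmentation to use.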
  
===== Challenge (2) =====

To translate rare and unknown words that are not in the dictionary, the authors use 193 hand-written linguistic rules for cutting off affixes and removing inflection. The word obtained after cutting off an affix might be in the dictionary; if not, the algorithm tries to apply more rules until it reaches a word that is in the dictionary.
  
There is no information in the article about how a rule is selected when several rules are applicable to one affix. Probably the rules have a uniform distribution and the choice is left to the language model.
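
A minimal sketch of this iterative affix stripping; the rule format, the example rules and the example words are illustrative assumptions, and the paper's 193 hand-written rules are certainly richer:

<code python>
RULES = [("At", "p"),    # illustrative: feminine plural -At -> singular -p
         ("wn", ""),     # illustrative: masculine plural suffix
         ("yn", "")]     # illustrative: oblique plural suffix

def reduce_to_known(word, vocab, rules=RULES, max_steps=3):
    """Strip affixes rule by rule until an in-vocabulary form appears."""
    frontier = [word]
    for _ in range(max_steps):
        next_frontier = []
        for w in frontier:
            if w in vocab:
                return w               # found a dictionary word, stop
            for suffix, repl in rules:
                if w.endswith(suffix):
                    next_frontier.append(w[:-len(suffix)] + repl)
        frontier = next_frontier
    return None                        # no in-vocabulary form reachable

print(reduce_to_known("mdrsAt", {"mdrsp"}))  # -> 'mdrsp'
</code>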
  
===== Challenge (3) =====

The third challenge is to correct spelling mistakes. If a word contains one spelling mistake, they try to correct it. They do not remove the original word; they just add the found options. Words with more than one spelling mistake are not dealt with.
  
It is not clear from the article how exactly they correct the mistakes, for example:
mHAd__t__At - mHAd__v__At
Do they have rules stating, for example, that the probability of substituting __t__ with __v__ is higher than the probability of substituting __t__ with __a__?
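
The paper does not spell out the correction mechanism, so the following only sketches the "generate one-substitution variants, keep the original" behaviour described above; the alphabet subset and the function name are assumptions:

<code python>
ALPHABET = "AbtvjHxdrzsSDTZEgfqklmnhwyp"    # subset of Buckwalter letters

def spelling_options(word, vocab):
    """Return the word plus every in-vocabulary variant that differs
    from it by exactly one substituted character."""
    options = [word]                        # the original word is never removed
    for i, orig_ch in enumerate(word):
        for ch in ALPHABET:
            if ch != orig_ch:
                cand = word[:i] + ch + word[i + 1:]
                if cand in vocab:
                    options.append(cand)    # add the found option
    return options

# the example from above: one substitution turns mHAdtAt into mHAdvAt
print(spelling_options("mHAdtAt", {"mHAdvAt"}))
# ['mHAdtAt', 'mHAdvAt']
</code>
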
===== Evaluation =====
  
  
===== Future work =====
They could handle the prefixes b-, l-, Al- and k- with an approach similar to the one used for the w- prefix.
They could also take the context into account for spelling correction.
    
  
