Institute of Formal and Applied Linguistics Wiki


  
__Sorted Array__

Use n arrays for an n-gram model (the i-th array is used for i-grams).
Each element in the array is a pair (w, c):
     w : index of that word in the unigram array
     c : offset pointer to the entry's context
The array is sorted by w.

Improvement: implicitly encode w. All n-grams ending with a particular word w_i are stored contiguously, so storing w explicitly in every entry is wasteful. Instead, maintain another array that saves the beginning and the end of each word's range.
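
A minimal sketch of this layout, assuming a toy bigram table (the data and names below are illustrative, not the paper's code): the w column is dropped in favour of a per-word [begin, end) range array, and a lookup binary-searches the context offset inside that word's range.

<code python>
import bisect

# Unigram array: a word's index here is also its "offset" when used as a context.
vocab = ["<s>", "the", "cat", "sat"]
word_id = {w: i for i, w in enumerate(vocab)}

# Bigram entries (w, c): w = index of the last word, c = offset of its context.
bigrams = sorted([(word_id["cat"], word_id["the"]),   # "the cat"
                  (word_id["sat"], word_id["cat"]),   # "cat sat"
                  (word_id["the"], word_id["<s>"])])  # "<s> the"

# Implicit w: keep only the context-offset column plus a range array per word.
context_offsets = [c for _, c in bigrams]
word_range = [[0, 0] for _ in vocab]        # [begin, end) rows for each word w
for pos, (w, _) in enumerate(bigrams):
    if word_range[w][1] == 0:               # first row seen for this word
        word_range[w][0] = pos
    word_range[w][1] = pos + 1

def bigram_row(context, word):
    """Row of the bigram 'context word', or None if it is not stored."""
    w, c = word_id[word], word_id[context]
    begin, end = word_range[w]
    # Binary search for the context offset inside this word's row range.
    pos = bisect.bisect_left(context_offsets, c, begin, end)
    return pos if pos < end and context_offsets[pos] == c else None

print(bigram_row("the", "cat"))   # stored bigram -> its row index
print(bigram_row("cat", "the"))   # never stored  -> None
</code>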
                        
  
__Variable length coding__
  
Idea :
Context offsets tend to be close to each other => only store the first offset as an absolute address and store the other offsets as differences from it.
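
A rough sketch of the idea, assuming a simple byte-oriented varint as a stand-in for the paper's actual variable-length code (the offsets below are invented toy values): the first offset is written absolutely, the rest as small differences from it.

<code python>
def encode_varint(n):
    """Variable-length encoding: 7 data bits per byte, high bit = "more follows"."""
    out = bytearray()
    while n > 0x7F:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    out.append(n)
    return bytes(out)

def encode_offsets(offsets):
    """First offset stored absolutely, the others as differences from it."""
    first = offsets[0]
    data = bytearray(encode_varint(first))
    for off in offsets[1:]:
        data += encode_varint(off - first)   # nearby offsets -> tiny deltas
    return bytes(data)

offsets = [1048576, 1048579, 1048585, 1048590]   # close to each other
print(len(encode_offsets(offsets)), "bytes instead of", 4 * len(offsets))
</code>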
  
Question :
How are the unigrams sorted?
Martin suggested that they must be sorted by frequency.
  
__Block Compression__
Compress the key/value array in blocks of 128 bytes; the underlying reason for using blocks is only to support binary search.
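
A sketch of what the blocks buy, with zlib as a stand-in compressor and invented toy data: each block keeps its first key uncompressed, so a binary search over those first keys picks the single block that has to be decompressed and scanned.

<code python>
import bisect
import struct
import zlib

BLOCK_BYTES = 128                 # uncompressed block size, as in the report
RECORD = struct.Struct("<II")     # (key, value) as two 32-bit ints -> 8 bytes

def build_blocks(sorted_records):
    per_block = BLOCK_BYTES // RECORD.size
    first_keys, blocks = [], []
    for i in range(0, len(sorted_records), per_block):
        chunk = sorted_records[i:i + per_block]
        first_keys.append(chunk[0][0])    # first key stays uncompressed
        blocks.append(zlib.compress(b"".join(RECORD.pack(k, v) for k, v in chunk)))
    return first_keys, blocks

def lookup(key, first_keys, blocks):
    b = bisect.bisect_right(first_keys, key) - 1   # binary search over blocks
    if b < 0:
        return None
    raw = zlib.decompress(blocks[b])               # decode only this block
    for pos in range(0, len(raw), RECORD.size):
        k, v = RECORD.unpack_from(raw, pos)
        if k == key:
            return v
    return None

records = [(k, k * 10) for k in range(0, 1000, 3)]   # toy sorted (key, value) data
first_keys, blocks = build_blocks(records)
print(lookup(300, first_keys, blocks))   # -> 3000
print(lookup(301, first_keys, blocks))   # -> None (not stored)
</code>

Larger blocks would compress better but cost more linear scanning per query, which is presumably why a small fixed size is used.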
  
==== Decoding ====
**I. Exploiting Repetitive Queries**
  
The method uses a cache to speed up the process.
This simple implementation increases performance by 300% over a conventional implementation.
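
A minimal sketch of such a cache, assuming a direct-mapped table keyed by a hash of the n-gram; raw_score below is a placeholder for the real (slow) lookup in the compressed arrays, not a function from the paper.

<code python>
CACHE_SIZE = 1 << 16                       # number of slots (power of two)
cache_keys = [None] * CACHE_SIZE
cache_vals = [0.0] * CACHE_SIZE

def raw_score(ngram):
    """Placeholder for the actual lookup in the sorted/compressed arrays."""
    return -float(len(ngram))

def cached_score(ngram):
    slot = hash(ngram) & (CACHE_SIZE - 1)  # direct-mapped: one candidate slot
    if cache_keys[slot] == ngram:          # hit: skip the expensive lookup
        return cache_vals[slot]
    score = raw_score(ngram)               # miss: compute and overwrite the slot
    cache_keys[slot] = ngram
    cache_vals[slot] = score
    return score

print(cached_score(("the", "cat", "sat")))   # miss -> computed
print(cached_score(("the", "cat", "sat")))   # hit  -> served from the cache
</code>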
**II. Exploiting Scrolling Queries**
  
We can quickly form the context encoding of the next query by concatenating the new word with the saved offset from the previous query.
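
A sketch of the scrolling idea with an invented toy table standing in for the sorted arrays: each lookup returns, besides the probability, the offset of the n-gram it just matched, and that saved offset is reused directly as the context of the next query instead of re-encoding the context from scratch.

<code python>
# Toy table only: (context_offset, word_index) -> (offset_of_this_ngram, log_prob).
# In the real structure this lookup is the binary search from the Sorted Array section.
table = {
    (0, 1): (10, -0.5),
    (10, 2): (11, -1.2),
    (11, 3): (12, -0.9),
}

def score_sentence(word_indices):
    total, context_offset = 0.0, 0     # 0 = offset of the sentence-start context
    for w in word_indices:
        # Reuse the offset saved from the previous query as the new context.
        context_offset, log_prob = table[(context_offset, w)]
        total += log_prob
    return total

print(score_sentence([1, 2, 3]))       # about -2.6
</code>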
==== Conclusion ====
In summary, it was a paper worth reading, well presented, and thoroughly discussed. The paper contains many detailed techniques that are definitely helpful for an actual implementation.
