You should focus on the first paper (you can skip section 2.3): [[http://…]]
The second paper …

Q1.
Recall the paper about word representations presented by Tam on November 10.
Read http://…

(M_{w,d} is a matrix with w rows and d columns.)
What do w, d, and k mean?
What are the values of w, d, and k used in the experiments in this paper?
Q2.
a) Compute the similarity between two words "…" and "…".
Use these raw counts (no Local Mutual Information, …):

^      ^ planet ^ night ^ full ^ shadow ^ shine ^
^ Moon |        |       |      |        |       |
^ Sun  |        |       |      |        |       |
^ Dog  |        |       |      |        |       |
^ Mars |        |       |      |        |       |
b) How do they deal with the high dimensionality of the vectors in these papers?
Can you suggest some other techniques for preprocessing high-dimensional vectors? (One option is sketched below.)
Q3.
a) What are Bag of Words (BOW) and Bag of Visual Words (BOVW)?
b) How do they apply BOVW to compute the representation of a word (concept) from a large set of images?
Q4 (bonus).
When they construct text-based word vectors from the DM model,
they mention the Local Mutual Information score (section 3.2; also section 2.1 in the 2nd paper).
So what is that score? Why did they use it?
Q5 (bonus).
Have you ever wished to see a beautiful "…"?
Have you ever seen "…"?
"…"

Think about a computational way to show what they look like.
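One possible computational answer, sketched under the assumption that the unseen concept and a pool of images are embedded in the same multimodal vector space (all names and dimensions below are invented): rank the images by similarity to the concept vector and show the closest ones.

<code python>
import numpy as np

def nearest_images(concept_vec, image_vecs):
    """Rank images by cosine similarity to a concept vector."""
    sims = image_vecs @ concept_vec / (
        np.linalg.norm(image_vecs, axis=1) * np.linalg.norm(concept_vec)
    )
    return np.argsort(-sims)  # image indices, most similar first

# Invented toy data: 5 candidate images and one unseen concept in a
# 3-dimensional multimodal space.
rng = np.random.default_rng(0)
images = rng.random((5, 3))
concept = rng.random(3)
print(nearest_images(concept, images))
</code>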