Institute of Formal and Applied Linguistics Wiki



courses:rg:2014:mdsm [2014/11/24 17:16]
nguyenti
courses:rg:2014:mdsm [2014/11/29 13:52] (current)
popel
You should focus on the first paper (you can skip section 2.3): [[http://www.aclweb.org/anthology/W11-2503.pdf|Distributional semantics from text and images]]
The second paper, [[http://www.aclweb.org/anthology/P12-1015.pdf|Distributional Semantics in Technicolor]], an extension of the first one, is optional reading.
  
  
Q1.
Recall the paper about word representations presented by Tam on November 10.
Read http://www.quora.com/Whats-the-difference-between-distributed-and-distributional-semantic-representations

(M_{w,d} is a matrix with w rows and d columns.)
What do w, d, and k mean?
What are the values of w, d, and k used in the experiments in this paper?

Q2.
a) Compute the similarity between the two words "Moon" and "Mars" from the co-occurrence matrix below.
Use these raw counts (no Local Mutual Information, no normalization) and cosine similarity.

         | planet | night | full | shadow | shine
    Moon |   34   |   27  |  19  |        |   20
    Sun  |   32   |   23  |  10  |   47   |   15
    Dog  |        |   19  |   2  |   11   |    1
    Mars |   44   |   23  |  17  |        |    9

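For the cosine-similarity part, a minimal sketch of the computation (the vectors below are hypothetical placeholders, not rows of the table above, since some table cells are left blank):

```python
import math

def cosine_similarity(u, v):
    # cos(u, v) = (u . v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical raw count vectors over three context words:
u = [34, 27, 19]
v = [44, 23, 17]
print(cosine_similarity(u, v))
```

Identical vectors give similarity 1, orthogonal vectors give 0; raw count vectors are always non-negative, so the result lies in [0, 1].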
b) How do they deal with the high dimensionality of vectors in these papers?
Can you suggest some (other) techniques for preprocessing high-dimensional vectors?

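One common dimensionality-reduction technique (offered here as an illustration, not necessarily the one the papers use) is truncated SVD, which keeps only the k largest singular components of the matrix:

```python
import numpy as np

# Hypothetical small co-occurrence matrix: rows = words, columns = context features.
M = np.array([
    [34.0, 27.0, 19.0, 20.0],
    [32.0, 23.0, 10.0, 15.0],
    [44.0, 23.0, 17.0,  9.0],
])

# Full SVD, then keep only the k largest singular values/vectors.
U, S, Vt = np.linalg.svd(M, full_matrices=False)
k = 2
M_reduced = U[:, :k] * S[:k]   # each word is now a k-dimensional vector

print(M_reduced.shape)  # (3, 2)
```

The reduced rows preserve most of the variance of the original rows while cutting the dimension from 4 to k = 2.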
Q3.
a) What are Bag of Words (BoW) and Bag of Visual Words (BoVW)?
b) How do they apply BoVW to compute the representation of a word (concept) from a large set of images?

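A minimal sketch of the BoVW histogram step, assuming local descriptors have already been extracted from an image and a visual vocabulary (cluster centroids, e.g. from k-means) has already been learned; all data below are hypothetical:

```python
import numpy as np

def bovw_histogram(descriptors, vocabulary):
    """Map each local descriptor to its nearest visual word and count the words."""
    # distances[i, j] = Euclidean distance from descriptor i to visual word j
    distances = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = distances.argmin(axis=1)                  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary))
    return hist / hist.sum()                          # normalized histogram = image vector

# Hypothetical data: 5 two-dimensional descriptors, vocabulary of 3 visual words.
descriptors = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 0.9], [0.0, 1.0]])
vocabulary  = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(bovw_histogram(descriptors, vocabulary))  # histogram over the 3 visual words
```

The vector for a word (concept) can then be built by aggregating such histograms over all images associated with that word.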
Q4 (bonus).
When they construct text-based vectors of words from the DM model,
they mention the Local Mutual Information score (section 3.2; also section 2.1 in the 2nd paper).
What is that score? Why did they use it?

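Local Mutual Information is commonly defined as the observed co-occurrence count times the pointwise mutual information of the pair; a minimal sketch of that definition (the counts below are hypothetical):

```python
import math

def lmi(count_wc, count_w, count_c, total):
    """Local Mutual Information: count(w,c) * log( P(w,c) / (P(w) * P(c)) )."""
    p_wc = count_wc / total
    p_w = count_w / total
    p_c = count_c / total
    return count_wc * math.log(p_wc / (p_w * p_c))

# Hypothetical counts: the pair (w, c) is seen 10 times out of 1000 pair tokens.
print(lmi(count_wc=10, count_w=50, count_c=40, total=1000))
```

Weighting PMI by the raw count damps the scores of rare pairs, which plain PMI tends to overestimate.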
Q5 (bonus).
Have you ever wished to see beautiful "Mermaids"?
Have you ever seen "Unicorns" in real life?
Assume that there are no photos of them on the Internet.

Think of a computational way to show how they would look.
  
