
Institute of Formal and Applied Linguistics Wiki


courses:rg:2014:mdsm [2014/11/25 21:57]
nguyenti
courses:rg:2014:mdsm [2014/11/29 13:52] (current)
popel
You should focus on the first paper (you can skip section 2.3): [[http://www.aclweb.org/anthology/W11-2503.pdf|Distributional semantics from text and images]].
The second paper, [[http://www.aclweb.org/anthology/P12-1015.pdf|Distributional Semantics in Technicolor]], an extension of the first one, is optional reading.
  
  
Q1.
Recall the paper about word representations presented by Tam on November 10.
Read http://www.quora.com/Whats-the-difference-between-distributed-and-distributional-semantic-representations

(M_{w,d} is a matrix with w rows and d columns.)
What do w, d and k mean?
What are the values of w, d and k used in the experiments in this paper?
  
Q2.
a) Compute the similarity between the two words "Moon" and "Mars" from the co-occurrence matrix below.
Use these raw counts (no Local Mutual Information, no normalization) and cosine similarity.
  
            | planet | night | full | shadow | shine
     Mars   |   44   |   23  |  17  |        |   9
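As a sketch for part (a): cosine similarity is the dot product of the two count vectors divided by the product of their norms. The Mars counts below come from the table above (treating the empty shadow cell as 0, an assumption); the Moon counts are made-up placeholder values, since the Moon row is not shown in this revision.

```python
import math

def cosine(u, v):
    # cos(u, v) = (u . v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Context dimensions: planet, night, full, shadow, shine
mars = [44, 23, 17, 0, 9]    # from the table; empty shadow cell taken as 0
moon = [30, 50, 40, 20, 25]  # placeholder counts -- the Moon row is not shown here

print(round(cosine(mars, moon), 3))
```

With raw counts all values are non-negative, so the cosine always falls between 0 and 1; identical directions give 1 and disjoint contexts give 0.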
          
b) How do they deal with the high dimensionality of the vectors in those papers?
Can you suggest some (other) techniques for preprocessing high-dimensional vectors?
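One standard answer to part (b) is truncated SVD, which maps a w-by-d count matrix to a w-by-k matrix of dense word vectors. This is only an illustrative sketch, not necessarily what the papers do; the 4-by-5 matrix below is a made-up toy example.

```python
import numpy as np

# Toy 4x5 co-occurrence matrix (made-up counts), reduced to k = 2 dimensions.
M = np.array([
    [44., 23., 17.,  0.,  9.],
    [30., 50., 40., 20., 25.],
    [ 2.,  1.,  0.,  3.,  0.],
    [ 5.,  0.,  2.,  1.,  4.],
])

k = 2
U, s, Vt = np.linalg.svd(M, full_matrices=False)
M_k = U[:, :k] * s[:k]   # each row is now a k-dimensional word vector
print(M_k.shape)
```

Keeping only the k largest singular values preserves as much of the matrix as any rank-k approximation can, so similar rows stay similar after the reduction.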
            
Q3.
a) What are Bag of Words (BOW) and Bag of Visual Words (BOVW)?
b) How do they apply BOVW to compute the representation of a word (concept) from a large set of images?
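The core of BOVW can be sketched in a few lines: assign each local image descriptor to its nearest "visual word" in a codebook (in practice the codebook comes from clustering, e.g. k-means) and count the assignments. Everything below is a toy example with made-up data, not the papers' actual pipeline.

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Bag of visual words: assign each local descriptor to its nearest
    codebook centroid ("visual word") and count the assignments."""
    # squared distances from every descriptor to every centroid
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)  # nearest visual word per descriptor
    return np.bincount(words, minlength=len(codebook))

# Made-up data: 6 local descriptors (SIFT-like, here only 3-D) and 2 visual words.
rng = np.random.default_rng(0)
codebook = np.array([[0., 0., 0.], [10., 10., 10.]])
descriptors = np.vstack([rng.normal(0, 1, (4, 3)), rng.normal(10, 1, (2, 3))])
print(bovw_histogram(descriptors, codebook))
```

The resulting histogram plays the same role for an image that a word-count vector plays for a document, which is why the "bag of words" name carries over.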
        
Q4 (bonus).
When they construct text-based word vectors from the DM model,
they mention the Local Mutual Information (LMI) score (section 3.2; also section 2.1 in the 2nd paper).
So what is that score? Why did they use it?
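As a hint for Q4, Local Mutual Information weights an observed co-occurrence count by the log ratio of observed to expected frequency: LMI(w, c) = f(w, c) * log( f(w, c) * N / (f(w) * f(c)) ), where N is the total number of co-occurrence events. A minimal sketch with made-up counts:

```python
import math

def lmi(count_wc, count_w, count_c, total):
    """Local Mutual Information: observed count times log(observed / expected)."""
    expected = count_w * count_c / total
    return count_wc * math.log(count_wc / expected)

# Made-up numbers: the pair co-occurs 44 times; the word occurs 100 times,
# the context 200 times, out of 10000 co-occurrence events in total.
print(lmi(44, 100, 200, 10000))
```

Multiplying by the raw count keeps PMI's preference for informative contexts while damping its well-known bias toward very rare events.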
Q5 (bonus).
Have you ever wished to see beautiful "Mermaids"?
Have you ever seen "Unicorns" in real life?
Assume that there are no photos of them on the Internet.
Think about a computational way to show what they look like.