
Institute of Formal and Applied Linguistics Wiki


You should focus on the first paper (you can skip section 2.3): Distributional semantics from text and images.
The second paper, Distributional Semantics in Technicolor, an extension of the first one, is optional reading.

Recall the paper about word representations presented by Tam on November 10.
Read http://www.quora.com/Whats-the-difference-between-distributed-and-distributional-semantic-representations

Q1.
(M_{w,d} is a matrix with w rows and d columns.)
What do w, d, and k mean?
What values of w, d, and k are used in the experiments in this paper?

Q2.
a) Compute the similarity between the two words “Moon” and “Mars” from the co-occurrence matrix below.
Use the raw counts (no Local Mutual Information, no normalization) and cosine similarity.

         | planet | night | full | shadow | shine       
  Moon   |   34   |   27  |  19  |   9    |   20
  Sun    |   32   |   23  |  10  |   47   |   15
  Dog    |   0    |   19  |  2   |   11   |   1
  Mars   |   44   |   23  |  17  |   3    |   9
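As a sanity check for a), the cosine similarity of the raw count rows can be computed directly. A minimal sketch; the two vectors are the “Moon” and “Mars” rows of the table above:

```python
import math

# Rows of the co-occurrence table: (planet, night, full, shadow, shine)
moon = [34, 27, 19, 9, 20]
mars = [44, 23, 17, 3, 9]

def cosine(u, v):
    """Cosine similarity: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine(moon, mars))  # ≈ 0.95
```

Note that cosine similarity ignores the lengths of the vectors, so the raw counts need no prior normalization.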

b) How do the papers deal with the high dimensionality of the vectors?
Can you suggest some other techniques for preprocessing high-dimensional vectors?
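One standard dimensionality-reduction option (named here for illustration, not necessarily the papers' exact choice) is truncated SVD: keep only the top-k singular components of the count matrix. A minimal NumPy sketch, reusing the toy matrix from a):

```python
import numpy as np

# Toy co-occurrence matrix from a): rows = Moon, Sun, Dog, Mars
M = np.array([[34, 27, 19,  9, 20],
              [32, 23, 10, 47, 15],
              [ 0, 19,  2, 11,  1],
              [44, 23, 17,  3,  9]], dtype=float)

# Truncated SVD: project each word onto the top-k left singular directions.
U, S, Vt = np.linalg.svd(M, full_matrices=False)
k = 2
M_k = U[:, :k] * S[:k]   # reduced word vectors, shape (4, k)
print(M_k.shape)         # (4, 2)
```

The reduced vectors approximately preserve the geometry of the original rows while cutting the dimension from 5 (here) or tens of thousands (in practice) down to k.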

Q3.
a) What are Bag of Words (BOW) and Bag of Visual Words (BOVW)?
b) How do they apply BOVW to compute the representation of a word (concept) from a large set of images?
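The BOVW pipeline, roughly: extract local descriptors from each image, quantize each descriptor to its nearest “visual word” (a cluster centre from a learned codebook), and represent the concept as a histogram of visual-word counts over all its images. A toy sketch; the codebook and descriptors below are random placeholders, not real image features:

```python
import numpy as np

rng = np.random.default_rng(0)
centres = rng.normal(size=(5, 8))        # toy codebook: 5 visual words, 8-dim descriptors
descriptors = rng.normal(size=(100, 8))  # local descriptors pooled from images of one concept

# Assign each descriptor to its nearest visual word (Euclidean distance).
dists = np.linalg.norm(descriptors[:, None, :] - centres[None, :, :], axis=2)
words = dists.argmin(axis=1)

# The concept's BOVW vector is the normalized histogram of visual-word counts.
bovw = np.bincount(words, minlength=len(centres)).astype(float)
bovw /= bovw.sum()
print(bovw)  # 5 non-negative entries summing to ~1
```

In the papers the codebook comes from clustering real descriptors (e.g. with k-means), but the quantize-and-count step is the same.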

Q4 (bonus).
When they construct the text-based word vectors from the DM model,
they mention the Local Mutual Information score (section 3.2; also section 2.1 in the second paper).
What is that score? Why did they use it?
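Local Mutual Information is commonly defined (e.g. in Evert's work on association measures) as the observed co-occurrence count times the pointwise mutual information, LMI(w,c) = f(w,c) · log( f(w,c)·N / (f(w)·f(c)) ); the count factor discounts the inflated PMI of rare, unreliable pairs. A sketch on the toy matrix from Q2, as my own illustration rather than the papers' exact recipe:

```python
import numpy as np

counts = np.array([[34, 27, 19,  9, 20],
                   [32, 23, 10, 47, 15],
                   [ 0, 19,  2, 11,  1],
                   [44, 23, 17,  3,  9]], dtype=float)

N = counts.sum()
row = counts.sum(axis=1, keepdims=True)   # word frequencies f(w)
col = counts.sum(axis=0, keepdims=True)   # context frequencies f(c)

with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log2(counts * N / (row * col))
    # LMI = count * PMI; zero counts get LMI 0 (their PMI is -inf).
    lmi = np.where(counts > 0, counts * pmi, 0.0)

print(lmi[2, 0])  # Dog/planet never co-occur -> 0.0
```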

Q5 (bonus).
Have you ever wished to see beautiful “Mermaids”?
Have you ever seen “Unicorns” in real life?
“Assume that there are no photos of them on the Internet.”

Think of a computational way to show what they look like.
