==== 3 Structured Perceptron ====
  
  * In the unstructured perceptron, you are trying to separate two sets of points with a hyperplane. See Question 1 for the algorithm. In the training phase, you iterate over your training data and adjust the hyperplane every time you make a mistake (a small sketch follows this list). [[http://www.youtube.com/watch?v=vGwemZhPlsA|Youtube Example]]
  
  * The structured (or multiclass) perceptron is a generalization of the unstructured perceptron. See Figure 1 in the paper for the algorithm (a second sketch follows this list).
  * You can use any structured input x (not just a vector; a sentence, for example) and any structured output y (not just a binary value; a parse tree, for example).
  * You need to have a function f(x,y) which returns a feature representation of a candidate input-output pair.
  * Using Theorem 1, you can bound the number of mistakes made during training (so the training time is also bounded).
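A minimal sketch of the unstructured perceptron training loop described in the first bullet. This is not code from the paper; it assumes NumPy feature vectors with labels in {-1, +1}, and all names are illustrative.

<code python>
import numpy as np

def train_perceptron(examples, epochs=10):
    """examples: list of (x, y) pairs, x a NumPy feature vector, y in {-1, +1}."""
    w = np.zeros(len(examples[0][0]))   # normal vector of the separating hyperplane
    b = 0.0                             # bias term
    for _ in range(epochs):
        for x, y in examples:
            # Mistake: the current hyperplane puts x on the wrong side.
            if y * (np.dot(w, x) + b) <= 0:
                w += y * x              # nudge the hyperplane towards the mistake
                b += y
    return w, b
</code>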
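And a sketch of the structured perceptron along the lines of Figure 1. The feature function ''f(x, y)'' and the ''decode'' routine (which searches the output space for the highest-scoring candidate) are assumed to be supplied by the caller; their names and signatures are placeholders, not the paper's notation.

<code python>
import numpy as np

def train_structured_perceptron(data, f, decode, dim, epochs=10):
    """data: list of (x, y) pairs with structured x (e.g. a sentence)
    and structured y (e.g. a parse tree).
    f(x, y) returns a feature vector of length dim,
    decode(x, w) returns the y maximizing w . f(x, y)."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for x, y_gold in data:
            y_pred = decode(x, w)
            if y_pred != y_gold:
                # Mistake: move the weights towards the gold structure's
                # features and away from the predicted structure's features.
                w += f(x, y_gold) - f(x, y_pred)
    return w
</code>

With the output space {-1, +1} and f(x,y) = y·x/2, this reduces to the binary update above (minus the bias term), which is why the unstructured perceptron is just a special case.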
  
==== 4 Distributed Structured Perceptron ====
