courses:rg:2013:hmm-perc-experiments [2013/03/13 01:47] ufal (created)
courses:rg:2013:hmm-perc-experiments [2013/04/01 12:33] (current) cechj5cm
N N X N X
How would this result alter the values of alpha_X,X,X, alpha_N,
Supposing that the best tag sequence
2. Suppose this tagged sentence is the only entry in your training data:
(You might compare this problem to a similar one in the paper presented at RG in winter - http://
| + | |||
===SHORT REPORT===
Intro
The authors claim that training a perceptron is a quicker and easier solution.
Definition
We defined the structured perceptron and the parameters for a special case of it.
The parameters are the log conditional probabilities of tag trigrams and of a word given its tag:
α_{x,y,z} = log P(z | x, y)
α_{t,w} = log P(w | t)
Logs are used for numerical precision.
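As an illustration, here is a minimal sketch (the sentence, tag names, and counts are hypothetical, not from the talk) of how these log-probability parameters score a tagged sentence:

```python
import math

# Hypothetical toy counts from which the parameters are estimated:
# alpha_{x,y,z} = log P(z | x, y)   (tag trigram, "*" pads the start)
# alpha_{t,w}   = log P(w | t)      (word given tag)
trigram_counts = {("*", "*", "N"): 1, ("*", "N", "N"): 1, ("N", "N", "X"): 1}
bigram_counts = {("*", "*"): 1, ("*", "N"): 1, ("N", "N"): 1}
emission_counts = {("N", "dog"): 2, ("X", "barks"): 1}
tag_counts = {"N": 2, "X": 1}

def alpha_trigram(x, y, z):
    return math.log(trigram_counts[(x, y, z)] / bigram_counts[(x, y)])

def alpha_emission(t, w):
    return math.log(emission_counts[(t, w)] / tag_counts[t])

def score(words, tags):
    """Sum of trigram and emission log-probability features for one tagging."""
    padded = ["*", "*"] + tags
    total = 0.0
    for i, (w, t) in enumerate(zip(words, tags)):
        total += alpha_trigram(padded[i], padded[i + 1], t)
        total += alpha_emission(t, w)
    return total
```

Summing logs this way is equivalent to multiplying the probabilities, without the underflow that long products cause.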
The speaker then showed how the learning process works.
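A sketch of that learning process, as I understand it (feature names and the brute-force decoder are my own simplifications; real implementations use Viterbi decoding):

```python
from itertools import product

def features(words, tags):
    """Feature counts for one tagging: tag trigrams and (tag, word) emissions."""
    feats = {}
    padded = ["*", "*"] + tags
    for i, (w, t) in enumerate(zip(words, tags)):
        for f in [("TRI", padded[i], padded[i + 1], t), ("EMIT", t, w)]:
            feats[f] = feats.get(f, 0) + 1
    return feats

def perceptron_train(data, tagset, epochs=5):
    """Structured perceptron: decode with current weights, then
    reward the gold features and penalize the predicted ones."""
    alpha = {}
    score = lambda fs: sum(alpha.get(f, 0.0) * c for f, c in fs.items())
    for _ in range(epochs):
        for words, gold in data:
            # argmax over all tag sequences (brute force, fine for toy data)
            pred = list(max(
                product(tagset, repeat=len(words)),
                key=lambda ts: score(features(words, list(ts)))))
            if pred != gold:
                for f, c in features(words, gold).items():
                    alpha[f] = alpha.get(f, 0.0) + c
                for f, c in features(words, pred).items():
                    alpha[f] = alpha.get(f, 0.0) - c
    return alpha
```

Unlike the HMM parameters above, these weights are not log probabilities; they are adjusted only on decoding mistakes, which is what makes the training cheap.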
The authors then deal with separable and inseparable data.
They define a condition characterizing inseparable data.
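For reference, the separability condition in Collins-style notation (this formulation is my recollection, not taken from the report):

```latex
% Training data is separable with margin \delta > 0 if there exists
% a weight vector U with \|U\| = 1 such that, for every training
% example (x_i, y_i) and every incorrect tag sequence z \neq y_i,
\[
  U \cdot \Phi(x_i, y_i) - U \cdot \Phi(x_i, z) \ge \delta ,
\]
% where \Phi maps a (sentence, tag sequence) pair to its feature
% vector. Data for which no such U and \delta exist is inseparable.
```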
| + | |||
We spent the rest of the time working through the questions; along the way, further features of this perceptron training were shown.
