(You might compare this problem to a similar one in a paper presented at RG in winter - http://
| + | |||
| + | ===SHORT REPORT=== | ||
| + | Intro | ||
The authors say that training the perceptron is a quicker and easier solution.
Definition
We defined the structured perceptron and the parameters for a special case of the structured perceptron (a trigram HMM tagger). The parameters are the logs of the conditional probability of a tag given the two preceding tags, and of the conditional probability of a word given its tag:

  α_{x,y,z} = log P(z | x, y)
  α_{t,w}   = log P(w | t)

Logs are used for numerical precision; a sketch of how these parameters score a tagging is given below.
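As a rough illustration of why this is a special case of a linear model (this is my sketch, not code from the paper or the session; the probability tables, numbers, and function names are made up), the score of a tagging is just a sum of the α parameters along the sentence:

<code python>
import math

# Toy probability tables; in the paper these would be estimated from the
# training corpus. All names and numbers here are illustrative only.
trigram_prob = {("<s>", "<s>", "D"): 0.6, ("<s>", "D", "N"): 0.7, ("D", "N", "V"): 0.5}
emission_prob = {("D", "the"): 0.4, ("N", "dog"): 0.05, ("V", "barks"): 0.1}

def score(words, tags):
    """Log joint probability of (words, tags) under a trigram HMM tagger.

    The score is a sum of the parameters
        alpha[x, y, z] = log P(z | x, y)   (tag-trigram terms)
        alpha[t, w]    = log P(w | t)      (word-emission terms)
    so it is linear in those parameters, which is what makes the generative
    tagger a special case of the structured perceptron's linear model.
    """
    padded = ["<s>", "<s>"] + list(tags)
    total = 0.0
    for i, (word, tag) in enumerate(zip(words, tags)):
        total += math.log(trigram_prob.get((padded[i], padded[i + 1], tag), 1e-12))  # alpha_{x,y,z}
        total += math.log(emission_prob.get((tag, word), 1e-12))                     # alpha_{t,w}
    return total

print(score(["the", "dog", "barks"], ["D", "N", "V"]))
</code>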
Then the speaker showed how the learning process works; a sketch of the training loop follows.
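A minimal sketch of structured-perceptron training, assuming a generic feature map and an argmax decoder (brute force here for brevity, where a real tagger would use Viterbi); the function names and toy data are mine, not the paper's:

<code python>
from collections import defaultdict
from itertools import product

def phi(words, tags):
    """Sparse feature counts: tag trigrams and word/tag pairs."""
    feats = defaultdict(float)
    padded = ["<s>", "<s>"] + list(tags)
    for i, (word, tag) in enumerate(zip(words, tags)):
        feats[("tri", padded[i], padded[i + 1], tag)] += 1.0
        feats[("emit", tag, word)] += 1.0
    return feats

def decode(words, weights, tagset):
    """Argmax over candidate taggings (brute force; Viterbi in practice)."""
    def total(tags):
        return sum(weights.get(f, 0.0) * v for f, v in phi(words, tags).items())
    return max(product(tagset, repeat=len(words)), key=total)

def train(data, tagset, epochs=5):
    """Perceptron updates: reward gold features, penalize predicted ones."""
    weights = defaultdict(float)
    for _ in range(epochs):
        for words, gold in data:
            pred = decode(words, weights, tagset)
            if tuple(pred) != tuple(gold):
                for f, v in phi(words, gold).items():
                    weights[f] += v
                for f, v in phi(words, pred).items():
                    weights[f] -= v
    return weights

# Tiny made-up example.
data = [(["the", "dog", "barks"], ("D", "N", "V")),
        (["a", "cat", "sleeps"], ("D", "N", "V"))]
weights = train(data, tagset=("D", "N", "V"))
print(decode(["the", "cat", "barks"], weights, ("D", "N", "V")))
</code>

Keeping a running average of the weight vectors (the averaged perceptron) is the variant usually reported to work better in practice than the final weights alone.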
The authors then deal with separable and inseparable data, and they define a condition for the inseparable case; a hedged sketch of the separability notion is given below.
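For reference, this is my reconstruction in standard structured-perceptron notation, not a quote from the report or the session; the exact condition the authors use for inseparable data may differ in detail:

<code latex>
% Separable case: some unit-norm weight vector U separates every gold output
% from every other candidate by a margin \delta.
\exists\, U,\ \|U\| = 1,\ \exists\, \delta > 0:\quad
U \cdot \Phi(x_i, y_i) \;-\; U \cdot \Phi(x_i, z) \;\ge\; \delta
\qquad \forall i,\ \forall z \in \mathrm{GEN}(x_i) \setminus \{y_i\}.

% Inseparable case (sketch): measure how far each example falls short of the
% margin and let the mistake bound degrade with that total shortfall.
</code>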
| + | |||
We spent the rest of the time working through the questions; during that discussion, other features of this perceptron training were shown.