courses:rg:2012:distributed-perceptron [2012/12/16 16:46] machacek
==== Question 3 ====
In Figure 4, why do you think that the F-measure for the Regular Perceptron (first column) learned by the Serial (All Data) algorithm is worse than for the Parallel (Iterative Parameter Mix) algorithm?
+ | |||
+ | |||
+ | **Answer:** | ||
+ | |||
+ | * Iterative Parameter Mixing is just a form of parameter averaging, which has the same effect as the averaged perceptron. | ||
+ | * F-measures for seral (All Data) and Paralel (Iterative Parameter Mix) are very similar in the second column. It is because the both methods are already averaged. | ||
+ | * Bagging like effect | ||
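The averaging described above can be illustrated with a minimal sketch of iterative parameter mixing for a binary perceptron. This assumes uniform mixing weights over the shards (the general algorithm also allows non-uniform mixture weights); the function names and toy setup are illustrative, not from the paper.

```python
import numpy as np

def perceptron_epoch(w, shard):
    """One standard perceptron pass over a shard of (x, y) pairs, y in {-1, +1}."""
    for x, y in shard:
        if y * np.dot(w, x) <= 0:  # misclassified (or on the boundary)
            w = w + y * x          # perceptron update
    return w

def iterative_parameter_mix(shards, dim, epochs=10):
    """Train a perceptron on each shard in parallel (here: a loop),
    then average ("mix") the shard weights after every epoch and
    broadcast the average back as the starting point for the next epoch."""
    w = np.zeros(dim)
    for _ in range(epochs):
        local = [perceptron_epoch(w.copy(), shard) for shard in shards]
        w = np.mean(local, axis=0)  # uniform parameter mixing
    return w
```

The final `np.mean` over per-shard weights is exactly the averaging step the answer refers to: each epoch's mixed model is an average of perceptrons trained on different subsets of the data.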
==== Question 4 ====