# Differences

This shows you the differences between two versions of the page.


courses:rg:2013:convolution-kernels [2013/03/11 18:54] dusek

courses:rg:2013:convolution-kernels [2013/03/12 11:27] (current) popel ("<latex>x</latex> was not rendered")


* They are able to "

* Examples: Naive Bayes, Mixtures of Gaussians, HMM, Bayesian Networks, Markov Random Fields

* **Discriminative models** do everything in one step -- they learn the posterior <latex>P(y|x)</latex> directly.

* They are simpler and can use many more features, but they handle missing inputs poorly.

* Examples: SVM, Logistic Regression, Neural networks, k-NN, Conditional Random Fields
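The generative/discriminative split above can be contrasted on a toy dataset (a hypothetical sketch, not from the talk): the generative route estimates the joint <latex>p(x,y)</latex> and recovers the posterior via Bayes' rule, while the discriminative route estimates <latex>p(y|x)</latex> directly from conditional counts.

```python
from collections import Counter

# Toy binary dataset of (feature x, label y) pairs -- invented for illustration.
data = [(1, 1), (1, 1), (0, 1), (0, 0), (0, 0), (1, 0)]
n = len(data)

# Generative: estimate the joint p(x, y), then apply Bayes' rule for p(y|x).
joint = Counter(data)

def p_y_given_x_generative(x, y):
    p_xy = joint[(x, y)] / n                          # estimate of p(x, y)
    p_x = sum(joint[(x, yy)] for yy in (0, 1)) / n    # marginal p(x)
    return p_xy / p_x

# Discriminative: estimate p(y|x) directly from the conditional counts.
def p_y_given_x_discriminative(x, y):
    matching = [ex for ex in data if ex[0] == x]
    return sum(1 for ex in matching if ex[1] == y) / len(matching)

# On fully observed counts the two estimates coincide; the generative model
# additionally yields p(x), which is what lets it cope with missing inputs.
print(p_y_given_x_generative(1, 1))      # 2/3
print(p_y_given_x_discriminative(1, 1))  # 2/3
```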

- Each CFG rule generates just one level of the derivation tree. Therefore, using "

* ''
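The point about CFG rules expanding one level at a time is what the convolution tree kernel exploits: it counts the subtrees (multi-level rule combinations) two parse trees share. A minimal sketch of the standard common-subtree recursion, with trees represented as hypothetical `(label, children)` tuples:

```python
# A parse tree is (label, [children]); a word leaf is (word, []).
t1 = ("S", [("NP", [("John", [])]), ("VP", [("runs", [])])])
t2 = ("S", [("NP", [("John", [])]), ("VP", [("runs", [])])])

def production(node):
    # The CFG rule expanding this node: label -> tuple of child labels.
    return (node[0], tuple(ch[0] for ch in node[1]))

def nodes(tree):
    # All rule-expanding (non-leaf) nodes of the tree.
    if not tree[1]:
        return []
    result = [tree]
    for ch in tree[1]:
        result.extend(nodes(ch))
    return result

def common(n1, n2):
    # Number of common subtrees rooted at n1 and n2.
    if production(n1) != production(n2):
        return 0
    if all(not ch[1] for ch in n1[1]):   # pre-terminal rule: children are words
        return 1
    prod = 1
    for c1, c2 in zip(n1[1], n2[1]):     # same production, so same arity
        prod *= 1 + common(c1, c2)
    return prod

def tree_kernel(a, b):
    # K(a, b) = sum of common-subtree counts over all node pairs.
    return sum(common(n1, n2) for n1 in nodes(a) for n2 in nodes(b))

print(tree_kernel(t1, t2))  # 6 shared subtrees for this identical pair
```

The recursion never enumerates the (exponentially many) subtrees explicitly; the product over child pairs counts them implicitly, which is what makes the kernel tractable.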