      * They are able to "generate" new data and can cope with missing inputs.
      * Examples: Naive Bayes, Mixtures of Gaussians, HMM, Bayesian Networks, Markov Random Fields
    * **Discriminative models** do everything in one step -- they learn the posterior <latex>P(y|x)</latex> directly.
      * They are simpler and can use many more features, but have trouble with missing inputs.
      * Examples: SVM, Logistic Regression, … (a short code sketch of this contrast follows below the question list)
  - Each CFG rule generates just one level of the derivation tree. Therefore, using "all subtrees" as features captures much larger contexts than individual rules (a sketch of the resulting tree kernel follows below the question list).
    * ''…''
  - <latex>…</latex>
  - <latex>…</latex>
  - Convolution is defined like this: <latex>(f \ast g)(t) = \int f(\tau)\, g(t-\tau)\, \mathrm{d}\tau</latex>
  - There is a (tiny) error in the last formula of Section 3. You cannot actually multiply tree parses, so it should read: <latex>…</latex>
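To make the "all subtrees" idea concrete, here is a minimal sketch of the tree kernel as we understand it from the paper: <latex>K(T_1, T_2) = \sum_{n_1 \in N_1} \sum_{n_2 \in N_2} C(n_1, n_2)</latex>, where <latex>C(n_1, n_2)</latex> counts the common subtrees rooted at the two nodes. The ''Tree'' class, the toy example and the decay factor ''lam'' below are our own illustrative choices, not the authors' code.

<code python>
# Sketch of the Collins & Duffy tree kernel: K(T1, T2) counts (with an
# optional decay factor lam) the subtrees the two parse trees have in common.
# The Tree representation is an assumption made only for this illustration.

class Tree:
    """A parse-tree node: a label and a list of children (empty for words)."""
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

    def production(self):
        # The CFG production rooted at this node, e.g. ('NP', ('D', 'N')).
        return (self.label, tuple(c.label for c in self.children))

    def nodes(self):
        yield self
        for c in self.children:
            yield from c.nodes()


def tree_kernel(t1, t2, lam=1.0):
    """Sum of C(n1, n2) over all node pairs = (decayed) count of common subtrees."""
    def C(n1, n2):
        # Word leaves contribute nothing; differing productions share no subtree.
        if not n1.children or not n2.children:
            return 0.0
        if n1.production() != n2.production():
            return 0.0
        # Same production: each child pair either stops here (the 1) or recurses (C).
        result = lam
        for c1, c2 in zip(n1.children, n2.children):
            result *= 1.0 + C(c1, c2)
        return result

    return sum(C(n1, n2) for n1 in t1.nodes() for n2 in t2.nodes())


# Example: two NPs sharing the subtrees (D the), (NP D N) and (NP (D the) N).
np1 = Tree('NP', [Tree('D', [Tree('the')]), Tree('N', [Tree('dog')])])
np2 = Tree('NP', [Tree('D', [Tree('the')]), Tree('N', [Tree('cat')])])
print(tree_kernel(np1, np2))  # -> 3.0 with lam=1.0
</code>

Setting ''lam'' below 1 down-weights larger subtrees; the paper introduces this decay so that the kernel is not dominated by a few almost-identical trees.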
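Similarly, a minimal sketch of the generative vs. discriminative contrast from the first answer above (scikit-learn and the toy data are our own choices, not from the paper): Naive Bayes models the joint <latex>P(x, y)</latex> through <latex>P(x|y)P(y)</latex> and obtains the posterior via Bayes' rule, while Logistic Regression models the posterior <latex>P(y|x)</latex> directly.

<code python>
# Sketch only: the same toy data fitted with a generative and a
# discriminative classifier (library and data are illustrative assumptions).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
# Two Gaussian classes in 2D.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

generative = GaussianNB().fit(X, y)               # models P(x|y) and P(y)
discriminative = LogisticRegression().fit(X, y)   # models P(y|x) directly

x_new = np.array([[1.0, 1.0]])
print(generative.predict_proba(x_new))      # posterior obtained via Bayes' rule
print(discriminative.predict_proba(x_new))  # posterior modelled directly
</code>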
==== Report ====

Most of the session was spent discussing the answers to the questions above. Other issues raised in the discussion were:
  * **Usability** -- the approach is only usable for //reranking// candidates produced by another parser.
  * **Scalability** -- they only use 800 sentences and 20 candidates per sentence for training. We believe that for large data (millions of examples) this will become computationally too expensive.
  * **Evaluation** -- it looks as if they used a non-standard evaluation metric to get "better" results.
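A rough back-of-the-envelope calculation (our own arithmetic, not from the paper) behind the scalability concern: 800 sentences with 20 candidates each give 16,000 training trees, so a kernel method that in the worst case compares every pair of training examples already faces on the order of 16,000 × 16,000 ≈ 260 million kernel evaluations, each of which costs up to <latex>O(|N_1| \cdot |N_2|)</latex> in the number of tree nodes; with millions of examples the number of pairs runs into the trillions.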