
Institute of Formal and Applied Linguistics Wiki


Non-projective Dependency Parsing using Spanning Tree Algorithms (McDonald et al., 2005)

  * The score of a dependency tree is commonly defined as the sum of the scores of all edges in the tree, and the score of an edge is the dot product of a weight vector and a feature vector (containing information about the nodes, i.e. words); see the scoring sketch after this list.
  * The Chu-Liu-Edmonds algorithm for finding maximum spanning trees takes <latex>O(n^3)</latex> time in general, but for dense graphs Tarjan (1977) gave an efficient <latex>O(n^2)</latex> implementation. The latter implementation is used by the authors; a sketch of the basic, non-optimized variant is given below.
  * In the training phase, two modified versions of the Margin Infused Relaxed Algorithm (MIRA) are used: single-best MIRA and factored MIRA. The reason for the modifications is to lower the time complexity of training (a sketch of the single-best update follows the list below).
  * Experiments are done on the Czech PDT: on the entire PDT (Czech-A) and on the 23% portion of the PDT containing only non-projective dependency trees (Czech-B). The introduced algorithm competes with three other dependency parsers (two projective and one pseudo-projective).
  * The Chu-Liu-Edmonds MST algorithm with factored MIRA shows the best results on both Czech-A and Czech-B, and slightly lower results than the McD2005 projective parser on English (projective) dependency trees.
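
The edge-factored scoring can be illustrated with a minimal Python sketch. The feature extractor is a toy stand-in for the paper's feature templates (which combine words, lemmas and POS tags); ''edge_features'', ''edge_score'' and ''tree_score'' are illustrative names, not the authors' code.

<code python>
# Edge-factored scoring: score(tree) = sum over its edges of w . f(edge).
# Words are indexed 1..n, index 0 is the artificial ROOT.

def edge_features(sentence, head, dep):
    """Toy feature extractor: a few binary features on the word pair."""
    h = sentence[head - 1] if head > 0 else "ROOT"
    d = sentence[dep - 1]
    return {"h=" + h: 1.0, "d=" + d: 1.0, "h=" + h + "|d=" + d: 1.0}

def edge_score(weights, sentence, head, dep):
    """Dot product of the weight vector and the edge's feature vector."""
    return sum(weights.get(feat, 0.0) * value
               for feat, value in edge_features(sentence, head, dep).items())

def tree_score(weights, sentence, heads):
    """heads[d] = index of the head of word d; the tree score is edge-factored."""
    return sum(edge_score(weights, sentence, heads[d], d) for d in heads)
</code>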
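
Below is a compact recursive sketch of the Chu-Liu-Edmonds maximum spanning arborescence algorithm, the plain (non-optimized) variant rather than Tarjan's <latex>O(n^2)</latex> implementation used in the paper. It assumes a dense score table ''scores[head][dep]'' (e.g. produced by ''edge_score'' above) with node 0 as the artificial root; all names are illustrative.

<code python>
def _find_cycle(heads, root):
    """Return the set of nodes on a cycle in the map {dep: head}, or None."""
    for start in heads:
        seen, v = set(), start
        while v != root and v in heads and v not in seen:
            seen.add(v)
            v = heads[v]
        if v != root and v in seen:
            cycle, u = {v}, heads[v]          # v lies on the cycle; walk it once
            while u != v:
                cycle.add(u)
                u = heads[u]
            return cycle
    return None

def chu_liu_edmonds(scores, root=0):
    """scores[h][d] = score of edge h -> d (assumed dense, h != d).
    Returns {dep: head} for a maximum spanning arborescence rooted at `root`."""
    # 1. every non-root node greedily picks its highest-scoring head
    deps = {d for h in scores for d in scores[h] if d != root}
    best = {d: max((h for h in scores if d in scores[h] and h != d),
                   key=lambda h: scores[h][d])
            for d in deps}
    cycle = _find_cycle(best, root)
    if cycle is None:
        return best                           # the greedy choice is already a tree

    # 2. contract the cycle into a fresh super-node c; edges entering the
    #    cycle are re-scored by how much they pay to break a cycle edge
    c = ("contracted", id(cycle))             # fresh hashable node id
    new_scores, enter, leave = {}, {}, {}
    for h in scores:
        for d, s in scores[h].items():
            if h in cycle and d in cycle:
                continue                      # edges inside the cycle disappear
            hh = c if h in cycle else h
            dd = c if d in cycle else d
            if dd == c:
                s -= scores[best[d]][d]       # cost of breaking d's cycle edge
            row = new_scores.setdefault(hh, {})
            if dd not in row or s > row[dd]:
                row[dd] = s
                if hh == c:
                    leave[d] = h              # best real head inside the cycle for d
                if dd == c:
                    enter[h] = d              # best real entry node of the cycle for h

    # 3. recurse on the smaller, contracted graph
    contracted = chu_liu_edmonds(new_scores, root)

    # 4. expand the super-node back into the original nodes
    heads = {}
    for d, h in contracted.items():
        if d == c:
            continue
        heads[d] = leave[d] if h == c else h
    entry_head = contracted[c]                # the edge chosen to enter the cycle
    for v in cycle:
        heads[v] = best[v]                    # keep all cycle edges ...
    heads[enter[entry_head]] = entry_head     # ... except the one that is broken
    return heads

# Toy usage: 0 = ROOT, 1..3 = words.
scores = {0: {1: 5, 2: 1, 3: 1},
          1: {2: 11, 3: 4},
          2: {1: 10, 3: 5},
          3: {1: 9, 2: 8}}
print(chu_liu_edmonds(scores))   # head of each word, e.g. {1: 0, 2: 1, 3: 2}
</code>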
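
Single-best MIRA keeps only one margin constraint per training sentence, namely for the current 1-best tree (found with the Chu-Liu-Edmonds decoder above), so the quadratic program has a closed-form solution. The sketch below shows that update; ''tree_features'' (the sum of edge feature vectors over a tree) and the Hamming-style loss are illustrative assumptions, and details such as weight averaging are omitted.

<code python>
# Single-best MIRA (k = 1): after decoding the 1-best tree with the current
# weights, take the smallest weight change that makes the gold tree outscore
# the predicted tree by at least the loss between them.

def _sparse_diff(f_gold, f_pred):
    """f_gold - f_pred as a sparse feature dict."""
    diff = dict(f_gold)
    for feat, value in f_pred.items():
        diff[feat] = diff.get(feat, 0.0) - value
    return {feat: value for feat, value in diff.items() if value != 0.0}

def _dot(weights, feats):
    return sum(weights.get(feat, 0.0) * value for feat, value in feats.items())

def hamming_loss(gold_heads, pred_heads):
    """Number of words whose predicted head is wrong."""
    return sum(1 for d in gold_heads if pred_heads.get(d) != gold_heads[d])

def single_best_mira_update(weights, sentence, gold_heads, pred_heads, tree_features):
    """Update `weights` in place on one training sentence and return them."""
    diff = _sparse_diff(tree_features(sentence, gold_heads),
                        tree_features(sentence, pred_heads))
    margin = _dot(weights, diff)                  # score(gold) - score(pred)
    loss = hamming_loss(gold_heads, pred_heads)
    norm_sq = sum(value * value for value in diff.values())
    if norm_sq == 0.0:
        return weights                            # identical feature vectors, nothing to do
    tau = max(0.0, (loss - margin) / norm_sq)     # smallest step enforcing margin >= loss
    for feat, value in diff.items():
        weights[feat] = weights.get(feat, 0.0) + tau * value
    return weights
</code>

Factored MIRA instead uses, roughly, one local constraint per pair of a correct edge and an incorrect edge with the same dependent, which keeps the number of constraints quadratic in the sentence length; that variant is not covered by the sketch.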
