# Differences

This shows you the differences between two versions of the page.


courses:rg:non-projective-dependency-parsing-using-spanning-tree-algorithms [2011/04/19 10:00] popel
courses:rg:non-projective-dependency-parsing-using-spanning-tree-algorithms [2011/04/19 12:09] (current) abzianidze


* In the paper, a non-projective parser is understood as a parser that allows non-projective dependency parse trees along with projective ones, in contrast to a projective parser, which forbids non-projective dependency parse trees. Also, by the space of non-projective trees the authors mean the union of the spaces of non-projective and projective trees.

* The score of a dependency tree is commonly defined as the sum of the scores of all edges in the tree, and the score of an edge is the dot product of a weight vector and a feature vector (containing information about the nodes, i.e. words).
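As a rough sketch of this edge-factored scoring (the feature templates below are invented for illustration; they are not the paper's feature set):

```python
def edge_features(words, head, dep):
    """Toy binary features for the edge head -> dep (hypothetical templates)."""
    return {
        "head-word=" + words[head]: 1.0,
        "dep-word=" + words[dep]: 1.0,
        "pair=" + words[head] + "|" + words[dep]: 1.0,
    }

def edge_score(weights, feats):
    # Dot product of the (sparse) weight vector and feature vector.
    return sum(weights.get(name, 0.0) * value for name, value in feats.items())

def tree_score(weights, words, heads):
    # The tree score factors as the sum of its edge scores;
    # heads maps each dependent index to its head index.
    return sum(edge_score(weights, edge_features(words, h, d))
               for d, h in heads.items())
```

Because the score factors over edges, finding the best tree reduces to a maximum spanning tree problem over the edge scores.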

* The Chu-Liu-Edmonds algorithm for finding maximum spanning trees takes <latex>O(n^3)</latex> time in general, but for dense graphs Tarjan (1977) gave an efficient implementation running in <latex>O(n^2)</latex> time. The latter implementation is used by the authors.
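A compact recursive sketch of Chu-Liu-Edmonds (the generic <latex>O(n^3)</latex> formulation, not Tarjan's optimized one; the dict-based score table is chosen for readability, not speed):

```python
def _find_cycle(head):
    """Return the nodes of one cycle in the map dep -> head[dep], else None."""
    done = set()
    for start in head:
        node, seen = start, set()
        while node in head and node not in done and node not in seen:
            seen.add(node)
            node = head[node]
        if node in seen:  # walked back onto the current path: a cycle
            cycle, cur = [node], head[node]
            while cur != node:
                cycle.append(cur)
                cur = head[cur]
            return cycle
        done |= seen
    return None

def chu_liu_edmonds(scores, root):
    """scores[h][d]: score of edge h -> d (edges into the root are not needed).
    Returns {dependent: head} for a maximum spanning arborescence."""
    nodes = set(scores)
    # Greedy step: every non-root node picks its highest-scoring incoming edge.
    head = {d: max((h for h in nodes if h != d), key=lambda h: scores[h][d])
            for d in nodes if d != root}
    cycle = _find_cycle(head)
    if cycle is None:
        return head  # the greedy choice is already a tree, hence optimal
    cyc, rest = set(cycle), nodes - set(cycle)
    c = max(nodes) + 1  # fresh id for the contracted cycle node
    new = {v: {} for v in rest | {c}}
    enter, leave = {}, {}
    for v in rest:
        for w in rest - {v, root}:
            new[v][w] = scores[v][w]
        # Best edge from v into the cycle, reweighted by the score of the
        # cycle edge it would break (the standard CLE reweighting).
        u = max(cyc, key=lambda u: scores[v][u] - scores[head[u]][u])
        enter[v], new[v][c] = u, scores[v][u] - scores[head[u]][u]
        if v != root:
            # Best edge leaving the cycle toward v.
            u = max(cyc, key=lambda u: scores[u][v])
            leave[v], new[c][v] = u, scores[u][v]
    sub = chu_liu_edmonds(new, root)
    # Expand the contracted node: the chosen entering edge breaks one cycle edge.
    tree = {}
    for d, h in sub.items():
        if d == c:
            broken = enter[h]
            tree.update({u: head[u] for u in cyc if u != broken})
            tree[broken] = h
        else:
            tree[d] = leave[d] if h == c else h
    return tree
```

Each recursive call contracts at least one cycle, and each level costs <latex>O(n^2)</latex> work over the score table, giving the <latex>O(n^3)</latex> bound.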

* In the training phase, two modified versions of the Margin Infused Relaxed Algorithm (MIRA) are used: Single-best MIRA and Factored MIRA. The reason for the modifications is to lower the time complexity of training.
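The core of a Single-best MIRA step can be sketched as the closed-form update for a single margin constraint: change the weights minimally so that score(gold) − score(prediction) ≥ loss. The sparse-vector representation here is an assumption for illustration; in the paper the prediction is the highest-scoring tree under the current weights (found with the parser itself), and Factored MIRA instead uses one constraint per edge.

```python
def mira_update(weights, gold_feats, pred_feats, loss):
    """One Single-best MIRA step (sketch): enforce the margin
    score(gold) - score(pred) >= loss with a minimal change to weights."""
    # Feature-vector difference f(gold) - f(pred), kept sparse.
    diff = dict(gold_feats)
    for f, v in pred_feats.items():
        diff[f] = diff.get(f, 0.0) - v
    margin = sum(weights.get(f, 0.0) * v for f, v in diff.items())
    norm2 = sum(v * v for v in diff.values())
    # Closed-form step size for a single linear constraint.
    tau = max(0.0, (loss - margin) / norm2) if norm2 > 0 else 0.0
    for f, v in diff.items():
        weights[f] = weights.get(f, 0.0) + tau * v
    return weights
```

If the margin constraint already holds, tau is 0 and the weights are unchanged, which is what keeps MIRA's updates conservative.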

* Experiments are done on the Czech PDT: on the entire PDT (Czech-A) and on the 23% portion of the PDT containing only non-projective dependency trees (Czech-B). The introduced algorithm competes with 3 other dependency parsers (2 projective and 1 pseudo-projective).