==== Inside ====
  
In the CoNLL version, the original POS tags from the Alpino Treebank were replaced by POS tags from the Memory-based part-of-speech tagger using the WOTAN tagset, which is described in the file ''tagset.txt''. The morphological annotation includes lemmas. The syntactic annotation is mostly identical to that of the Corpus Gesproken Nederlands (CGN, Spoken Dutch Corpus), as described in the file ''syn_prot.pdf'' (Dutch only). An attempt to describe a number of differences between the CGN and Alpino annotation practice is given in the file ''diff.pdf'' (heavily out of date, although the number of differences has since been reduced). The main conversion issues were head selection, multi-word units and discourse units.
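
The CoNLL 2006 data are in the standard 10-column CoNLL-X format (ID, FORM, LEMMA, CPOSTAG, POSTAG, FEATS, HEAD, DEPREL, PHEAD, PDEPREL), one token per line with a blank line after each sentence. A minimal Python sketch for reading it (the file name is only a placeholder):

<code python>
# Minimal sketch: iterate over sentences in a CoNLL 2006 (CoNLL-X) file and
# print FORM, LEMMA, CPOSTAG, POSTAG and FEATS of the first sentence.
# "alpino_train.conll" is a placeholder file name.

def read_sentences(path):
    """Yield sentences as lists of 10-column token rows."""
    sentence = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if line:
                sentence.append(line.split("\t"))
            elif sentence:                 # blank line = sentence boundary
                yield sentence
                sentence = []
    if sentence:
        yield sentence

if __name__ == "__main__":
    for sentence in read_sentences("alpino_train.conll"):
        for tok_id, form, lemma, cpos, pos, feats, head, deprel, *rest in sentence:
            print(form, lemma, cpos, pos, feats, sep="\t")
        break   # first sentence only
</code>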
  
Multi-word expressions have been concatenated into one token, using the underscore as the joining character (e.g. "Economische_en_Monetaire_Unie"). They receive the special part-of-speech tag ''MWU''; their subparts of speech and features may describe the individual parts of the unit. For example, "aan_het" has CPOS ''MWU'', (sub)POS ''Prep_Art'' and features ''voor_bep|onzijd|neut''.
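
A small sketch of how such a token could be split back into its parts; the splitting heuristic is an assumption, not part of the distribution:

<code python>
# Sketch (a heuristic, not part of the official data): split an MWU token
# back into its parts. FORM parts are joined by "_"; for MWU tokens the
# (sub)POS tag often lists one subtag per part, also joined by "_".

def split_mwu(form, cpos, pos):
    """Return a list of (part_form, part_tag) pairs."""
    if cpos != "MWU":
        return [(form, pos)]
    parts = form.split("_")
    subtags = pos.split("_")
    if len(subtags) != len(parts):
        # no 1:1 correspondence between parts and subtags;
        # fall back to repeating the whole tag for every part
        subtags = [pos] * len(parts)
    return list(zip(parts, subtags))

print(split_mwu("aan_het", "MWU", "Prep_Art"))
# [('aan', 'Prep'), ('het', 'Art')]
</code>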
  
==== Sample ====

==== Parsing ====
  
Nonprojectivities in Alpino are quite frequent: 10,858 of the 200,654 tokens in the CoNLL 2006 version are attached nonprojectively (5.41%).
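
A token is attached nonprojectively if some token lying between it and its head is not a descendant of that head. A sketch of how such tokens could be counted in the CoNLL-X data (the file name is again a placeholder):

<code python>
# Sketch: count nonprojectively attached tokens in a CoNLL-X file.
# A token d with head h is nonprojective if some token strictly between
# d and h is not a descendant of h. "alpino_train.conll" is a placeholder.

def read_sentences(path):
    """Same simple reader as in the sketch above."""
    sentence = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if line:
                sentence.append(line.split("\t"))
            elif sentence:
                yield sentence
                sentence = []
    if sentence:
        yield sentence

def is_nonprojective(dep, heads):
    """heads maps token id -> head id; 0 is the artificial root."""
    h = heads[dep]
    for t in range(min(dep, h) + 1, max(dep, h)):
        a = t
        while a != 0 and a != h:    # climb from t towards the root
            a = heads[a]
        if a != h:                  # t is not in the subtree of h
            return True
    return False

total = nonproj = 0
for sentence in read_sentences("alpino_train.conll"):
    heads = {int(row[0]): int(row[6]) for row in sentence}
    for dep in heads:
        total += 1
        if is_nonprojective(dep, heads):
            nonproj += 1

print("%d of %d tokens attached nonprojectively (%.2f%%)"
      % (nonproj, total, 100.0 * nonproj / total))
</code>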
  
The results of the CoNLL 2006 shared task are [[http://ilk.uvt.nl/conll/results.html|available online]]. They have been published in [[http://aclweb.org/anthology-new/W/W06/W06-2920.pdf|(Buchholz and Marsi, 2006)]]. The evaluation procedure was non-standard because it excluded punctuation tokens. These are the best results for Dutch:
  
^ Parser (Authors) ^ LAS ^ UAS ^
| MST (McDonald et al.) | 79.19 | 83.57 |
| Riedel et al. | 78.59 | 82.91 |
| Basis (John O'Neil) | 77.51 | 81.73 |
| Malt (Nivre et al.) | 78.59 | 81.35 |
  
