==== Size ====
  
HyDT-Hindi contains dependencies on two levels: between chunks and inside chunks. The ICON 2009 CoNLL-formatted version contained only dependencies between chunks, thus the node/tree ratio was much lower than in other treebanks. The ICON 2009 version came with a data split into three parts: training, development and test:
  
^ Part ^ Sentences ^ Chunks ^ Ratio ^
| Training |    1501 |  13779 |  9.18 |
| Development |  150 |   1250 |  8.33 |
| Test |         150 |   1156 |  7.71 |
| TOTAL |       1801 |  16185 |  8.99 |
  
The ICON 2010 version came with a data split into three parts: training, development and test. The intra-chunk dependencies have been added:
  
^ Part ^ Sentences ^ Chunks ^ Chunks/Sent ^ Words ^ Words/Sent ^
| Training |    2972 | | |  64452 |  21.69 |
| Development |  543 | | |  12616 |  23.23 |
| Test |         321 | | |   6588 |  20.52 |
| TOTAL |       3836 | | |  83656 |  21.81 |
  
I have counted the sentences and tokens (words) in the ''.conll'' files; there are slight differences from the statistics presented in (Husain et al., 2010).
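
The counts are easy to reproduce. The following is a minimal sketch (the file name is hypothetical) assuming the usual CoNLL conventions: one token per non-blank line, a blank line after each sentence:

<code python>
# Count sentences and tokens in a CoNLL-format file and print the
# tokens-per-sentence ratio. Sentences are separated by blank lines.
def conll_stats(path):
    sentences, tokens, in_sentence = 0, 0, False
    with open(path, encoding='utf-8') as f:
        for line in f:
            if line.strip():
                tokens += 1
                in_sentence = True
            elif in_sentence:
                sentences += 1
                in_sentence = False
    if in_sentence:  # no trailing blank line after the last sentence
        sentences += 1
    return sentences, tokens

s, t = conll_stats('hindi_train.conll')  # hypothetical file name
print('%d sentences, %d tokens, ratio %.2f' % (s, t, t / s))
</code>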
  
==== Inside ====
  
The text uses the [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/map.pdf|WX encoding]] of Indian letters. If we know what the original script is (Devanagari in this case), we can map the WX encoding to the original characters in UTF-8. WX uses English letters, so any embedded English (or other strings in Latin letters) will probably get lost during the conversion. Note that there are (not infrequently) broken characters (''\x{FFFD} REPLACEMENT CHARACTER'') in the WX encoding, and the correct characters cannot be recovered automatically.
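
For illustration, here is a minimal sketch of the WX-to-Devanagari conversion. The character table is deliberately partial (the full mapping is in the map.pdf linked above) and details such as nukta, anusvara and visarga are ignored; the point is the treatment of the inherent vowel //a//:

<code python>
# Partial WX -> Devanagari conversion sketch. A consonant followed by
# 'a' keeps its inherent vowel (no sign), a consonant followed by
# another vowel gets the dependent vowel sign (matra), and a consonant
# followed by a consonant gets a virama in between.
CONSONANTS = {'k': 'क', 'g': 'ग', 'c': 'च', 'j': 'ज', 'w': 'त', 'x': 'द',
              'n': 'न', 'p': 'प', 'b': 'ब', 'B': 'भ', 'm': 'म', 'y': 'य',
              'r': 'र', 'l': 'ल', 'v': 'व', 's': 'स', 'h': 'ह'}
VOWELS = {'a': 'अ', 'A': 'आ', 'i': 'इ', 'I': 'ई', 'u': 'उ', 'U': 'ऊ',
          'e': 'ए', 'o': 'ओ'}
MATRAS = {'A': 'ा', 'i': 'ि', 'I': 'ी', 'u': 'ु', 'U': 'ू',
          'e': 'े', 'o': 'ो'}
VIRAMA = '्'

def wx2deva(word):
    out, i = [], 0
    while i < len(word):
        ch = word[i]
        if ch in CONSONANTS:
            out.append(CONSONANTS[ch])
            nxt = word[i + 1] if i + 1 < len(word) else None
            if nxt == 'a':                 # inherent vowel, no matra
                i += 2
            elif nxt in MATRAS:            # dependent vowel sign
                out.append(MATRAS[nxt])
                i += 2
            else:                          # cluster or word end
                out.append(VIRAMA)
                i += 1
        elif ch in VOWELS:                 # independent vowel letter
            out.append(VOWELS[ch])
            i += 1
        else:                              # unknown, e.g. U+FFFD
            out.append(ch)
            i += 1
    return ''.join(out)

print(wx2deva('BArawa'))  # भारत
</code>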
  
Occasionally there are ''NULL'' nodes that do not correspond to any surface chunk or token. They represent elided participants.
The [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/dep-tagset.pdf|syntactic tags]] (dependency relation labels) are //karaka// relations, i.e. deep syntactic roles according to the Pāṇinian grammar. There are separate versions of the treebank with [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/mapping_fine-to-coarse.pdf|fine-grained and coarse-grained]] syntactic tags.
  
According to [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/toolscontest10-workshoppaper-final.pdf|(Husain et al., 2010)]], in the ICON 2010 version the chunk tags, POS tags, lemmas, morphosyntactic features and inter-chunk dependencies (topology + tags) were annotated manually. The rest (intra-chunk dependencies, the headword of each chunk) was marked automatically. The tool for intra-chunk dependency parsing achieves about 96% accuracy.
  
Note: There have been cycles in the Hindi part of HyDT.
  
==== Sample ====
==== Parsing ====
  
Nonprojectivities in HyDT-Hindi are not frequent. Only 862 of the 77068 nodes (words) in the training+development data of the ICON 2010 version are attached nonprojectively (1.12%).
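
A node is attached nonprojectively iff some node between it and its head is not dominated by the head. The count can be reproduced with a sketch like the following (the toy tree at the end is hypothetical; the dominance loop assumes the tree is acyclic, cf. the note on cycles above):

<code python>
# Count nonprojectively attached nodes in one sentence. heads[i] is the
# 1-based head index of token i+1; 0 marks the root.
def dominates(ancestor, node, heads):
    while node != 0:
        node = heads[node - 1]
        if node == ancestor:
            return True
    return False

def count_nonprojective(heads):
    nonproj = 0
    for d, h in enumerate(heads, start=1):
        if h == 0:
            continue
        lo, hi = min(h, d), max(h, d)
        if any(not dominates(h, m, heads) for m in range(lo + 1, hi)):
            nonproj += 1
    return nonproj

# Toy tree: token 3 hangs on token 5, but token 4 in between is not
# dominated by token 5, so token 3 is attached nonprojectively.
print(count_nonprojective([2, 0, 5, 2, 2]))  # prints 1
</code>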
  
The results of the ICON 2009 NLP tools contest have been published in [[http://ltrc.iiit.ac.in/nlptools2009/CR/intro-husain.pdf|(Husain, 2009)]]. There were two evaluation rounds, the first with the coarse-grained syntactic tags, the second with the fine-grained syntactic tags. To reward language independence, only systems that parsed all three languages were officially ranked. The following table presents the Hindi/coarse-grained results of the four officially ranked systems.
  
^ Parser (Authors) ^ LAS ^ UAS ^
| Hyderabad (Ambati et al.) | 79.33 | 90.22 |
| Malt (Nivre) | 78.20 | 89.36 |
| Malt+MST (Zeman) | 73.88 | 88.49 |
| Mannem | 76.90 | 88.06 |
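
LAS and UAS in these tables are the usual attachment scores: the percentage of tokens with the correct head (UAS), and with the correct head and dependency label (LAS). A toy sketch with hypothetical gold and predicted (head, label) pairs:

<code python>
# Attachment scores: UAS counts correct heads, LAS counts correct
# head+label pairs; both are percentages over all tokens.
def las_uas(gold, pred):
    n = len(gold)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred))
    las = sum(g == p for g, p in zip(gold, pred))
    return 100.0 * las / n, 100.0 * uas / n

gold = [(2, 'k1'), (0, 'main'), (2, 'k2')]   # gold (head, label) per token
pred = [(2, 'k1'), (0, 'main'), (2, 'k7')]   # hypothetical parser output
print('LAS %.2f  UAS %.2f' % las_uas(gold, pred))  # LAS 66.67  UAS 100.00
</code>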
  
The results of the ICON 2010 NLP tools contest have been published in [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/toolscontest10-workshoppaper-final.pdf|(Husain et al., 2010)]], page 6. These are the best results for Hindi with fine-grained syntactic tags:
  
^ Parser (Authors) ^ LAS ^ UAS ^
| Attardi et al. | 87.49 | 94.78 |
| Kosaraju et al. | 88.63 | 94.54 |
| Kolachina et al. | 86.22 | 93.25 |
  
