user:zeman:treebanks:hi [2011/12/06 22:35] zeman
==== Size ====

HyDT-Hindi contains dependencies on two levels: between chunks and inside chunks. The ICON 2009 CoNLL-formatted version contained only dependencies between chunks, so the node/tree ratio was much lower than in other treebanks. The data came split into three parts: training, development and test:

^ Part ^ Sentences ^ Chunks ^ Ratio ^
| Training | 1501 | 13779 | 9.18 |
| Development | 150 | 1250 | 8.33 |
| Test | 150 | 1156 | 7.71 |
| TOTAL | 1801 | 16185 | 8.99 |
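The node/tree ratios in the table are plain chunks-per-sentence arithmetic and can be rechecked directly (the numbers below are the table's own figures, not new data):

```python
# Recompute the chunks-per-sentence ratios from the ICON 2009 table above.
parts = {
    "Training":    (1501, 13779),
    "Development": (150,  1250),
    "Test":        (150,  1156),
    "TOTAL":       (1801, 16185),
}
ratios = {name: round(chunks / sentences, 2)
          for name, (sentences, chunks) in parts.items()}
# ratios matches the Ratio column of the table above.
```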

The ICON 2010 version came with the same three-way split into training, development and test data; intra-chunk dependencies have been added:

^ Part ^ Sentences ^ Chunks ^ Ratio ^ Words ^ Ratio ^
| Training | 2972 | | | 64452 | 21.69 |
| Development | 543 | | | 12616 | 23.23 |
| Test | 321 | | | 6588 | 20.52 |
| TOTAL | 3836 | | | 83656 | 21.81 |

I have counted the sentences and tokens (words) in the ''.conll'' files; there are slight differences from the statistics presented in (Husain et al., 2010).
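Counts like these can be reproduced with a few lines of code; a minimal sketch, assuming the usual CoNLL layout (one token per non-blank line, sentences separated by blank lines):

```python
def count_conll(lines):
    """Count sentences and tokens in CoNLL-formatted input lines."""
    sentences, tokens = 0, 0
    in_sentence = False
    for line in lines:
        if line.strip():          # non-blank line = one token
            tokens += 1
            in_sentence = True
        elif in_sentence:         # blank line closes the current sentence
            sentences += 1
            in_sentence = False
    if in_sentence:               # file may end without a trailing blank line
        sentences += 1
    return sentences, tokens
```

Running it over a tiny synthetic sample of two sentences (three token lines) returns ''(2, 3)''.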

==== Inside ====

The text uses the [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/map.pdf|WX encoding]] of Indian letters. If we know what the original script is (Devanagari in this case), we can map the WX encoding to the original characters in UTF-8. WX uses English letters, so any embedded English (or other strings in Latin script) will probably be lost during the conversion. Note that there are (not infrequent) broken characters (''\x{FFFD} REPLACEMENT CHARACTER'') in the WX-encoded text, and the correct characters cannot be recovered automatically.
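For illustration, here is a minimal WX-to-Devanagari sketch. It covers only a subset of the letter inventory (taken from the WX chart linked above, with Unicode Devanagari matras and the virama); it is not code from the treebank tools, and a full converter would need the complete table plus nukta and nasalization handling:

```python
# Partial WX -> Devanagari mapping (illustrative subset only).
VOWELS = {  # WX letter -> (independent vowel, dependent vowel sign / matra)
    'a': ('अ', ''),   'A': ('आ', 'ा'),
    'i': ('इ', 'ि'),  'I': ('ई', 'ी'),
    'u': ('उ', 'ु'),  'U': ('ऊ', 'ू'),
    'e': ('ए', 'े'),  'E': ('ऐ', 'ै'),
    'o': ('ओ', 'ो'),  'O': ('औ', 'ौ'),
}
CONSONANTS = {
    'k': 'क', 'K': 'ख', 'g': 'ग', 'G': 'घ',
    'c': 'च', 'C': 'छ', 'j': 'ज', 'J': 'झ',
    't': 'ट', 'T': 'ठ', 'd': 'ड', 'D': 'ढ', 'N': 'ण',
    'w': 'त', 'W': 'थ', 'x': 'द', 'X': 'ध', 'n': 'न',
    'p': 'प', 'P': 'फ', 'b': 'ब', 'B': 'भ', 'm': 'म',
    'y': 'य', 'r': 'र', 'l': 'ल', 'v': 'व',
    's': 'स', 'S': 'श', 'R': 'ष', 'h': 'ह',
}
VIRAMA = '्'  # suppresses the inherent vowel of a consonant

def wx_to_devanagari(word):
    out = []
    i = 0
    while i < len(word):
        ch = word[i]
        if ch in CONSONANTS:
            out.append(CONSONANTS[ch])
            nxt = word[i + 1] if i + 1 < len(word) else None
            if nxt in VOWELS:            # consonant + vowel: attach the matra
                out.append(VOWELS[nxt][1])
                i += 2
            else:                        # consonant cluster or word-final
                out.append(VIRAMA)
                i += 1
        elif ch in VOWELS:               # independent (e.g. word-initial) vowel
            out.append(VOWELS[ch][0])
            i += 1
        else:                            # unknown char (e.g. U+FFFD): keep as-is
            out.append(ch)
            i += 1
    return ''.join(out)
```

For example, ''wx_to_devanagari("rumAla")'' yields रुमाल; a replacement character in the input simply passes through, which is why the broken characters cannot be repaired this way.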
Occasionally there are ''NULL'' nodes that do not correspond to any surface chunk or token. They represent elided participants.
The [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/dep-tagset.pdf|syntactic tags]] (dependency relation labels) are //karaka// relations, i.e. deep syntactic roles according to the Pāṇinian grammar. There are separate versions of the treebank with [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/mapping_fine-to-coarse.pdf|fine-grained and coarse-grained]] syntactic tags.

According to [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/toolscontest10-workshoppaper-final.pdf|(Husain et al., 2010)]], in the ICON 2010 version the chunk tags, POS tags, lemmas, morphosyntactic features and inter-chunk dependencies (topology + tags) were annotated manually. The rest (intra-chunk dependencies, chunk headwords) was marked automatically. The tool for intra-chunk dependency parsing achieves about 96% accuracy.
Note: There have been cycles in the Hindi part of HyDT.
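Such cycles can be found by following each node's chain of heads toward the artificial root; a minimal sketch, assuming 1-based head indices with 0 for the root (as in the CoNLL HEAD column):

```python
def find_cyclic_nodes(heads):
    """Return the set of nodes that lie on a cycle.

    heads[i-1] is the head of node i; head 0 is the artificial root.
    """
    cyclic = set()
    for start in range(1, len(heads) + 1):
        seen = set()
        node = start
        while node != 0:
            if node in seen:           # revisited a node: the chain never
                cyclic.add(start)      # reaches the root, so 'start' is cyclic
                break
            seen.add(node)
            node = heads[node - 1]     # climb one step toward the root
    return cyclic
```

A well-formed tree such as ''[0, 1, 2]'' yields an empty set, while ''[2, 3, 1]'' (each node heading the next in a loop) flags all three nodes.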

==== Sample ====