===== Bengali (bn) =====
Hyderabad Dependency Treebank (HyDT-Bangla)
==== Versions ====
  * ICON 2009
    * [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/SSF.pdf|Shakti Standard Format]] (SSF; native)
    * CoNLL format
  * ICON 2010
    * Shakti Standard Format (SSF; native)
    * CoNLL format
There has been no official release of the treebank yet. There have been two as-is sample releases for the NLP tools contests on parsing Indian languages, attached to the [[http://ltrc.iiit.ac.in/nlptools2009/|ICON 2009]] and [[http://ltrc.iiit.ac.in/nlptools2010/|2010]] conferences.
==== Obtaining and License ====
There is no standard distribution channel for the treebank after the ICON 2010 evaluation period. Inquire at the LTRC (ltrc (at) iiit (dot) ac (dot) in) about the possibility of getting the data. The ICON 2010 license in short:
* non-commercial research usage
* no redistribution
HyDT-Bangla is being created by members of the [[http://ltrc.iiit.ac.in/|Language Technologies Research Centre]], International Institute of Information Technology, Gachibowli, Hyderabad, 500032, India.
==== References ====
  * Website
    * //no website dedicated to the treebank//
    * [[http://ltrc.iiit.ac.in/nlptools2009/papers.php|ICON 2009 NLP Tools Contest On-Line Proceedings]]
    * [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/toolscontest10-workshoppaper-final.pdf|ICON 2010 NLP Tools Contest On-Line Proceedings]]
  * Data
    * //no separate citation//
  * Principal publications
    * Samar Husain: //[[http://ltrc.iiit.ac.in/nlptools2009/CR/intro-husain.pdf|Dependency Parsers for Indian Languages]]//. In: Proceedings of ICON-2009 NLP Tools Contest, Hyderabad, India, 2009.
    * Samar Husain, Prashanth Mannem, Bharat Ambati, Phani Gadde: //The ICON-2010 tools contest on Indian language dependency parsing.// In: [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/toolscontest10-workshoppaper-final.pdf|Proceedings of ICON-2010 NLP Tools Contest]], pp. 1–8, Kharagpur, India, 2010.
  * Documentation
    * http://ltrc.iiit.ac.in/nlptools2010/documentation.php
==== Domain ====
Unknown.
==== Size ====
HyDT-Bangla shows dependencies between chunks, not words. The node/tree ratio is thus much lower than in other treebanks. The ICON 2009 version came with a data split into three parts: training, development and test:
^ Part ^ Sentences ^ Chunks ^ Ratio ^
| Training | 980 | 6449 | 6.58 |
| Development | 150 | 811 | 5.41 |
| Test | 150 | 961 | 6.41 |
| TOTAL | 1280 | 8221 | 6.42 |
The ICON 2010 version came with a data split into three parts: training, development and test:
^ Part ^ Sentences ^ Chunks ^ Ratio ^ Words ^ Ratio ^
| Training | 979 | 6440 | 6.58 | 10305 | 10.52 |
| Development | 150 | 812 | 5.41 | 1196 | 7.97 |
| Test | 150 | 961 | 6.41 | 1350 | 9.00 |
| TOTAL | 1279 | 8213 | 6.42 | 12851 | 10.04 |
I have counted the sentences and chunks. The number of words comes from (Husain et al., 2010). Note that the paper gives the number of training sentences as 980 (instead of 979), which is a mistake. The last training sentence has the id 980 but there is no sentence with id 418.
The training-development-test data split was apparently more or less identical in both years, apart from the minor discrepancies noted above (the number of training sentences and of development chunks).
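The sentence-id gap mentioned above can be verified with a short script. This is only a sketch: it assumes the SSF files mark sentence starts with headers of the form ''<Sentence id='...'>''; check the actual files before relying on it.

```python
import re

def missing_sentence_ids(text):
    """Return the sentence ids absent from the range spanned by the file."""
    ids = sorted(int(m) for m in re.findall(r"<Sentence id=['\"](\d+)['\"]>", text))
    present = set(ids)
    return [i for i in range(ids[0], ids[-1] + 1) if i not in present]

# Toy input: sentence 2 is missing from the sequence.
sample = "<Sentence id='1'>...</Sentence>\n<Sentence id='3'>...</Sentence>"
print(missing_sentence_ids(sample))  # [2]
```

Run over the ICON 2010 training file, this should report id 418 as the single missing sentence.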
==== Inside ====
The text uses the [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/map.pdf|WX encoding]] of Indic letters. If we know what the original script is (Bengali in this case), we can map the WX encoding to the original characters in UTF-8. Since WX uses English letters, any embedded English (or other strings in the Latin script) will probably be lost during the conversion.
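To illustrate the idea, here is a toy WX-to-Bengali converter. The mapping tables below cover only a handful of letters (the full table is in the linked map.pdf), and the inherent-vowel and consonant-cluster handling is simplified, so treat this as a sketch rather than a faithful implementation.

```python
# Tiny subset of the WX transliteration table (assumption: see map.pdf for the full one).
CONS = {"k": "ক", "K": "খ", "c": "চ", "n": "ন", "m": "ম", "w": "ত", "r": "র", "l": "ল", "s": "স"}
VOW_INDEP = {"a": "অ", "A": "আ", "i": "ই", "e": "এ"}          # word-initial vowel signs
VOW_MATRA = {"a": "", "A": "া", "i": "ি", "e": "ে"}           # 'a' is the inherent vowel

def wx_to_bengali(word):
    out, prev_cons = [], False
    for ch in word:
        if ch in CONS:
            if prev_cons:
                out.append("\u09cd")   # virama joins consonant clusters
            out.append(CONS[ch])
            prev_cons = True
        elif ch in VOW_MATRA:
            out.append(VOW_MATRA[ch] if prev_cons else VOW_INDEP[ch])
            prev_cons = False
        else:
            out.append(ch)             # pass unknown characters through
            prev_cons = False
    return "".join(out)

print(wx_to_bengali("nAma"))    # নাম
print(wx_to_bengali("waKana"))  # তখন
```

The pass-through branch is exactly where embedded Latin-script material would be silently mangled, which is the caveat noted above.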
The CoNLL format contains only the chunk heads. The native SSF format shows the other words in the chunk, too, but it does not capture intra-chunk dependency relations. This is an example of a multi-word chunk:
3 (( NP
3.1 ekatA QC
3.2 ledisa JJ
3.3 rumAla NN
))
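Such chunk blocks are straightforward to read programmatically. The following sketch assumes the simple flat layout shown above (a chunk header ''ID (( LABEL'', token lines ''ID.N form tag'', and a closing ''))''); real SSF files also carry feature structures, which this ignores.

```python
def parse_ssf(lines):
    """Collect (label, tokens) chunk records from simplified SSF lines."""
    chunks, current = [], None
    for line in lines:
        parts = line.split()
        if len(parts) >= 3 and parts[1] == "((":
            current = {"label": parts[2], "tokens": []}   # chunk header
        elif parts and parts[0] == "))":
            chunks.append(current)                        # chunk closed
            current = None
        elif current is not None and len(parts) >= 3:
            current["tokens"].append((parts[1], parts[2]))  # (form, POS)
    return chunks

sample = ["3\t((\tNP", "3.1\tekatA\tQC", "3.2\tledisa\tJJ", "3.3\trumAla\tNN", "\t))"]
for ch in parse_ssf(sample):
    print(ch["label"], [form for form, tag in ch["tokens"]])
# NP ['ekatA', 'ledisa', 'rumAla']
```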
In the CoNLL format, the CPOS column contains the [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/Chunk-Tag-List.pdf|chunk label]] (e.g. ''NP'' = //noun phrase//) and the POS column contains the [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/POS-Tag-List.pdf|part of speech]] of the chunk head.
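A sketch of reading one such line: it assumes the standard tab-separated CoNLL-X column order (ID, FORM, LEMMA, CPOSTAG, POSTAG, FEATS, HEAD, DEPREL, ...) and splits each pipe-separated FEATS pair on its first ''-'' only, since values such as ''lex-aPisa-biyArAraxera'' can themselves contain hyphens.

```python
def parse_conll_line(line):
    """Parse one tab-separated CoNLL-X line into a node dict."""
    cols = line.rstrip("\n").split("\t")
    feats = {}
    for kv in cols[5].split("|"):
        key, _, value = kv.partition("-")   # split on the FIRST hyphen only
        feats[key] = value
    return {
        "id": int(cols[0]), "form": cols[1], "lemma": cols[2],
        "cpos": cols[3],    "pos": cols[4],  "feats": feats,
        "head": int(cols[6]), "deprel": cols[7],
    }

line = ("2\tcA\tcA\tNP\tNN\t"
        "lex-cA|cat-n|gend-|num-sg|pers-|case-d|vib-0|tam-0|head-cA|name-NP2\t"
        "3\tk1\t_\t_")
node = parse_conll_line(line)
print(node["cpos"], node["pos"], node["feats"]["cat"], node["deprel"])
# NP NN n k1
```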
Occasionally there are ''NULL'' nodes that do not correspond to any surface chunk or token. They represent elided participants.
The [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/dep-tagset.pdf|syntactic tags]] (dependency relation labels) are //karaka// relations, i.e. deep syntactic roles according to the Pāṇinian grammar. There are separate versions of the treebank with [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/mapping_fine-to-coarse.pdf|fine-grained and coarse-grained]] syntactic tags.
According to [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/toolscontest10-workshoppaper-final.pdf|(Husain et al., 2010)]], in the ICON 2010 version, the chunk tags, POS tags and inter-chunk dependencies (topology + tags) were annotated manually. The rest (lemma, morphosyntactic features, headword of chunk) was marked automatically.
Note: There have been cycles in the Hindi part of HyDT but no such problem occurs in the Bengali part.
==== Sample ====
The first sentence of the ICON 2010 training data (with fine-grained syntactic tags) in the Shakti format:
1 (( NP
1.1 mudZira NN
1.2 Agei NST
))
2 (( NP
2.1 praWama QO
2.2 kApa NN
2.3 cA NN
))
3 (( VGF
3.1 ese VM
3.2 . SYM
))
And in the CoNLL format:
| 1 | Agei | Age | NP | NST | lex-Age|cat-adv|gend-|num-|pers-|case-|vib-|tam-|head-Agei|name-NP | 3 | k7t | _ | _ |
| 2 | cA | cA | NP | NN | lex-cA|cat-n|gend-|num-sg|pers-|case-d|vib-0|tam-0|head-cA|name-NP2 | 3 | k1 | _ | _ |
| 3 | ese | As | VGF | VM | lex-As|cat-v|gend-|num-|pers-5|case-|vib-A_yA+Ce|tam-A|head-ese|name-VGF | 0 | main | _ | _ |
And after conversion of the WX encoding to the Bengali script in UTF-8:
| 1 | আগেই | আগে | NP | NST | lex-Age|cat-adv|gend-|num-|pers-|case-|vib-|tam-|head-Agei|name-NP | 3 | k7t | _ | _ |
| 2 | চা | চা | NP | NN | lex-cA|cat-n|gend-|num-sg|pers-|case-d|vib-0|tam-0|head-cA|name-NP2 | 3 | k1 | _ | _ |
| 3 | এসে | আস্ | VGF | VM | lex-As|cat-v|gend-|num-|pers-5|case-|vib-A_yA+Ce|tam-A|head-ese|name-VGF | 0 | main | _ | _ |
The first sentence of the ICON 2010 development data (with fine-grained syntactic tags) in the Shakti format:
1 (( NP
1.1 parabarwIkAle NN
))
2 (( NP
2.1 aPisa-biyArAraxera NN
))
3 (( NP
3.1 nAma NN
))
4 (( NP
4.1 GoRaNA NN
))
5 (( VGNN
5.1 karAra VM
))
6 (( NP
6.1 samay NN
))
7 (( NP
7.1 animeRake NNP
))
8 (( VGF
8.1 sariye VM
8.2 . SYM
))
And in the CoNLL format:
| 1 | parabarwIkAle | parabarwIkAle | NP | NN | lex-parabarwIkAle|cat-adv|gend-|num-|pers-|case-|vib-|tam-|head-parabarwIkAle|name-NP | 8 | k7t | _ | _ |
| 2 | aPisa-biyArAraxera | aPisa-biyArAraxera | NP | NN | lex-aPisa-biyArAraxera|cat-unk|gend-|num-|pers-|case-|vib-|tam-|head-aPisa-biyArAraxera|name-NP2 | 3 | r6 | _ | _ |
| 3 | nAma | nAma | NP | NN | lex-nAma|cat-n|gend-|num-sg|pers-|case-d|vib-0|tam-0|head-nAma|name-NP3 | 5 | k2 | _ | _ |
| 4 | GoRaNA | GoRaNA | NP | NN | lex-GoRaNA|cat-unk|gend-|num-|pers-|case-|vib-|tam-|head-GoRaNA|name-NP4 | 5 | pof | _ | _ |
| 5 | karAra | kar | VGNN | VM | lex-kar|cat-n|gend-|num-|pers-any|case-|vib-|tam-|head-karAra|name-VGNN | 6 | r6 | _ | _ |
| 6 | samay | samay | NP | NN | lex-samay|cat-unk|gend-|num-|pers-|case-|vib-|tam-|head-samay|name-NP5 | 8 | k7t | _ | _ |
| 7 | animeRake | animeRake | NP | NNP | lex-animeRake|cat-unk|gend-|num-|pers-|case-|vib-|tam-|head-animeRake|name-NP6 | 8 | k2 | _ | _ |
| 8 | sariye | sariye | VGF | VM | lex-sariye|cat-unk|gend-|num-|pers-5|case-|vib-0_rAKA+ka_ha+la|tam-|head-sariye|name-VGF | 0 | main | _ | _ |
And after conversion of the WX encoding to the Bengali script in UTF-8:
| 1 | পরবর্তীকালে | পরবর্তীকালে | NP | NN | lex-parabarwIkAle|cat-adv|gend-|num-|pers-|case-|vib-|tam-|head-parabarwIkAle|name-NP | 8 | k7t | _ | _ |
| 2 | অফিস-বিযারারদের | অফিস-বিযারারদের | NP | NN | lex-aPisa-biyArAraxera|cat-unk|gend-|num-|pers-|case-|vib-|tam-|head-aPisa-biyArAraxera|name-NP2 | 3 | r6 | _ | _ |
| 3 | নাম | নাম | NP | NN | lex-nAma|cat-n|gend-|num-sg|pers-|case-d|vib-0|tam-0|head-nAma|name-NP3 | 5 | k2 | _ | _ |
| 4 | ঘোষণা | ঘোষণা | NP | NN | lex-GoRaNA|cat-unk|gend-|num-|pers-|case-|vib-|tam-|head-GoRaNA|name-NP4 | 5 | pof | _ | _ |
| 5 | করার | কর্ | VGNN | VM | lex-kar|cat-n|gend-|num-|pers-any|case-|vib-|tam-|head-karAra|name-VGNN | 6 | r6 | _ | _ |
| 6 | সময্ | সময্ | NP | NN | lex-samay|cat-unk|gend-|num-|pers-|case-|vib-|tam-|head-samay|name-NP5 | 8 | k7t | _ | _ |
| 7 | অনিমেষকে | অনিমেষকে | NP | NNP | lex-animeRake|cat-unk|gend-|num-|pers-|case-|vib-|tam-|head-animeRake|name-NP6 | 8 | k2 | _ | _ |
| 8 | সরিযে | সরিযে | VGF | VM | lex-sariye|cat-unk|gend-|num-|pers-5|case-|vib-0_rAKA+ka_ha+la|tam-|head-sariye|name-VGF | 0 | main | _ | _ |
The first sentence of the ICON 2010 test data (with fine-grained syntactic tags) in the Shakti format:
1 (( NP
1.1 mAXabIlawA NNP
))
2 (( NP
2.1 waKana PRP
))
3 (( NP
3.1 hAwera NN
))
4 (( NP
4.1 GadZi NN
))
5 (( VGNF
5.1 Kule VM
))
6 (( NP
6.1 tebile NN
))
7 (( VGF
7.1 rAKaCila VM
7.2 । SYM
))
And in the CoNLL format:
| 1 | mAXabIlawA | mAXabIlawA | NP | NNP | lex-mAXabIlawA|cat-n|gend-|num-sg|pers-|case-d|vib-0|tam-0|head-mAXabIlawA|name-NP | 7 | k1 | _ | _ |
| 2 | waKana | waKana | NP | PRP | lex-waKana|cat-pn|gend-|num-|pers-|case-d|vib-0|tam-0|head-waKana|name-NP2 | 7 | k7t | _ | _ |
| 3 | hAwera | hAwa | NP | NN | lex-hAwa|cat-n|gend-|num-sg|pers-|case-o|vib-era|tam-era|head-hAwera|name-NP3 | 4 | r6 | _ | _ |
| 4 | GadZi | GadZi | NP | NN | lex-GadZi|cat-unk|gend-|num-|pers-|case-|vib-|tam-|head-GadZi|name-NP4 | 5 | k2 | _ | _ |
| 5 | Kule | Kul | VGNF | VM | lex-Kul|cat-v|gend-|num-|pers-5|case-|vib-ne|tam-ne|head-Kule|name-VGNF | 7 | vmod | _ | _ |
| 6 | tebile | tebila | NP | NN | lex-tebila|cat-n|gend-|num-sg|pers-|case-d|vib-me|tam-me|head-tebile|name-NP5 | 7 | k7p | _ | _ |
| 7 | rAKaCila | rAK | VGF | VM | lex-rAK|cat-v|gend-|num-|pers-5|case-|vib-Cila|tam-Cila|head-rAKaCila|name-VGF | 0 | main | _ | _ |
And after conversion of the WX encoding to the Bengali script in UTF-8:
| 1 | মাধবীলতা | মাধবীলতা | NP | NNP | lex-mAXabIlawA|cat-n|gend-|num-sg|pers-|case-d|vib-0|tam-0|head-mAXabIlawA|name-NP | 7 | k1 | _ | _ |
| 2 | তখন | তখন | NP | PRP | lex-waKana|cat-pn|gend-|num-|pers-|case-d|vib-0|tam-0|head-waKana|name-NP2 | 7 | k7t | _ | _ |
| 3 | হাতের | হাত | NP | NN | lex-hAwa|cat-n|gend-|num-sg|pers-|case-o|vib-era|tam-era|head-hAwera|name-NP3 | 4 | r6 | _ | _ |
| 4 | ঘড়ি | ঘড়ি | NP | NN | lex-GadZi|cat-unk|gend-|num-|pers-|case-|vib-|tam-|head-GadZi|name-NP4 | 5 | k2 | _ | _ |
| 5 | খুলে | খুল্ | VGNF | VM | lex-Kul|cat-v|gend-|num-|pers-5|case-|vib-ne|tam-ne|head-Kule|name-VGNF | 7 | vmod | _ | _ |
| 6 | টেবিলে | টেবিল | NP | NN | lex-tebila|cat-n|gend-|num-sg|pers-|case-d|vib-me|tam-me|head-tebile|name-NP5 | 7 | k7p | _ | _ |
| 7 | রাখছিল | রাখ্ | VGF | VM | lex-rAK|cat-v|gend-|num-|pers-5|case-|vib-Cila|tam-Cila|head-rAKaCila|name-VGF | 0 | main | _ | _ |
==== Parsing ====
Nonprojectivity is rare in HyDT-Bangla: only 78 of the 7252 chunks in the training+development data of the ICON 2010 version are attached nonprojectively (1.08%).
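The check behind such a count can be sketched as follows, using the standard definition (not code from the contest): an attachment of node d to head h is nonprojective if some node strictly between them is not dominated by h.

```python
def nonprojective_nodes(heads):
    """heads: dict node id -> head id (0 = artificial root), ids 1..n."""
    def dominates(h, d):
        # Walk from d up to the root; True if we pass through h.
        while d != 0:
            if d == h:
                return True
            d = heads[d]
        return h == 0
    bad = []
    for d, h in heads.items():
        lo, hi = min(d, h), max(d, h)
        if any(not dominates(h, k) for k in range(lo + 1, hi)):
            bad.append(d)
    return bad

# Toy tree: node 2 attaches to 4 across node 3, which 4 does not dominate.
print(nonprojective_nodes({1: 3, 2: 4, 3: 0, 4: 3}))  # [2]
```

Summing the length of this list over all trees in the training and development files reproduces the 78/7252 figure above.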
The results of the ICON 2009 NLP tools contest have been published in [[http://ltrc.iiit.ac.in/nlptools2009/CR/intro-husain.pdf|(Husain, 2009)]]. There were two evaluation rounds, the first with the coarse-grained syntactic tags, the second with the fine-grained ones. To reward language independence, only systems that parsed all three languages were officially ranked. The following table presents the Bengali/coarse-grained results of the four officially ranked systems, plus the best system that parsed Bengali only (marked with *).
^ Parser (Authors) ^ LAS ^ UAS ^
| Kolkata (De et al.)* | 84.29 | 90.32 |
| Hyderabad (Ambati et al.) | 78.25 | 90.22 |
| Malt (Nivre) | 76.07 | 88.97 |
| Malt+MST (Zeman) | 71.49 | 86.89 |
| Mannem | 70.34 | 83.56 |
The results of the ICON 2010 NLP tools contest have been published in [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/toolscontest10-workshoppaper-final.pdf|(Husain et al., 2010)]], page 6. These are the best results for Bengali with fine-grained syntactic tags:
^ Parser (Authors) ^ LAS ^ UAS ^
| Attardi et al. | 70.66 | 87.41 |
| Kosaraju et al. | 70.55 | 86.16 |
| Kolachina et al. | 70.14 | 87.10 |
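The scores in these tables are the standard attachment metrics: UAS is the percentage of nodes with the correct head, and LAS additionally requires the correct dependency label. A minimal sketch over gold and predicted (head, deprel) pairs aligned by node index:

```python
def uas_las(gold, pred):
    """Return (UAS, LAS) in percent for aligned (head, deprel) sequences."""
    n = len(gold)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / n   # head only
    las = sum(g == p for g, p in zip(gold, pred)) / n          # head + label
    return 100 * uas, 100 * las

# Toy example: one label error (k1 vs k2), all heads correct.
gold = [(3, "k7t"), (3, "k1"), (0, "main")]
pred = [(3, "k7t"), (3, "k2"), (0, "main")]
uas, las = uas_las(gold, pred)
print(round(uas, 2), round(las, 2))  # 100.0 66.67
```

Since HyDT-Bangla is annotated at the chunk level, the contest scores are computed over chunks, not words.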