
Treebanks for Various Languages

Arabic (ar)

Prague Arabic Dependency Treebank (PADT)

Versions

The CoNLL 2007 version reportedly improves on CoNLL 2006 in the quality of morphological annotation. Both CoNLL versions lack important parts of the original PADT annotation (see below).

Obtaining and License

The original PADT 1.0 is distributed by the LDC under catalogue number LDC2004T23. It is free for 2004 LDC members; the price for non-members is unknown (contact the LDC). The license in short:

The CoNLL 2006 and 2007 versions are obtainable upon request under similar license terms. Their publication at the LDC together with the other CoNLL treebanks is in preparation.

PADT was created by members of the Institute of Formal and Applied Linguistics (Ústav formální a aplikované lingvistiky, ÚFAL), Faculty of Mathematics and Physics (Matematicko-fyzikální fakulta), Charles University in Prague (Univerzita Karlova v Praze), Malostranské náměstí 25, Praha, CZ-11800, Czechia. The CoNLL conversion of the treebank was prepared by Otakar Smrž, one of the key authors of the original PADT.

Domain

Newswire text from press agencies (Agence France Presse, Ummah, Al Hayat, An Nahar, Xinhua 2001-2003).

Size

According to their website, the original PADT 1.0 contains 113,500 tokens annotated analytically. The CoNLL 2006 version contains 59,752 tokens in 1,606 sentences, yielding 37.21 tokens per sentence on average (CoNLL 2006 data split: 54,379 tokens / 1,460 sentences training, 5,373 tokens / 146 sentences test). The CoNLL 2007 version contains 116,793 tokens in 3,043 sentences, yielding 38.38 tokens per sentence on average (CoNLL 2007 data split: 111,669 tokens / 2,912 sentences training, 5,124 tokens / 131 sentences test).

As noted in (Nivre et al., 2007), “the parsing units in this treebank are in many cases larger than conventional sentences, which partly explains the high average number of tokens per sentence.”

References

Inside

The original PADT 1.0 is distributed in the FS format. The CoNLL versions are distributed in the CoNLL-X format. The original PADT contains more information than the CoNLL versions: both manual morphological annotation (tags and lemmas) and annotation produced by a tagger (only the manual annotation is in the CoNLL data), glosses, etc. However, the most important piece of information lost in the conversion to CoNLL is the FS attribute called parallel. It distinguishes conjuncts from shared modifiers of coordination, so the syntactic structure is incomplete without it.

Word forms and lemmas are vocalized, i.e. they contain diacritics for short vowels, consonant gemination and a few other phenomena. The CoNLL 2006 version includes the Buckwalter transliteration of the Arabic script (in the same column as the Arabic, attached to the Arabic form/lemma with an underscore character).
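A minimal sketch (in Python) of how such a combined column might be split back into the Arabic string and its transliteration; the underscore convention is as described above, everything else is an assumption for illustration:

def split_form(column):
    # Split a PADT CoNLL 2006 FORM/LEMMA column into the vocalized
    # Arabic string and its Buckwalter transliteration (split at the
    # first underscore).
    arabic, buckwalter = column.split('_', 1)
    return arabic, buckwalter

# Using the first token of the CoNLL 2006 sample below:
# split_form('غِيابُ_giyAbu') -> ('غِيابُ', 'giyAbu')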

Note that tokenization of Arabic typically involves splitting the original words into several tokens, not just separating punctuation from words. Example: وبالفالوجة = wabiālfālūjah = wa/CONJ + bi/PREP + AlfAlwjp/NOUN_PROP = and in al-Falujah. In PADT, such conjunctions and prepositions are separate tokens and nodes.

The original PADT 1.0 uses 10-character positional morphological tags whose documentation is hard to find. The CoNLL 2006 version converts the tags to the three CoNLL columns CPOS, POS and FEAT, with most of the information encoded as pipe-separated attribute-value assignments in FEAT. There should be a 1-1 mapping between the PADT positional tags and the CoNLL 2006 annotation. The CoNLL 2007 version uses a tag conversion different from that of CoNLL 2006. Both CoNLL distributions contain a README file with a brief description of the parts of speech and features. Use DZ Interset to inspect the two CoNLL tagsets. Also see the ElixirFM online interface for a later development of the morphological analyzer created along with PADT.
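The FEAT column itself is easy to decode. A minimal sketch, assuming the pipe-separated attribute=value convention visible in the samples below ('_' denotes an empty feature set in CoNLL-X):

def parse_feats(feats):
    # Decode a CoNLL-X FEAT value such as 'gen=F|num=S|def=D'
    # into a dictionary; '_' stands for no features.
    if feats == '_':
        return {}
    return dict(pair.split('=', 1) for pair in feats.split('|'))

# parse_feats('gen=F|num=S|case=2|def=D')
# -> {'gen': 'F', 'num': 'S', 'case': '2', 'def': 'D'}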

The guidelines for syntactic annotation are documented in the PADT annotation manual (only the peculiarities of Arabic are documented; otherwise the reader is referred to the annotation manual of the Czech treebank). A list and brief description of the syntactic tags (dependency relation labels) can be found in (Smrž, Šnaidauf and Zemánek, 2002).

Sample

The first two sentences of the CoNLL 2006 training data:

1 غِيابُ_giyAbu غِياب_giyAb N N case=1|def=R 0 ExD _ _
2 فُؤاد_fu&Ad فُؤاد_fu&Ad Z Z _ 3 Atr _ _
3 كَنْعان_kanoEAn كَنْعان_kanoEAn Z Z _ 1 Atr _ _

1 فُؤاد_fu&Ad فُؤاد_fu&Ad Z Z _ 2 Atr _ _
2 كَنْعان_kanoEAn كَنْعان_kanoEAn Z Z _ 9 Sb _ _
3 ،_, ،_, G G _ 2 AuxG _ _
4 رائِد_rA}id رائِد_rA}id N N _ 2 Atr _ _
5 القِصَّة_AlqiS~ap قِصَّة_qiS~ap N N gen=F|num=S|def=D 4 Atr _ _
6 القَصِيرَةِ_AlqaSiyrapi قَصِير_qaSiyr A A gen=F|num=S|case=2|def=D 5 Atr _ _
7 فِي_fiy فِي_fiy P P _ 4 AuxP _ _
8 لُبْنانِ_lubonAni لُبْنان_lubonAn Z Z case=2|def=R 7 Atr _ _
9 رَحَلَ_raHala رَحَل-َ_raHal-a V VP pers=3|gen=M|num=S 0 Pred _ _
10 مَساءَ_masA'a مَساء_masA' D D _ 9 Adv _ _
11 أَمْسِ_>amosi أَمْسِ_>amosi D D _ 10 Atr _ _
12 عَن_Ean عَن_Ean P P _ 9 AuxP _ _
13 81_81 81_81 Q Q _ 12 Adv _ _
14 عاماً_EAmAF عام_EAm N N gen=M|num=S|case=4|def=I 13 Atr _ _
15 ._. ._. G G _ 0 AuxK _ _

The first sentence of the CoNLL 2006 test data:

1 اِتِّفاقٌ_Ait~ifAqN اِتِّفاق_Ait~ifAq N N case=1|def=I 0 ExD _ _
2 بَيْنَ_bayona بَيْنَ_bayona P P _ 1 AuxP _ _
3 لُبْنانِ_lubonAni لُبْنان_lubonAn Z Z case=2|def=R 4 Atr _ _
4 وَ_wa وَ_wa C C _ 2 Coord _ _
5 سُورِيَّةٍ_suwriy~apK سُورِيا_suwriyA Z Z gen=F|num=S|case=2|def=I 4 Atr _ _
6 عَلَى_EalaY عَلَى_EalaY P P _ 1 AuxP _ _
7 رَفْعِ_rafoEi رَفْع_rafoE N N case=2|def=R 6 Atr _ _
8 مُسْتَوَى_musotawaY مُسْتَوَى_musotawaY N N _ 7 Atr _ _
9 التَبادُلِ_AltabAduli تَبادُل_tabAdul N N case=2|def=D 8 Atr _ _
10 التِجارِيِّ_AltijAriy~i تِجارِيّ_tijAriy~ A A case=2|def=D 9 Atr _ _
11 إِلَى_<ilaY إِلَى_<ilaY P P _ 7 AuxP _ _
12 500_500 500_500 Q Q _ 11 Atr _ _
13 مِلْيُونِ_miloyuwni مِلْيُون_miloyuwn N N case=2|def=R 12 Atr _ _
14 دُولارٍ_duwlArK دُولار_duwlAr N N case=2|def=I 13 Atr _ _

The first sentence of the CoNLL 2007 training data:

1 تَعْدادُ تَعْداد_1 N N- Case=1|Defin=R 7 Sb _ _
2 سُكّانِ ساكِن_1 N N- Case=2|Defin=R 1 Atr _ _
3 22 [DEFAULT] Q Q- _ 2 Atr _ _
4 دَوْلَةً دَوْلَة_1 N N- Gender=F|Number=S|Case=4|Defin=I 3 Atr _ _
5 عَرَبِيَّةً عَرَبِيّ_1 A A- Gender=F|Number=S|Case=4|Defin=I 4 Atr _ _
6 سَ سَ_FUT F F- _ 7 AuxM _ _
7 يَرْتَفِعُ اِرْتَفَع_1 V VI Mood=I|Voice=A|Person=3|Gender=M|Number=S 0 Pred _ _
8 إِلَى إِلَى_1 P P- _ 7 AuxP _ _
9 654 [DEFAULT] Q Q- _ 8 Adv _ _
10 مِلْيُونَ مِلْيُون_1 N N- Case=4|Defin=R 9 Atr _ _
11 نَسَمَةٍ نَسَمَة_1 N N- Gender=F|Number=S|Case=2|Defin=I 10 Atr _ _
12 فِي فِي_1 P P- _ 7 AuxP _ _
13 مُنْتَصَفِ مُنْتَصَف_1 N N- Case=2|Defin=R 12 Adv _ _
14 القَرْنِ قَرْن_1 N N- Case=2|Defin=D 13 Atr _ _

The first sentence of the CoNLL 2007 test data:

1 مُقاوَمَةُ مُقاوَمَة_1 N N- Gender=F|Number=S|Case=1|Defin=R 0 ExD _ _
2 زَواجِ زَواج_1 N N- Case=2|Defin=R 1 Atr _ _
3 الطُلّابِ طالِب_1 N N- Case=2|Defin=D 2 Atr _ _
4 العُرْفِيِّ عُرْفِيّ_1 A A- Case=2|Defin=D 2 Atr _ _
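All of the samples above follow the ten-column CoNLL-X layout (ID, FORM, LEMMA, CPOSTAG, POSTAG, FEATS, HEAD, DEPREL, PHEAD, PDEPREL), one token per line, with sentences separated by blank lines. A minimal reader sketch in Python; the tab separator follows the CoNLL-X specification, although the samples are displayed here with spaces:

def read_conllx(path):
    # Yield each sentence as a list of per-token dictionaries.
    columns = ['id', 'form', 'lemma', 'cpostag', 'postag',
               'feats', 'head', 'deprel', 'phead', 'pdeprel']
    sentence = []
    with open(path, encoding='utf-8') as f:
        for line in f:
            line = line.rstrip('\n')
            if not line:                  # blank line ends a sentence
                if sentence:
                    yield sentence
                sentence = []
            else:
                sentence.append(dict(zip(columns, line.split('\t'))))
    if sentence:                          # file may lack a final blank line
        yield sentence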

Parsing

Nonprojectivities in PADT are rare. Only 431 of the 116,793 tokens in the CoNLL 2007 version are attached nonprojectively (0.37%).
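For concreteness: an arc is nonprojective if some token lying between the dependent and its head is not dominated by the head. A minimal sketch of that check for one sentence, assuming 1-based token ids with head 0 for the artificial root, as in the samples above:

def nonprojective_tokens(heads):
    # heads[i] is the head of token i+1 (0 = artificial root).
    # Return the 1-based ids of nonprojectively attached tokens.
    def dominated_by(token, ancestor):
        while token != 0:
            if token == ancestor:
                return True
            token = heads[token - 1]
        return ancestor == 0
    bad = []
    for dep, head in enumerate(heads, start=1):
        lo, hi = min(dep, head), max(dep, head)
        if any(not dominated_by(t, head) for t in range(lo + 1, hi)):
            bad.append(dep)
    return bad

# The first CoNLL 2006 training sentence above is fully projective:
# nonprojective_tokens([0, 3, 1]) -> []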

The results of the CoNLL 2006 shared task are available online. They have been published in (Buchholz and Marsi, 2006). The evaluation procedure was non-standard in that it excluded punctuation tokens; a scoring sketch follows the table. These are the best results for Arabic:

Parser (Authors) LAS UAS
MST (McDonald et al.) 66.91 79.34
Basis (O'Neil) 66.71 78.54
Malt (Nivre et al.) 66.71 77.52
Edinburgh (Riedel et al.) 66.65 78.62
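For reference, LAS is the percentage of scored tokens that received both the correct head and the correct dependency label, and UAS the percentage with the correct head. A minimal sketch covering both evaluation variants; the is_punct predicate is a caller-supplied assumption, not part of the official scorer:

def attachment_scores(gold, pred, is_punct=None):
    # gold and pred are parallel lists of (head, deprel) pairs.
    # Pass an is_punct(i) predicate to skip punctuation tokens
    # (CoNLL 2006 style); pass None to score all tokens (CoNLL 2007).
    las = uas = total = 0
    for i, ((gh, gd), (ph, pd)) in enumerate(zip(gold, pred)):
        if is_punct is not None and is_punct(i):
            continue
        total += 1
        if gh == ph:
            uas += 1
            if gd == pd:
                las += 1
    return 100.0 * las / total, 100.0 * uas / total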

The results of the CoNLL 2007 shared task are available online. They have been published in (Nivre et al., 2007). The evaluation procedure was changed to include punctuation tokens. These are the best results for Arabic:

Parser (Authors) LAS UAS
Malt (Nilsson et al.) 76.52 85.81
Nakagawa 75.08 86.09
Malt (Hall et al.) 74.75 84.21
Sagae 74.71 84.04
Chen 74.65 83.49
Titov et al. 74.12 83.18

The two Malt parser results of 2007 (Single Malt and Blended) are described in (Hall et al., 2007); the details of the parser configuration are documented here.

Bulgarian (bg)

BulTreeBank (BTB)

Versions

The original BTB is based on HPSG (head-driven phrase structure grammar). The CoNLL version contains only the dependency information encoded in the HPSG BulTreeBank.

Obtaining and License

Only the CoNLL version seems to be distributed, but you may ask the creators about the HPSG version. For the dependency version, print the license, sign it, scan it, send it to Kiril Simov (kivs (at) bultreebank (dot) org) and wait for the data. The license in short:

BTB was created by members of the Linguistic Modelling Department (Секция Лингвистично моделиране), Bulgarian Academy of Sciences (Българска академия на науките), Ул. Акад. Г. Бончев, Бл. 25 А, 1113 София, Bulgaria.

Domain

Unknown (“A set of Bulgarian sentences marked-up with detailed syntactic information. These sentences are mainly extracted from authentic Bulgarian texts. They are chosen with regards two criteria. First, they cover the variety of syntactic structures of Bulgarian. Second, they show the statistical distribution of these phenomena in real texts.”) At least part of it is probably news (Novinar, Sega, Standart).

Size

The CoNLL 2006 version contains 196,151 tokens in 13,221 sentences, yielding 14.84 tokens per sentence on average (CoNLL 2006 data split: 190,217 tokens / 12,823 sentences training, 5,934 tokens / 398 sentences test).

References

Inside

The original morphosyntactic tags have been converted to fit into the three columns (CPOS, POS and FEAT) of the CoNLL format. There should be a 1-1 mapping between the BTB positional tags and the CoNLL 2006 annotation. Use DZ Interset to inspect the CoNLL tagset.

The morphological analysis does not include lemmas. The morphosyntactic tags have been assigned (probably) manually.

The guidelines for syntactic annotation are documented in a separate technical report. The CoNLL distribution contains the BulTreeBankReadMe.html file with a brief description of the syntactic tags (dependency relation labels).

Sample

The first three sentences of the CoNLL 2006 training data:

1 Глава _ N Nc _ 0 ROOT 0 ROOT
2 трета _ M Mo gen=f|num=s|def=i 1 mod 1 mod

1 НАРОДНО _ A An gen=n|num=s|def=i 2 mod 2 mod
2 СЪБРАНИЕ _ N Nc gen=n|num=s|def=i 0 ROOT 0 ROOT

1 Народното _ A An gen=n|num=s|def=d 2 mod 2 mod
2 събрание _ N Nc gen=n|num=s|def=i 3 subj 3 subj
3 осъществява _ V Vpi trans=t|mood=i|tense=r|pers=3|num=s 0 ROOT 0 ROOT
4 законодателната _ A Af gen=f|num=s|def=d 5 mod 5 mod
5 власт _ N Nc _ 3 obj 3 obj
6 и _ C Cp _ 3 conj 3 conj
7 упражнява _ V Vpi trans=t|mood=i|tense=r|pers=3|num=s 3 conjarg 3 conjarg
8 парламентарен _ A Am gen=m|num=s|def=i 9 mod 9 mod
9 контрол _ N Nc gen=m|num=s|def=i 7 obj 7 obj
10 . _ Punct Punct _ 3 punct 3 punct

The first three sentences of the CoNLL 2006 test data:

1 Единственото _ A An gen=n|num=s|def=d 2 mod 2 mod
2 решение _ N Nc gen=n|num=s|def=i 0 ROOT 0 ROOT

1 Ерик _ N Np gen=m|num=s|def=i 0 ROOT 0 ROOT
2 Франк _ N Np gen=m|num=s|def=i 1 mod 1 mod
3 Ръсел _ H Hm gen=m|num=s|def=i 2 mod 2 mod

1 Пълен _ A Am gen=m|num=s|def=i 2 mod 2 mod
2 мрак _ N Nc gen=m|num=s|def=i 0 ROOT 0 ROOT
3 и _ C Cp _ 2 conj 2 conj
4 пълна _ A Af gen=f|num=s|def=i 5 mod 5 mod
5 самота _ N Nc _ 2 conjarg 2 conjarg
6 . _ Punct Punct _ 2 punct 2 punct

Parsing

Nonprojectivities in BTB are rare. Only 747 of the 196,151 tokens in the CoNLL 2006 version are attached nonprojectively (0.38%).

The results of the CoNLL 2006 shared task are available online. They have been published in (Buchholz and Marsi, 2006). The evaluation procedure was non-standard because it excluded punctuation tokens. These are the best results for Bulgarian:

Parser (Authors) LAS UAS
MST (McDonald et al.) 87.57 92.04
Malt (Nivre et al.) 87.41 91.72
Nara (Yuchang Cheng) 86.34 91.30

Bengali (bn)

Hyderabad Dependency Treebank (HyDT-Bangla)

Versions

There has been no official release of the treebank yet. There have been two as-is sample releases for the purposes of the NLP tools contests on parsing Indian languages, held at the ICON 2009 and 2010 conferences.

Obtaining and License

There is no standard distribution channel for the treebank after the ICON 2010 evaluation period. Inquire at the LTRC (ltrc (at) iiit (dot) ac (dot) in) about the possibility of getting the data. The ICON 2010 license in short:

HyDT-Bangla is being created by members of the Language Technologies Research Centre, International Institute of Information Technology, Gachibowli, Hyderabad, 500032, India.

Domain

Unknown.

Size

HyDT-Bangla shows dependencies between chunks, not words. The node/tree ratio is thus much lower than in other treebanks. The ICON 2010 / CoNLL version contains 7,252 chunk nodes (not tokens – see below) in 1,129 trees, yielding 6.42 chunks per sentence on average (ICON 2010 data split: 6,440 chunks / 979 sentences training, 812 chunks / 150 sentences test).

According to the data description file supplied with the data, there are 980 (training only?) sentences with 10,305 words, which yields 10.52 words per sentence on average.

