Table of Contents

Telugu (te)

Hyderabad Dependency Treebank (HyDT-Telugu)

Versions

There has been no official release of the treebank yet. Two as-is samples were released for the NLP tools contests on parsing Indian languages attached to the ICON 2009 and ICON 2010 conferences.

Obtaining and License

Since the end of the ICON 2010 evaluation period, there has been no standard distribution channel for the treebank. Inquire at the LTRC (ltrc (at) iiit (dot) ac (dot) in) about the possibility of obtaining the data. The ICON 2010 license in short:

HyDT-Telugu is being created by members of the Language Technologies Research Centre, International Institute of Information Technology, Gachibowli, Hyderabad, 500032, India.

References

Domain

Unknown.

Size

HyDT-Telugu shows dependencies between chunks, not words. The node/tree ratio is thus much lower than in other treebanks. The ICON 2009 version came with a data split into three parts: training, development and test:

Part          Sentences   Chunks   Ratio
Training           1456     5494    3.77
Development         150      675    4.50
Test                150      583    3.89
TOTAL              1756     6752    3.85

As for ICON 2010, the data description in (Husain et al., 2010) does not match the data that we downloaded during the contest. The paper reports numbers of words, while we can only count nodes, i.e. chunks; but even the numbers of training sentences, which should be directly comparable, disagree. Note also that the paper gives the same word counts as (Husain et al., 2009) gave for ICON 2009. In any case, the training data shrank during the year (a clean-up, perhaps?). The following table therefore gives both the published and the actually counted numbers of sentences, together with the published numbers of words and the counted numbers of chunks (nodes).

Part          Sentences   Chunks   Ratio   Published sent.   Published words   Ratio
Training           1300     5125    3.94              1400              7602    5.43
Development         150      597    3.98               150               839    5.59
Test                150      599    3.99               150               836    5.57
TOTAL              1600     6321    3.95              1700              9277    5.46

Inside

The text uses the WX encoding of Indic letters. Since we know what the original script is (Telugu in this case), we can map the WX encoding back to the original characters in UTF-8. WX uses English letters, so if the text contained embedded English (or any other string in Latin script), it will probably be lost during the conversion.
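The WX-to-UTF-8 mapping can be sketched as follows. This is a minimal illustration covering only the letters that occur in the samples on this page; the mapping tables and the function are ours, not part of any official tool, and a real converter would implement the full WX standard (aspirates, vocalic r, etc.).

```python
# Hedged sketch of a WX -> Telugu (UTF-8) converter, restricted to the
# letters seen in the samples on this page.
CONS = {'k': 'క', 'g': 'గ', 'c': 'చ', 'j': 'జ', 't': 'ట', 'd': 'డ',
        'w': 'త', 'x': 'ద', 'X': 'ధ', 'n': 'న', 'p': 'ప', 'b': 'బ',
        'B': 'భ', 'm': 'మ', 'y': 'య', 'r': 'ర', 'l': 'ల', 'v': 'వ',
        'S': 'శ', 'R': 'ష', 's': 'స', 'h': 'హ'}
# 'V' marks the short e/o that Telugu distinguishes from long e/o.
IND = {'a': 'అ', 'A': 'ఆ', 'i': 'ఇ', 'I': 'ఈ', 'u': 'ఉ', 'U': 'ఊ',
       'eV': 'ఎ', 'e': 'ఏ', 'E': 'ఐ', 'oV': 'ఒ', 'o': 'ఓ', 'O': 'ఔ'}
SIGN = {'a': '', 'A': 'ా', 'i': 'ి', 'I': 'ీ', 'u': 'ు', 'U': 'ూ',
        'eV': 'ె', 'e': 'ే', 'E': 'ై', 'oV': 'ొ', 'o': 'ో', 'O': 'ౌ'}
VIRAMA = '్'

def next_vowel(s, i):
    """Return the WX vowel starting at position i, longest match first."""
    for v in ('eV', 'oV'):
        if s.startswith(v, i):
            return v
    if i < len(s) and s[i] in 'aAiIuUeEoO':
        return s[i]
    return None

def wx2telugu(s):
    out, i = [], 0
    while i < len(s):
        ch = s[i]
        if ch in CONS:
            out.append(CONS[ch])
            v = next_vowel(s, i + 1)
            if v is not None:
                out.append(SIGN[v])      # consonant + dependent vowel sign
                i += 1 + len(v)
            else:
                out.append(VIRAMA)       # bare consonant: add virama
                i += 1
        else:
            v = next_vowel(s, i)
            if v is not None:
                out.append(IND[v])       # word-initial independent vowel
                i += len(v)
            elif ch == 'M':
                out.append('ం')          # anusvara
                i += 1
            else:
                out.append(ch)           # pass through anything unmapped
                i += 1
    return ''.join(out)

print(wx2telugu('saMgawi'), wx2telugu('weVlusA'))  # → సంగతి తెలుసా
```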

The CoNLL format contains only the chunk heads. The native SSF format shows the other words in the chunk, too, but it does not capture intra-chunk dependency relations. This is an example of a multi-word chunk:

3       ((      NP      <fs af='AdavAlYlu,n,,sg,,,0,0_e' head='AdavAlYle' pbank='ARG3' name='NP3'>
3.1     932     QC      <fs af='932,num,,,,,,'>
3.2     maMxi   CL      <fs af='maMxi,n,,pl,,d,0,0'>
3.3     AdavAlYle       NN      <fs af='AdavAlYlu,n,,sg,,,0,0_e' name='AdavAlYle'>
        ))
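For illustration, the chunk label and the head word can be extracted from such an SSF chunk opening line as sketched below. This assumes tab-separated fields and single-quoted attribute values, as in the sample above; other sentences in the data use double-quoted or unquoted attributes, which a robust reader would also have to handle.

```python
import re

# One SSF chunk opening line from the sample above (tab-separated fields).
ssf_line = ("3\t((\tNP\t<fs af='AdavAlYlu,n,,sg,,,0,0_e' "
            "head='AdavAlYle' pbank='ARG3' name='NP3'>")

fields = ssf_line.split('\t')
label = fields[2]                               # chunk tag, e.g. NP
match = re.search(r"head='([^']*)'", fields[3])
head = match.group(1) if match else None        # head word of the chunk
print(label, head)  # → NP AdavAlYle
```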

In the CoNLL format, the CPOS column contains the chunk label (e.g. NP = noun phrase) and the POS column contains the part of speech of the chunk head.
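The column layout can be illustrated on a token taken from the samples further down the page (a sketch only; the column names follow the usual CoNLL-X convention, and note that the FEATS field of this treebank joins each key to its value with a hyphen rather than the more common equals sign):

```python
# Hedged sketch: split one CoNLL line of the treebank into named columns
# and unpack its FEATS field, whose key-value pairs use '-' as separator.
line = ("1\tsaMgawi\tsaMgawi\tNP\tNN\t"
        "lex-saMgawi|cat-n|gend-|num-sg|pers-|case-d|vib-0|tam-0|head-saMgawi\t"
        "3\tk1\t_\t_")
names = ['ID', 'FORM', 'LEMMA', 'CPOS', 'POS',
         'FEATS', 'HEAD', 'DEPREL', 'PHEAD', 'PDEPREL']
token = dict(zip(names, line.split('\t')))
feats = dict(f.split('-', 1) for f in token['FEATS'].split('|'))
# CPOS holds the chunk label, POS the part of speech of the chunk head.
print(token['CPOS'], token['POS'], feats['cat'])  # → NP NN n
```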

Occasionally there are NULL nodes that do not correspond to any surface chunk or token. They represent elided participants.

The syntactic tags (dependency relation labels) are karaka relations, i.e. deep syntactic roles according to the Pāṇinian grammar. There are separate versions of the treebank with fine-grained and coarse-grained syntactic tags.

According to (Husain et al., 2010), in the ICON 2010 version, the chunk tags, POS tags and inter-chunk dependencies (topology + tags) were annotated manually. The rest (lemma, morphosyntactic features, headword of chunk) was marked automatically.

Note: There have been cycles in the Hindi part of HyDT but no such problem occurs in the Telugu part.

Sample

The first sentence of the ICON 2010 training data (with fine-grained syntactic tags) in the Shakti format:

<document id="">
<head>
<annotated-resource name="HyDT-Telugu" version="0.5" type="dep-interchunk-only" layers="morph,pos,chunk,dep-interchunk-only" language="tel" date-of-release="20100831">
    <annotation-standard>
        <morph-standard name="Anncorra-morph" version="1.31" date="20080920" />
        <pos-standard name="Anncorra-pos" version="" date="20061215" />
        <chunk-standard name="Anncorra-chunk" version="" date="20061215" />
        <dependency-standard name="Anncorra-dep" version="2.0" date="" dep-tagset-granularity="6" />
    </annotation-standard>
</annotated-resource>
</head>
<Sentence id="1">
1       ((      NP      <fs af='saMgawi,n,,sg,,d,0,0' head='saMgawi' drel='k1:VGF'>
1.1     maro    QF      <fs af='maro,avy,,,,,,'>
1.2     saMgawi NN      <fs af='saMgawi,n,,sg,,d,0,0' name='saMgawi'>
        ))
2       ((      NP      <fs af='mIru,pn,any,pl,2,,ki,ki' head='mIku' drel='k4:VGF' name='NP2'>
2.1     mIku    PRP     <fs af='mIru,pn,any,pl,2,,ki,ki' name='mIku'>
        ))
3       ((      VGF     <fs af='weVlusA,avy,,,,,0,0_avy' head='weVlusA' name='VGF'>
3.1     weVlusA VM      <fs af='weVlusA,avy,,,,,0,0_avy' name='weVlusA'>
3.2     ?       SYM     <fs af='?,punc,,,,,,'>
        ))
</Sentence>

And in the CoNLL format:

1 saMgawi saMgawi NP NN lex-saMgawi|cat-n|gend-|num-sg|pers-|case-d|vib-0|tam-0|head-saMgawi 3 k1 _ _
2 mIku mIru NP PRP lex-mIru|cat-pn|gend-any|num-pl|pers-2|case-|vib-ki|tam-ki|head-mIku|name-NP2 3 k4 _ _
3 weVlusA weVlusA VGF VM lex-weVlusA|cat-avy|gend-|num-|pers-|case-|vib-0|tam-0_avy|head-weVlusA|name-VGF 0 main _ _

And after conversion of the WX encoding to the Telugu script in UTF-8:

1 సంగతి సంగతి NP NN lex-saMgawi|cat-n|gend-|num-sg|pers-|case-d|vib-0|tam-0|head-saMgawi 3 k1 _ _
2 మీకు మీరు NP PRP lex-mIru|cat-pn|gend-any|num-pl|pers-2|case-|vib-ki|tam-ki|head-mIku|name-NP2 3 k4 _ _
3 తెలుసా తెలుసా VGF VM lex-weVlusA|cat-avy|gend-|num-|pers-|case-|vib-0|tam-0_avy|head-weVlusA|name-VGF 0 main _ _

The first sentence of the ICON 2010 development data (with fine-grained syntactic tags) in the Shakti format:

<document id="">
<head>
<annotated-resource name="HyDT-Telugu" version="0.5" type="dep-interchunk-only" layers="morph,pos,chunk,dep-interchunk-only" language="tel" date-of-release="20100831">
    <annotation-standard>
        <morph-standard name="Anncorra-morph" version="1.31" date="20080920" />
        <pos-standard name="Anncorra-pos" version="" date="20061215" />
        <chunk-standard name="Anncorra-chunk" version="" date="20061215" />
        <dependency-standard name="Anncorra-dep" version="2.0" date="" dep-tagset-granularity="6" />
    </annotation-standard>
</annotated-resource>
</head>
<Sentence id="1">
1       ((      RBP     <fs af='eVMwa,pn,,sg,,d,0,0' head='eVMwa' drel='adv:NP'>
1.1     eVMwa   WQ      <fs af='eVMwa,pn,,sg,,d,0,0' name='eVMwa'>
        ))
2       ((      NP      <fs af='bAXEnA,unk,,,,,,' head='bAXEnA' drel='k2s:VGNF' name='NP'>
2.1     bAXEnA  NN      <fs af='bAXEnA,unk,,,,,,' name='bAXEnA'>
        ))
3       ((      NP      <fs af='ixi,pn,fn,sg,3,o,ti,ti' head='xIni' drel='k2:VGNF' name='NP2'>
3.1     xIni    PRP     <fs af='ixi,pn,fn,sg,3,o,ti,ti' name='xIni'>
        ))
4       ((      RBP     <fs af='eVlA,avy,,,,,0,0_avy' head='eVlA' drel='adv:VGNF' name='RBP2'>
4.1     eVlA    WQ      <fs af='eVlA,avy,,,,,0,0_avy' name='eVlA'>
        ))
5       ((      NP      <fs af='bayata,n,,sg,,d,0,0' head='bayata' drel='pof:VGNF' name='NP3'>
5.1     bayata  NST     <fs af='bayata,n,,sg,,d,0,0' name='bayata'>
        ))
6       ((      VGNF    <fs af='peVttuko,pn,,sg,,,e_axi,e_axi_0' head='peVttukoVnexi' drel='k1s:VGNN' name='VGNF'>
6.1     peVttukoVnexi   VM      <fs af='peVttuko,pn,,sg,,,e_axi,e_axi_0' name='peVttukoVnexi'>
        ))
7       ((      RBP     <fs af='sarigA,avy,,,,,0,0_avy' head='sarigA' drel='adv:VGNN' name='RBP3'>
7.1     sarigA  RB      <fs af='sarigA,avy,,,,,0,0_avy' name='sarigA'>
        ))
8       ((      NP      <fs af='viRayaM,n,,sg,,d,0,0' head='viRayaM' drel='k1:VGNN' name='NP4'>
8.1     viRayaM NN      <fs af='viRayaM,n,,sg,,d,0,0' name='viRayaM'>
        ))
9       ((      VGNN    <fs af='weVliyu,v,any,any,any,,aka_po_adaM,aka_po_adaM' head='weVliyakapovadaM' name='VGNN'>
9.1     weVliyakapovadaM        VM      <fs af='weVliyu,v,any,any,any,,aka_po_adaM,aka_po_adaM' name='weVliyakapovadaM'>
9.2     .       SYM     <fs af='.,punc,,,,,,'>
        ))
</Sentence>

And in the CoNLL format:

1 eVMwa eVMwa RBP WQ lex-eVMwa|cat-pn|gend-|num-sg|pers-|case-d|vib-0|tam-0|head-eVMwa 2 adv _ _
2 bAXEnA bAXEnA NP NN lex-bAXEnA|cat-unk|gend-|num-|pers-|case-|vib-|tam-|head-bAXEnA|name-NP 6 k2s _ _
3 xIni ixi NP PRP lex-ixi|cat-pn|gend-fn|num-sg|pers-3|case-o|vib-ti|tam-ti|head-xIni|name-NP2 6 k2 _ _
4 eVlA eVlA RBP WQ lex-eVlA|cat-avy|gend-|num-|pers-|case-|vib-0|tam-0_avy|head-eVlA|name-RBP2 6 adv _ _
5 bayata bayata NP NST lex-bayata|cat-n|gend-|num-sg|pers-|case-d|vib-0|tam-0|head-bayata|name-NP3 6 pof _ _
6 peVttukoVnexi peVttuko VGNF VM lex-peVttuko|cat-pn|gend-|num-sg|pers-|case-|vib-e_axi|tam-e_axi_0|head-peVttukoVnexi|name-VGNF 9 k1s _ _
7 sarigA sarigA RBP RB lex-sarigA|cat-avy|gend-|num-|pers-|case-|vib-0|tam-0_avy|head-sarigA|name-RBP3 9 adv _ _
8 viRayaM viRayaM NP NN lex-viRayaM|cat-n|gend-|num-sg|pers-|case-d|vib-0|tam-0|head-viRayaM|name-NP4 9 k1 _ _
9 weVliyakapovadaM weVliyu VGNN VM lex-weVliyu|cat-v|gend-any|num-any|pers-any|case-|vib-aka_po_adaM|tam-aka_po_adaM|head-weVliyakapovadaM|name-VGNN 0 main _ _

And after conversion of the WX encoding to the Telugu script in UTF-8:

1 ఎంత ఎంత RBP WQ lex-eVMwa|cat-pn|gend-|num-sg|pers-|case-d|vib-0|tam-0|head-eVMwa 2 adv _ _
2 బాధైనా బాధైనా NP NN lex-bAXEnA|cat-unk|gend-|num-|pers-|case-|vib-|tam-|head-bAXEnA|name-NP 6 k2s _ _
3 దీని ఇది NP PRP lex-ixi|cat-pn|gend-fn|num-sg|pers-3|case-o|vib-ti|tam-ti|head-xIni|name-NP2 6 k2 _ _
4 ఎలా ఎలా RBP WQ lex-eVlA|cat-avy|gend-|num-|pers-|case-|vib-0|tam-0_avy|head-eVlA|name-RBP2 6 adv _ _
5 బయట బయట NP NST lex-bayata|cat-n|gend-|num-sg|pers-|case-d|vib-0|tam-0|head-bayata|name-NP3 6 pof _ _
6 పెట్టుకొనేది పెట్టుకొ VGNF VM lex-peVttuko|cat-pn|gend-|num-sg|pers-|case-|vib-e_axi|tam-e_axi_0|head-peVttukoVnexi|name-VGNF 9 k1s _ _
7 సరిగా సరిగా RBP RB lex-sarigA|cat-avy|gend-|num-|pers-|case-|vib-0|tam-0_avy|head-sarigA|name-RBP3 9 adv _ _
8 విషయం విషయం NP NN lex-viRayaM|cat-n|gend-|num-sg|pers-|case-d|vib-0|tam-0|head-viRayaM|name-NP4 9 k1 _ _
9 తెలియకపొవడం తెలియు VGNN VM lex-weVliyu|cat-v|gend-any|num-any|pers-any|case-|vib-aka_po_adaM|tam-aka_po_adaM|head-weVliyakapovadaM|name-VGNN 0 main _ _

The first sentence of the ICON 2010 test data (with fine-grained syntactic tags) in the Shakti format:

<document id="">
<head>
<annotated-resource name="HyDT-Telugu" version="0.5" type="dep-interchunk-only" layers="morph,pos,chunk,dep-interchunk-only" language="tel" date-of-release="20101013">
    <annotation-standard>
        <morph-standard name="Anncorra-morph" version="1.31" date="20080920" />
        <pos-standard name="Anncorra-pos" version="" date="20061215" />
        <chunk-standard name="Anncorra-chunk" version="" date="20061215" />
        <dependency-standard name="Anncorra-dep" version="2.0" date="" dep-tagset-granularity="6" />
    </annotation-standard>
</annotated-resource>
</head>
<Sentence id="29">
1       ((      NP      <fs af='iMkA,avy,,,,,0,0_avy' head="iMkA" drel=vmod:NULL_VGF name=NP poslcat="NM">
1.1     iMkA    PRP     <fs af='iMkA,avy,,,,,0,0_avy' poslcat="NM" name="iMkA">
        ))
2       ((      RBP     <fs af='warawarAlugA,avy,,,,,0,0_avy' head="warawarAlugA" drel=adv:VGNF name=RBP poslcat="NM">
2.1     warawarAlugA    RB      <fs af='warawarAlugA,avy,,,,,0,0_avy' poslcat="NM" name="warawarAlugA">
        ))
3       ((      VGNF    <fs af='nAtuko,v,any,any,any,,i_po_ina,i_po_ina' head="nAtukupoyina" drel=nmod:NP2 name=VGNF>
3.1     nAtukupoyina    VM      <fs af='nAtuko,v,any,any,any,,i_po_ina,i_po_ina' name="nAtukupoyina">
        ))
4       ((      NP      <fs af='aBiprAyaM,n,,pl,,d,0,0' head="aBiprAyAlu" drel=k1:NULL_VGF name=NP2>
4.1     aBiprAyAlu      NN      <fs af='aBiprAyaM,n,,pl,,d,0,0' name="aBiprAyAlu">
        ))
5       ((      NULL_VGF        <fs name='NULL_VGF'>
5.1     NULL    VM      <fs af='NULL,unk,,,,,,' poslcat="NM">
5.2     .       SYM     <fs af='.,punc,,,,,,' poslcat="NM">
        ))
</Sentence>

And in the CoNLL format:

1 iMkA iMkA NP PRP lex-iMkA|cat-avy|gend-|num-|pers-|case-|vib-0|tam-0_avy|head-iMkA|name-NP|poslcat-NM 5 vmod _ _
2 warawarAlugA warawarAlugA RBP RB lex-warawarAlugA|cat-avy|gend-|num-|pers-|case-|vib-0|tam-0_avy|head-warawarAlugA|name-RBP|poslcat-NM 3 adv _ _
3 nAtukupoyina nAtuko VGNF VM lex-nAtuko|cat-v|gend-any|num-any|pers-any|case-|vib-i_po_ina|tam-i_po_ina|head-nAtukupoyina|name-VGNF 4 nmod _ _
4 aBiprAyAlu aBiprAyaM NP NN lex-aBiprAyaM|cat-n|gend-|num-pl|pers-|case-d|vib-0|tam-0|head-aBiprAyAlu|name-NP2 5 k1 _ _
5 NULL NULL NULL_VGF VM name-NULL_VGF 0 main _ _

And after conversion of the WX encoding to the Telugu script in UTF-8:

1 ఇంకా ఇంకా NP PRP lex-iMkA|cat-avy|gend-|num-|pers-|case-|vib-0|tam-0_avy|head-iMkA|name-NP|poslcat-NM 5 vmod _ _
2 తరతరాలుగా తరతరాలుగా RBP RB lex-warawarAlugA|cat-avy|gend-|num-|pers-|case-|vib-0|tam-0_avy|head-warawarAlugA|name-RBP|poslcat-NM 3 adv _ _
3 నాటుకుపొయిన నాటుకొ VGNF VM lex-nAtuko|cat-v|gend-any|num-any|pers-any|case-|vib-i_po_ina|tam-i_po_ina|head-nAtukupoyina|name-VGNF 4 nmod _ _
4 అభిప్రాయాలు అభిప్రాయం NP NN lex-aBiprAyaM|cat-n|gend-|num-pl|pers-|case-d|vib-0|tam-0|head-aBiprAyAlu|name-NP2 5 k1 _ _
5 NULL NULL NULL_VGF VM name-NULL_VGF 0 main _ _

Parsing

Nonprojectivities in HyDT-Telugu are very rare. Only 13 of the 5722 chunks in the training+development ICON 2010 version are attached nonprojectively (0.23%).
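Counting such nodes can be sketched as follows (our own helper, not any official evaluation script): an arc is taken as projective if every node strictly between its endpoints is dominated by the head of the arc.

```python
# Hedged sketch: find nonprojectively attached nodes in one dependency
# tree, given head indices (heads[i] is the head of node i+1, 0 = root).
def nonprojective_nodes(heads):
    def ancestors(node):
        """Set containing `node` and all its ancestors (excluding root 0)."""
        seen = set()
        while node != 0 and node not in seen:
            seen.add(node)
            node = heads[node - 1]
        return seen

    bad = []
    for dep, head in enumerate(heads, start=1):
        if head == 0:
            continue
        lo, hi = min(dep, head), max(dep, head)
        # The arc is nonprojective if some node strictly between its
        # endpoints does not have `head` among its ancestors.
        if any(head not in ancestors(m) for m in range(lo + 1, hi)):
            bad.append(dep)
    return bad

print(nonprojective_nodes([3, 3, 0]))     # three-chunk sample above → []
print(nonprojective_nodes([3, 4, 0, 3]))  # crossing arcs → [2]
```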

The results of the ICON 2009 NLP tools contest have been published in (Husain, 2009). There were two evaluation rounds, the first with the coarse-grained syntactic tags, the second with the fine-grained syntactic tags. To reward language independence, only systems that parsed all three languages were officially ranked. The following table presents the Telugu/coarse-grained results of the four officially ranked systems.

Parser (Authors) LAS UAS
Malt (Nivre) 62.44 86.28
Mannem 65.01 85.76
Hyderabad (Ambati et al.) 65.01 85.25
Malt+MST (Zeman) 56.43 81.30

The results of the ICON 2010 NLP tools contest have been published in (Husain et al., 2010), page 6. These are the best results for Telugu with fine-grained syntactic tags:

Parser (Authors) LAS UAS
Kosaraju et al. 70.12 91.82
Attardi et al. 65.61 90.48
Kolachina et al. 68.11 90.15