
Institute of Formal and Applied Linguistics Wiki



Differences

This shows you the differences between two versions of the page.

user:zeman:treebanks:te [2012/03/22 11:14] zeman (created)
user:zeman:treebanks:te [2012/03/22 17:06] zeman (Inside)
  
^ Part ^ Sentences ^ Chunks ^ Ratio ^
| Training    | 1456 | 5494 | 3.77 |
| Development |  150 |  675 | 4.50 |
| Test        |  150 |  583 | 3.89 |
| TOTAL       | 1756 | 6752 | 3.85 |
  
As for ICON 2010, the data description in [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/toolscontest10-workshoppaper-final.pdf|(Husain et al., 2010)]] does not match the data we downloaded during the contest. They report the number of words, while we can only count the number of nodes, i.e. chunks. However, the number of training sentences should match, and it does not. Also note that they give the same number of words as they gave for ICON 2009 in [[http://ltrc.iiit.ac.in/nlptools2009/CR/intro-husain.pdf|(Husain et al., 2009)]]. In any case, the training data shrank during the year (clean-up?). In the following table we give both the published and the real numbers of sentences, the published number of words, and the counted number of chunks (nodes).
  
^ Part ^ Sentences ^ Chunks ^ Ratio ^ Published sentences ^ Words ^ Ratio ^
| Training    | 1300 | 5125 | 3.94 | 1400 | 7602 | 5.43 |
| Development |  150 |  597 | 3.98 |  150 |  839 | 5.59 |
| Test        |  150 |  599 | 3.99 |  150 |  836 | 5.57 |
| TOTAL       | 1600 | 6321 | 3.95 | 1700 | 9277 | 5.46 |
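As a sanity check, the Ratio columns (chunks per sentence and words per published sentence) in the tables above can be recomputed from the raw counts; a minimal Python sketch:

```python
# Recompute the per-sentence ratios for the ICON 2010 Telugu data
# (counts taken from the table above).
counts = {
    # part: (sentences, chunks, published_sentences, words)
    "Training":    (1300, 5125, 1400, 7602),
    "Development": ( 150,  597,  150,  839),
    "Test":        ( 150,  599,  150,  836),
    "TOTAL":       (1600, 6321, 1700, 9277),
}

for part, (sent, chunks, psent, words) in counts.items():
    print(f"{part:12s} chunks/sent = {chunks / sent:.2f}  words/psent = {words / psent:.2f}")
```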
  
==== Inside ====
  
The text uses the [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/map.pdf|WX encoding]] of Indian letters. If we know what the original script is (Telugu in this case), we can map the WX encoding to the original characters in UTF-8. WX uses English letters, so any embedded English (or other strings in the Latin script) will probably be lost during the conversion.
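To illustrate such a conversion, here is a minimal sketch that maps a small hand-picked subset of WX symbols to Telugu; the full mapping is in the linked map.pdf, and the function name and symbol subset here are our own choices for illustration:

```python
# Minimal WX -> Telugu (UTF-8) conversion sketch.
# Covers only a handful of WX symbols; see map.pdf for the full table.
CONSONANTS = {"k": "క", "m": "మ", "x": "ద", "w": "త", "l": "ల", "r": "ర", "n": "న"}
VOWELS     = {"a": "అ", "A": "ఆ", "i": "ఇ", "I": "ఈ", "u": "ఉ", "U": "ఊ"}
MATRAS     = {"a": "",  "A": "ా", "i": "ి", "I": "ీ", "u": "ు", "U": "ూ"}
ANUSVARA   = {"M": "ం"}
VIRAMA     = "్"

def wx_to_telugu(word: str) -> str:
    out, i = [], 0
    while i < len(word):
        c = word[i]
        if c in CONSONANTS:
            out.append(CONSONANTS[c])
            # A following vowel becomes a dependent vowel sign (matra);
            # otherwise the inherent vowel is cancelled with a virama.
            if i + 1 < len(word) and word[i + 1] in MATRAS:
                out.append(MATRAS[word[i + 1]])
                i += 2
            else:
                out.append(VIRAMA)
                i += 1
        elif c in VOWELS:          # independent (word-initial) vowel
            out.append(VOWELS[c])
            i += 1
        elif c in ANUSVARA:
            out.append(ANUSVARA[c])
            i += 1
        else:                      # unknown symbol: pass through unchanged
            out.append(c)
            i += 1
    return "".join(out)

print(wx_to_telugu("maMxi"))   # -> మంది (the classifier noun from the chunk below)
```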
  
The CoNLL format contains only the chunk heads. The native SSF format shows the other words in the chunk, too, but it does not capture intra-chunk dependency relations. This is an example of a multi-word chunk:
  
<code>      ((      NP      <fs af='AdavAlYlu,n,,sg,,,0,0_e' head='AdavAlYle' pbank='ARG3' name='NP3'>
3.1     932     QC      <fs af='932,num,,,,,,'>
3.2     maMxi   CL      <fs af='maMxi,n,,pl,,d,0,0'>
3.3     AdavAlYle       NN      <fs af='AdavAlYlu,n,,sg,,,0,0_e' name='AdavAlYle'>
        ))</code>
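To make the SSF layout concrete, a small sketch (our own illustration, not an official SSF parser) that pulls the chunk label and the (id, token, POS) rows out of a chunk like the one above:

```python
# Parse one SSF chunk into its label and (id, form, POS) rows.
# Simplified illustration; real SSF files have more structure.
ssf_chunk = """\
      ((      NP      <fs af='AdavAlYlu,n,,sg,,,0,0_e' head='AdavAlYle' pbank='ARG3' name='NP3'>
3.1     932     QC      <fs af='932,num,,,,,,'>
3.2     maMxi   CL      <fs af='maMxi,n,,pl,,d,0,0'>
3.3     AdavAlYle       NN      <fs af='AdavAlYlu,n,,sg,,,0,0_e' name='AdavAlYle'>
        ))"""

def parse_chunk(chunk: str):
    lines = chunk.splitlines()
    label = lines[0].split()[1]       # chunk tag right after "(("
    tokens = []
    for line in lines[1:-1]:          # skip the "((" header and "))" footer
        tok_id, form, pos = line.split()[:3]
        tokens.append((tok_id, form, pos))
    return label, tokens

label, tokens = parse_chunk(ssf_chunk)
print(label, tokens)
```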
  
According to [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/toolscontest10-workshoppaper-final.pdf|(Husain et al., 2010)]], in the ICON 2010 version, the chunk tags, POS tags and inter-chunk dependencies (topology + tags) were annotated manually. The rest (lemma, morphosyntactic features, headword of chunk) was marked automatically.
  
Note: There have been cycles in the Hindi part of HyDT, but no such problem occurs in the Telugu part.
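Such cycles can be detected by walking the head pointers from every node; a minimal sketch over a CoNLL-style head array (the representation chosen here is an assumption for illustration):

```python
# Detect cycles in a dependency structure given as a head array:
# heads[i-1] = parent of node i, with 0 denoting the artificial root.
# Illustrative sketch; nodes are 1-based as in CoNLL files.
def find_cycle(heads):
    for start in range(1, len(heads) + 1):
        seen = set()
        node = start
        while node != 0:             # walk up toward the root
            if node in seen:
                return sorted(seen)  # nodes on (or above) a cycle
            seen.add(node)
            node = heads[node - 1]
    return None

# A well-formed tree: node 2 is the root's child, 1 and 3 attach to 2.
print(find_cycle([2, 0, 2]))   # -> None
# A cycle: 1 -> 2 -> 1.
print(find_cycle([2, 1, 2]))   # -> [1, 2]
```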
  
==== Sample ====
