==== Inside ====

The text uses the [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/map.pdf|WX encoding]] of Indian letters. Since we know the original script (Devanagari in this case), we can map the WX encoding back to the original characters in UTF-8. WX uses English letters, so any embedded English (or any other Latin-script string) will probably be lost during the conversion. Note that the WX text contains broken characters (''\x{FFFD} REPLACEMENT CHARACTER''), and not infrequently; the correct characters cannot be recovered automatically.

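A toy sketch of the WX-to-Devanagari mapping (only a handful of letters; the full table is in the map.pdf linked above). The letter tables and the ''wx2deva'' helper are illustrative assumptions, not part of the treebank tools:

<code python>
# Toy WX -> Devanagari converter (illustrative subset only; see the
# official WX table for the complete mapping).
CONSONANTS = {'k': 'क', 'g': 'ग', 'm': 'म', 'r': 'र',
              'l': 'ल', 'n': 'न', 's': 'स', 'w': 'त', 'x': 'द'}
VOWELS = {'a': 'अ', 'A': 'आ', 'i': 'इ', 'I': 'ई',
          'u': 'उ', 'U': 'ऊ', 'e': 'ए', 'o': 'ओ'}
MATRAS = {'a': '', 'A': 'ा', 'i': 'ि', 'I': 'ी',
          'u': 'ु', 'U': 'ू', 'e': 'े', 'o': 'ो'}
VIRAMA = '्'

def wx2deva(word):
    out = []
    i = 0
    while i < len(word):
        c = word[i]
        if c in CONSONANTS:
            out.append(CONSONANTS[c])
            if i + 1 < len(word) and word[i + 1] in MATRAS:
                out.append(MATRAS[word[i + 1]])  # dependent vowel sign
                i += 2
            else:
                out.append(VIRAMA)  # consonant with no following vowel
                i += 1
        elif c in VOWELS:
            out.append(VOWELS[c])  # independent (word-initial) vowel
            i += 1
        else:
            out.append('\uFFFD')   # unmappable, mirrors the broken input
            i += 1
    return ''.join(out)

print(wx2deva('rumAla'))  # रुमाल
</code>
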
The CoNLL format contains only the chunk heads. The native SSF format shows the other words of the chunk, too, but it does not capture intra-chunk dependency relations. This is an example of a multi-word chunk:

<code>3    ((      NP  <fs af='rumAla,n,,sg,,d,0,0' head="rumAla" drel=k2:VGF name=NP3>
3.1  ekatA   QC  <fs af='eka,num,,,,,,'>
3.2  ledisa  JJ  <fs af='ledisa,unk,,,,,,'>
3.3  rumAla  NN  <fs af='rumAla,n,,sg,,d,0,0' name="rumAla">
))</code>

In the CoNLL format, the CPOS column contains the [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/Chunk-Tag-List.pdf|chunk label]] (e.g. ''NP'' = //noun phrase//) and the POS column contains the [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/POS-Tag-List.pdf|part of speech]] of the chunk head.

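The chunk head and the dependency relation can be pulled out of SSF with simple pattern matching. A minimal sketch over the example chunk above (the regular expressions and variable names are mine, not part of the SSF toolkit):

<code python>
import re

# The SSF chunk from the example above.
ssf = '''3     ((      NP      <fs af='rumAla,n,,sg,,d,0,0' head="rumAla" drel=k2:VGF name=NP3>
3.1   ekatA   QC      <fs af='eka,num,,,,,,'>
3.2   ledisa  JJ      <fs af='ledisa,unk,,,,,,'>
3.3   rumAla  NN      <fs af='rumAla,n,,sg,,d,0,0' name="rumAla">
))'''

# Head word of the chunk, as marked on the chunk node.
head = re.search(r'head="([^"]+)"', ssf).group(1)
# Dependency relation label and the name of the parent chunk.
rel, parent = re.search(r'drel=(\w+):(\w+)', ssf).groups()
print(head, rel, parent)  # rumAla k2 VGF
</code>
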
Occasionally there are ''NULL'' nodes that do not correspond to any surface chunk or token. They represent elided participants.
The [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/dep-tagset.pdf|syntactic tags]] (dependency relation labels) are //karaka// relations, i.e. deep syntactic roles according to the Pāṇinian grammar. There are separate versions of the treebank with [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/mapping_fine-to-coarse.pdf|fine-grained and coarse-grained]] syntactic tags.

According to [[http://ltrc.iiit.ac.in/nlptools2010/files/documents/toolscontest10-workshoppaper-final.pdf|(Husain et al., 2010)]], in the ICON 2010 version, the chunk tags, POS tags, lemmas, morphosyntactic features and inter-chunk dependencies (topology + tags) were annotated manually. The rest (intra-chunk dependencies and the headword of each chunk) was marked automatically. The tool for intra-chunk dependency parsing achieves about 96% accuracy.

Note: There have been cycles in the Hindi part of HyDT.
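
Such cycles can be detected by walking the HEAD pointers from every node, which is a useful sanity check when loading the data. A generic sketch (the ''has_cycle'' helper is mine, not a HyDT tool):

<code python>
def has_cycle(heads):
    """heads maps each node id to its parent id; 0 is the artificial root."""
    for start in heads:
        seen = set()
        node = start
        while node != 0:
            if node in seen:
                return True    # came back to a node on the same walk
            seen.add(node)
            node = heads[node]
    return False

print(has_cycle({1: 2, 2: 0, 3: 1}))  # False: a well-formed tree
print(has_cycle({1: 2, 2: 3, 3: 1}))  # True: 1 -> 2 -> 3 -> 1
</code>
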

==== Sample ====