courses:rg:cross-lingual_ontology_alignment [2010/12/06 13:31] popel: images moved to this wiki
courses:rg:cross-lingual_ontology_alignment [2011/06/25 20:27] (current) abzianidze
{{https://wiki.ufal.ms.mff.cuni.cz/_media/courses:rg:bouma_lrec10.pdf | Cross-lingual Ontology Alignment using EuroWordNet and Wikipedia }}
The International Conference on Language Resources and Evaluation (LREC) 2010
report by **Lasha Abzianidze**
===== Introduction =====
* The paper gives no details on how the evaluation is computed. We assume the following widely used method: to evaluate the resulting alignment A, we compare it to a reference/sample alignment R according to some criterion. The usual approach is to treat the returned alignments as sets of correspondences (pairs) and to apply precision and recall as defined in information retrieval: <latex>P = |R ∩ A| / |A|</latex>, <latex>R = |R ∩ A| / |R|</latex>, where <latex>P</latex> is precision, <latex>R</latex> is recall, <latex>|R ∩ A|</latex> is the number of true positives, <latex>|A|</latex> is the number of retrieved correspondences, and <latex>|R|</latex> is the number of expected correspondences.
* See also the [[http://oaei.ontologymatching.org/2009/vlcr/|Very Large Cross-lingual Resources]] (VLCR) task of the [[http://oaei.ontologymatching.org/2009/|Ontology Alignment Evaluation Initiative]] (OAEI) workshop and its [[http://oaei.ontologymatching.org/2009/vlcr/|evaluation]].
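The set-based precision/recall evaluation described above can be sketched as follows. This is only an illustration of the assumed method; the toy alignments are made up, not taken from the paper:

```python
# Alignments as sets of correspondence pairs (source term, target term).
# These toy alignments are hypothetical examples, not data from the paper.
reference = {("car", "auto"), ("dog", "hond"), ("house", "huis")}   # R
system    = {("car", "auto"), ("dog", "hond"), ("cat", "huis")}     # A

true_positives = reference & system                 # |R ∩ A|

precision = len(true_positives) / len(system)       # |R ∩ A| / |A|
recall    = len(true_positives) / len(reference)    # |R ∩ A| / |R|

print(f"P = {precision:.3f}, R = {recall:.3f}")     # both 2/3 here
```

With two of the three system pairs correct and three expected pairs, both precision and recall come out to 2/3.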
> Interesting: 67% of the terms from the GTAA (a Dutch linguistic resource) can be mapped to the Dutch WordNet. This is much higher than the coverage of the Czech WordNet over the terms of the PDT (Prague Dependency Treebank). -MK-
> It was shown that the cross-language links are not reversible; they form an n:n (many-to-many) relation. -MK-
===== What do we like about the paper =====