Coreference Resolution in a Modular, Entity-Centered Model
Aria Haghighi and Dan Klein
HLT 2010, http://www.aclweb.org/anthology/N/N10/N10-1061.pdf
presented by Michal Novák
Before reading
- It is good to have an idea of the philosophy behind generative models (e.g. HMM tagging). In the end, we use the generative model to infer the hidden variables (POS tags, or coreference in the case of this paper), but the didactic view goes the other way round: first generate the hidden variables from the given parameters (<latex>\lambda</latex> and <latex>\sigma^2</latex> in the paper), and only then generate the observed instances (the sequence of words); see the sketch after this list.
- Figure 2 is a diagram in the so-called graphical model notation. The rectangles, circles and arrows have well-defined semantics; see a brief introduction (it is not sufficient for understanding Section 4, but it is at least something…).
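As a concrete illustration of the generative view (our own sketch, not taken from the paper), consider HMM tagging, where <latex>t_i</latex> are the hidden tags and <latex>w_i</latex> the observed words. The joint probability factorizes as
<latex>P(t_1,\dots,t_n, w_1,\dots,w_n) = \prod_{i=1}^{n} P(t_i \mid t_{i-1})\, P(w_i \mid t_i)</latex>
i.e. the story first generates each hidden tag from its predecessor and then emits the observed word from that tag; at application time the story is inverted and the most likely hidden tags are inferred from the observed words. The paper's model plays the same game, with entities and coreference assignments as the hidden part and the mentions in the text as the observations.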
Comments
- We went through the whole paper, but we skimmed Section 4 very quickly, because none of us knows enough about the theory of graphical models.
- The method can also be used as a Named Entity Recognition (NER) system.
- The improvement over the previously best published results, across all the main metrics used for the coreference resolution task, is quite impressive.
Contributed by MK
What do we like about the paper
- best reported results so far on the task (measured by MUC, <latex>B^3</latex> and pairwise <latex>F_1</latex>) and an (almost) unsupervised approach (i.e. they don't need a coreference-annotated corpus)
- They compare their results with other published results (on different data sets) using several metrics.
- They do end-to-end coreference resolution, i.e. they don't exploit true mentions from the gold annotation.