===== Automatic Domain Adaptation for Parsing =====
Comments by: Loganathan

===== Objective =====
The objective of the paper is to adapt statistical parsers to new domains. The best parsing model for a given test set is identified by combining training data from different domains (a source mixture). This source mixture is learned by a regression model, which identifies the appropriate parsing model.
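
To illustrate the general idea, here is a minimal sketch, not the paper's actual method: a regression model is trained to predict parsing accuracy for a given source mixture, and the mixture with the highest predicted accuracy is chosen for the target text. The features, domains, and numbers below are invented for illustration.

<code python>
# Minimal sketch of source-mixture selection via regression (illustrative assumptions only).
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [overlap of target text with news, overlap with bio, news weight, bio weight];
# each target value: parsing F1 observed on development data for that mixture (invented).
X_train = np.array([
    [0.8, 0.1, 1.0, 0.0],
    [0.8, 0.1, 0.5, 0.5],
    [0.2, 0.7, 0.0, 1.0],
    [0.2, 0.7, 0.5, 0.5],
])
y_train = np.array([88.1, 86.9, 84.3, 85.0])

model = LinearRegression().fit(X_train, y_train)

# For a new target text, score every candidate mixture and keep the best one.
target_overlap = [0.5, 0.4]
candidate_mixtures = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]
X_test = np.array([target_overlap + list(m) for m in candidate_mixtures])
best = candidate_mixtures[int(np.argmax(model.predict(X_test)))]
print("Predicted best source mixture (news, bio):", best)
</code>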

===== Comments =====
Some of the comments and doubts that arose during the discussion:
  * It was asked how the data was collected, mainly because of the size of the data used for training.
  * Training and testing results were reported on the development set, not on the parsing models.
  * It was noted that the parser has been tested across various domains.
  * The entropy feature was not clear.
  * The idea was to adapt successfully to new domains rather than to achieve very high accuracy for a particular domain.

===== What do we like about the paper: =====
  * The multiple-source adaptation method can identify the factors that affect parsing accuracy for texts from different domains.
  * Compared to previous work, they successfully included methods for domain detection.
  * The inclusion of self-trained corpora helped avoid data sparsity in small corpora.

===== What do we dislike about the paper: =====