==== Objective ====
The objective of the paper is to make statistical parsers adapt to new domains. The best parsing model for a particular test set is identified by combining training data from different domains (a source mixture); this mixture is learned with a regression model that identifies the appropriate parsing model.
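
The sketch below illustrates this idea in a minimal form; it is not the authors' implementation, and the feature vectors (e.g. source/target domain similarity or entropy measures) and the plain least-squares fit are illustrative assumptions. A regression model is fit on observed parsing accuracies for known (mixture, target) pairs and then used to pick the candidate mixture with the highest predicted accuracy for a new target text.

<code python>
import numpy as np

def fit_regression(features, accuracies):
    """Least-squares fit from (mixture, target) feature vectors to observed parsing accuracy.
    features: (n_samples, n_features) array; accuracies: (n_samples,) array."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # add a bias column
    weights, *_ = np.linalg.lstsq(X, accuracies, rcond=None)
    return weights

def predict_accuracy(weights, feature_vector):
    """Predicted parsing accuracy for one candidate source mixture on the target text."""
    return float(np.dot(np.append(feature_vector, 1.0), weights))

def choose_mixture(weights, candidate_mixtures):
    """Return the id of the candidate mixture with the highest predicted accuracy.
    candidate_mixtures: dict mapping mixture id -> feature vector (hypothetical features)."""
    return max(candidate_mixtures,
               key=lambda m: predict_accuracy(weights, candidate_mixtures[m]))
</code>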
  
===== Comments =====
  * Training and testing results were reported on the development set, not on the parsing models.
  * It was noted that the parser had been tested across various domains.
  * The entropy feature was not clear.
  * The idea was to successfully adapt to new domains rather than to achieve very good accuracy for a particular domain.
  
==== What do we like about the paper: ====
  * The multiple source adaptation method can identify the factors which affect parsing accuracy for texts from different domains.
  * Compared to previous works, they successfully included methods for domain detection.
  * The inclusion of self-trained corpora helped avoid data sparsity in small corpora.
  
==== What do we dislike about the paper: ====
  * The results (just before Section 7) could have been better explained.
  
