====== Natural Logic for Textual Inference ======

===== Introduction =====
  
This paper deals with “**natural logic**”, a system of logical inference that operates directly over natural language. Existing approaches to natural language inference are usually either robust but shallow, or deep but brittle. The system proposed in this paper aims to sit between these extremes and to avoid, for instance, the errors introduced by translating natural language into first-order logic.
  
One key concept in the theory of natural logic is “monotonicity”: instead of reasoning with explicit quantifiers, one asks whether the concepts or constraints expressed in a sentence can be expanded or contracted while preserving truth. This way, linguistic expressions can be represented as //upward-monotone//, //downward-monotone//, or //non-monotone// semantic functions. For example, an upward-monotone context licenses generalization (“a dog barked” entails “an animal barked”), while a downward-monotone context licenses the opposite direction (“no animal barked” entails “no dog barked”).
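The idea can be illustrated with a minimal sketch (our own toy illustration, not the paper's implementation; the hypernym pair and helper names are assumptions):

<code python>
# Toy illustration of monotonicity-licensed substitution; not NatLog code.
# "Is-a" knowledge: a dog is a kind of animal (dog ⊑ animal).
HYPERNYMS = {("dog", "animal")}

def at_least_as_general(a, b):
    """True if b is at least as general as a, i.e. a ⊑ b."""
    return a == b or (a, b) in HYPERNYMS

def substitution_preserves_truth(old, new, monotonicity):
    """Is replacing `old` with `new` truth-preserving in this context?

    upward:   generalizing is safe  ("a dog barked"     => "an animal barked")
    downward: narrowing is safe     ("no animal barked" => "no dog barked")
    """
    if monotonicity == "upward":
        return at_least_as_general(old, new)
    if monotonicity == "downward":
        return at_least_as_general(new, old)
    return False  # non-monotone contexts license neither direction

print(substitution_preserves_truth("dog", "animal", "upward"))    # True
print(substitution_preserves_truth("animal", "dog", "downward"))  # True
print(substitution_preserves_truth("dog", "animal", "downward"))  # False
</code>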
  
The developed system is called **NatLog** and has an architecture with three main stages (a schematic sketch of the pipeline follows the list):
  * Linguistic preprocessing - parsing the input sentences and marking the monotonicity of each token span;
  * Alignment - aligning the premise with the hypothesis in terms of atomic edits;
  * Entailment classification - predicting the final entailment relation by composing the entailment relations of the atomic edits, where the latter are predicted from the following features: the type of the atomic edit, the effective monotonicity at the affected token span, and various lexical features.
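How the three stages might fit together, as a schematic sketch (the function names and placeholder bodies are our own assumptions, not the authors' code):

<code python>
# Schematic NatLog-style pipeline; every stage body is a stand-in.

def preprocess(sentence):
    """Stage 1: parse and mark each token's effective monotonicity."""
    tokens = sentence.split()            # stand-in for a real parser
    marks = ["upward"] * len(tokens)     # stand-in for monotonicity projection
    return list(zip(tokens, marks))

def align(premise, hypothesis):
    """Stage 2: decompose the pair into atomic edits (SUB/INS/DEL)."""
    return [("SUB", "dog", "animal")]    # placeholder alignment

def classify_edit(edit, marked_premise):
    """Stage 3a: predict a relation for one edit from its type, the
    effective monotonicity at the edited span, and lexical features."""
    return "forward"                     # placeholder: premise ⊑ hypothesis

def compose(r1, r2):
    """Stage 3b: join two relations (tiny excerpt of a full join table)."""
    if r1 == "equivalent":
        return r2
    if r2 == "equivalent":
        return r1
    if r1 == r2 == "forward":
        return "forward"
    return "unknown"                     # be conservative elsewhere

def classify(premise, hypothesis):
    """Compose the per-edit relations into the final entailment relation."""
    marked = preprocess(premise)
    final = "equivalent"
    for edit in align(premise, hypothesis):
        final = compose(final, classify_edit(edit, marked))
    return final

print(classify("a dog barked", "an animal barked"))  # -> "forward"
</code>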
  
  
===== Comments =====
  
  * This work represents the first computational model of natural logic.
  * In natural logic, entailment is a semantic containment relation over expressions of all types, including words and phrases as well as sentences. The authors define the entailment relation <latex>\sqsubseteq</latex> recursively over the semantic types familiar from Montague semantics (spelled out after this list).
  * The training data used to predict the entailment relations of atomic edits was created specifically for this experiment.
  * The system was tested on the single-premise portion of the //[[http://www-nlp.stanford.edu/~wcmac/downloads/|FraCaS test suite]]//, a collection of 346 inference problems reminiscent of a textbook on formal semantics. Each problem has 3 possible answers: yes, no, unknown.
  * It was also tested on the //RTE3 test suite//, which contains much longer and more “natural” premises.
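As a reminder of how the recursive definition works (our paraphrase, not a verbatim quote from the paper): on truth values <latex>\sqsubseteq</latex> is implication, on entities it is identity, and on functional types it is lifted pointwise:

<latex>
\begin{array}{ll}
x \sqsubseteq y \Leftrightarrow x \rightarrow y & \mbox{for } x, y \mbox{ of type } t \\
x \sqsubseteq y \Leftrightarrow x = y & \mbox{for } x, y \mbox{ of type } e \\
f \sqsubseteq g \Leftrightarrow \forall a: f(a) \sqsubseteq g(a) & \mbox{for } f, g \mbox{ of functional type } \langle \alpha, \beta \rangle
\end{array}
</latex>

For instance, //dog// <latex>\sqsubseteq</latex> //animal// holds between predicates of type <latex>\langle e,t \rangle</latex> because <latex>dog(a) \rightarrow animal(a)</latex> for every entity <latex>a</latex>.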
  
  
===== Discussion =====
  * How much does this approach add to the existing logical inference approaches for natural language?
  * Language is fuzzy and this approach captures simple sentences. We are not sure that it can be generalized easily.
  * It is good that the problems in the test data allow 3 different answers.
  * Disadvantage: if many atomic edits are combined, the probability of getting the right answer decreases.
  * We liked the evaluation presented in the paper and the interpretation of the results (which is not very usual in semantics).
  * Disadvantage: when composing the entailment relations of atomic edits, the authors mention the obvious compositions but not the non-obvious yet logically valid ones, e.g. exclusive<latex>\circ</latex>reverse<latex>\equiv</latex>exclusive (<latex>| \circ \sqsupset \equiv |</latex>) or forward<latex>\circ</latex>exclusive<latex>\equiv</latex>exclusive (<latex>\sqsubset \circ | \equiv |</latex>). This raises the question of whether these two compositions are implemented in the system at all; a small sanity check is sketched after this list.
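Both claimed compositions can be verified mechanically with a small script (our own sketch using the set-theoretic reading of the relations, not code from the paper):

<code python>
# Verify that exclusive ∘ reverse = exclusive and forward ∘ exclusive = exclusive
# by brute force over all proper subsets of a tiny universe.
from itertools import combinations

U = frozenset(range(4))  # a tiny universe {0, 1, 2, 3}

def proper_subsets():
    """All non-empty, non-universal subsets of U."""
    elems = sorted(U)
    for r in range(1, len(elems)):
        for c in combinations(elems, r):
            yield frozenset(c)

def rel(x, y):
    """Entailment relation between two sets, in the paper's five-relation scheme."""
    if x == y:        return "equivalent"
    if x < y:         return "forward"      # x ⊏ y
    if x > y:         return "reverse"      # x ⊐ y
    if not (x & y):   return "exclusive"    # disjoint: x | y
    return "independent"

def compositions(r1, r2):
    """All relations that can result from composing r1 with r2."""
    results = set()
    for x in proper_subsets():
        for y in proper_subsets():
            if rel(x, y) != r1:
                continue
            for z in proper_subsets():
                if rel(y, z) == r2:
                    results.add(rel(x, z))
    return results

print(compositions("exclusive", "reverse"))   # {'exclusive'}
print(compositions("forward", "exclusive"))   # {'exclusive'}
</code>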
  
  
  
Written by Ximena Gutiérrez.
