Introduction
This paper deals with “natural logic”, logical inference that operates directly over natural language. Existing approaches to natural language inference are usually either robust but shallow or deep but brittle. The system proposed in this paper aims to sit between these extremes and to avoid, for instance, the errors introduced when translating natural language into first-order logic.
One key concept in the theory of natural logic is “monotonicity”: instead of translating sentences into quantified formulas, the concepts or constraints they express are expanded or contracted. In this way, linguistic expressions can be represented as upward-monotone, downward-monotone, or non-monotone semantic functions.
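As a toy sketch of this idea (the hypernym table and function names below are our own illustration, not anything from the paper): in an upward-monotone context, replacing a term with a more general one preserves entailment, while in a downward-monotone context, replacing it with a more specific one does.

  # Toy lexical ordering: each word maps to a more general word.
  HYPERNYM = {"dog": "animal", "animal": "creature", "run": "move"}

  def is_more_general(a, b):
      """True if `a` is at least as general as `b` under the toy ordering."""
      while b is not None:
          if a == b:
              return True
          b = HYPERNYM.get(b)
      return False

  def edit_preserves_entailment(monotonicity, old, new):
      """Broadening is licensed in upward-monotone contexts,
      narrowing in downward-monotone contexts, nothing otherwise."""
      if monotonicity == "up":
          return is_more_general(new, old)
      if monotonicity == "down":
          return is_more_general(old, new)
      return False

  # "Some dogs run" entails "Some animals run": 'some' is upward-monotone.
  print(edit_preserves_entailment("up", "dog", "animal"))    # True
  # "No animals run" entails "No dogs run": 'no' is downward-monotone.
  print(edit_preserves_entailment("down", "animal", "dog"))  # True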
The developed system is called NatLog and has an architecture with three main stages: linguistic preprocessing (parsing the input sentences and marking monotonicity), alignment (aligning the premise and the hypothesis via a sequence of atomic edits), and entailment classification (predicting an entailment relation for each edit based solely on lexical features, independent of context).
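A minimal sketch of how the three stages might fit together, assuming toy stub implementations (all function names and behaviors below are ours for illustration, not NatLog's actual code):

  def preprocess(sentence):
      """Stage 1: parse the sentence and mark each token's monotonicity
      (stubbed here as uniformly upward-monotone)."""
      return [(tok, "up") for tok in sentence.split()]

  def align(premise_toks, hypothesis_toks):
      """Stage 2: align premise and hypothesis as a sequence of atomic
      edits (only same-position substitutions in this stub)."""
      return [("SUB", p, h)
              for p, h in zip(premise_toks, hypothesis_toks)
              if p[0] != h[0]]

  def classify_edit(edit):
      """Stage 3: predict an entailment relation for one edit from
      lexical features alone (stubbed as a constant prediction)."""
      return "forward-entailment"

  def entails(premise, hypothesis):
      p, h = preprocess(premise), preprocess(hypothesis)
      relations = [classify_edit(e) for e in align(p, h)]
      # The per-edit relations are then composed into a global answer.
      return all(r in ("equivalent", "forward-entailment") for r in relations)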
Comments
- This work represents the first computational model of natural logic
- In natural logic, entailment is defined as an ordering relation over expressions of all semantic types, not just sentences (see the toy sketch after this list)
- The training data used to predict the entailment relations was created for this specific experiment.
- The system was tested on the FraCaS test suite, which contains inference problems extracted from a textbook on formal semantics. Each problem has three possible answers: yes, no, unknown
- It was also tested on the RTE3 test suite, which contains much longer and more “natural” premises
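As a toy illustration of that ordering over several semantic types (the data and the leq helper are our own, not from the paper):

  # Entailment as a partial order over expressions of different types.
  ORDER = {
      ("dog", "animal"): True,     # nouns: dog <= animal
      ("sprint", "run"): True,     # verbs: sprint <= run
      ("every", "some"): True,     # quantifiers (on non-empty domains)
  }

  def leq(a, b):
      """True if expression `a` entails (is ordered below) expression `b`."""
      return a == b or ORDER.get((a, b), False)

  print(leq("dog", "animal"))  # True: 'dog' entails 'animal'
  print(leq("animal", "dog"))  # False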
Discussion
- How much does this approach contribute beyond existing logical-inference approaches for natural language?
- Language is fuzzy, and this approach captures only simple sentences; we are not sure it can be generalized easily.
- It is good that the examples in the test data allow three different answers.
- Disadvantage: if many atomic edits are combined, errors compound and the probability of getting the right overall answer drops (see the sketch at the end of this section).
- We liked the evaluation presented in the paper and the interpretation of the results (which is not very common in semantics). It was also good that they digitized a textbook of formal semantics to build the FraCaS test suite.
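A back-of-the-envelope illustration of the compounding problem raised above, with an arbitrarily chosen per-edit accuracy of 0.9: if each atomic edit is classified correctly with probability p, a chain of n independent edits is fully correct only with probability p**n.

  p = 0.9
  for n in (1, 5, 10):
      print(n, round(p ** n, 3))  # 1 -> 0.9, 5 -> 0.59, 10 -> 0.349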