Extracting Parallel Sentences from Comparable Corpora using Document Level Alignment
Jason R. Smith, Chris Quirk, and Kristina Toutanova
The article is about parallel sentence extraction from Wikipedia. This resource can be viewed as a comparable corpus in which the document alignment is already provided by the interwiki links.
The authors train three models: a binary classifier, a ranking model, and a conditional random field (CRF).
When the binary classifier is used, there is a substantial class imbalance: O(n) positive examples and O(n²) negative examples.
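To make the imbalance concrete, here is a minimal Python sketch (our illustration, not the authors' code) that enumerates the candidate pairs a binary classifier must score for one linked document pair:

from itertools import product

# Minimal sketch: for linked documents with n source and n target
# sentences, the binary classifier scores every cross-product pair.
def candidate_pairs(source_sents, target_sents):
    """Enumerate all n * n candidate sentence pairs."""
    return list(product(source_sents, target_sents))

source = ["s1", "s2", "s3"]  # n = 3 source sentences
target = ["t1", "t2", "t3"]  # n = 3 target sentences

pairs = candidate_pairs(source, target)
print(len(pairs))  # 9 candidates, of which at most ~3 are parallel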
The ranking model selects either a sentence in the target document or 'null' for each sentence in the source document. This avoids the class imbalance issue of the binary classifier.
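A minimal sketch of that decision rule, assuming a learned scoring function score(s, t) and a fixed score for the 'null' option (both hypothetical names, not the paper's implementation):

def rank_align(source_sents, target_sents, score, null_score=0.0):
    """For each source sentence pick the best-scoring target sentence,
    or None (playing the role of 'null') if nothing beats null_score."""
    alignment = {}
    for s in source_sents:
        best, best_score = None, null_score
        for t in target_sents:
            if score(s, t) > best_score:
                best, best_score = t, score(s, t)
        alignment[s] = best
    return alignment

Because each source sentence yields exactly one decision, the number of training examples grows as O(n) rather than O(n²).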
A conditional random field is a type of discriminative undirected probabilistic graphical model. It is most often used for labeling or parsing of sequential data, such as natural language text.
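A minimal sketch of how such a model scores a whole sequence of alignment decisions, assuming hypothetical emission and transition feature-scoring functions (our illustration, not the paper's exact model):

# Labels y[i] are target-sentence indices (or None for 'null') assigned
# to the i-th source sentence. A first-order CRF scores the sequence
# jointly, so each decision can depend on the previous one.
def crf_sequence_score(y, emission, transition):
    """Unnormalized log-linear score: sum of per-position emission
    scores plus transition scores between consecutive labels."""
    total = 0.0
    for i, label in enumerate(y):
        total += emission(i, label)
        if i > 0:
            total += transition(y[i - 1], label)
    return total  # P(y | x) = exp(total) / Z, normalizing over all y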
The last two features are independent of word alignment. All of these features are defined on sentence pairs and are included in the binary classification and ranking models.
One set of features bins the distance between the previous aligned sentence and the current one. Another set looks at the absolute difference between the expected position (one after the previous aligned sentence) and the actual position.
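A minimal reconstruction of these features in Python; the bin boundaries below are our assumptions for illustration, not the paper's exact values:

def order_features(prev_target_pos, curr_target_pos):
    """Features relating the current aligned target position to the
    previously aligned one."""
    feats = {}
    distance = curr_target_pos - prev_target_pos
    # Binned distance between the previous and current aligned sentences.
    if distance <= 0:
        feats["dist_bin=backward"] = 1.0
    elif distance <= 2:
        feats["dist_bin=1-2"] = 1.0
    elif distance <= 5:
        feats["dist_bin=3-5"] = 1.0
    else:
        feats["dist_bin=>5"] = 1.0
    # Expected position is one after the previously aligned sentence.
    expected = prev_target_pos + 1
    feats["abs_expected_gap"] = abs(curr_target_pos - expected)
    return feats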
Using these features, the authors train the weights of a log-linear ranking model for P(wt|ws, T, S), where wt is a word in the target language, ws is a word in the source language, and T and S are the linked articles in the target and source languages, respectively. The model is trained on a small set of annotated Wikipedia article pairs.
Using this model, the authors generate a new translation table, which is used to define another HMM word-alignment model for use in the sentence extraction models.
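A minimal sketch of such a log-linear lexicon model in Python (our illustration; features and weights are hypothetical stand-ins, not the paper's actual feature set or learned parameters):

import math

def lexicon_prob(wt, ws, T, S, candidates, features, weights):
    """P(wt | ws, T, S) as a softmax over candidate target words.
    `features(cand, ws, T, S)` returns a name-to-value dict;
    `weights` maps feature names to learned weights (hypothetical)."""
    def score(cand):
        return sum(weights.get(name, 0.0) * value
                   for name, value in features(cand, ws, T, S).items())
    z = sum(math.exp(score(c)) for c in candidates)  # partition function
    return math.exp(score(wt)) / z

The new translation table can then be read off by evaluating this distribution for the word pairs of interest.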
Data for evaluation:
20 Wikipedia article pairs for Spanish-English, Bulgarian-English, and German-English. Positive examples in the datasets are sentence pairs that are direct translations, as well as pairs that are mostly parallel with some missing words.
Evaluation measures:
In the first set of experiments they did not include the Wikipedia features and the lexicon features. They evaluate the binary classifier, ranking, and CRF models.
In the second set of experiments they use the Wikipedia-specific features. They evaluate the ranker and the CRF. As these two models are asymmetric, they ran the models in both directions and combined their outputs by intersection.
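A minimal sketch of the intersection step, with pairs from the reverse run assumed to be already flipped into source-target orientation:

def intersect_runs(fwd_pairs, rev_pairs):
    """Keep a sentence pair only if both directed runs proposed it.
    Each element is a (source_index, target_index) tuple."""
    return set(fwd_pairs) & set(rev_pairs)

# Example: only (0, 0) is proposed by both runs and survives.
print(intersect_runs({(0, 0), (1, 2)}, {(0, 0), (2, 2)}))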
The SMT evaluation used the BLEU score. For each language they explored two training conditions:
1) Medium
Training set data:
2) Large
Training set data included:
In each condition they explored the impact of including parallel sentences automatically extracted from Wikipedia.
Our understanding of this feature is:
TOPIC A: EN ↔ ES
↓ ↓
TOPIC B: EN ↔ ES
where
↓ is a link
↔ is an interwiki link
— Comments by Angelina Ivanova