
Institute of Formal and Applied Linguistics Wiki



Statistical Significance Tests for Machine Translation Evaluation

Koehn, EMNLP 2004, link

Questions

1) BLEU_MT1 = 1, BLEU_MT2 = 0 (or undefined).
BLEU_MT3 = 0.2 according to the formula as printed in the paper, which is incorrect: the n-gram precisions should be combined by a geometric mean,
i.e. exp(1/4 · (ln(4/5) + ln(3/4) + ln(2/3) + ln(1/2))) ≈ 0.669.
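The correction above can be checked directly. A minimal sketch, assuming the four values 4/5, 3/4, 2/3, 1/2 are the 1- to 4-gram precisions from the worked example:

```python
import math

# Hypothetical 1- to 4-gram precisions from the worked example above.
precisions = [4/5, 3/4, 2/3, 1/2]

# Plain product of the precisions (what the misprinted formula yields):
product = math.prod(precisions)

# Geometric mean, i.e. exp of the averaged log precisions (correct BLEU core):
bleu = math.exp(sum(math.log(p) for p in precisions) / len(precisions))

print(round(product, 3))  # 0.2
print(round(bleu, 3))     # 0.669
```

The geometric mean is equivalent to the fourth root of the product, so 0.2^(1/4) ≈ 0.669.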

2) We should somehow sample the corpus (e.g. take every k-th sentence, or draw entirely random samples).
This might, however, cause problems for systems that try to benefit from broader-context features (beyond the sentence, e.g. to promote discourse coherence), so it may be better to sample batches of consecutive sentences (e.g. 10).
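The batch-sampling idea above can be sketched as follows; the function name and the parameter defaults (batch size 10, 100 batches) are illustrative assumptions, not from the paper:

```python
import random

def sample_batches(sentences, batch_size=10, n_batches=100, seed=0):
    """Draw a random sample of batches of consecutive sentences, so that
    systems using cross-sentence context are not penalised.
    Sketch only; parameters are illustrative."""
    rng = random.Random(seed)
    # Candidate batch start positions, aligned to batch boundaries.
    starts = list(range(0, len(sentences) - batch_size + 1, batch_size))
    chosen = rng.sample(starts, n_batches)
    return [sentences[s:s + batch_size] for s in sorted(chosen)]

corpus = [f"sent {i}" for i in range(30000)]
batches = sample_batches(corpus)
print(len(batches), len(batches[0]))  # 100 10
```

Sampling whole batches keeps each sampled unit internally consecutive, while the batches themselves are still spread randomly over the corpus.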

Presentation

Introduction

Philipp Koehn's paper, so the MT system is probably Pharaoh (the predecessor of Moses).

How to obtain multiple translation systems? Translate into English from a number of different source languages (systems trained on Europarl).

Section 3

Initial experiment: divide the 30,000 translated sentences into consecutive chunks of 300 sentences each (100 test sets). The BLEU scores measured on the individual test sets then vary quite a bit.

For comparison, broad sampling was used to create 100 test sets:
sentences 1, 301, 601, … → test set 1
sentences 2, 302, 602, … → test set 2
BLEU scores become more stable ⇒ this procedure leads to a more representative test set.
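The interleaved split above can be sketched with a simple modulo assignment; the function name is an illustrative assumption, and the number of sets is left as a parameter:

```python
def interleaved_split(sentences, n_sets):
    """Broad sampling: assign sentence i to test set i mod n_sets,
    so each set samples the whole corpus evenly. Sketch only."""
    sets = [[] for _ in range(n_sets)]
    for i, sent in enumerate(sentences):
        sets[i % n_sets].append(sent)
    return sets

corpus = list(range(1, 30001))         # sentence ids 1..30000
sets = interleaved_split(corpus, 300)  # stride of 300, matching the notes
print(sets[0][:3])  # [1, 301, 601]
```

Each resulting set draws sentences uniformly from the whole corpus, which is why its BLEU score is more stable than that of a consecutive chunk.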

We are not sure whether this can really be generalized – we only compare two very similar systems (the same system trained on different data).

Section 4

Significance tests are used to estimate an interval in which the true system score lies. We use Student's t-distribution, which approximates the normal distribution under which the sentence scores are assumed to be distributed (the variance must be estimated from the sample).

We are actually interested in the mean of the sentence scores, which (given enough samples) is approximately normally distributed by the Central Limit Theorem, so the assumption is reasonable.
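The t-based interval for the mean sentence score can be sketched as below; the sentence scores and the critical value are illustrative (2.262 is the standard two-sided 95% t critical value for 9 degrees of freedom), not numbers from the paper:

```python
import math
import statistics

def t_confidence_interval(scores, t_crit):
    """Confidence interval for the mean sentence score using Student's t.
    Sketch only: t_crit must match the chosen level and len(scores) - 1 d.f."""
    n = len(scores)
    mean = statistics.fmean(scores)
    se = statistics.stdev(scores) / math.sqrt(n)  # estimated standard error
    return mean - t_crit * se, mean + t_crit * se

# Hypothetical per-sentence scores of one system:
scores = [0.2, 0.25, 0.22, 0.3, 0.28, 0.21, 0.26, 0.24, 0.27, 0.23]
lo, hi = t_confidence_interval(scores, t_crit=2.262)  # 95%, d.f. = 9
print(round(lo, 3), round(hi, 3))
```

With many samples the t critical value approaches the normal one (1.96 at 95%), reflecting the CLT argument above.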

Section 5

We cannot use the t-distribution for BLEU because BLEU does not factorise like the metric in Section 4: BLEU computed on the full test set is not the average of per-sentence BLEU scores (the n-gram counts are pooled over the whole corpus before taking the geometric mean, and the brevity penalty is also corpus-level).
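A toy numeric sketch of the non-decomposability, using only a single (unigram-style) precision and no brevity penalty; all counts are hypothetical:

```python
# Per-sentence n-gram matches and totals for a two-sentence "corpus":
matches = [4, 1]   # matched n-grams per sentence (hypothetical)
totals = [5, 10]   # n-grams per hypothesis sentence (hypothetical)

# Corpus-level precision pools the counts before dividing:
corpus_precision = sum(matches) / sum(totals)  # 5/15

# Averaging per-sentence precisions gives a different number:
avg_sentence_precision = sum(m / t for m, t in zip(matches, totals)) / len(matches)

print(round(corpus_precision, 3))        # 0.333
print(round(avg_sentence_precision, 3))  # 0.45
```

Since even one pooled precision already fails to equal the sentence average, the full BLEU score (geometric mean of four pooled precisions times a corpus-level brevity penalty) cannot be written as a mean of per-sentence scores, so the t-test machinery of Section 4 does not apply.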

