Institute of Formal and Applied Linguistics Wiki



Phrase-based Statistical Language Generation using Graphical Models and Active Learning

François Mairesse, Milica Gašić, Filip Jurčíček, Simon Keizer, Blaise Thomson, Kai Yu, Steve Young
ACL 2010
http://aclweb.org/anthology-new/P/P10/P10-1157.pdf

Presented by Ondřej Dušek
Report by Honza Václ

Pre-sent Exercises

The following exercises were sent to the mailing list a few days before the lecture and were intended to make the readers think about the semantic stack representation used in the paper. They were not answered in much detail during the lecture; we only went through them briefly to make sure the basic concepts were understood. The solutions below are therefore mostly my own interpretation and are not guaranteed to be 100 percent correct.

Ex. 1) Try to think of a semantic stack representation and a dialogue act for the following sentence:
“For Chinese food, The Golden Palace restaurant is the best one in the centre of the city. It is located on the side of the river near East Road.”
The solution could look something like this (inspired by Table 1):

surface form           sem. stack (read bottom-up)
-------------------    ---------------------------------
for Chinese food       inform | food | Chinese
The Golden Palace      inform | name | The Golden Palace
restaurant             inform | type | restaurant
is the best one        inform
in the                 inform | area
centre of the city     inform | area | centre
It is located          inform
on the                 inform | area
side of the river      inform | area | riverside
near East Road         inform | near | name | East Road

Note: The semantic stack for a given surface phrase is the whole sequence of symbols listed next to it, read bottom-up, with the dialogue act type (“inform”) always at the bottom of the stack.
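
To make the stack notation concrete, here is a minimal sketch (my own, not from the paper or the lecture) of the Ex. 1 alignment as a Python data structure; the exact phrase segmentation and slot names follow my reconstruction of the table above.

  # Sketch only: the Ex. 1 phrase/stack alignment as plain Python data.
  # Each stack is listed bottom-to-top, starting with the act type "inform";
  # the phrase boundaries are my reconstruction of the table above.
  alignment = [
      ("for Chinese food",   ["inform", "food", "Chinese"]),
      ("The Golden Palace",  ["inform", "name", "The Golden Palace"]),
      ("restaurant",         ["inform", "type", "restaurant"]),
      ("is the best one",    ["inform"]),
      ("in the",             ["inform", "area"]),
      ("centre of the city", ["inform", "area", "centre"]),
      ("It is located",      ["inform"]),
      ("on the",             ["inform", "area"]),
      ("side of the river",  ["inform", "area", "riverside"]),
      ("near East Road",     ["inform", "near", "name", "East Road"]),
  ]

  # The value-bearing stacks together carry the dialogue act content;
  # one-element and partial stacks only glue the surface form together.
  mandatory = [stack for _, stack in alignment if len(stack) >= 3]
  print(mandatory)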

Ex. 2) Try to think of a surface realization for the following dialogue act:
reject(type=placetoeat,eattype=restaurant,pricerange=cheap,area=citycentre,food=Indian)
Solution (one of many possible):
“I am sorry but I have no information about any cheap Indian restaurant in the centre of the city.”

Note: Ondřej admitted that the syntax of the dialogue act in Ex. 2 is not exactly the same as the one used throughout the paper; it was taken directly from the corpus used by the authors (which is publicly available).
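
As a small aside, the corpus-style act from Ex. 2 is easy to turn into slot–value pairs. The snippet below is a hypothetical helper (the regular expression and the flat slot=value syntax are my assumptions based on the act shown above, not code from the paper).

  import re

  def parse_dialogue_act(da):
      """Split an act of the form act_type(slot=value,slot=value,...) into its
      type and a slot-value dictionary. Assumes the flat, comma-separated
      syntax shown in Ex. 2 (no nesting, no quoted values)."""
      match = re.fullmatch(r"(\w+)\((.*)\)", da.strip())
      act_type, body = match.group(1), match.group(2)
      slots = dict(pair.split("=", 1) for pair in body.split(",") if pair)
      return act_type, slots

  act_type, slots = parse_dialogue_act(
      "reject(type=placetoeat,eattype=restaurant,pricerange=cheap,"
      "area=citycentre,food=Indian)"
  )
  print(act_type)  # reject
  print(slots)     # {'type': 'placetoeat', 'eattype': 'restaurant', ...}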

Short Introduction to NLG

NLG is traditionally split into three stages: content (document) planning, sentence planning, and surface realization. (This paper is concerned mainly with the second and the third of these.)

Section 1

The BAGEL system tries to employ statistics in NLG earlier than any of the previous systems, namely already in the generation phase itself. The previous systems used statistics either only for reranking the generated outputs, or for choosing among generation rules (or templates), which were themselves handcrafted.
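
To make the contrast concrete, here is a toy sketch (my own, not the authors' code, with invented phrase probabilities): earlier systems let handcrafted rules produce complete candidates and used statistics only to pick among them, whereas the BAGEL idea is to let a learned model choose the phrases during generation itself.

  # (a) Overgenerate-and-rerank: handcrafted rules produce whole sentences,
  #     a statistical score is applied only afterwards.
  def rerank(candidates, score):
      return max(candidates, key=score)

  # (b) Statistics inside generation: a learned phrase table (invented numbers)
  #     picks the most probable phrase for each semantic stack directly.
  #     This is only a greedy stand-in for the paper's DBN-based decoding.
  phrase_table = {
      ("inform", "type", "restaurant"): [("restaurant", 0.9), ("place to eat", 0.1)],
      ("inform", "food", "Indian"):     [("serving Indian food", 0.7), ("for Indian food", 0.3)],
  }

  def generate(stacks):
      return " ".join(max(phrase_table[s], key=lambda p: p[1])[0] for s in stacks)

  print(generate([("inform", "type", "restaurant"), ("inform", "food", "Indian")]))
  # -> restaurant serving Indian food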

Section 2 - Semantic Representation

Section 3 - DBN

Section 4 - Active Learning

Section 5 - Methodology

Overall Notes and Summary

Nice ideas, namely:

One idea to think about (concerning NLG in general):
Would it be meaningful to cycle NLG with SLU (spoken language understanding) for the sake of (automatic) training?
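
A purely hypothetical sketch of what such a cycle could look like (the nlg and slu objects are placeholders, not components described in the paper): generate a sentence from a dialogue act, parse it back with SLU, and keep the pair as extra training data only if the round trip reproduces the original act.

  def nlg_slu_self_training(dialogue_acts, nlg, slu):
      """Hypothetical NLG<->SLU loop: keep only (act, sentence) pairs that the
      understanding component maps back onto the original act, and feed them
      to the generator as additional, automatically obtained training data."""
      new_data = []
      for act in dialogue_acts:
          sentence = nlg.generate(act)      # placeholder generator
          recovered = slu.parse(sentence)   # placeholder understanding component
          if recovered == act:              # round-trip consistency check
              new_data.append((act, sentence))
      nlg.retrain(new_data)
      return new_data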

