

Architecture Description

The ASR module, based on Hidden Markov Models, transforms input speech into text and provides the front end between the user and the Czech demonstrator. The ASR output is smoothed into a form close to standard written text using statistical machine translation, in order to bridge the gap between disfluent spontaneous speech and a standard grammatical sentence.
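
As a rough illustration of the smoothing step's input/output contract, the Python sketch below maps a disfluent ASR hypothesis to text closer to the written standard. It is a toy rule-based stand-in, not the SMT-based module itself; the filler list, repair table and function name are invented for the example.

  # Toy stand-in for the SMT-based smoothing step: it only illustrates the
  # kind of mapping performed (disfluent ASR hypothesis -> written-like text).
  FILLERS = {"uh", "um", "hm"}                 # example filler words
  REPAIRS = {"i i": "i", "the the": "the"}     # example repetition repairs

  def smooth_asr_output(asr_hypothesis: str) -> str:
      """Map a disfluent ASR hypothesis to text closer to the written standard."""
      tokens = [t for t in asr_hypothesis.lower().split() if t not in FILLERS]
      text = " ".join(tokens)
      for disfluent, fluent in REPAIRS.items():
          text = text.replace(disfluent, fluent)
      return text.capitalize() + "."

  if __name__ == "__main__":
      print(smooth_asr_output("uh i i was born in um nineteen fifty"))
      # -> I was born in nineteen fifty.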

The natural language understanding pipeline starts with part-of-speech tagging. Its result is passed on to the Maximum Spanning Tree syntactic parsing module. The tectogrammatical representation of the utterance is constructed once the syntactic parse is available. The annotation of sentence meaning at the tectogrammatical layer is more explicit than the syntactic parse and lends itself better to information extraction.
The Named Entity Recognition module marks personal names and geographical locations.
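
The ordering of the analysis stages can be summarized by the Python sketch below. Each stage function is a placeholder standing in for one of the real modules (Czech tagger, MST parser, tectogrammatical conversion, named entity recognizer); only the sequencing reflects the pipeline described above.

  # Sketch of the order of NLU stages; every placeholder only adds a dummy
  # annotation to the shared sentence structure.
  from typing import Any, Dict, List

  Sentence = Dict[str, Any]        # one utterance, enriched stage by stage

  def pos_tag(sentence: Sentence) -> Sentence:
      sentence["tags"] = ["X" for _ in sentence["tokens"]]        # placeholder
      return sentence

  def mst_parse(sentence: Sentence) -> Sentence:
      sentence["heads"] = [0 for _ in sentence["tokens"]]         # placeholder
      return sentence

  def build_tectogrammatical_layer(sentence: Sentence) -> Sentence:
      # placeholder: keeps word-like tokens only; the real layer keeps
      # content words as nodes and hides auxiliary words
      sentence["t_nodes"] = [t for t in sentence["tokens"] if t.isalpha()]
      return sentence

  def recognize_named_entities(sentence: Sentence) -> Sentence:
      sentence["entities"] = []                                   # placeholder
      return sentence

  PIPELINE = [pos_tag, mst_parse, build_tectogrammatical_layer,
              recognize_named_entities]

  def analyse(tokens: List[str]) -> Sentence:
      sentence: Sentence = {"tokens": tokens}
      for stage in PIPELINE:
          sentence = stage(sentence)
      return sentence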

The Czech Companion follows the original idea of Reminiscing about the User's Photos,
taking advantage of the data collected in the first phase of the project (using a Wizard-of-Oz setting). The full recorded corpus was transcribed, manual speech reconstruction was done on 92.6% of the utterances1), and a pilot dialog act annotation was performed on a sample of 1000 sentences.

The architecture is the same as in the English version, i.e. a set of modules communicating through the Inamode (TID) backbone. However, the set of modules is different, see Figure 1. Regarding the physical setup, the Czech version runs on two notebook computers connected by a local network: one serves as a Speech Client, running the modules dealing with ASR, TTS and ECA; the other serves as an NLP Server.
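
In this setup the Speech Client essentially ships recognized text to the NLP Server and reads back the reply to be synthesized. The Python sketch below illustrates only that data flow; the actual inter-module communication goes through the Inamode backbone, and the address, port and message framing shown here are invented.

  # Illustration of the Speech Client -> NLP Server data flow over the local
  # network.  The real traffic goes through the Inamode backbone; the plain
  # socket code, address and message framing below are invented.
  import socket

  NLP_SERVER = ("192.168.0.2", 9000)      # illustrative NLP Server address

  def send_utterance_to_nlp_server(recognized_text: str) -> str:
      """Speech Client side: forward ASR output, receive the system reply."""
      with socket.create_connection(NLP_SERVER) as conn:
          conn.sendall(recognized_text.encode("utf-8"))
          conn.shutdown(socket.SHUT_WR)   # signal the end of the message
          reply = conn.recv(65536)
      return reply.decode("utf-8")        # text handed over to TTS and ECA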

The NLU pipeline, DM, and NLG modules on the NLP Server are implemented using CU's own TectoMT platform, which provides access to a single in-memory data representation through a common API. This eliminates the overhead of repeated serialization and XML parsing that an Inamode-based solution would otherwise impose.
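
The saving comes from all modules working over one shared document object instead of exchanging serialized XML between every pair of stages. The Python sketch below shows the pattern only; the class and method names are illustrative and do not correspond to the actual TectoMT API.

  # Sketch of the shared in-memory pattern: every block annotates the same
  # document object in place, so nothing is serialized to XML and re-parsed
  # between stages.
  class Document:
      """Single in-memory structure shared by the NLU, DM and NLG modules."""
      def __init__(self, text: str) -> None:
          self.text = text
          self.annotations = {}

  class TaggingBlock:
      def process(self, doc: Document) -> None:
          doc.annotations["tags"] = doc.text.split()           # placeholder

  class ParsingBlock:
      def process(self, doc: Document) -> None:
          doc.annotations["parse"] = len(doc.annotations.get("tags", []))

  def run_scenario(doc: Document, blocks) -> Document:
      for block in blocks:         # no serialization between the blocks:
          block.process(doc)       # every block mutates the same object
      return doc

  doc = run_scenario(Document("Dobrý den"), [TaggingBlock(), ParsingBlock()])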

The Knowledge Base consists of objects (persons, events, photos) that model the information acquired in the course of the dialog. These objects also provide very basic reasoning (e.g. accounting for the link between the date-of-birth and age properties). Each object property can store multiple values with varying levels of confidence2), and values can be restricted to a defined time span.
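
A minimal Python sketch of such an object is given below: each property keeps several candidate values with a confidence score and an optional validity span, and the age is derived from the stored date of birth. All class and field names are illustrative, not taken from the actual implementation.

  # Sketch of a Knowledge Base object: multi-valued properties carry a
  # confidence score and an optional validity span; age is derived from the
  # date of birth.  Names are illustrative only.
  from dataclasses import dataclass, field
  from datetime import date
  from typing import List, Optional

  @dataclass
  class PropertyValue:
      value: object
      confidence: float                  # from ASR scores or lexical clues
      valid_from: Optional[date] = None  # optional restriction to a period
      valid_to: Optional[date] = None

  @dataclass
  class Person:
      name: str
      birth_date: List[PropertyValue] = field(default_factory=list)

      @staticmethod
      def best(values: List[PropertyValue]):
          """Return the stored value with the highest confidence, if any."""
          return max(values, key=lambda v: v.confidence).value if values else None

      def age(self, today: date) -> Optional[int]:
          """Very basic reasoning: derive age from the date of birth."""
          born = self.best(self.birth_date)
          if born is None:
              return None
          return today.year - born.year - (
              (today.month, today.day) < (born.month, born.day))

  user = Person("grandmother")
  user.birth_date.append(PropertyValue(date(1950, 5, 1), confidence=0.8))
  print(user.age(date(2009, 6, 1)))      # -> 59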

1)
Manual speech reconstruction is still in progress.
2)
Provided either by the ASR module or derived from lexical clues contained in the respective utterance.
