
Architecture Description

The Czech Companion follows the original idea of Reminiscing about the User's Photos, taking advantage of the data collected in the first phase of the project (using a Wizard-of-Oz setting). The full recorded corpus was transcribed, manual speech reconstruction was performed on 92.6% of the utterances1), and a pilot dialog act annotation was carried out on a sample of 1,000 sentences.

The architecture is the same as in the English version, i.e. a set of modules communicating through the Inamode (TID) backbone. However, the set of modules is different, see Figure 1. Regarding the physical setup, the Czech version runs on two notebook computers connected by a local network. One serves as the Speech Client, running the modules dealing with ASR, TTS and ECA; the other serves as the NLP Server.
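To make the two-machine setup concrete, the following is a minimal sketch of how a module on the Speech Client might exchange a message with the NLP Server. The actual system communicates through the Inamode (TID) backbone, whose API is not documented here, so plain TCP with a JSON payload stands in for it; the host name, port and message fields are hypothetical.

import json
import socket

NLP_SERVER = ("nlp-server.local", 9000)  # hypothetical address of the NLP Server machine

def send_utterance(text):
    """Send a recognized utterance from the Speech Client and wait for the system reply."""
    message = json.dumps({"module": "ASR", "utterance": text}) + "\n"
    with socket.create_connection(NLP_SERVER) as conn:
        conn.sendall(message.encode("utf-8"))
        reply = conn.makefile("r", encoding="utf-8").readline()
    return json.loads(reply)  # e.g. {"module": "NLG", "response": "..."}

if __name__ == "__main__":
    print(send_utterance("to je moje dcera na svatbě"))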

The NLU pipeline, DM, and NLG modules at the NLP Server are implemented using CU's own TectoMT platform, which provides access to a single in-memory data representation through a common API. This eliminates the overhead of repeated serialization and XML parsing that an Inamode-based solution would otherwise impose.
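The design can be illustrated by the following sketch of a pipeline of processing blocks operating on one shared document object. TectoMT itself is a Perl platform; this Python rendering only conveys the idea, and all class and method names here are assumptions, not the actual TectoMT API.

from dataclasses import dataclass, field

@dataclass
class Document:
    """Single in-memory representation shared by all NLU/DM/NLG blocks."""
    text: str
    annotations: dict = field(default_factory=dict)

class Block:
    """Common API: every processing module implements process_document()."""
    def process_document(self, doc):
        raise NotImplementedError

class Tagger(Block):
    def process_document(self, doc):
        doc.annotations["pos"] = ["..."]   # placeholder for real morphological tagging

class Parser(Block):
    def process_document(self, doc):
        doc.annotations["tree"] = "..."    # placeholder for real syntactic parsing

def run_pipeline(doc, blocks):
    # All blocks work on the same object, so no serialization or XML
    # parsing is needed between the stages.
    for block in blocks:
        block.process_document(doc)
    return doc

doc = run_pipeline(Document("to je moje žena"), [Tagger(), Parser()])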

The ASR module, based on hidden Markov models, transforms input speech into text, providing the front end between the user and the Czech demonstrator. The ASR output is smoothed into a form close to standard written text by the Speech Reconstruction module, in order to bridge the gap between disfluent spontaneous speech and standard grammatical sentences.
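As a toy illustration of what such smoothing involves, the sketch below removes filled pauses and immediate word repetitions and restores basic sentence form. The real Speech Reconstruction module is far richer; the filler list and rules here are illustrative assumptions only.

FILLERS = {"uh", "um", "eh", "hm"}  # hypothetical filled-pause tokens

def reconstruct(asr_output):
    tokens = asr_output.lower().split()
    cleaned = []
    for token in tokens:
        if token in FILLERS:
            continue                        # drop filled pauses
        if cleaned and cleaned[-1] == token:
            continue                        # collapse immediate repetitions
        cleaned.append(token)
    sentence = " ".join(cleaned)
    return sentence[:1].upper() + sentence[1:] + "."  # capitalize and add punctuation

print(reconstruct("uh this is is my my daughter"))  # -> "This is my daughter."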

Results of part-of-speech tagging are passed on to the Maximum Spanning Tree syntactic parsing module. Once the syntactic parse is available, a tectogrammatical representation of the utterance is constructed. The annotation of meaning at the tectogrammatical layer is more explicit than the syntactic parse and lends itself to information extraction. The Named Entity Recognition module then marks personal names and geographical locations. Finally, the dialog act classifier uses a number of lexical and morphological features to determine the type of the user utterance (question, acknowledgement, etc.), which is a useful clue for Dialog Manager decisions.
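A minimal sketch of a feature-based dialog act classifier of the kind described above is given below. The feature templates, the wh-word list and the act inventory are illustrative assumptions, not the features or labels actually used in the system.

def extract_features(tokens):
    return {
        "first_token": tokens[0].lower(),
        "last_token": tokens[-1],
        "has_wh_word": any(t.lower() in {"kdo", "co", "kde", "kdy", "jak"} for t in tokens),
        "length": len(tokens),
    }

def classify_dialog_act(tokens):
    feats = extract_features(tokens)
    if feats["last_token"] == "?" or feats["has_wh_word"]:
        return "question"
    if feats["length"] <= 2:
        return "acknowledgement"
    return "statement"

print(classify_dialog_act("kdo je na té fotce ?".split()))  # -> "question"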

The dialog is driven by a Dialog Manager component supplied by USFD (originally developed for the English Senior Companion prototype).
CU has supplied the transition networks covering the following topics: retired_person, husband, child, wife, wedding and Christmas.
The Dialog Manager passes the sentence to be generated, together with the appropriate communicative function, to the NLG module. The TTS module, integrated with the TID avatar, transforms system responses from text into speech and a visual representation (facial expressions, gestures). As such, it provides the interface between the demonstrator and the user.
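For illustration, a toy transition network for one of the topics is sketched below, showing how each state yields a communicative function and a sentence that would be handed over to NLG and TTS. The states, prompts and communicative functions are assumptions for the sake of the example, not the actual networks supplied by CU.

WEDDING_NETWORK = {
    "start":      {"function": "open_topic",  "prompt": "Kdy jste měli svatbu?",   "next": "ask_place"},
    "ask_place":  {"function": "ask_info",    "prompt": "A kde se svatba konala?", "next": "ask_guests"},
    "ask_guests": {"function": "ask_info",    "prompt": "Kdo na svatbě byl?",      "next": "close"},
    "close":      {"function": "close_topic", "prompt": "To muselo být krásné.",   "next": None},
}

def next_turn(state):
    """Return the communicative function, the sentence for NLG/TTS, and the next state."""
    node = WEDDING_NETWORK[state]
    return node["function"], node["prompt"], node["next"]

state = "start"
while state is not None:
    function, prompt, state = next_turn(state)
    print("[" + function + "] " + prompt)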

The Knowledge Base consists of objects (persons, events, photos) that model the information acquired in the course of the dialog. These objects also provide very basic reasoning (e.g. accounting for the link between the date-of-birth and age properties). Each object property can store multiple values with varying levels of confidence2), as well as values restricted to a defined time span.
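The following sketch illustrates this property design: each property holds several candidate values with confidences and optional validity spans, and a small amount of reasoning derives the age from the date of birth. The class names and the way confidences are handled are assumptions, not the actual Knowledge Base implementation.

from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Value:
    value: object
    confidence: float                  # e.g. from the ASR score or lexical clues
    valid_from: Optional[date] = None  # optional restriction to a time span
    valid_to: Optional[date] = None

@dataclass
class Person:
    name: str
    properties: dict = field(default_factory=dict)

    def add(self, prop, value, confidence):
        """Store another candidate value for a property, keeping the old ones."""
        self.properties.setdefault(prop, []).append(Value(value, confidence))

    def best(self, prop):
        """Return the highest-confidence value of a property, if any."""
        values = self.properties.get(prop, [])
        return max(values, key=lambda v: v.confidence).value if values else None

    def age(self, today):
        """Very basic reasoning: derive the age from the stored date of birth."""
        born = self.best("date_of_birth")
        if born is None:
            return None
        return today.year - born.year - ((today.month, today.day) < (born.month, born.day))

p = Person("Marie")
p.add("date_of_birth", date(1948, 5, 3), confidence=0.8)
print(p.age(date(2010, 6, 1)))  # -> 62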

1) Manual speech reconstruction is still in progress.
2) Provided either by the ASR module or from lexical clues contained in the respective utterance.
