Institute of Formal and Applied Linguistics Wiki


draft [2009/09/30 20:54] (current)
ptacek
====== Architecture Description ======
  
The Czech Companion follows the original idea of Reminiscing about the User's Photos, taking advantage of the data collected in the first phase of the project (using a Wizard-of-Oz setting). The full recorded corpus was transcribed, manual speech reconstruction was done on 92.6% of utterances((Manual speech reconstruction is still in progress.)) and a pilot dialog act annotation was performed on a sample of 1000 sentences.
  
The architecture is the same as in the English version, i.e. a set of modules communicating through the Inamode (TID) backbone. However, the set of modules is different; see Figure 1. Regarding the physical setting, the Czech version runs on two notebook computers connected by a local network. One serves as a Speech Client, running modules dealing with ASR, TTS and ECA; the other one as an NLP Server.

The NLU pipeline, DM, and NLG modules at the NLP Server are implemented using CU's own TectoMT platform, which provides access to a single in-memory data representation through a common API. This eliminates the overhead of repeated serialization and XML parsing that an Inamode-based solution would otherwise impose.
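The gain from a single shared representation can be pictured with a small sketch; the ``Document`` class and module functions below are invented for illustration and are not the actual TectoMT API:

```python
# Illustrative sketch (not the real TectoMT API): all modules operate on
# one shared in-memory document instead of exchanging serialized XML.

class Document:
    """Single in-memory representation passed along the pipeline."""
    def __init__(self, text):
        self.text = text
        self.annotations = {}            # each module adds its own layer

def tagger(doc):
    doc.annotations["pos"] = [(w, "N") for w in doc.text.split()]

def parser(doc):
    # reads the tagger's layer directly -- no XML round-trip in between
    doc.annotations["deps"] = list(range(len(doc.annotations["pos"])))

doc = Document("hello world")
for module in (tagger, parser):
    module(doc)                          # every module mutates the same object
```

Each hop between modules is a plain method call on the shared object, which is exactly the serialization overhead a backbone-mediated exchange would add.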
  
<html><br/><hr/><br/></html>
  
The ASR module, based on Hidden Markov models, transforms input speech into text, providing a front-end between the user and the Czech demonstrator. The ASR output is smoothed into a form close to standard written text by the Speech Reconstruction module in order to bridge the gap between disfluent spontaneous speech and a standard grammatical sentence.
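The effect of the reconstruction step can be illustrated with a toy rule-based stand-in; the real module is statistical, and the filler list and rules below are invented for illustration only:

```python
# Toy, rule-based stand-in for the Speech Reconstruction module: the real
# component is statistical, but its effect on a disfluent ASR hypothesis
# looks roughly like this.

FILLERS = {"uh", "um", "er"}             # hypothetical filler inventory

def reconstruct(asr_output):
    tokens = [t for t in asr_output.split() if t not in FILLERS]
    deduped = []
    for t in tokens:                     # drop immediate word repetitions
        if not deduped or deduped[-1] != t:
            deduped.append(t)
    return " ".join(deduped).capitalize() + "."

reconstruct("uh the the photo is um from nineteen sixty")
# -> "The photo is from nineteen sixty."
```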
  
Results of the part-of-speech tagging are passed on to the Maximum Spanning Tree syntactic parsing module. A tectogrammatical representation of the utterance is constructed once the syntactic parse is available. Annotation of the meaning at the tectogrammatical layer is more explicit than the syntactic parse and lends itself to information extraction. The Named Entity Recognition module then marks personal names and geographical locations. Afterwards, the dialog act classifier uses a number of lexical and morphological features to assess the type of user utterance (such as question, acknowledgement, etc.), which is a useful clue for Dialog Manager decisions.
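The ordering of these stages can be summarized in a short sketch; the stage internals are placeholders and only the data flow mirrors the description above:

```python
# Illustrative sketch of the NLU pipeline ordering described above.
# Stage internals are placeholders; only the data flow mirrors the text.

def pos_tag(utt):
    utt["pos"] = [(tok, "X") for tok in utt["tokens"]]        # morphology
    return utt

def mst_parse(utt):
    utt["parents"] = [0] * len(utt["pos"])                    # dependency tree
    return utt

def tectogrammatical(utt):
    # constructed only once the syntactic parse is available
    utt["tecto"] = [tok for tok, _ in utt["pos"]]
    return utt

def ner(utt):
    utt["entities"] = [t for t in utt["tokens"] if t.istitle()]
    return utt

def dialog_act(utt):
    # the classifier combines lexical and morphological clues
    utt["act"] = "question" if utt["tokens"][-1] == "?" else "statement"
    return utt

PIPELINE = [pos_tag, mst_parse, tectogrammatical, ner, dialog_act]

utt = {"tokens": ["This", "is", "Prague", "?"]}
for stage in PIPELINE:
    utt = stage(utt)
```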
  
The dialog is driven by the Dialog Manager component by USFD (originally developed for the English Senior Companion prototype). CU has supplied the transition networks covering the following topics: retired_person, husband, child, wife, wedding and Christmas. The Dialog Manager provides information about the appropriate communicative function, along with the sentence that is to be generated, to the NLG module. The TTS module, integrated with the TID avatar, transforms system responses from text form into speech and visual (face expressions, gestures) representation. As such, it provides an interface between the demonstrator and the user.
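A transition network of this kind can be pictured as a labelled state graph; the states and moves below are invented for illustration and do not come from the actual CU networks:

```python
# Toy transition network for a single topic; the states and moves are
# invented for illustration and do not come from the actual CU networks.

WEDDING_NETWORK = {
    "start":     {"ask_when": "got_date"},
    "got_date":  {"ask_where": "got_place"},
    "got_place": {"wrap_up": "end"},
}

def run(network, moves):
    state = "start"
    for move in moves:
        state = network[state][move]     # follow the labelled transition
    return state

run(WEDDING_NETWORK, ["ask_when", "ask_where", "wrap_up"])   # -> "end"
```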
  
The Knowledge Base consists of objects (persons, events, photos) that model the information acquired in the course of the dialog. These objects also provide very basic reasoning (e.g. accounting for the link between the date of birth and age properties). Each object's property is able to store multiple values with a varying level of confidence((Provided either by the ASR module or from lexical clues contained in the respective utterance.)), and values restricted to a defined time span.
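The described behaviour of Knowledge Base objects can be sketched as follows; the class and method names are hypothetical, not taken from the actual implementation:

```python
from datetime import date

# Hypothetical sketch of the Knowledge Base objects; class and method
# names are invented, not taken from the actual implementation.

class KBObject:
    def __init__(self):
        self.properties = {}             # property name -> candidate values

    def add(self, name, value, confidence, valid_from=None, valid_to=None):
        # each property can hold multiple values with varying confidence,
        # optionally restricted to a time span
        self.properties.setdefault(name, []).append(
            {"value": value, "confidence": confidence,
             "valid_from": valid_from, "valid_to": valid_to})

    def best(self, name):
        return max(self.properties[name],
                   key=lambda v: v["confidence"])["value"]

class Person(KBObject):
    def age(self, today=date(2009, 9, 30)):
        # very basic reasoning: derive age from the stored date of birth
        born = self.best("date_of_birth")
        return today.year - born.year - (
            (today.month, today.day) < (born.month, born.day))

p = Person()
p.add("date_of_birth", date(1940, 5, 1), confidence=0.9)   # e.g. from ASR
p.add("date_of_birth", date(1941, 5, 1), confidence=0.4)   # competing value
p.age()                                                     # -> 69
```

Keeping competing values rather than overwriting them lets later, higher-confidence evidence win without losing the earlier hypothesis.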
  
