
Institute of Formal and Applied Linguistics Wiki


draft [2009/09/30 20:54] (current) ptacek
The ASR module, based on hidden Markov models, transforms input speech into text, providing a front end between the user and the Czech demonstrator. The ASR output is smoothed into a form close to standard written text by the Speech Reconstruction module in order to bridge the gap between disfluent spontaneous speech and a standard grammatical sentence.
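The two front-end stages can be pictured as a pipeline of recognition followed by reconstruction. The sketch below is purely illustrative: the function names, filler list, and repair heuristic are assumptions for the example, not the demonstrator's actual modules.

```python
def recognize_speech(audio: bytes) -> list[str]:
    """Hypothetical ASR stub: returns a raw token stream.

    A real HMM-based recognizer decodes audio frames against acoustic
    and language models; here we only illustrate the interface, so the
    output is hard-coded. Spontaneous speech typically contains fillers
    and repairs, which the reconstruction step has to smooth away.
    """
    return ["uh", "i", "i", "want", "to", "see", "photos"]


def reconstruct(tokens: list[str],
                fillers: tuple[str, ...] = ("uh", "um", "er")) -> str:
    """Hypothetical speech-reconstruction step: drop fillers and
    collapse immediate word repetitions to approximate written text."""
    cleaned: list[str] = []
    for tok in tokens:
        if tok in fillers:
            continue  # discard filled pauses
        if cleaned and cleaned[-1] == tok:
            continue  # simple repair: "i i" -> "i"
        cleaned.append(tok)
    return " ".join(cleaned).capitalize() + "."


print(reconstruct(recognize_speech(b"")))  # -> I want to see photos.
```

Real speech reconstruction also handles word-order repairs and punctuation insertion; this sketch shows only the simplest filler and repetition cases.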
  
Results of the part-of-speech tagging are passed on to the Maximum Spanning Tree syntactic parsing module. A tectogrammatical representation of the utterance is constructed once the syntactic parse is available. The annotation of meaning at the tectogrammatical layer is more explicit than the syntactic parse and lends itself to information extraction. The Named Entity Recognition module then marks personal names and geographical locations. Afterwards, the dialog act classifier uses a number of lexical and morphological features to assess the type of the user utterance (such as question, acknowledgement, etc.), which is a useful clue for Dialog Manager decisions.
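The last step of this cascade, classifying the dialog act from lexical and morphological features, can be sketched minimally. Everything here (the toy tagset, the feature set, the two-way label set) is an assumption for illustration, not the classifier actually used in the demonstrator.

```python
def pos_tag(text: str) -> list[tuple[str, str]]:
    """Toy tagger: marks wh-words, tags everything else as a noun.

    A real morphological tagger assigns rich tags; this stub only
    produces the single lexical feature the toy classifier needs.
    """
    wh_words = {"who", "what", "where", "when", "why", "how"}
    return [(w, "WH" if w.lower().strip("?.!") in wh_words else "N")
            for w in text.split()]


def classify_dialog_act(tagged: list[tuple[str, str]]) -> str:
    """Toy dialog-act classifier over lexical features: a wh-word or a
    trailing question mark suggests a question, otherwise a statement."""
    words = [w for w, _ in tagged]
    if any(tag == "WH" for _, tag in tagged) or words[-1].endswith("?"):
        return "question"
    return "statement"


print(classify_dialog_act(pos_tag("Where was this photo taken?")))
# -> question
```

An actual classifier would use many more features (morphology, word order, context of the previous turn) and a trained model rather than two hand-written rules.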
  
The dialog is driven by a Dialog Manager component from USFD (originally developed for the English Senior Companion prototype).
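The overall flow the section describes, upstream annotations in and a dialog decision out, might be wired together as below. This is a sketch under stated assumptions; the interface of the actual USFD Dialog Manager is not documented here, so the class and its policies are hypothetical.

```python
class DialogManager:
    """Toy dialog manager: picks a response policy from the dialog act
    and the named entities found upstream.

    The real USFD component is far richer (dialog state, user model,
    turn planning); this only shows how annotations can drive a turn.
    """

    def respond(self, dialog_act: str, entities: list[str]) -> str:
        if dialog_act == "question":
            return "Let me look that up."
        if entities:
            # Ground the follow-up in a recognized name or location.
            return f"Tell me more about {entities[0]}."
        return "I see. Go on."


dm = DialogManager()
print(dm.respond("question", []))           # -> Let me look that up.
print(dm.respond("statement", ["Prague"]))  # -> Tell me more about Prague.
```

The design point illustrated is the layering itself: the manager never touches audio or parse trees, only the compact annotations (dialog act, entities) produced by the analysis modules.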
