draft [2009/09/30 20:26] ptacek
====== Description of Czech Companion November Prototype ======

The ASR module, based on Hidden Markov Models, transforms input speech into text, providing a front end between the user and the Czech demonstrator. The ASR output is smoothed into a form close to standard written text using statistical machine translation, in order to bridge the gap between disfluent spontaneous speech and a standard grammatical sentence.

The natural language understanding pipeline starts with part-of-speech tagging. Its result is passed on to the Maximum Spanning Tree syntactic parsing module. The tectogrammatical representation of the utterance is constructed once the syntactic parse is available. Annotation of the meaning of a sentence at the tectogrammatical layer is more explicit than its syntactic parse and lends itself to information extraction.

The Named Entity Recognition module marks personal names and geographical locations.
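The analysis chain described above can be sketched as a sequence of stages. The stage implementations below are toy placeholders written for illustration only; they are not TectoMT code, and the function names and tag values are assumptions.

```python
# Illustrative sketch of the NLU pipeline described above:
# POS tagging -> dependency (MST-style) parsing -> tectogrammatical layer.
# All stage bodies are toy placeholders, not the actual modules.

def pos_tag(tokens):
    # Placeholder tagger: a tiny hand-written lexicon instead of a real model.
    lexicon = {"John": "NOUN", "visited": "VERB", "Prague": "NOUN"}
    return [(tok, lexicon.get(tok, "X")) for tok in tokens]

def parse(tagged):
    # Placeholder parser: attach every word to the first verb,
    # standing in for the Maximum Spanning Tree parser.
    verb = next(i for i, (_, tag) in enumerate(tagged) if tag == "VERB")
    return [(i, verb if i != verb else -1) for i in range(len(tagged))]

def tectogrammatical(tagged, edges):
    # Placeholder deep layer: keep words with known tags plus their heads.
    return [(tagged[i][0], head) for i, head in edges if tagged[i][1] != "X"]

tokens = "John visited Prague".split()
tagged = pos_tag(tokens)
edges = parse(tagged)
deep = tectogrammatical(tagged, edges)
print(deep)  # -> [('John', 1), ('visited', -1), ('Prague', 1)]
```

Each stage consumes the previous stage's output, which is the same hand-off pattern the real pipeline uses between the tagger, the parser, and the tectogrammatical annotation.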
The Czech Companion takes advantage of the data collected in the photo-pal domain (a recorded corpus with DAFs built on top of it, reusing the integrated SHEFF DM). The architecture is the same as in the English version, i.e. a set of modules communicating over a network: an NLP server and an ASR/TTS/SR client.

The NLU pipeline, DM, and NLG modules at the NLP server are implemented using CU's own TectoMT platform.

The Knowledge Base consists of objects (persons, events, photos) that model the information acquired in the course of the dialog. These objects also provide very basic reasoning (e.g. accounting for the link between the date-of-birth and age properties).
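A minimal sketch of such a Knowledge Base object follows; the class name, attributes, and reasoning rule shown are illustrative assumptions, with the age property derived from the date of birth as described above.

```python
# Sketch of a Knowledge Base object with very basic reasoning:
# the age property is derived from date_of_birth.
# Class and attribute names are illustrative, not the actual KB schema.
from datetime import date

class PersonObject:
    def __init__(self, name, date_of_birth):
        self.name = name
        self.date_of_birth = date_of_birth

    def age(self, today):
        # Account for the link between date of birth and age:
        # subtract birth year, minus one if the birthday is still ahead.
        born = self.date_of_birth
        before_birthday = (today.month, today.day) < (born.month, born.day)
        return today.year - born.year - before_birthday

p = PersonObject("Anna", date(1950, 6, 1))
print(p.age(date(2009, 9, 30)))  # -> 59
```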
- | |||
- | |||
- | |||
- | |||
===== Automatic Speech Recognition (WP 5.1) =====
Features: improved language models, real-time speaker adaptation.
Performance indicator: WER.
===== Speech Reconstruction (WP 5.1?) =====
Features: omit filler phrases, remove irrelevant speech events, handle false starts and repetitions.
Performance indicator: BLEU score between actual output and manually reconstructed sentences from the corpora (T5.2.1); baseline: Moses with default settings.
===== Morphology Analyzer and POS Tagging (WP 5.2) =====
Features: coverage of the photo-pal domain, domain-adapted tagger (XXX: Jarka will add the OOV words we find; PDTSC will be manually tagged, by November?).
Performance indicator: OOV rate, accuracy.
===== Syntactic Parsing (WP 5.2) =====
Features: induce dependencies and labels.
Performance indicator: accuracy (correctly induced edges and labels).
Note: the plan is to train the McDonald MST parser on dialog data; that task is due by M42, not now.
===== Semantic Parsing (WP 5.2) =====
Features: assignment of semantic roles (69 roles), coordinations.
Performance indicator: accuracy (correctly induced edges and labels).
===== Information Extraction (WP 5.2) =====
Features: template-based identification of predicates, covering predicates from the above-mentioned set of DAFs.
Performance indicator: accuracy.
===== Named Entity Recognition (WP 5.2) =====
Features: detect person names, geographical locations, organizations.
Performance indicator: F-measure.
===== Dialog Act Tagging (WP 5.2) =====
Features: domain-tailored tagset (a variation of SWBD-DAMSL).
Performance indicator: accuracy.
===== Dialog Manager (WP 5.3) =====
Features: reply types, using (language-independent) predicates (in practice this means that the tests on DAF transitions are named in English).
Hand-made DAFs covering the following topics: Person_Retired, ...
Performance indicator: manual assessment of the acceptability of the selected action.
ONLY AFTER NOVEMBER:
===== Complete System Evaluation =====
T5.2.7 mentions this; Nick Webb will probably not do it for us.
Performance indicator: number of tokens in user reply utterances, post-session questionnaire.
===== Natural Language Generation (WP 5.4) =====
Features: variations, underspecified input (DOTT format), emotional markup (hard-coded in the DAFs and in the templates of evaluative sentences).
Performance indicator: BLEU score.
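A template with hard-coded emotional markup might look like the sketch below; the template store, markup tags, and slot names are assumptions for illustration, not the actual DOTT format.

```python
# Sketch of template-based generation with emotional markup hard-coded
# in the templates, as described above. Format is illustrative only.
TEMPLATES = {
    ("praise", "positive"):
        "<emotion type='joy'>That is a lovely {object}!</emotion>",
    ("ask", "neutral"):
        "Can you tell me more about {object}?",
}

def generate(act, emotion, slots):
    # Pick a template by dialog act and emotion, then fill its slots.
    return TEMPLATES[(act, emotion)].format(**slots)

print(generate("praise", "positive", {"object": "photo"}))
# -> <emotion type='joy'>That is a lovely photo!</emotion>
```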