draft [2009/07/15 23:56] ptacek
====== Description of Czech Companion November Prototype ======

The Czech version of the Companion deals with the Reminiscing about User's Photos scenario, taking advantage of data recorded in the first phase of the project. The basic architecture is the same as that of the English version, i.e. a set of communicating modules; however, the set of modules differs (see Figure 1). Regarding the physical setting: the Czech version runs on two notebook computers connected by a local network; one can be seen as a Speech Client, running the modules dealing with ASR, TTS and the ECA, the second as an NLP Server (running TectoMT).

The dialog is driven by a dialog manager component by USFD (originally developed for the English Senior Companion prototype). This selection is backed by (a) its appropriateness for the type of dialog we aim for (the corpus reveals frequently reoccurring topics that can be handled by DAFs), and (b) the availability of a mature enough package within a time frame that allows for integration.
DAFs covering the selected topics contain not only Companion replies mined from the corpora, but also new human-authored assessments.

{{user:
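The Speech Client / NLP Server split can be illustrated as one round trip over the network. A minimal sketch with hypothetical stand-ins (the port, message format and `REPLY to:` behaviour are illustrative only, not the actual inter-module protocol):

```python
import socket
import threading

def nlp_server(srv):
    """Stand-in for the NLP Server: receive one ASR transcript, send back a reply.
    A real server would run tagging, parsing, NER and the dialog manager here."""
    conn, _ = srv.accept()
    transcript = conn.recv(4096).decode("utf-8")
    conn.sendall(("REPLY to: " + transcript).encode("utf-8"))
    conn.close()

def run_exchange(utterance, port=5050):
    """One Speech Client <-> NLP Server round trip (loopback stands in for the LAN)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    t = threading.Thread(target=nlp_server, args=(srv,))
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))
    cli.sendall(utterance.encode("utf-8"))
    reply = cli.recv(4096).decode("utf-8")
    cli.close()
    t.join()
    srv.close()
    return reply

print(run_exchange("toto je moje fotka"))  # → REPLY to: toto je moje fotka
```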
===== Speech Reconstruction (WP 5.2) =====
features: omit filler phrases, remove irrelevant speech events, handle false starts and repetitions
performance indicator: BLEU score between actual output and manually reconstructed sentences from the corpora (T5.2.1); baseline: Moses with default settings
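The BLEU indicator reduces to clipped n-gram precisions and a brevity penalty. A minimal single-reference, unsmoothed sketch (the real evaluation would use the standard Moses scoring tools):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(cand, ref, max_n=4):
    """Single-reference sentence BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty; no smoothing."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts, ref_counts = Counter(ngrams(cand, n)), Counter(ngrams(ref, n))
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        precisions.append(overlap / max(sum(cand_counts.values()), 1))
    if min(precisions) == 0:
        return 0.0  # any empty n-gram overlap zeroes the unsmoothed score
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "yes that is my brother on the left".split()
cand = "yes that is my brother on left".split()   # made-up system output
print(round(bleu(cand, ref), 3))  # → 0.729
```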
===== Morphology Analyzer and POS Tagging (WP 5.2) =====
features: coverage of the photo-pal domain, domain-adapted tagger
performance indicator: OOV rate, accuracy
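The OOV-rate indicator is just the share of running tokens the morphological lexicon does not cover. A toy sketch (the lexicon and tokens are invented, not PDTSC data):

```python
def oov_rate(tokens, lexicon):
    """Return (OOV rate, list of unknown tokens) against a case-folded lexicon."""
    unknown = [t for t in tokens if t.lower() not in lexicon]
    return len(unknown) / len(tokens), unknown

lexicon = {"to", "je", "moje", "fotka", "z"}           # toy in-vocabulary forms
tokens = ["To", "je", "moje", "fotka", "z", "Chorvatska"]
rate, unknown = oov_rate(tokens, lexicon)
print(round(rate, 3), unknown)  # → 0.167 ['Chorvatska']
```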
===== Semantic Parsing (WP 5.2) =====
features: assignment of semantic roles (69 roles), coordinations
performance indicator: accuracy (correctly induced edges, labels)
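Accuracy over induced edges and labels can be split into unlabeled vs. labeled scores. A sketch assuming trees encoded as node → (head, role) maps, with illustrative functor labels:

```python
def edge_accuracy(gold, predicted):
    """gold, predicted: dicts mapping node id -> (head id, semantic role).
    Returns (unlabeled, labeled) accuracy over the gold nodes."""
    total = len(gold)
    unlabeled = sum(predicted.get(n, (None, None))[0] == h for n, (h, _) in gold.items())
    labeled = sum(predicted.get(n) == edge for n, edge in gold.items())
    return unlabeled / total, labeled / total

gold = {1: (2, "ACT"), 2: (0, "PRED"), 3: (2, "PAT")}
pred = {1: (2, "ACT"), 2: (0, "PRED"), 3: (2, "EFF")}  # head right, role wrong
print(edge_accuracy(gold, pred))  # → (1.0, 0.6666666666666666)
```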
===== Named Entity Recognition (WP 5.2) =====
features: detect person names, geographical locations
performance indicator: f-measure
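One way to compute the f-measure indicator is exact-match scoring over entity spans. A sketch using (start, end, type) triples (the span encoding itself is an assumption):

```python
def f_measure(gold_spans, pred_spans):
    """Exact-match span F1 over (start, end, entity type) triples."""
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if tp else 0.0

gold = {(0, 2, "PERSON"), (5, 6, "LOCATION")}
pred = {(0, 2, "PERSON"), (5, 6, "TIME")}  # right span, wrong entity type
print(f_measure(gold, pred))  # → 0.5
```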
===== Dialog Act Tagging (WP 5.2) =====
features: domain tailored
performance indicator: accuracy
+ | |||
+ | |||
===== Dialog Manager (WP 5.3) =====
features: manual creation of DAFs covering the following topics:
performance indicator: acceptability - manual evaluation of actions selected by the DM
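The actual DAF format belongs to the USFD dialog manager. Purely as a hypothetical illustration of how a hand-authored topic frame could drive slot-by-slot prompting (the topic, slots and wording are all invented):

```python
# Hypothetical topic frame: NOT the real USFD DAF format.
HOLIDAY_DAF = {
    "topic": "holidays",
    "slots": [
        ("place", "Where was this photo taken?"),
        ("people", "Who is with you in the picture?"),
        ("feeling", "Did you enjoy that trip?"),
    ],
}

def next_prompt(daf, filled):
    """Prompt for the first unfilled slot; close the topic once all are filled."""
    for slot, prompt in daf["slots"]:
        if slot not in filled:
            return prompt
    return "That sounds lovely. Shall we look at another photo?"

print(next_prompt(HOLIDAY_DAF, {"place": "Croatia"}))  # → Who is with you in the picture?
```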
+ | |||
performance indicator: BLEU score
+ | |||
+ | |||
===== Emotional TTS (WP 5.5) =====
features: emotions will be expressed implicitly, through the usage of communicative functions; a new female voice database was recorded for this purpose
performance indicator: listening tests
+ | |||
+ | |||
===== Emotional Avatar Integration (WP 5.5) =====
features: the new Czech female voice with affective features will be integrated with the TID avatar
performance indicator: subjective evaluation of the naturalness and the ability to convey emotions (small-scale)
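Both the listening tests and the small-scale subjective evaluation typically reduce to averaging rater scores per stimulus. A minimal sketch assuming a 1-5 opinion scale (the scale and the items are assumptions, not the actual test design):

```python
from statistics import mean

def mos(ratings):
    """Mean opinion score per stimulus and overall, from rater scores."""
    per_item = {item: mean(scores) for item, scores in ratings.items()}
    return per_item, mean(per_item.values())

ratings = {"utt01": [4, 5, 4], "utt02": [3, 4, 3]}  # made-up rater scores
per_item, overall = mos(ratings)
print(round(overall, 2))  # → 3.83
```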
+ | |||
====== Only After November ======