  
  
[[http://72.55.153.148/mediawiki-1.8.2/index.php/CZ_Demo_%28November_2009%29_Specs_and_Components_%282009-07-17%29#Natural_Language_Generation_.28WP_5.4.29|final version of description]]
  
====== Description of Czech Companion November Demonstrator ======
  
The Czech version of the Companion deals with the Reminiscing about User's Photos scenario, taking advantage of data recorded in the first phase of the project. The basic architecture is the same as that of the English version, i.e. a set of modules communicating through the Inamode Relayer (TID) backbone; however, the set of modules is different (see Figure 1). Regarding the physical setting, the Czech version runs on two notebook computers connected by a local network: one can be seen as a Speech Client, running the modules dealing with ASR, TTS and ECA, the second as an NLP Server.
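To illustrate the client/server split, the following is a minimal sketch of a module exchanging JSON messages with the backbone over the local network. The hostname, port and message fields are invented for this example; the actual Inamode Relayer (TID) protocol is not reproduced here.

<code python>
# Illustrative only: a generic newline-delimited JSON client standing in for a
# module talking to the relayer backbone; the real Inamode Relayer protocol differs.
import json
import socket

RELAYER_HOST = "speech-client.local"   # hypothetical hostname of the Speech Client machine
RELAYER_PORT = 9000                    # hypothetical port

def send_message(payload: dict) -> dict:
    """Send one JSON message and read one JSON reply."""
    with socket.create_connection((RELAYER_HOST, RELAYER_PORT)) as conn:
        conn.sendall((json.dumps(payload) + "\n").encode("utf-8"))
        reply = conn.makefile(encoding="utf-8").readline()
    return json.loads(reply)

if __name__ == "__main__":
    # e.g. forward an ASR hypothesis from the Speech Client to the NLP Server
    print(send_message({"module": "asr", "type": "hypothesis",
                        "text": "To je moje rodina."}))
</code>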
  
The dialog is driven by a dialog manager component by USFD (originally developed for the English Senior Companion prototype); we supply the transition network (DAFs). The selection is backed by (a) appropriateness for the type of dialog we aim for (the corpus reveals frequent recurring topics to be handled by DAFs), (b) availability of a mature package within a time frame that allows for integration, and (c) the possibility of reusing the created DAF states, tests and specified actions for the upcoming statistical DM by UOXF (however, this is post-November work).
  
Our DAFs covering the selected topics contain not only Companion replies mined from the corpora, but also new human-authored assessments, remarks and glosses that provide longer system utterances in order to encourage the user to tell more.
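A purely illustrative sketch of the transition-network idea behind the DAFs: states carry a system prompt, tests inspect the user reply, and actions select the next state. The state names, prompts and tests below are invented; this is not the USFD DAF format nor the DAF Editor output, only the general mechanism.

<code python>
# A toy transition network illustrating DAF-style states, tests and actions.
DAFS = {
    "ask_photo":  {"prompt": "Co je tohle?",
                   "next": lambda reply: "christmas" if "Vánoce" in reply else "ask_detail"},
    "christmas":  {"prompt": "Trávíte Vánoce takhle pohromadě?",
                   "next": lambda reply: "ask_detail"},
    "ask_detail": {"prompt": "Kdo je kdo na té fotce?",
                   "next": lambda reply: "ask_photo"},
}

def next_state(state: str, user_reply: str) -> str:
    """Apply the current state's test to the user reply and pick the follow-up state."""
    return DAFS[state]["next"](user_reply)

if __name__ == "__main__":
    state = "ask_photo"
    print(DAFS[state]["prompt"])
    state = next_state(state, "Byli u mě na Vánoce na návštěvě.")
    print(DAFS[state]["prompt"])   # the Christmas topic is selected
</code>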
  
For a sample dialogue, see the Scenario Brief below.
  
{{user:ptacek:czech_companion_diagram.png|}}
  
  
===== Automatic Speech Recognition (WP 5.1) =====
features: improved language models, real-time speaker adaptation
performance indicator: WER
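WER is the word-level edit distance between the recognizer hypothesis and the reference transcript, normalized by the reference length; a minimal sketch (not the project's scoring tool):

<code python>
# Word error rate: (substitutions + insertions + deletions) / reference length,
# computed with a standard Levenshtein distance over word tokens.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("to je moje rodina", "to je moje rodino"))  # 0.25
</code>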
  
  
===== Speech Reconstruction (WP 5.2) =====
features: omit filler phrases, remove irrelevant speech events, handle false starts, repetitions, and corrections, polish word ordering
performance indicator: BLEU score between actual output and manually reconstructed sentences from corpora (T5.2.1), baseline: Moses with default settings
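A sketch of how the BLEU indicator can be computed against the manually reconstructed references, assuming NLTK's BLEU implementation and placeholder data rather than the project's own scripts and the T5.2.1 test set:

<code python>
# Corpus-level BLEU between system reconstructions and manual references.
# Tokenisation and data are placeholders.
from nltk.translate.bleu_score import corpus_bleu

references = [[["to", "je", "moje", "rodina"]]]   # one list of references per sentence
hypotheses = [["to", "je", "moje", "rodina"]]     # tokenised system output
print(corpus_bleu(references, hypotheses))        # 1.0 for an exact match
</code>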
  
  
  
  
  
  
  
===== Morphology Analyzer and POS tagging (WP 5.2) =====
features: coverage of the photo-pal domain, domain-adapted tagger
performance indicator: OOV rate, accuracy (Morce 95.1%)
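The OOV rate is the share of domain tokens not covered by the morphological lexicon; a minimal sketch with placeholder data:

<code python>
# Out-of-vocabulary rate: fraction of corpus tokens missing from the lexicon.
# The lexicon and tokens below are placeholders, not the actual PDTSC data.
def oov_rate(tokens, lexicon):
    unknown = sum(1 for t in tokens if t.lower() not in lexicon)
    return unknown / len(tokens)

lexicon = {"to", "je", "moje", "rodina"}
tokens = ["To", "je", "moje", "rodina", "Natálka"]
print(oov_rate(tokens, lexicon))  # 0.2 -> one unknown token out of five
</code>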
  
  
===== Syntactic Parsing (WP 5.2) =====
features: induce dependencies and labels
performance indicator: accuracy (correctly induced edges (84%), labels)
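The edge accuracy corresponds to the share of tokens whose governing node is induced correctly (unlabeled attachment); a minimal sketch with invented gold and predicted analyses, not the project's evaluation script:

<code python>
# Unlabeled/labeled attachment scores over one sentence:
# gold and predicted analyses are (head_index, label) pairs per token.
def attachment_scores(gold, predicted):
    correct_heads = sum(1 for g, p in zip(gold, predicted) if g[0] == p[0])
    correct_both = sum(1 for g, p in zip(gold, predicted) if g == p)
    n = len(gold)
    return correct_heads / n, correct_both / n   # (UAS, LAS)

gold      = [(2, "Sb"), (0, "Pred"), (2, "Obj")]
predicted = [(2, "Sb"), (0, "Pred"), (1, "Obj")]
print(attachment_scores(gold, predicted))  # (0.666..., 0.666...)
</code>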
  
  
  
===== Semantic Parsing (WP 5.2) =====
features: assignment of semantic roles (69 roles), coordinations, argument structure, partial ellipsis resolution, pronominal anaphora resolution, post-parsing detection of ungrammatical edges (caused by long user utterances)
performance indicator: accuracy (correctly induced edges, labels)
  
===== Information Extraction (WP 5.2) =====
  
===== Named Entities Recognition (WP 5.2) =====
features: detect person names, geographical locations, organization names
performance indicator: f-measure
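The f-measure is the harmonic mean of precision and recall over the recognized entity mentions; a minimal sketch with made-up counts:

<code python>
# F-measure from precision and recall over predicted vs. gold entity mentions.
def f_measure(true_positives: int, false_positives: int, false_negatives: int) -> float:
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# e.g. 8 entities found correctly, 2 spurious, 4 missed
print(f_measure(8, 2, 4))  # 0.727...
</code>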
  
  
===== Dialog Act Tagging (WP 5.2) =====
features: domain-tailored tag-set (a variation of DAMSL-SWBD)
performance indicator: accuracy
  
  
manual creation of DAFs covering the following topics: Person_retired, Person_in_productive_age, Child, Husband, Wife, Wedding, Christmas, Death, Handling_stalled_dialog (the most frequent topics in the corpora), using the customized DAF Editor provided by USFD.
performance indicator: acceptability - manual evaluation of actions selected by DM
  
  
  
===== Natural Language Generation (WP 5.4) =====
features: addition of functional words, morphological adjustments, generating paraphrases for hard-coded utterances, underspecified input (dott format).
performance indicator: BLEU score
  
===== Emotional TTS (WP 5.5) =====
features: emotions will be expressed implicitly, through the usage of communicative functions; a new female voice database was recorded for this purpose
performance indicator: listening tests
  
===== Emotional Avatar Integration (WP 5.5) =====
features: new Czech female voice with affective features will be integrated with the TID avatar
performance indicator: subjective evaluation of the naturalness and the ability to convey emotions (small-scale, given the time constraint)
  
  
====== Scenario Brief ======
  
C1 Hello, what is your name? (intro-daf-based)
H1 Marie.

C2 Nice to meet you. I am Pavla and I would like to get to know you better. I am glad that you will show me your photographs.

C2 What is this? (photo_maintenance)
H2 That is my family: my parents, my sister, her husband, my two children and her two children.
H2 They came to visit me for Christmas.

C3 Do you spend Christmas together like this?
H3 Not any more.
H3 By now all my little children have flown the nest.

C4 What is your husband's name?
H4 My husband is no longer alive; his name was Karel Šístek.

C5 Do you like Christmas?
H5 Not any more.
H5 I liked it when the children were small.
H5 But now it is not the same.

C6 So you spend Christmas alone now? (ask_on_detail-daf-based)
H6 I am not completely alone, I go to my daughter's every Christmas, but it is not what it used to be when the small children were at home.

C7 Once the children grow up, one suddenly has too much time.
C7 Who is who in that photo?
H7 By the tree is my nephew, the little girl next to him is my daughter, the little boy with the toy is my son, next to him are my sister and her husband, and above him is my father.

C8 What a big family you are.
C8 What is your sister's name? (companion commenting on the last utterance, bundled with a follow-up question, to elicit longer responses)
H8 My sister's name is Marie Havlová, her husband is Vladimír; sadly, he died recently.

C9 That is sad.
C9 Did you know him well?
H9 Very well, our two families spent a lot of time together.

C10 Do you have any grandchildren yet?
H10 I have a granddaughter, Natálka.
H10 She is such a lively child, such a fidget.

C11 So Natálka is a fidget. (repeat-rephrase)
C11 Do you have any other granddaughters or grandsons?
H11 We are expecting another addition to the family.

C12 Do you ever babysit them?
H12 I babysit quite often.
H12 I like children and I am very happy to be with them.

C13 That is nice.
C13 Children are fun.
H13 Yes.

C Shall we look at another photo?
  
