<?xml version="1.0" encoding="UTF-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="https://wiki.ufal.ms.mff.cuni.cz/lib/exe/css.php?s=feed" type="text/css"?>
<rdf:RDF
    xmlns="http://purl.org/rss/1.0/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel rdf:about="https://wiki.ufal.ms.mff.cuni.cz/feed.php">
        <title>ufal wiki courses:rg:2013</title>
        <description></description>
        <link>https://wiki.ufal.ms.mff.cuni.cz/</link>
        <image rdf:resource="https://wiki.ufal.ms.mff.cuni.cz/lib/tpl/ufal/images/favicon.ico" />
        <dc:date>2026-04-28T02:22:57+00:00</dc:date>
        <items>
            <rdf:Seq>
                <rdf:li rdf:resource="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:composite-activities?rev=1380483332&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:convolution-kernels?rev=1363084036&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:crf?rev=1364544050&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:dep-tree-kernels?rev=1363083283&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:false-positive-psychology?rev=1381758439&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:hmm-perc-experiments?rev=1364812381&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:jacana-align?rev=1383600131&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:jokes?rev=1385406890&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:memm?rev=1413119085&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:meteor?rev=1384860342&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:paraphrase-corpora?rev=1386664896&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:semantic-textual-similarity?rev=1384262879&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:significance-bootstrap?rev=1386613689&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:stanford-dependencies?rev=1382357468&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:ut-and-udt?rev=1382984667&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:visiterms?rev=1366037937&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:watermarking?rev=1365427225&amp;do=diff"/>
            </rdf:Seq>
        </items>
    </channel>
    <image rdf:about="https://wiki.ufal.ms.mff.cuni.cz/lib/tpl/ufal/images/favicon.ico">
        <title>ufal wiki</title>
        <link>https://wiki.ufal.ms.mff.cuni.cz/</link>
        <url>https://wiki.ufal.ms.mff.cuni.cz/lib/tpl/ufal/images/favicon.ico</url>
    </image>
    <item rdf:about="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:composite-activities?rev=1380483332&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2013-09-29T21:35:32+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>courses:rg:2013:composite-activities</title>
        <link>https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:composite-activities?rev=1380483332&amp;do=diff</link>
<description>*  After reading the first three chapters:
	*  list the main parts/components/structures of the model.
	*  Is their creation dependent on other components?

*  Thinking about the scripts:
	*  What is the main reason (the biggest advantage) of using scripts? What kind of information does it bring? (Hint: page 2, page 8)</description>
    </item>
    <item rdf:about="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:convolution-kernels?rev=1363084036&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2013-03-12T11:27:16+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>courses:rg:2013:convolution-kernels</title>
        <link>https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:convolution-kernels?rev=1363084036&amp;do=diff</link>
        <description>Michael Collins, Nigel Duffy: Convolution kernels for natural language

Paper link

Questions

	*  What is a generative model, what is a discriminative model and what is their main difference?
	*  What are the “fairly strong independence assumptions” in PCFG? Come up with an example tree that can't be modelled by a PCFG.</description>
    </item>
    <item rdf:about="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:crf?rev=1364544050&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2013-03-29T09:00:50+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>courses:rg:2013:crf</title>
        <link>https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:crf?rev=1364544050&amp;do=diff</link>
        <description>Questions

(1) Definition of CRF in Section 3 contains a formula with a shortcut notation: &lt;latex&gt;P(Y_v | X, Y_w, w \neq v) = P(Y_v | X, Y_w, w \sim v)&lt;/latex&gt;.

a) Try to rewrite this general formula using clearer notation (or explain it in your own words).
b) Rewrite the formula for the chain-structured case of CRF.</description>
    </item>
    <item rdf:about="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:dep-tree-kernels?rev=1363083283&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2013-03-12T11:14:43+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>courses:rg:2013:dep-tree-kernels</title>
        <link>https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:dep-tree-kernels?rev=1363083283&amp;do=diff</link>
        <description>Questions

Aron Culotta, Jeffrey Sorensen: Dependency Tree Kernels for Relation Extraction, ACL 2004.

	*  Given Figure 1, what is the smallest common subtree that includes both t1 (Troops) and t2 (near)?
	*  Section 5: “Therefore, d(a)=l(a).” When is this true and why? (Assume this holds for the following questions.)</description>
    </item>
    <item rdf:about="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:false-positive-psychology?rev=1381758439&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2013-10-14T15:47:19+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>courses:rg:2013:false-positive-psychology</title>
        <link>https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:false-positive-psychology?rev=1381758439&amp;do=diff</link>
        <description>False-Positive Psychology

Joseph P. Simmons, Leif D. Nelson, Uri Simonsohn: False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant, Psychological Science, 2011.

Questions

	*  Are the described issues (researcher degrees of freedom etc.) also relevant to NLP research (papers)? Can you name some similarities and differences (concerning the described issues) between NLP and psychological research?</description>
    </item>
    <item rdf:about="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:hmm-perc-experiments?rev=1364812381&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2013-04-01T12:33:01+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>courses:rg:2013:hmm-perc-experiments</title>
        <link>https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:hmm-perc-experiments?rev=1364812381&amp;do=diff</link>
        <description>Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms

Questions

1. Suppose you have a tagset consisting of two tags, N (noun) and X (not noun), and a training sentence:
  luke/N i/X am/X your/X father/N</description>
    </item>
    <item rdf:about="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:jacana-align?rev=1383600131&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2013-11-04T22:22:11+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>courses:rg:2013:jacana-align</title>
        <link>https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:jacana-align?rev=1383600131&amp;do=diff</link>
        <description>Jacana word aligner

Slides accompanying the article are located at &lt;http://cs.jhu.edu/~xuchen/paper/yao-jacana-wordalign-acl2013.ppt&gt;

1/ Describe the parameters of the feature function in Section 3.1. Is anything unclear in the formula?

2/ Section 3.1 describes a problem with an unknown number of states.
For example, we have the following source sentence:</description>
    </item>
    <item rdf:about="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:jokes?rev=1385406890&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2013-11-25T20:14:50+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>courses:rg:2013:jokes</title>
        <link>https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:jokes?rev=1385406890&amp;do=diff</link>
        <description>Unsupervised joke generation from big data

Saša Petrović and David Matthews
&lt;http://homepages.inf.ed.ac.uk/s0894589/petrovic13unsupervised.pdf&gt;

	*  What do you think about the fact that according to the raters only 33% of human-generated jokes are funny?
	*  Model 1 seems to be better than Model 2 according to Table 1, but worse according to Table 2. Why?</description>
    </item>
    <item rdf:about="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:memm?rev=1413119085&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2014-10-12T15:04:45+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>courses:rg:2013:memm</title>
        <link>https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:memm?rev=1413119085&amp;do=diff</link>
        <description>Maximum Entropy Markov Models - Questions

1. Explain (roughly) how the new formula for α_{t+1}(s) is derived (i.e. Formula 1 in the paper).

2. Section 2.1 states “we will split P(s|s',o) into |S| separately trained transition functions”. What are the advantages and disadvantages of this approach?</description>
    </item>
    <item rdf:about="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:meteor?rev=1384860342&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2013-11-19T12:25:42+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>courses:rg:2013:meteor</title>
        <link>https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:meteor?rev=1384860342&amp;do=diff</link>
        <description>Satanjeev Banerjee and Alon Lavie
METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments
&lt;http://aclweb.org/anthology//W/W05/W05-0909.pdf&gt;

1) Why is correlation of METEOR higher in Table 1 (0.964) than in Table 2 (0.347)?

2) For the following two sentences:</description>
    </item>
    <item rdf:about="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:paraphrase-corpora?rev=1386664896&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2013-12-10T09:41:36+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>courses:rg:2013:paraphrase-corpora</title>
        <link>https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:paraphrase-corpora?rev=1386664896&amp;do=diff</link>
        <description>Unsupervised Construction of Large Paraphrase Corpora

Bill Dolan, Chris Quirk, and Chris Brockett: Unsupervised construction of large paraphrase corpora: exploiting massively parallel news sources. In: Proceedings of the 20th international conference on Computational Linguistics (COLING '04), 2004. 

Questions

	*  First, check the formula for AER presented in the paper - what do you think about it?</description>
    </item>
    <item rdf:about="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:semantic-textual-similarity?rev=1384262879&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2013-11-12T14:27:59+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>courses:rg:2013:semantic-textual-similarity</title>
        <link>https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:semantic-textual-similarity?rev=1384262879&amp;do=diff</link>
        <description>Semantic textual similarity using maximal weighted bipartite graph matching

1) What are the drawbacks of WordNet-based word similarity?

2) Suppose we randomly changed the word order of the input sentences.
Which systems' (baseline, systems 1, 2, 3) output similarity scores would be affected?</description>
    </item>
    <item rdf:about="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:significance-bootstrap?rev=1386613689&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2013-12-09T19:28:09+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>courses:rg:2013:significance-bootstrap</title>
        <link>https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:significance-bootstrap?rev=1386613689&amp;do=diff</link>
        <description>Notes

[This handout] includes some notes from the paper as well as a list of statistical tests for the difference of the means.

Questions

	*  (warm-up) The abbreviation “i.i.d.” is used several times throughout the text. What does it mean? (Answer: independent and identically distributed)</description>
    </item>
    <item rdf:about="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:stanford-dependencies?rev=1382357468&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2013-10-21T14:11:08+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>courses:rg:2013:stanford-dependencies</title>
        <link>https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:stanford-dependencies?rev=1382357468&amp;do=diff</link>
        <description>The Stanford typed dependencies representation

Marie-Catherine de Marneffe, Christopher D. Manning: The Stanford typed dependencies representation, Coling 2008.

Answers to Questions

	*  What do the authors give as a good example of a frequently used linguistic data resource? What reason(s) do they give for its frequent use?</description>
    </item>
    <item rdf:about="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:ut-and-udt?rev=1382984667&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2013-10-28T19:24:27+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>courses:rg:2013:ut-and-udt</title>
        <link>https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:ut-and-udt?rev=1382984667&amp;do=diff</link>
        <description>Universal Tagset

1. Why do you think that the best results are obtained by training on original tagsets and testing on the universal one (why are the numbers in the O/U column higher than in O/O and U/U)? (Table 1)

2. What is the advantage of USR-I over USR-G? (Why would someone use the former, when the latter gives better results?)</description>
    </item>
    <item rdf:about="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:visiterms?rev=1366037937&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2013-04-15T16:58:57+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>courses:rg:2013:visiterms</title>
        <link>https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:visiterms?rev=1366037937&amp;do=diff</link>
        <description>*  Having described an input image with SIFT descriptors, why do they cluster them (using the K-means clustering algorithm, top of p. 834)?
	*  What kind of things, in Feng and Lapata's work, have topics? What do these topics predict?
	*  They have defined</description>
    </item>
    <item rdf:about="https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:watermarking?rev=1365427225&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2013-04-08T15:20:25+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>courses:rg:2013:watermarking</title>
        <link>https://wiki.ufal.ms.mff.cuni.cz/courses:rg:2013:watermarking?rev=1365427225&amp;do=diff</link>
        <description>Watermarking the Outputs of Structured Prediction - Questions

1. What is (capital-letter) X in Equations 3 and 5?

2. What do we need to detect whether a given set of French sentences downloaded from the web was produced by an MT system and watermarked?</description>
    </item>
</rdf:RDF>
