
Institute of Formal and Applied Linguistics Wiki


The official name of this course is [[https://is.cuni.cz/studium/predmety/index.php?do=predmet&kod=NPFL095|NPFL095]] **Modern Methods in Computational Linguistics**. It is a continuation of the informal Reading Group (RG) meetings. Requirements for getting credits:
  * presenting one paper,
    * Select a term (write your name into the schedule below) before October 14.
    * If no paper is assigned to the term, send [[mailto:popel@ufal.mff.cuni.cz|me]] 2--3 papers you would like to present (with pdf links and your preferences) before October 21. Ideally, form a group of 2--4 students presenting papers on a common topic (starting with basic papers and moving on to more advanced ones).
    * Prepare your presentation and quiz questions. At least 3 of the questions should ask for a specific answer, e.g. "write an equation for...", "given training set X=([dog,N],[cat,Y]), what is the number..." (not "what do you think about..."). The first question should be quite easy to answer for those who have read the whole paper. The last question may be a tricky one. Send me the questions two weeks before your presentation. We may discuss the paper and refine the questions.
    * One week before the presentation, write the questions on a dedicated wiki page here. Send a reminder (the questions and a link to the pdf of the paper) to rg@ufal.mff.cuni.cz by Monday 15:50.
  
  * active participation in the discussions, which requires reading the papers in advance and attending the meetings,
  * sending your answers to me and the presenter by Saturday 23:59 (so the presenter can go through all the answers before the presentation and focus more on the problematic parts).
  * In case of more than three missed meetings or deadlines, additional work (e.g. reports or answers to tricky questions) will be required.
  
All questions, reports and presented papers must be in English. The presentations are in English by default, but if all people present agree, they may be in Czech.
^ Contact      | popel@ufal.mff.cuni.cz |
^ Mailing list | rg@ufal.mff.cuni.cz    |
^ Meetings     | Mondays 15:50, room S1 |
^ Past meetings| [[courses:rg:past|courses:rg:past]] |
^ Inspiration  | [[courses:rg:wishlist|courses:rg:wishlist]] |
  
=== Autumn&Winter 2013/2014 ===
^ date   | **speaker**       | **paper** |
^ Oct  7 |                   | startup meeting |
^ Oct 14 | Martin Popel      | Joseph P. Simmons, Leif D. Nelson, Uri Simonsohn: [[http://people.psych.cornell.edu/~jec7/pcd%20pubs/simmonsetal11.pdf|False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant]], Psychological Science, 2011. [[courses:rg:2013:false-positive-psychology|Questions]] |
^ Oct 21 | Rudolf Rosa       | Marie-Catherine de Marneffe, Christopher D. Manning: [[http://www.aclweb.org/anthology/W08-1301.pdf|The Stanford typed dependencies representation]], Coling 2008. [[courses:rg:2013:stanford-dependencies|Questions and Answers]] |
^ Oct 28 | —                 | no RG, Independent Czechoslovak State Day |
^ Nov  4 | Jan Mašek         | Slav Petrov, Dipanjan Das, Ryan McDonald: [[http://www.petrovi.de/data/universal.pdf|A Universal Part-of-Speech Tagset]] and McDonald et al.: [[http://ryanmcd.com/papers/treebanksACL2013.pdf|Universal Dependency Annotation for Multilingual Parsing]]. [[courses:rg:2013:ut-and-udt|Questions]] |
^ Nov 11 | Ondřej Fiala      | Xuchen Yao, Benjamin Van Durme, Chris Callison-Burch, Peter Clark: [[http://cs.jhu.edu/~xuchen/paper/yao-jacana-wordalign-acl2013.pdf|A Lightweight and High Performance Monolingual Word Aligner]], Proceedings of ACL, 2013. [[courses:rg:2013:jacana-align|Questions]] |
^ Nov 18 | Shadi Saleh       | Sumit Bhagwani, Shrutiranjan Satapathy, Harish Karnick: [[http://aclweb.org/anthology/S/S12/S12-1085.pdf|Semantic textual similarity using maximal weighted bipartite graph matching]]. [[courses:rg:2013:semantic-textual-similarity|Questions]] |
^ Nov 25 | Matous Machacek   | Satanjeev Banerjee, Alon Lavie: [[http://www.aclweb.org/anthology/W05-0909|METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments]]. [[courses:rg:2013:meteor|Questions]] |
^ Dec  2 | Petr Jankovský    | Petrovic, Matthews: [[http://homepages.inf.ed.ac.uk/s0894589/petrovic13unsupervised.pdf|Unsupervised joke generation from big data]]. [[courses:rg:2013:jokes|Questions]] \\ <html><font color="red">There will be a double Monday seminar in S1 before RG; I hope we will start not much later than 16:00.</font></html> |
^ Dec  9 | Anna Vernerová    | T. Berg-Kirkpatrick, D. Burkett, D. Klein: [[http://www.aclweb.org/anthology/D/D12/D12-1091.pdf|An Empirical Investigation of Statistical Significance in NLP]]. [[courses:rg:2013:significance-bootstrap|Questions]] |
^ Dec 16 | Petra Barančíková | Bill Dolan, Chris Quirk, and Chris Brockett: [[http://research.microsoft.com/pubs/68974/para_coling2004.pdf|Unsupervised Construction of Large Paraphrase Corpora: Exploiting Massively Parallel News Sources]]. [[courses:rg:2013:paraphrase-corpora|Questions]] |
^ Jan    |                   | last RG, scientific discussion |
