Institute of Formal and Applied Linguistics Wiki


external:tectomt:tutorial [2010/11/10 16:39] (current) popel
====== TectoMT Tutorial ======
  
Welcome to the TectoMT Tutorial. This tutorial should take a few hours.
  
  
===== What is TectoMT =====
  
TectoMT is a highly modular NLP (Natural Language Processing) software system implemented in the Perl programming language under Linux. It is primarily aimed at machine translation, making use of the ideas and technology created during the Prague Dependency Treebank project. At the same time, it is also hoped to facilitate and significantly accelerate the development of software solutions for many other NLP tasks, especially due to the re-usability of the numerous integrated processing modules (called blocks), which are equipped with uniform object-oriented interfaces.
  
  
  * Your shell is bash
  * You have basic experience with bash and can read basic Perl
  
  
==== Installation and setup ====
  
  * Check out the SVN repository. If you are running this installation in a computer lab in Prague, you have to check out the repository into the directory ''~/BIG'' (because a bigger disk quota applies there):
  
<code bash>
    cd ~/BIG
    svn --username public co https://svn.ms.mff.cuni.cz/svn/tectomt_devel/trunk tectomt
</code>
  * Accept the certificate and provide a password, which is the same as the username, i.e. //public//.
  
  * In ''tectomt/install/'' run ''./install.sh'':
<code bash>
    cd tectomt/install
    ./install.sh
</code>
  
  * In your ''.bashrc'' file, add a line (or source the specified file every time before experimenting with TectoMT):
  
<code bash>
</code>
  
  * In your ''.bash_profile'' file, add the line:

<code bash>
    source .bashrc
</code>
  
  
===== TectoMT Architecture =====
  
==== Blocks, scenarios and applications ====
In TectoMT, there is the following hierarchy of processing units (software components that process data):
  
  * The basic units are **blocks**. They serve for very limited, well-defined, and often linguistically interpretable tasks (e.g., tokenization, tagging, parsing). Technically, blocks are Perl classes inherited from ''TectoMT::Block'', each saved in a separate file. The block repository is in ''libs/blocks/''.
  * To solve a more complex task, selected blocks can be chained into a block sequence, called a **scenario**. Scenarios are stored in ''*.scen'' files (alternatively, the block names separated by spaces can simply be listed on the command line) and at runtime the scenarios are represented by instances of the ''TectoMT::Scenario'' class.
  * The highest unit is called an **application**. Applications correspond to end-to-end tasks, be they real end-user applications (such as machine translation) or 'only' NLP-related experiments. Technically, applications are often implemented as ''Makefiles'', which only glue together the components existing in TectoMT. Some demo applications can be found in ''applications''.
  
This tutorial itself has its blocks in ''libs/blocks/Tutorial'' and the application in ''applications/tutorial''.
  
  
Blocks in the block repository ''libs/blocks'' are located in directories indicating their purpose in machine translation.
  
//Example//: A block adding Czech morphological tags (pos, case, gender, etc.) can be found in ''libs/blocks/SCzechW_to_SCzechM/Simple_tagger.pm''.
  
There are also directories for blocks with other purposes; for example, blocks which only print out some information go to ''libs/Print''. Our tutorial blocks are in ''libs/blocks/Tutorial/''.
  
  
Once you have TectoMT installed on your machine, you can find this tutorial in ''applications/tutorial/''. After you ''cd'' into this directory, you can see our plain text sample data in ''sample.txt''.
  
Most applications are defined in ''Makefiles'' and ''*.scen'' files, which describe the sequence of blocks to be applied to our data. In our case, ''tutorial.scen'' lists four blocks to be applied to our sample text: sentence segmentation, tokenization, part-of-speech tagging and lemmatization. Since we have our input text in plain text format, the file is first converted into the ''tmt'' format (the ''in'' target in the ''Makefile'').
  
We can run the application:
  
  * One physical ''tmt'' file corresponds to one document.
  * A document consists of a sequence of bundles (element ''<bundles>''), mirroring the sequence of natural language sentences originating from the text. So, for each sentence we have one bundle.
  * Each bundle contains tree-shaped sentence representations on various linguistic layers. In our example ''sample.tmt'', we have a morphological tree (''SEnglishM'') in each bundle (actually, it is a flat tree: one technical root whose children are the tokens). Later on, an analytical layer (''SEnglishA'') will also appear in each bundle as we proceed with our analysis.
  * Trees are formed by nodes and edges. Attributes can be attached only to nodes. An edge's attributes must be stored as attributes of the lower node. A tree's attributes must be stored as attributes of the root node.
  
  
===== Changing the scenario =====
  
We'll now add syntax analysis (dependency parsing) to our scenario by adding six more blocks to ''tutorial.scen''. Instead of
  
<code>
SEnglishW_to_SEnglishM::Sentence_segmentation_simple
SEnglishW_to_SEnglishM::Tokenization
SEnglishW_to_SEnglishM::TagMxPost
SEnglishW_to_SEnglishM::Lemmatize_mtree
</code>
  
  
<code bash>
SEnglishW_to_SEnglishM::Sentence_segmentation_simple
SEnglishW_to_SEnglishM::Tokenization
SEnglishW_to_SEnglishM::TagMxPost
SEnglishW_to_SEnglishM::Lemmatize_mtree
SEnglishM_to_SEnglishA::Clone_MTree
SEnglishM_to_SEnglishA::McD_parser
SEnglishM_to_SEnglishA::Fill_is_member_from_deprel
SEnglishM_to_SEnglishA::Fix_McD_topology
SEnglishM_to_SEnglishA::Fill_afun_AuxCP_Coord
SEnglishM_to_SEnglishA::Fill_afun
</code>
  
After running
we can examine our ''sample.tmt'' again. Indeed, an analytical layer ''SEnglishA'' describing a dependency tree with analytical functions (''<afun>'') has been added to each bundle.
  
Blocks can also be parametrized. For the syntax parser, we might want to use a smaller but faster model. To achieve this, replace the line
  
<code bash>
SEnglishM_to_SEnglishA::McD_parser
</code>
  
with

<code bash>
SEnglishM_to_SEnglishA::McD_parser TMT_PARAM_MCD_EN_MODEL=conll_mcd_order2_0.1.model
</code>

You can view the trees in ''sample.tmt'' with TrEd by typing

<code bash>
tmttred sample.tmt
</code>

Try to click on some nodes to see their attributes (tag, lemma, form, analytical function, etc.).

//Note//: For more information about the tree editor TrEd, see the [[http://ufal.mff.cuni.cz/~pajas/tred/ar01-toc.html|TrEd User's Manual]].

If you are not familiar with ''Makefile'' syntax, you can run the scenario with a simple ''bash'' script (see ''applications/tutorial/run_all.sh''):

<code bash>
./run_all.sh
</code>
  
  
  
Attributes of documents, bundles or nodes can be accessed by attribute getters and setters, for example:

  * ''$node<nowiki>-></nowiki>get_attr($attr_name)''
  * ''$node<nowiki>-></nowiki>set_attr($attr_name, $attr_value)''
  
Some interesting attributes on the morphological layer are ''form'', ''lemma'' and ''tag''. Some interesting attributes on the analytical layer are ''afun'' (analytical function) and ''ord'' (surface word order). To reach ''form'', ''lemma'' or ''tag'' from the analytical layer, that is, when asking for this attribute on an ''a-node'', you use ''$a_node<nowiki>-></nowiki>get_attr('m/form')'', and likewise for ''lemma'' and ''tag''. The easiest way to see the node attributes is to click on the node in TrEd:

<code bash>
tmttred sample.tmt
</code>

Our tutorial block ''Print_node_info.pm'' is ready to use. You only need to add this block to our scenario, e.g. as a new ''Makefile'' target:

<code bash>
print_info:
        brunblocks -o Tutorial::Print_node_info -- sample.tmt
</code>
  
  
Try to change the block so that it prints out the information only for verbs. (You need to look at the attribute ''tag'' on the ''m'' layer.) The tagset used is the Penn Treebank tagset.
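As a quick aid for this exercise: all Penn Treebank verb tags start with ''VB'', so matching the ''tag'' attribute against this prefix is enough to keep only verbs. A shell sanity check (the tag values below are illustrative, not taken from ''sample.tmt''):

<code bash>
# Penn Treebank verb tags (VB, VBD, VBG, VBN, VBP, VBZ) all start with "VB".
# Filtering a toy tag sequence for verbs:
printf '%s\n' NN VBZ DT VBD JJ VBG | grep '^VB'
# prints VBZ, VBD and VBG, one per line
</code>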
  
  
===== Advanced block: finite clauses =====
  
==== Motivation ====
  
It is assumed that finite clauses can be translated independently, which would reduce combinatorial complexity or make parallel translation possible. We could even use hybrid translation: each finite clause could be translated by the most self-confident translation system. In this task, we are going to split the sentence into finite clauses.
  
==== Task ====
Write a block which, given an analytical tree (''SEnglishA''), fills each ''a-node'' with a boolean attribute ''is_clause_head'', set to ''1'' if the ''a-node'' corresponds to a finite verb and to ''0'' otherwise.
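One rough tag-based heuristic for spotting finite verbs (an illustration and an assumption on our part, not the tutorial's reference solution): among the Penn Treebank verb tags, ''VBD'', ''VBP'' and ''VBZ'' (plus modal ''MD'') mark finite forms, while ''VB'', ''VBG'' and ''VBN'' are non-finite.

<code bash>
# Heuristic sketch: classify Penn Treebank tags as finite/non-finite verb forms.
for tag in VBZ VBD VBP MD VB VBG VBN; do
  case "$tag" in
    VBD|VBP|VBZ|MD) echo "$tag finite" ;;
    *)              echo "$tag non-finite" ;;
  esac
done
</code>

In a real block, the same test would be applied to each node's ''tag'' attribute.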
  
==== Instructions ====
<code bash>
finite_clauses:
        brunblocks -S -o Tutorial::Mark_heads Tutorial::Print_finite_clauses -- sample.tmt
</code>
  
  * ''my @eff_children = $node<nowiki>-></nowiki>get_eff_children()''
  
//Note//: ''get_children()'' returns the topological children of a node in the tree, while ''get_eff_children()'' returns the node's children in the linguistic sense. Mostly, these do not differ. If interested, see Figure 1 in the [[http://ufal.mff.cuni.cz/pdt2.0/doc/tools/tred/bn-tutorial.html#i-effective|btred tutorial]].
  
//Hint//: Finite clauses in English usually require a grammatical subject to be present.

==== Advanced version ====

The output of our block might still be incorrect in special cases: we don't handle coordination (see the second sentence in ''sample.txt'') or subordinating conjunctions.
  
  
===== Your turn: more tasks =====
  
==== SVO to SOV ====
  
**Motivation**: During translation from an SVO-based language (e.g. English) to an SOV-based language (e.g. Korean), we might need to change the word order from SVO to SOV.
  
**Task**: Change the word order from SVO to SOV.
**Instructions**:
  
  * You can use the block template in ''libs/blocks/BlockTemplate.pm''.
  * To find an object of a verb, look for objects among the effective children of the verb (''$child<nowiki>-></nowiki>get_attr('afun') eq 'Obj' ''). That implies working on the analytical layer.
  * For debugging, a method returning the surface word order of a node is useful: ''$node<nowiki>-></nowiki>get_attr('ord')''. It can be used to print out nodes sorted by the attribute ''ord''.
  * Once you have the node ''$object'' and the node ''$verb'', use the method ''$object<nowiki>-></nowiki>shift_before_node($verb)''. This method takes the whole subtree under the node ''$object'' and recalculates the ''ord'' attributes (surface word order) so that all the nodes in the subtree under ''$object'' have a smaller ''ord'' than ''$verb''. That is, the method rearranges the surface word order from VO to OV.
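To see what such a shift means in terms of ''ord'', here is a toy shell illustration (the word/''ord'' pairs are made up and this is not TectoMT output): printing tokens sorted by ''ord'' reveals the surface word order after the object has been shifted before the verb.

<code bash>
# Toy data: token<TAB>ord, after the object "MPs" was shifted before the verb "urged".
# Sorting numerically by the ord field recovers the new surface word order.
printf 'urged\t5\nMPs\t4\nhas\t3\nBrown\t2\nMr.\t1\n' | sort -k2,2n | cut -f1
# prints the tokens in order: Mr. Brown has MPs urged
</code>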
**Advanced version**: This solution shifts the object (or objects) of a verb just in front of that verb node. So, e.g., //Mr. Brown has urged MPs.// changes to //Mr. Brown has MPs urged.// You can try to change this solution so that the final sentence would be //Mr. Brown MPs has urged.// You may need the method ''$node<nowiki>-></nowiki>shift_after_subtree($root_of_that_subtree)''. Subjects should have the attribute ''afun'' equal to ''Sb''.
  
  
{{ external:tectomt:preps.png?200x80|Prepositions example}}
  
**Motivation**: In the dependency approach, the question "where to hang prepositions" arises. In the Praguian style (PDT), prepositions are heads of the subtree and the noun/pronoun depends on the preposition. However, another ordering might be preferable: the noun/pronoun might be the head of the subtree, while the preposition would take the role of a modifier.
  
**Task**: The task is to rehang all prepositions as indicated in the picture. You may assume that prepositions have at most one child.
//Hint//:
  * On the analytical layer, you can use this test to recognize prepositions: ''$node<nowiki>-></nowiki>get_attr('afun') eq 'AuxP' ''
  * To see the results, you can again use TrEd (''tmttred sample.tmt'')
  
**Advanced version**: What happens in the case of multiword prepositions, for example ''because of'' or ''instead of''? Can you handle them?
  
  
===== Further information =====
  * [[http://ufal.mff.cuni.cz/tectomt|TectoMT Homepage]]
  * Questions? Ask ''kravalova'' at ''ufal.mff.cuni.cz''
  * Solutions to the tutorial tasks are in ''libs/blocks/Tutorial/*solution*.pm''
  * [[http://ufal.mff.cuni.cz/~pajas/tred/|TrEd]] tree editor, [[http://ufal.mff.cuni.cz/~pajas/tred/ar01-toc.html|TrEd User's Manual]]

If you are missing some files from //share//, you can download them from [[http://ufallab.ms.mff.cuni.cz/tectomt/share/]].
