====== ÚFAL Grid Engine (LRC) ======
  
LRC (Linguistic Research Cluster) is the name of ÚFAL's computational grid/cluster, which (as of 2017/09) has about 1800 CPU cores (115 servers + 2 submission heads) and a total of 10 TiB of RAM. It uses the [[https://en.wikipedia.org/wiki/Oracle_Grid_Engine|(Sun/Oracle/Son of) Grid Engine]] software (SGE) for job scheduling. You can submit many computing tasks (jobs) at once; they are placed in a queue, and once a machine (slot) with the required capabilities (e.g. RAM, number of cores) becomes available, your job is executed (scheduled) on that machine. This way we can maximize the usefulness of the computing resources and divide them among users in a fair way.
  
If you need GPU processing, see the special page about our [[:gpu|GPU cluster called DLL]] (which is actually a subsystem of LRC with an independent queue ''gpu.q'').
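
For illustration, submitting to the GPU queue differs from a normal submission only in the target queue (a minimal sketch; see the GPU page for the recommended resource requests):
<code>
qsub script.sh            # CPU job, default queue
qsub -q gpu.q script.sh   # GPU job (see the GPU page for the required resources)
</code>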

TODO: describe alternatives, e.g.: MetaCentrum / Cesnet cluster (all MFF students can use it), Amazon EC2, Microsoft Azure (some colleagues may occasionally have free vouchers).
  
===== List of Machines =====
^ Name                ^ CPU type            ^ GHz ^ cores ^ RAM (GB) ^ note  ^
| lrc[1,2]            | Intel               | 2.3 |     4 |       45 | **no computing here**, just submit jobs |
| sol[1-5]            | Intel               | 2.6 |     4 |       16 | you can ssh here and compute |
| sol[6-8]            | Intel               | 2.0 |     8 |       16 | you can ssh here and compute |
  
Alternatively, you can ssh to one of the **sol machines** and submit jobs from there. Computing directly on them is allowed, which is useful e.g. when you have a script which submits your jobs but also collects statistics from the jobs' outputs (and possibly submits new jobs conditioned on the statistics). However, the sol machines are relatively slow and may be occupied by your colleagues, so for bigger (longer) tasks, always prefer submission as separate jobs.
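
A minimal sketch of such a driver script, to be run on a sol machine (all file and job names below are illustrative, not part of the cluster setup):
<code>
#!/bin/bash
# Submit one job per input file, all under the same (illustrative) name "exp".
for f in data/*.txt; do
  qsub -cwd -N exp run_experiment.sh "$f"
done
# Wait until all "exp" jobs have finished: submit a trivial job that
# depends on them and block until it completes (-sync y).
qsub -cwd -sync y -hold_jid exp -b y true
# Collect statistics from the jobs' outputs (exp.o* are the default stdout files).
grep -h BLEU exp.o* | sort -rn | head -n 1
</code>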
  
===== Installation =====
  
First, you need to ssh to the cluster head (lrc1 or lrc2) or to one of the sol machines. The full address is ''lrc1.ufal.hide.ms.mff.cuni.cz'', but you can use just ''ssh lrc1'' ("hide" means it is accessible only from the ÚFAL network, not from outside; if working from home/Eduroam, you need to [[internal:remote-access|login/VPN]] to the ÚFAL network first).
In the following tutorial, we will prepare a wrapper shell script ''script.sh'' with a toy task. In practice you can name the script whatever you want, and it can execute your real task, e.g. a Python/Perl/... script. Using a wrapper shell script is recommended, but with ''-b y'' (see [[#advanced usage]]) you can execute a Python/Perl/... script directly, without any wrapper.
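
The tutorial's toy ''script.sh'' is not reproduced in this revision, but a minimal sketch of such a wrapper and its submission (the script body below is purely illustrative) could be:
<code>
cat > script.sh <<'EOF'
#!/bin/bash
hostname                       # shows which machine the job was scheduled on
echo "Hello from job $JOB_ID"  # SGE sets $JOB_ID for every job
sleep 60                       # pretend to do some work
EOF

qsub -cwd script.sh   # submit; -cwd keeps the output files in the current directory
qstat                 # watch the job waiting in the queue / running
</code>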
  
===== Advanced usage =====

''qsub **-m** beas **-M** my.email@example.com''
Specify the emails where you want to be notified when the job has been **b** started, **e** ended, **a** aborted or rescheduled, or **s** suspended.
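
For example (the address is illustrative):
<code>qsub -m bea -M my_login@ufal.mff.cuni.cz script.sh</code>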
  
''qsub **-hold_jid** 121144,121145'' (or ''qsub **-hold_jid** get_src.sh,get_tgt.sh'')
The current job is not executed before all the specified jobs have completed.
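
A typical use is a small pipeline, e.g. (reusing the job names from the example above):
<code>
qsub -cwd get_src.sh
qsub -cwd get_tgt.sh
# train.sh starts only after both download jobs have completed
qsub -cwd -hold_jid get_src.sh,get_tgt.sh train.sh
</code>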
  
''qsub **-N** my-name''
By default the name of a job (which you can see e.g. in ''qstat'') is the name of the ''script.sh''. With ''-N'' you can override it.
  
''qsub **-S** /bin/bash''
The hashbang (''#!/bin/bash'') in your ''script.sh'' is ignored, but you can change the interpreter with ''-S''. I think ''/bin/bash'' is now (2017/09) the default (but it used to be ''csh'').
  
''qsub **-v** PATH[=value]''
Export a given environment variable from the current shell to the job (optionally with an explicitly set value).
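
For example (the variable and its values are illustrative):
<code>
export DATA_DIR=/net/data/my-corpus
qsub -v DATA_DIR script.sh             # the job sees the current value of $DATA_DIR
qsub -v DATA_DIR=/tmp/other script.sh  # or set the value explicitly
</code>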
  
  * ''-hold_jid_ad comma_separated_job_list'' array jobs that must finish before this job starts; task //i// of the current job depends only on task //i// of the specified jobs (see the sketch below)
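
A minimal sketch of an array-job dependency (the script names are illustrative):
<code>
qsub -cwd -t 1-100 split.sh                        # array job with 100 tasks
qsub -cwd -t 1-100 -hold_jid_ad split.sh merge.sh  # task i waits only for task i of split.sh
</code>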
  
=== Ssh to random sol ===
Ondřej Bojar suggests adding the following alias to your .bashrc (cf. [[#sshcwd]]):
<code>alias cluster='comp=$(($RANDOM / 4096 + 1)); ssh -o "StrictHostKeyChecking no" sol$comp'</code>
(''$RANDOM'' ranges over 0–32767, so the division by 4096 plus 1 yields a number between 1 and 8, i.e. one of sol1–sol8.)
===== Job monitoring =====
  
  * ''/SGE/REPORTER/LRC-UFAL/bin/lrc_state_overview'' -- overall summary (with per-user stats for users with running jobs)
  * ''cat /SGE/REPORTER/LRC-UFAL/stats/userlist.weight'' -- all users sorted according to their activity (number of submitted jobs × their average duration), updated each night
  * [[http://ufaladm2/munin/ufal.hide.ms.mff.cuni.cz/lrc-headnode.ufal.hide.ms.mff.cuni.cz/index.html|Munin: graph of cluster usage by day and user]] and [[http://ufaladm2/munin/ufal.hide.ms.mff.cuni.cz/apophis.ufal.hide.ms.mff.cuni.cz/index.html|Munin monitoring of Apophis disk server]] (both accessible only from ÚFAL network)
  
===== Other =====
  * There is a **great course [[http://ufal.mff.cuni.cz/courses/npfl102|Data intensive computing]]**; see the 2016 handouts if you missed it. It covers the usage of [[http://spark.apache.org/|Spark]] (a MapReduce/Hadoop alternative, but better) and HDFS (the Hadoop filesystem).
  * This course used a special **DLRC (Demo LRC) cluster** (students had to log in with ''ssh -p 11422 ufallab.ms.mff.cuni.cz'' and special NPFL102-only LDAP logins) with six virtual machines on one physical machine. In years when NPFL102 is not taught (e.g. 2017), the DLRC cluster has just one virtual machine.
  * **Note:** some Hadoop basics and a lot of NoSQL technologies are covered by [[https://is.cuni.cz/studium/predmety/index.php?do=predmet&kod=NDBI040|Big Data Management and NoSQL Databases]].
  * You can use the environment variables ''$JOB_ID'' and ''$JOB_NAME'' (see the sketch below this list).
  * One job can submit other jobs (but be careful with recursion :-)). A job submitted to the CPU cluster may submit GPU jobs (to the ''gpu.q'' queue).
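
For example, inside your ''script.sh'' (a minimal sketch; ''./experiment'' is illustrative):
<code>
#!/bin/bash
# Store the outputs under a unique per-job directory using the SGE variables.
OUT="results/$JOB_NAME-$JOB_ID"
mkdir -p "$OUT"
./experiment > "$OUT/log" 2>&1
</code>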
