====== ÚFAL Grid Engine (LRC) ======
  
LRC (Linguistic Research Cluster) is the name of ÚFAL's computational grid/cluster, which has (as of 2018/06) about 1728 CPU cores (65 servers + 10 submission heads), with a total of 7.2 TiB of RAM. It uses [[https://en.wikipedia.org/wiki/Oracle_Grid_Engine|(Sun/Oracle/Son of) Grid Engine]] software (SGE) for job scheduling etc. You can submit many computing tasks (jobs) at once; they will be placed in a queue and once a machine (slot) with the required capabilities (e.g. RAM, number of cores) is available, your job will be executed (scheduled) on that machine. This way we can maximize the usefulness of the computing resources and divide them among users in a fair way.
  
If you need GPU processing, see a special page about our [[:gpu|GPU cluster called DLL]] (which is actually a subsystem of LRC with an independent queue ''gpu-ms.q'').
TODO: describe alternatives, e.g.: MetaCentrum / Cesnet cluster (all MFF students can use it), Amazon EC2, Microsoft Azure (some colleagues may sometimes have free vouchers).
  
===== List of Machines =====
Last update: 2018/06. All machines have Ubuntu 18.04.
Some machines are at Malá Strana (ground floor, new server room built from Lindat budget), some are at Troja (5 km north-east).
If you need to quickly distinguish which machine is located where, you can use your knowledge of [[https://en.wikipedia.org/wiki/Trojan_War|Trojan war]]-related heroes, ''qhost -q'', or the tables below.
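For example, a quick check from a submit host (the exact output columns depend on the SGE version):

<code>
qhost -q | less   # lists each host together with the queue(s) it serves (cpu-troja.q / cpu-ms.q)
</code>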
  
==== Troja (cpu-troja.q) ====
^ Name                ^ CPU type                                  ^ GHz ^cores ^RAM(GB)^ note ^
| achilles[1-8]       | Intel(R) Xeon(R) CPU E5-2630 v3           | 2.4 |   31 |  123  |      |
| hector[1-8]         | Intel(R) Xeon(R) CPU E5-2630 v3           | 2.4 |   31 |  123  |      |
| helena[1-8]         | Intel(R) Xeon(R) CPU E5-2630 v3           | 2.4 |   31 |  123  |      |
| paris[1-8]          | Intel(R) Xeon(R) CPU E5-2630 v3           | 2.4 |   31 |  123  |      |
  
==== MS = Malá Strana (cpu-ms.q) ====
  
^ Name                ^ CPU type and flags              ^ GHz ^cores ^RAM(GB)^ note ^
| andromeda[1-13]     | AMD Opteron                     | 2.8 |    7 |   30  |      |
| lucifer[1-10]       | Intel(R) Xeon(R) CPU E5620      | 2.4 |   15 |  122  |      |
| hydra[1-4]          | AMD Opteron SSE4 AVX            | 2.6 |   15 |  122  |      |
| orion[1-8]          | Intel(R) Xeon(R) CPU E5-2630 v4 | 2.2 |   39 |  122  |      |
| cosmos              | Intel Xeon                      | 2.9 |    7 |  249  |      |
| belzebub            | Intel Xeon SSE4 AVX             | 2.9 |   31 |  249  |      |
| iridium             | Intel Xeon SSE4                 | 1.9 |   15 |  501  |      |

Machines from old cluster (do not use!):

^ Name                ^ CPU type and flags   ^ GHz ^cores ^RAM(GB)^ note ^
| fireball[1-10]      | Intel Xeon           | 3.0 |    4 |   32 | removed |
| hyperion[1-9]       | Intel Xeon           | 3.0 |    4 |   32 | removed |
| tauri[1-10]         | Intel Xeon           | 3.0 |    4 |   32 | removed |
| twister[1,2]        | Intel Xeon SSE4      | 2.4 |    8 |   48 | moved to GPU cluster |
  
=== Submit hosts / test machines ===
^ Name                ^ CPU type                        ^ GHz ^cores ^ RAM(GB) ^ note  ^
| sol[1-10]           | Intel(R) Xeon(R) CPU E5345      | 2.3 |    7 |   31    | you can ssh here and compute or submit jobs |
| lrc[1,2]            | Intel(R) Xeon(R) CPU E5-2630 v4 | 2.2 |    4 |    4    | you can submit jobs here or monitor job execution - NO COMPUTATION IS ALLOWED HERE !!! |
  
You can ssh to one of the **sol machines** and submit jobs from there. Computing directly on the sol machines is also allowed, which is useful e.g. when you have a script which submits your jobs but also collects statistics from their outputs (and possibly submits new jobs conditioned on those statistics). However, the sol machines are relatively slow and may be occupied by your colleagues, so for bigger (longer) tasks always prefer submitting them as separate jobs.
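A minimal sketch of such a submit-and-collect script (using the toy ''script.sh'' from the tutorial below; the job names are made up, and ''-sync y'' makes ''qsub'' wait until the job finishes):

<code>
#!/bin/bash
# sketch: submit 10 toy jobs from a sol machine, wait for all of them, then collect their outputs
for i in $(seq 1 10); do
  qsub -cwd -j y -sync y -N "toy$i" script.sh part$i &   # -sync y blocks until the job ends
done
wait                        # wait for all the backgrounded qsub calls, i.e. for all jobs
cat toy*.o* > all_outputs   # "statistics" gathered from the job output files
</code>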
  
===== Installation =====
Add the following line into your '~/.bash_profile'.
  
  source /opt/LRC/sge_profile

Or call one of these scripts directly:

  /opt/LRC/common/settings.sh (for bash)
  /opt/LRC/common/settings.csh (for tcsh/csh)
  
This detects if you are on one of the cluster machines (including sol) and sets env variables accordingly. It also prints a status message.
Usually, this is the first line of your '~/.bash_profile' and the second-and-last line is
  
  
  export LC_ALL=en_US.UTF-8
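Putting it together, a minimal two-line '~/.bash_profile' as suggested above might look like this (a sketch only; keep any other settings you need in between):

<code>
source /opt/LRC/sge_profile   # first line: SGE environment (prints a status message)
export LC_ALL=en_US.UTF-8     # last line: a sane UTF-8 locale
</code>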
===== Basic usage =====
  
First, you need to ssh to one of the submit hosts (sol[1-10]). The full address is (for example) ''sol1.ufal.hide.ms.mff.cuni.cz'', but you can use just ''ssh sol1'' ("hide" means it is accessible only from the ÚFAL network, not from outside; if working from home/Eduroam, you need to [[internal:remote-access|login/VPN]] to the ÚFAL network first).
In the following tutorial, we will prepare a wrapper shell script ''script.sh'' with a toy task. In practice you can name the script whatever you want and you can execute the real task, e.g. a Python/Perl/... script. It is recommended to use wrapper shell scripts, but with ''-b y'' (see [[#advanced usage]]) you can execute a Python/Perl/... script directly without any wrapper.
  
<code>
ssh sol1
echo 'hostname; pwd; echo The second parameter is $2' > script.sh
  # prepare a shell script describing your task
qsub -cwd -j y script.sh Hello World
  # This submits your job to the default queue, which is currently ''cpu-ms.q''.
  # Usually, there is a free slot, so the job will be scheduled within a few seconds.
  # We have used two handy qsub parameters:
''qsub **-M** popel@ufal.mff.cuni.cz,rosa@ufal.mff.cuni.cz **-m** beas''
Specify the emails where you want to be notified when the job has been **b** started, **e** ended, **a** aborted or rescheduled, **s** suspended.
The default is now ''-m ea'' (TODO check this) and the default email address is forwarded to you (so there is no need to use ''-M''). You can use ''-m n'' to override the defaults and send no emails.
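For example, to be emailed when the job starts, ends or is aborted (an illustration only; substitute your own address):

  qsub -cwd -j y -m bea -M your_login@ufal.mff.cuni.cz script.sh Hello World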
  
''qsub **-hold_jid** 121144,121145'' (or ''qsub **-hold_jid** get_src.sh,get_tgt.sh'')
  
and you execute it now simply with ''qsub script.sh''.
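For illustration, such a self-contained script might look like this (a sketch only; the ''#$'' lines hold ordinary qsub options, and the job name is made up):

<code>
#!/bin/bash
#$ -N hello          # job name shown in qstat (hypothetical)
#$ -cwd -j y         # run in the submission directory, merge stderr into stdout
#$ -m ea             # email me when the job ends or is aborted
hostname; pwd        # the actual task
</code>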
=== ~/.sge_request ===

You can change the defaults for any option by creating a personal configuration file ''~/.sge_request''. For example, you can add a line ''-m n'' there, so you will get no email notifications (unless overridden by command-line or in-script options).
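A possible ''~/.sge_request'' might look like this (a sketch; each non-comment line holds ordinary qsub options that become your new defaults):

<code>
# ~/.sge_request -- default options applied to every qsub/qrsh
-m n      # no email notifications
-cwd      # run jobs in the directory where they were submitted (just an example default)
</code>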
  
=== Array jobs ===
=== Ssh to random sol ===
Ondřej Bojar suggests adding the following alias to your .bashrc (cf. [[#sshcwd]]):
<code>alias cluster='comp=$(( (RANDOM % 10) +1)); ssh -o "StrictHostKeyChecking no" sol$comp'</code>
  
===== Job monitoring =====
  * ''qstat [-u user]'' -- print a list of running/waiting jobs of a given user (see the example below this list)
  * ''qhost'' -- print available/total resources
  * ''qacct -j job_id'' -- print info even for an ended job (for which ''qstat -j job_id'' does not work). See ''man qacct'' for more.

  * ''/opt/LRC/REPORTER/LRC-UFAL/bin/lrc_users_real_mem_usage -u user -w'' -- current memory usage of a given user
  * ''/opt/LRC/REPORTER/LRC-UFAL/bin/lrc_users_limits_requested -w'' -- required resources of all users
  * ''/opt/LRC/REPORTER/LRC-UFAL/bin/lrc_nodes_meminfo'' -- memory usage of all nodes
    * mem_total: total memory of the node
    * mem_free: total memory minus reserved memory (using ''qsub -l mem_free'') for each node
    * act_mem_free: really free memory
    * mem_used: really used memory
  * ''/opt/LRC/REPORTER/LRC-UFAL/bin/lrc_state_overview'' -- overall summary (with per-user stats for users with running jobs)
  * ''cat /opt/LRC/REPORTER/LRC-UFAL/stats/userlist.weight'' -- all users sorted according to their activity (number of submitted jobs × their average duration), updated each night
  * [[http://ufaladm2/munin/ufal.hide.ms.mff.cuni.cz/lrc-headnode.ufal.hide.ms.mff.cuni.cz/index.html|Munin: graph of cluster usage by day and user]] and [[http://ufaladm2/munin/ufal.hide.ms.mff.cuni.cz/apophis.ufal.hide.ms.mff.cuni.cz/index.html|Munin monitoring of Apophis disk server]] (both accessible only from ÚFAL network)
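A typical check of your own jobs (''123456'' is just a placeholder for a job id of yours):

<code>
qstat -u $USER     # running and waiting jobs of the current user
qstat -j 123456    # details of a running/waiting job
qacct -j 123456    # resource usage of a job that has already finished
</code>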
  
