<code>
qdel 121144
  # This way you can delete ("kill") a job with a given number, or a comma- or space-separated list of job numbers.
qdel \*
  # This way you can delete all your jobs. Don't be afraid - you cannot delete other users' jobs.
</code>
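For illustration, a hypothetical list of job numbers can be killed in one command:

<code>
qdel 121145 121146 121147
  # the numbers are made up; qdel 121145,121146,121147 works as well
</code>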
  
===== Rules =====
The purpose of these rules is to prevent your jobs from damaging the work of your colleagues and to divide the resources among users in a fair way.
  
  * Read about our [[internal:linux-network|network]] first (so you know that e.g. reading big data from your home in 200 parallel jobs is not a good idea, but regular cleanup of your data is a good idea). Ask your colleagues (possibly via [[internal:mailing-lists|devel]]) if you are not sure, esp. if you plan to submit jobs with unusual/extreme disk/mem/CPU requirements.
  * While your jobs are running (or queued), check your jobs (esp. previously untested setups) and your email (esp. [[internal:mailing-lists|devel]]) regularly. If you really need to be offline, e.g. for a two-week vacation, consult it@ufal first (whether they may kill your jobs if needed).
  * You can ssh to any cluster machine, which can be useful e.g. to diagnose what's happening there (using ''htop'' etc.).
  * However, **never execute any computing manually** on a cluster machine to which you have sshed (i.e. not via ''qsub'' or ''qrsh''). If you break this rule, your task will take CPU and memory, but SGE will not know about it, so it may schedule other users' jobs on the same machine and **their jobs may fail** or run slowly. The sol machines are an exception to this rule.
  * For interactive work, you can use ''qrsh'', but please try to end the job (exit with Ctrl+D) once you have finished your work, especially if you asked for a lot of memory or CPUs (see below). One semi-permanent qrsh job (with non-extreme CPU/mem requirements) per user is acceptable.
  * **Specify the memory and CPU requirements** (if higher than the defaults) and **don't exceed them**.
    * If your job needs more than one CPU (on a single machine) for most of the time, reserve the given number of CPU cores (and SGE slots) with <code>qsub -pe smp <number-of-CPU-cores></code> As you can see in [[#List of Machines]], the maximum is 32 cores. If your job needs e.g. up to 110% CPU most of the time and only occasionally 200%, it is OK to reserve just one core (so you don't waste resources); a combined example is sketched below this list. TODO: when using ''-pe smp -l mf=8G,amf=8G,h_vmem=12G'', which memory limits are per machine and which are per core?
    * If you are sure your job needs less than 1 GiB RAM, then you can skip this. Otherwise, if you need e.g. 8 GiB, you must always use ''qsub'' (or ''qrsh'') with ''-l mem_free=8G''. You should also specify ''act_mem_free'' with the same value and ''h_vmem'' with a possibly slightly bigger value. See [[#memory]] for details. TL;DR: <code>qsub -l mem_free=8G,act_mem_free=8G,h_vmem=12G</code>
  * Be kind to your colleagues. If you are going to submit jobs that effectively occupy more than one fifth of our cluster for more than several hours, check whether the cluster is free (with ''qstat -g c'' or ''qstat -u \*'') and/or ask your colleagues whether they plan to use the cluster intensively in the near future. Note that if you allocate one slot (CPU core) on a machine, but (almost) all of its RAM, you have effectively occupied the whole machine and all its cores.
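For illustration, a sketch combining both kinds of requirements in one submission (the script name is a placeholder; see the TODO above on how the memory limits interact with ''-pe smp''):

<code>
qsub -cwd -j y -pe smp 4 -l mem_free=8G,act_mem_free=8G,h_vmem=12G script.sh
  # reserves 4 CPU cores (SGE slots) and 8 GiB of RAM, with a 12 GiB hard limit on virtual memory
</code>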
  
  
=== Memory ===

  * There are three commonly used options for specifying memory requirements: ''mem_free'', ''act_mem_free'' and ''h_vmem''. Each has a different purpose.
  * ''mem_free=1G'' means 1024×1024×1024 bytes, i.e. one [[https://en.wikipedia.org/wiki/Gibibyte|GiB (gibibyte)]]. ''mem_free=1g'' means 1000×1000×1000 bytes, i.e. one gigabyte. Similarly for the other options and other prefixes (k, K, m, M).
  * **mem_free** (or mf) specifies a //consumable resource// tracked by SGE and it affects job scheduling. Each machine has an initial value assigned (slightly lower than the real total physical RAM capacity). When you specify ''qsub -l mem_free=4G'', SGE finds a machine with mem_free >= 4 GiB and subtracts 4 GiB from it. This limit is not enforced, so if a job exceeds it, **it is not automatically killed** and the SGE value of mem_free may thus not reflect the real free memory. The default value is 1G. If you don't use this option and consume more than 1 GiB, you are breaking the rules.
  * **act_mem_free** (or amf) is a ÚFAL-specific option, which specifies the real amount of free memory (at the time of scheduling). You can specify it when submitting a job and it will be scheduled on a machine with at least this amount of memory free. In an ideal world, where no job exceeds its ''mem_free'' requirement, we would not need this option. However, in the real world it is recommended to use this option with the same value as ''mem_free'' to protect your job from failing with an out-of-memory error (because of naughty jobs of other users).
  * **h_vmem** is equivalent to setting ''ulimit -v'', i.e. it is a hard limit on the size of virtual memory (see RLIMIT_AS in ''man setrlimit''). If your job exceeds this limit, memory allocation fails (i.e. malloc or mmap will return NULL) and your job will probably crash with SIGSEGV. TODO: according to ''man queue_conf'', the job is killed with SIGKILL, not with SIGSEGV. Note that ''h_vmem'' specifies the maximal size of **allocated memory, not used memory**, in other words it is the VIRT column in ''top'', not the RES column. SGE does not use this parameter in any other way. Notably, job scheduling is not affected by this parameter, so there is no guarantee that this amount of memory will be available on the chosen machine. The problem is that some programs (e.g. Java with the default settings) allocate much more (virtual) memory than they actually use in the end. If we want to be ultra conservative, we should set ''h_vmem'' to the same value as ''mem_free''. If we want to be only moderately conservative, we should specify something like h_vmem=1.5*mem_free, because some jobs will not use the whole mem_free requested, but our job will still be killed if it allocates much more than declared. The default effectively means that your job has no limit.
  * It is recommended to **profile your task first**, so you can estimate reasonable memory requirements before submitting many jobs with the same task (varying in parameters which do not affect memory consumption). So the first time, declare ''mem_free'' with much more memory than expected, ssh to the chosen machine and check ''htop'' (sum all processes of your job), or (if the job finishes quickly) check the epilog. When running further jobs of this type, set ''mem_free'' (and ''act_mem_free'' and ''h_vmem'') so you are not wasting resources, but still have some reserve. A sketch of this workflow follows below this list.
  * **s_vmem** is similar to ''h_vmem'', but instead of SIGSEGV/SIGKILL, the job is sent a SIGXCPU signal, which it can catch in order to clean up and exit gracefully before it is killed. So if you need this, set ''s_vmem'' to a value lower than ''h_vmem'' and implement SIGXCPU handling and cleanup (see the sketch below).
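A sketch of the profiling workflow described above (the script name and memory sizes are just an example):

<code>
# First (profiling) run: declare much more memory than expected.
qsub -cwd -j y -l mem_free=30G,act_mem_free=30G,h_vmem=32G experiment.sh
# While it runs, ssh to the chosen machine and watch htop (sum all processes of your job),
# or check the epilog in the job's output file once it has finished.
# Subsequent runs: tighten the limits so you don't waste resources, but keep some reserve.
qsub -cwd -j y -l mem_free=10G,act_mem_free=10G,h_vmem=12G experiment.sh
</code>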
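And a minimal sketch of SIGXCPU handling in a bash job script, submitted e.g. with ''qsub -l mem_free=8G,act_mem_free=8G,h_vmem=12G,s_vmem=10G job.sh'' (the program and file names are placeholders):

<code>
#!/bin/bash
# When the s_vmem soft limit is exceeded, SGE sends SIGXCPU: clean up and exit before the hard limit kills us.
trap 'echo "SIGXCPU caught, cleaning up" >&2; rm -f partial_results.tmp; exit 1' XCPU
./run_experiment > partial_results.tmp   # placeholder for the real computation
mv partial_results.tmp results.txt
</code>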
  
  
===== Advanced usage =====
  
  * <code>qsub -o LOG.stdout -e LOG.stderr</code>
    redirects std{out,err} to separate files with the given names

<code>
qsub -S /bin/bash
  # Choose the interpreter of your script. I think /bin/bash is now (2017/09) the default (but it used to be csh).
qsub -v PATH
  # export a given environment variable from the current shell to the job
qsub -V
  # export all environment variables
man qsub qstat qhold queue_conf sge_types complex
  # Find out all the gory details which are missing here. You'll have to do it one day anyway :-).
</code>

  * By default, all the resource requirements (specified with ''-l'') and queue requirements (specified with ''-q'') are //hard//, i.e. your job won't be scheduled unless they can be fulfilled. You can use **''-soft''** to mark all following requirements as nice-to-have, and with **''-hard''** you can switch back to hard requirements (see the sketch below this list).
  * If you often run (ad-hoc) bash commands via ''qsub'', check the two alternative qsub wrappers ''~bojar/tools/shell/qsubmit'' and ''~stepanek/bin/qcmd'', which allow you to enter the command on the command line without creating any temp script files (see the example below). The wrappers also have other features (some qsub options have changed default values). ''qcmd'' is older, but unlike ''qsubmit'' it has POD documentation, correct time computation, and you don't need to quote the command.
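For illustration (the script and command names are placeholders; the ''qsubmit'' usage is taken from an older revision of this page):

<code>
qsub -hard -l mem_free=8G -soft -l act_mem_free=8G script.sh
  # mem_free is required, act_mem_free is only a nice-to-have preference
~bojar/tools/shell/qsubmit "bash_command < input.txt > output.txt"
  # the wrapper creates the temporary script for you and adds -cwd -j y -S /bin/bash
</code>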
  
===== Monitoring jobs =====
