Institute of Formal and Applied Linguistics Wiki


  * If your job needs more than one CPU (on a single machine) most of the time, reserve that number of CPU cores (and SGE slots) with <code>qsub -pe smp <number-of-CPU-cores></code> As you can see in [[#List of Machines]], the maximum is 32 cores. If your job needs e.g. up to 110% CPU most of the time and only occasionally 200%, it is OK to reserve just one core (so you don't waste resources). TODO: when using ''-pe smp -l mf=8G,amf=8G,h_vmem=12G'', which memory limits are per machine and which are per core?
  * If you are sure your job needs less than 1GB RAM, then you can skip this. Otherwise, if you need e.g. 8 GiB, you must always use ''qsub'' (or ''qrsh'') with ''-l mem_free=8G''. You should also specify ''act_mem_free'' with the same value and ''h_vmem'' with a slightly higher value. See [[#memory]] for details. TL;DR: <code>qsub -l mem_free=8G,act_mem_free=8G,h_vmem=12G</code> A combined example with both CPU and memory reservations follows this list.
  * Be kind to your colleagues. If you are going to submit jobs that effectively occupy **more than one fifth of our cluster for more than several hours**, check if the cluster is free (with ''qstat -g c'' or ''qstat -u \*'') and/or ask your colleagues whether they plan to use the cluster intensively in the near future. Note that if you allocate one slot (CPU core) on a machine, but (almost) all its RAM, you have effectively occupied the whole machine and all its cores. If you are submitting **more than 100 jobs**, consider giving them a low priority (e.g. ''-p -1024'', see below) or using [[#qunhold]] or (even better) [[#array jobs]].
  * **Don't submit more than ca 5000 jobs at once**, even if you make sure that at most 100 are running/waiting and the rest is in the //hold// state (e.g. using ''qunhold''). More than 5000 jobs in the queue can overload the SGE, so that no one can execute ''qstat'' (or it takes too long).
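
For illustration, a single submission that combines the CPU and memory reservations above might look like the following (''script.sh'' and the concrete values are only an example; adjust them to your job, and note the open TODO above about whether the memory limits are per core or per machine when combined with ''-pe smp''):

<code>
# reserve 4 cores and 8 GiB RAM (with a 12 GiB hard limit) for script.sh
qsub -pe smp 4 -l mem_free=8G,act_mem_free=8G,h_vmem=12G script.sh
</code>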
  
=== qunhold ===
''~stepanek/bin/qunhold'' tries to keep the number of running SGE jobs under a given threshold: all jobs over the threshold are held. If the number of running jobs goes below the threshold (default: 100), 10 jobs (by default) are unheld. Beware: if your jobs submit new jobs, you can get far over the threshold!

Don't submit more than ca 5000 jobs with qunhold - it overloads the SGE queue and slows down e.g. ''qstat'' (which qunhold itself calls internally).
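
The idea behind it is roughly the following (a simplified sketch only, not the actual script; the exact ''qstat'' flags and the thresholds may differ on our installation):

<code>
#!/bin/bash
# Keep releasing held jobs in small batches while the number of
# running/pending jobs stays below a threshold.
THRESHOLD=100    # maximum number of running/pending jobs
BATCH=10         # how many held jobs to release at a time
while true; do
  active=$(qstat -s rp | tail -n +3 | wc -l)            # my running + pending jobs
  if [ "$active" -lt "$THRESHOLD" ]; then
    qstat -s h | tail -n +3 | awk '{print $1}' \
      | head -n "$BATCH" | xargs -r qrls                # release up to BATCH held jobs
  fi
  sleep 60
done
</code>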
  
=== sshcwd ===
  
If you have a set of tasks (of the same type) and want to run them on multiple machines, use ''qsub -t'' (see the example at the end of this section).
  * ''-t 1-n'' start array job with //n// tasks numbered //1 ... n//
  * environment variable ''SGE_TASK_ID''
  * output and error files ''$JOB_NAME.[eo]$JOB_ID.$TASK_ID''
  * ''-t m-n[:s]'' start array job with tasks //m, m + s, ..., n//
  * environment variables ''SGE_TASK_FIRST, SGE_TASK_LAST, SGE_TASK_STEPSIZE''
  * ''-tc j'' run at most //j// tasks simultaneously
  * ''-hold_jid_ad comma_separated_job_list'' array jobs that must finish before this job starts; task //i// of the current job depends only on task //i// of the specified jobs

If you use ''-tc'', then SGE can handle array jobs of virtually any size. It only starts as many tasks as specified in ''-tc'' at any time, and in each scheduling interval (15 seconds in our current configuration) it starts new tasks if fewer than the specified ''-tc'' limit are running. However, note that this means the maximum throughput is //4 * tc// tasks per minute, so the individual array job tasks need to run for at least tens of seconds for this to be effective.

The advantage of array jobs over [[#qunhold]] is that they do not overload the SGE job queue. Also, when you start an array job, others can see that it is an array job, how many individual tasks it has and how many of them have already finished.
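
A minimal sketch of a complete array job (''job.sh'', ''process.sh'' and the file layout are hypothetical placeholders, substitute your own):

<code>
#!/bin/bash
# job.sh: each task picks its own input file according to $SGE_TASK_ID
# and writes the corresponding output file.
./process.sh < data/part-$SGE_TASK_ID.txt > out/part-$SGE_TASK_ID.txt
</code>

<code>
# submit 100 tasks, letting SGE run at most 20 of them at any time;
# -cwd makes the tasks run in the submission directory
qsub -cwd -t 1-100 -tc 20 job.sh
</code>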
  
=== Delete many jobs at once ===
