====== ÚFAL Grid Engine (LRC) ======
  
LRC (Linguistic Research Cluster) is the name of ÚFAL's computational grid/cluster. The cluster is built on top of [[https://slurm.schedmd.com/|SLURM]] and uses [[https://www.lustre.org/|Lustre]] for [[internal:linux-network#directory-structure|data storage]].
  
Currently the following partitions (queues) are available for computing:

| **Partition name** | **Nodes**  | **Note** |
| cpu-troja      | 7x CPU | default partition |
| gpu-troja      | 6x GPU | features: gpuram48G,gpuram40G |
| gpu-ms         | 7x GPU | features: gpuram48G,gpuram24G |

In order to submit a job you need to log in to one of the head nodes:

   lrc1.ufal.hide.ms.mff.cuni.cz
   lrc2.ufal.hide.ms.mff.cuni.cz
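For example, assuming you have a standard ÚFAL account, you can connect with SSH (the username below is illustrative):

<code>ssh linguist@lrc1.ufal.hide.ms.mff.cuni.cz</code>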
===== Basic usage =====
  
==== Batch mode ====

The core idea is that you write a batch script containing the commands you wish to run as well as a list of ''SBATCH'' directives specifying the resources or parameters you need for your job.
The script is then submitted to the cluster with:

<code>sbatch myJobScript.sh</code>

Here is a simple working example:

<code>
#!/bin/bash
#SBATCH -J helloWorld          # name of job
#SBATCH -p cpu-troja           # name of partition or queue (if not specified, the default partition is used)
#SBATCH -o helloWorld.out      # name of output file for this submission script
#SBATCH -e helloWorld.err      # name of error file for this submission script

# run my job (some executable)
sleep 5
echo "Hello, I am running on the cluster!"
</code>

After submitting this simple script you should end up with two files (''helloWorld.out'' and ''helloWorld.err'') in the directory where you called the ''sbatch'' command.
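For illustration, a typical submit-and-check session could look like this (the job ID is of course illustrative):

<code>
$ sbatch myJobScript.sh
Submitted batch job 123456
$ squeue -j 123456         # check the state of the job (see below)
$ cat helloWorld.out       # inspect the output once the job has finished
</code>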

Here is a list of other useful ''SBATCH'' directives:
<code>
#SBATCH -D /some/path/                        # change directory before executing the job
#SBATCH -N 2                                  # number of nodes (default 1)
#SBATCH --nodelist=node1,node2...             # required node, or comma-separated list of required nodes
#SBATCH --cpus-per-task=4                     # number of cores/threads per task (default 1)
#SBATCH --gres=gpu:<count>                    # number of GPUs to request (default 0)
#SBATCH --mem=10G                             # request 10 gigabytes of memory (per node, default depends on node)
</code>
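As a sketch, several of these directives can be combined in a single submission script for a GPU job (the partition, resource sizes, and the final command are illustrative):

<code>
#!/bin/bash
#SBATCH -J myGpuJob               # name of job
#SBATCH -p gpu-troja              # GPU partition
#SBATCH --gres=gpu:1              # request 1 GPU
#SBATCH --cpus-per-task=4         # 4 CPU threads for the task
#SBATCH --mem=16G                 # 16 gigabytes of memory per node
#SBATCH -o myGpuJob.%j.out        # %j is replaced by the job ID
#SBATCH -e myGpuJob.%j.err

# run the actual computation (illustrative command)
nvidia-smi
</code>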

If needed, you can have SLURM send you email notifications:

<code>
#SBATCH --mail-type=begin        # send email when job begins
#SBATCH --mail-type=end          # send email when job ends
#SBATCH --mail-type=fail         # send email if job fails
#SBATCH --mail-user=<YourUFALEmailAccount>
</code>

As usual, the complete set of options can be found by typing:

<code>
man sbatch
</code>

==== Running jobs ====

To inspect all running jobs on the cluster use:

<code>
squeue
</code>

To list only jobs of user ''linguist'':

<code>
squeue -u linguist
</code>

To list only jobs on partition ''gpu-ms'':

<code>
squeue -p gpu-ms
</code>

To list jobs in a specific state (see ''man squeue'' for the list of valid job states):
<code>
squeue -t RUNNING
</code>

To list jobs running on a specific node:
<code>
squeue -w dll-3gpu1
</code>
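Filters can be combined, and the output format can be customized; the fields below are standard ''squeue'' format specifiers (the user and partition names are illustrative):

<code>
# running jobs of user linguist on partition gpu-ms
squeue -u linguist -p gpu-ms -t RUNNING

# custom output: job ID, partition, job name, user, state, elapsed time, nodes/reason
squeue -o "%.10i %.10P %.20j %.8u %.2t %.10M %R"
</code>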

==== Cluster info ====

The command ''sinfo'' can give you useful information about the nodes available in the cluster. Here is a short list of examples:

List available partitions (queues); the default partition is marked with ''*'':
<code>
sinfo
</code>

List detailed info about nodes:
<code>
sinfo -l -N
</code>

List nodes with some custom format info:
<code>
sinfo -N -o "%N %P %.11T %.15f"
</code>
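For example, to get a quick overview of the GPUs (generic resources) and features available on the GPU nodes, something like the following can be used (''%G'' prints the GRES column, ''%f'' the features):

<code>
sinfo -N -p gpu-troja,gpu-ms -o "%N %P %G %f"
</code>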

=== CPU core allocation ===

The minimal computing resource in SLURM is one CPU core. However, the CPU count advertised by SLURM corresponds to the number of CPU threads.
If you ask for 1 CPU core with ''--cpus-per-task=1'', SLURM will allocate all threads of 1 CPU core.

For example, ''dll-8gpu1'' will allocate 2 threads, since its ThreadsPerCore=2:

<code>
$ scontrol show node dll-8gpu1
NodeName=dll-8gpu1 Arch=x86_64 CoresPerSocket=16 
   CPUAlloc=0 CPUTot=64 CPULoad=0.05                                               // CPUAlloc - allocated threads, CPUTot - total threads
   AvailableFeatures=gpuram24G
   ActiveFeatures=gpuram24G
   Gres=gpu:nvidia_a30:8(S:0-1)
   NodeAddr=10.10.24.63 NodeHostName=dll-8gpu1 Version=21.08.8-2
   OS=Linux 5.15.35-1-pve #1 SMP PVE 5.15.35-3 (Wed, 11 May 2022 07:57:51 +0200) 
   RealMemory=515838 AllocMem=0 FreeMem=507650 Sockets=2 Boards=1
   CoreSpecCount=1 CPUSpecList=62-63                                               // CoreSpecCount - cores reserved for OS, CPUSpecList - list of threads reserved for system
   State=IDLE ThreadsPerCore=2 TmpDisk=0 Weight=1 Owner=N/A MCS_label=N/A          // ThreadsPerCore - count of threads per CPU core
   Partitions=gpu-ms 
   BootTime=2022-09-01T14:07:50 SlurmdStartTime=2022-09-02T13:54:05
   LastBusyTime=2022-10-02T20:17:09
   CfgTRES=cpu=64,mem=515838M,billing=64
   AllocTRES=
   CapWatts=n/a
   CurrentWatts=0 AveWatts=0
   ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s
</code>

In the example above, the lines relevant to CPU allocation are annotated with comments.
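If you want to check from inside a job how many CPU threads were actually allocated, you can inspect the SLURM environment (a minimal sketch; the partition is illustrative, and depending on the cgroup configuration ''nproc'' may report only the allocated threads):

<code>
srun -p cpu-troja --cpus-per-task=1 bash -c 'echo "allocated threads: $SLURM_CPUS_ON_NODE"; nproc'
</code>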

==== Interactive mode ====

This mode can be useful for testing, but you should use batch mode for any serious computation.
You can use the **''srun''** command to get an interactive shell on an arbitrary node from the default partition (queue):

<code>srun --pty bash</code>

There are many more parameters available to use. For example:

<code>srun -p cpu-troja --mem=64G --pty bash</code>

  * ''-p cpu-troja'' explicitly requests the partition ''cpu-troja''. If not specified, SLURM will use the default partition.
  * ''--mem=64G'' requests 64G of memory for the job

<code>srun -p gpu-troja,gpu-ms --nodelist=tdll-3gpu1 --mem=64G --gres=gpu:2 --pty bash</code>
  * ''-p gpu-troja,gpu-ms'' requests only nodes from these two partitions
  * ''--nodelist=tdll-3gpu1'' explicitly requests one specific node
  * ''--gres=gpu:2'' requests 2 GPUs

<code>srun -p gpu-troja --constraint="gpuram48G|gpuram40G" --mem=64G --gres=gpu:2 --pty bash</code>
  * ''--constraint="gpuram48G|gpuram40G"'' only considers nodes that have either the ''gpuram48G'' or ''gpuram40G'' feature defined
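Interactive allocations can also be limited in time, which helps avoid forgotten shells blocking resources (a sketch; the partition and limits are illustrative):

<code>srun -p gpu-ms --gres=gpu:1 --mem=16G --time=2:00:00 --pty bash</code>
  * ''--time=2:00:00'' limits the allocation to 2 hours; when the limit is reached, the job is terminated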
  
To see all the available options type:

<code>man srun</code>
  
