====== ÚFAL Grid Engine (LRC) ======

LRC (Linguistic Research Cluster) is the name of ÚFAL's computational grid/cluster. The cluster is built on top of [[https://slurm.schedmd.com/|SLURM]] and uses [[https://www.lustre.org/|Lustre]] for [[internal:linux-network#directory-structure|data storage]].

Currently the following partitions (queues) are available for computing:

| **Partition name** | **Nodes** | **Note** |
| cpu-troja | 7x CPU | default partition |
| gpu-troja | 6x GPU | features: gpuram48G,gpuram40G |
| gpu-ms | 7x GPU | features: gpuram48G,gpuram24G |

In order to submit a job you need to log in to one of the head nodes:

   lrc1.ufal.hide.ms.mff.cuni.cz
   lrc2.ufal.hide.ms.mff.cuni.cz
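
For example, assuming your ÚFAL login is ''yourlogin'', you can connect with SSH:

<code>
ssh yourlogin@lrc1.ufal.hide.ms.mff.cuni.cz
</code>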

===== Basic usage =====

To run a job, first create a submission script (a regular bash script with ''#SBATCH'' directives), for example:

<code>
#!/bin/bash
#SBATCH -J helloWorld                     # name of job
#SBATCH -p cpu-troja                      # name of partition or queue (if not specified default partition is used)
#SBATCH -o helloWorld.out                 # name of output file for this submission script
#SBATCH -e helloWorld.err                 # name of error file for this submission script
#SBATCH -N 2                              # number of nodes (default 1)
#SBATCH --nodelist=node1,node2...         # required node, or comma separated list of required nodes
#SBATCH --cpus-per-task=4                 # number of cores/threads per task (default 1)
#SBATCH --gres=gpu:1                      # number of GPUs to request (default 0)
#SBATCH --mem=10G                         # request 10 gigabytes memory (per node, default depends on node)
</code>

If you need, you can have SLURM send you e-mail notifications:

<code>
#SBATCH --mail-type=begin        # send email when job begins
#SBATCH --mail-type=end          # send email when job ends
#SBATCH --mail-type=fail         # send email if job fails
#SBATCH --mail-user=<YourUFALEmailAccount>
</code>
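
Once the script is ready (here assumed to be saved as ''helloWorld.sh''), submit it with ''sbatch''; the job can then be monitored or cancelled with the standard SLURM commands:

<code>
sbatch helloWorld.sh     # submit the job, prints the assigned job ID
squeue -u $USER          # list your jobs and their current state
scancel <jobID>          # cancel a job by its ID
</code>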

==== Cluster info ====

The command ''sinfo'' can give you useful information about nodes available in the cluster. Here is a short list of some examples:

List available partitions (queues). The default partition is marked with ''*'':
<code>
sinfo
</code>

List detailed info about nodes:
<code>
sinfo -l -N
</code>

List nodes with some custom format info:
<code>
sinfo -N -o "%N %P %.11T %.15f"
</code>
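
List which GPUs (generic resources, GRES) each node advertises; ''%G'' is the GRES field of the ''sinfo'' output format:
<code>
sinfo -N -o "%N %P %G"
</code>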

=== CPU core allocation ===

The minimal computing resource in SLURM is one CPU core. However, the CPU count advertised by SLURM corresponds to the number of CPU threads.
If you ask for 1 CPU core with ''--cpus-per-task=1'', SLURM will allocate all threads of that CPU core.

For example, on ''dll-8gpu1'' SLURM will allocate 2 threads, since the node has ThreadsPerCore=2:

<code>
$ scontrol show node dll-8gpu1
NodeName=dll-8gpu1 Arch=x86_64 CoresPerSocket=16
   CPUAlloc=0 CPUTot=64 CPULoad=0.05                 // CPUAlloc - allocated threads, CPUTot - total threads
   AvailableFeatures=gpuram24G
   ActiveFeatures=gpuram24G
   Gres=gpu:nvidia_a30:8(S:0-1)
   NodeAddr=10.10.24.63 NodeHostName=dll-8gpu1 Version=21.08.8-2
   OS=Linux 5.15.35-1-pve #1 SMP PVE 5.15.35-3 (Wed, 11 May 2022 07:57:51 +0200)
   RealMemory=515838 AllocMem=0 FreeMem=507650 Sockets=2 Boards=1
   CoreSpecCount=1 CPUSpecList=62-63                 // CoreSpecCount - cores reserved for OS, CPUSpecList - list of threads reserved for system
   State=IDLE ThreadsPerCore=2 TmpDisk=0 Weight=1 Owner=N/A MCS_label=N/A   // ThreadsPerCore - count of threads for 1 CPU core
   Partitions=gpu-ms
   BootTime=2022-09-01T14:07:50 SlurmdStartTime=2022-09-02T13:54:05
   LastBusyTime=2022-10-02T20:17:09
   CfgTRES=cpu=64,mem=515838M,billing=64
   AllocTRES=
   CapWatts=n/a
   CurrentWatts=0 AveWatts=0
   ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s
</code>

In the example above, all lines relevant to CPU allocation are annotated with comments.
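
To see the effect in practice, you can start a small allocation asking for a single CPU core and check how many CPUs SLURM actually reserved (a minimal sketch, reusing the node from the example above):

<code>
srun -p gpu-ms --nodelist=dll-8gpu1 --cpus-per-task=1 env | grep SLURM_CPUS_ON_NODE
# typically reports SLURM_CPUS_ON_NODE=2, i.e. both hardware threads of the allocated core
</code>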

==== Interactive mode ====

To run an interactive shell on a computing node, use ''srun'' with the ''--pty'' option, for example:

<code>srun -p cpu-troja --mem=64G --pty bash</code>
  * ''-p cpu-troja'' explicitly requires partition ''cpu-troja''. If not specified, SLURM will use the default partition.
  * ''--mem=64G'' requires 64G of memory for the job

<code>srun -p gpu-troja,gpu-ms --nodelist=tdll-3gpu1 --mem=64G --gres=gpu:2 --pty bash</code>
  * ''-p gpu-troja,gpu-ms'' restricts the job to nodes from these two partitions
  * ''--nodelist=tdll-3gpu1'' explicitly requires one specific node
  * ''--gres=gpu:2'' requires 2 GPUs

<code>srun -p gpu-troja --constraint="gpuram48G|gpuram40G" --mem=64G --gres=gpu:2 --pty bash</code>
  * ''--constraint="gpuram48G|gpuram40G"'' only considers nodes that have either the ''gpuram48G'' or the ''gpuram40G'' feature defined

To see all the available options, type: