====== ÚFAL Grid Engine (LRC) ======
LRC (Linguistic Research Cluster) is the name of ÚFAL's computational cluster, which is managed by the SLURM workload manager.

Currently the following partitions (queues) are available for computing:
+ | |||
+ | ===== Node list by partitions ===== | ||
+ | |||
+ | ==== cpu-troja ==== | ||
+ | |||
+ | | Node name | Thread count | Socket: | ||
| achilles1 | 32 | 2:8:2 | 128810 |
| achilles2 | 32 | 2:8:2 | 128810 |
| achilles3 | 32 | 2:8:2 | 128810 |
| achilles4 | 32 | 2:8:2 | 128810 |
| achilles5 | 32 | 2:8:2 | 128810 |
| achilles6 | 32 | 2:8:2 | 128810 |
| achilles7 | 32 | 2:8:2 | 128810 |
| achilles8 | 32 | 2:8:2 | 128810 |
| hector1 | 32 | 2:8:2 | 128810 |
| hector2 | 32 | 2:8:2 | 128810 |
| hector3 | 32 | 2:8:2 | 128810 |
| hector4 | 32 | 2:8:2 | 128810 |
| hector5 | 32 | 2:8:2 | 128810 |
| hector6 | 32 | 2:8:2 | 128810 |
| hector7 | 32 | 2:8:2 | 128810 |
| hector8 | 32 | 2:8:2 | 128810 |
| helena1 | 32 | 2:8:2 | 128811 |
| helena2 | 32 | 2:8:2 | 128811 |
| helena3 | 32 | 2:8:2 | 128811 |
| helena4 | 32 | 2:8:2 | 128811 |
| helena5 | 32 | 2:8:2 | 128810 |
| helena6 | 32 | 2:8:2 | 128811 |
| helena7 | 32 | 2:8:2 | 128810 |
| helena8 | 32 | 2:8:2 | 128811 |
| paris1 | 32 | 2:8:2 | 128810 |
| paris2 | 32 | 2:8:2 | 128810 |
| paris3 | 32 | 2:8:2 | 128810 |
| paris4 | 32 | 2:8:2 | 128810 |
| paris5 | 32 | 2:8:2 | 128810 |
| paris6 | 32 | 2:8:2 | 128810 |
| paris7 | 32 | 2:8:2 | 128810 |
| paris8 | 32 | 2:8:2 | 128810 |
| hyperion2 | 64 | 2:16:2 | 257667 |
| hyperion3 | 64 | 2:16:2 | 257667 |
| hyperion4 | 64 | 2:16:2 | 257667 |
| hyperion5 | 64 | 2:16:2 | 257667 |
| hyperion6 | 64 | 2:16:2 | 257667 |
| hyperion7 | 64 | 2:16:2 | 257667 |
| hyperion8 | 64 | 2:16:2 | 257667 |
==== cpu-ms ====

| Node name | Thread count | Socket:Core:Thread | RAM (MB) |
| iridium | 16 | 2:4:2 | 515977 |
| orion1 | 40 | 2:10:2 | 128799 |
| orion2 | 40 | 2:10:2 | 128799 |
| orion3 | 40 | 2:10:2 | 128799 |
| orion4 | 40 | 2:10:2 | 128799 |
| orion5 | 40 | 2:10:2 | 128799 |
| orion6 | 40 | 2:10:2 | 128799 |
| orion7 | 40 | 2:10:2 | 128799 |
| orion8 | 40 | 2:10:2 | 128799 |
==== gpu-troja ====

| Node name | Thread count | Socket:Core:Thread | RAM (MB) | Features |
| tdll-3gpu1 | 64 | 2:16:2 | 128642 | gpuram48G gpu_cc8.6 |
| tdll-3gpu2 | 64 | 2:16:2 | 128642 | gpuram48G gpu_cc8.6 |
| tdll-3gpu3 | 64 | 2:16:2 | 128642 | gpuram48G gpu_cc8.6 |
| tdll-3gpu4 | 64 | 2:16:2 | 128642 | gpuram48G gpu_cc8.6 |
| tdll-8gpu1 | 64 | 2:16:2 | 257666 | gpuram40G gpu_cc8.0 |
| tdll-8gpu2 | 64 | 2:16:2 | 257666 | gpuram40G gpu_cc8.0 |
| tdll-8gpu3 | 32 | 2:8:2 | 253725 | gpuram16G gpu_cc7.5 |
| tdll-8gpu4 | 32 | 2:8:2 | 253725 | gpuram16G gpu_cc7.5 |
| tdll-8gpu5 | 32 | 2:8:2 | 253725 | gpuram16G gpu_cc7.5 |
| tdll-8gpu6 | 32 | 2:8:2 | 253725 | gpuram16G gpu_cc7.5 |
| tdll-8gpu7 | 32 | 2:8:2 | 253725 | gpuram16G gpu_cc7.5 |
==== gpu-ms ====

| Node name | Thread count | Socket:Core:Thread | RAM (MB) | Features |
| dll-3gpu1 | 64 | 2:16:2 | 128642 | gpuram48G gpu_cc8.6 |
| dll-3gpu2 | 64 | 2:16:2 | 128642 | gpuram48G gpu_cc8.6 |
| dll-3gpu3 | 64 | 2:16:2 | 128642 | gpuram48G gpu_cc8.6 |
| dll-3gpu4 | 64 | 2:16:2 | 128642 | gpuram48G gpu_cc8.6 |
| dll-3gpu5 | 64 | 2:16:2 | 128642 | gpuram48G gpu_cc8.6 |
| dll-4gpu1 | 40 | 2:10:2 | 187978 | gpuram24G gpu_cc8.6 |
| dll-4gpu2 | 40 | 2:10:2 | 187978 | gpuram24G gpu_cc8.6 |
| dll-8gpu1 | 64 | 2:16:2 | 515838 | gpuram24G gpu_cc8.0 |
| dll-8gpu2 | 64 | 2:16:2 | 515838 | gpuram24G gpu_cc8.0 |
| dll-8gpu3 | 32 | 2:8:2 | 257830 | gpuram16G gpu_cc8.6 |
| dll-8gpu4 | 32 | 2:8:2 | 253721 | gpuram16G gpu_cc8.6 |
| dll-8gpu5 | 40 | 2:10:2 | 385595 | gpuram16G gpu_cc7.5 |
| dll-8gpu6 | 40 | 2:10:2 | 385595 | gpuram16G gpu_cc7.5 |
| dll-10gpu1 | 32 | 2:8:2 | 257830 | gpuram16G gpu_cc8.6 |
| dll-10gpu2 | 32 | 2:8:2 | 257830 | gpuram11G gpu_cc6.1 |
| dll-10gpu3 | 32 | 2:8:2 | 257830 | gpuram11G gpu_cc6.1 |
+ | |||
+ | |||
+ | ==== Submit nodes ==== | ||
+ | |||
+ | |||
+ | In order to submit a job you need to login to one of the head nodes: | ||
+ | |||
+ | | ||
+ | | ||
===== Basic usage =====
+ | |||
+ | ==== Batch mode ==== | ||
+ | |||
+ | The core idea is that you write a batch script containing the commands you wish to run as well as a list of '' | ||
+ | Then the script is submitted to the cluster with: | ||
+ | |||
+ | < | ||
+ | |||

Here is a simple working example:

<code>
#!/bin/bash
#SBATCH -J helloWorld
#SBATCH -p cpu-troja
#SBATCH -o helloWorld.out
#SBATCH -e helloWorld.err

# run my job (some executable)
sleep 5
echo "Hello I am running on cluster!"
</code>

After submitting this script you should end up with two files (''helloWorld.out'' and ''helloWorld.err'') in the directory where you ran the ''sbatch'' command.

Here is a list of other useful ''SBATCH'' directives:
<code>
#SBATCH -D /path/to/working/directory   # change directory before executing the job (placeholder path)
#SBATCH -N 2                            # number of nodes (default 1)
#SBATCH --nodelist=node1,node2          # request specific nodes (placeholder names)
#SBATCH --cpus-per-task=4               # number of CPU cores per task
#SBATCH --gres=gpu:2                    # request 2 GPUs
#SBATCH --mem=10G                       # request 10 GB of memory
</code>
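As an illustration, several of these directives can be combined in one batch script. The sketch below is not from the cluster documentation: the job name, partition, and resource values are illustrative choices, and the payload merely prints the GPUs SLURM assigned (SLURM exports them via ''CUDA_VISIBLE_DEVICES'').

```shell
#!/bin/bash
# Sketch of a GPU batch script; partition and resource values are
# illustrative, not recommendations.
#SBATCH -J gpuExample
#SBATCH -p gpu-troja
#SBATCH --gres=gpu:1
#SBATCH --cpus-per-task=4
#SBATCH --mem=10G
#SBATCH -o gpuExample.out
#SBATCH -e gpuExample.err

# Inside a SLURM job CUDA_VISIBLE_DEVICES lists the assigned GPUs;
# outside a job it is unset, so fall back to "none".
echo "GPUs assigned: ${CUDA_VISIBLE_DEVICES:-none}"
```

Submitted with ''sbatch'', the script inherits all of these directives without any extra command-line flags.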
+ | |||
+ | If you need you can have slurm report to you: | ||
+ | |||
+ | < | ||
+ | #SBATCH --mail-type=begin | ||
+ | #SBATCH --mail-type=end | ||
+ | #SBATCH --mail-type=fail | ||
+ | #SBATCH --mail-user=< | ||
+ | </ | ||
+ | |||
+ | As usuall the complete set of options can be found by typing: | ||
+ | |||
+ | < | ||
+ | man sbatch | ||
+ | </ | ||
+ | |||
+ | ==== Running jobs ==== | ||
+ | |||
+ | In order to inspect all running jobs on the cluster use: | ||
+ | |||
+ | < | ||
+ | squeue | ||
+ | </ | ||
+ | |||
+ | filter only jobs of user '' | ||
+ | |||
+ | < | ||
+ | squeue -u linguist | ||
+ | </ | ||
+ | |||
+ | filter only jobs on partition '' | ||
+ | |||
+ | < | ||
+ | squeue -p gpu-ms | ||
+ | </ | ||
+ | |||
+ | filter jobs in specific state (see '' | ||
+ | < | ||
+ | squeue -t RUNNING | ||
+ | </ | ||
+ | |||
+ | filter jobs running on a specific node: | ||
+ | < | ||
+ | squeue -w dll-3gpu1 | ||
+ | </ | ||
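These filters can be combined. For instance, counting your running jobs on one partition can be sketched as below; ''--noheader'' is a standard ''squeue'' flag, and since ''squeue'' itself only exists on the cluster, the counting pipeline is demonstrated here on made-up, squeue-shaped lines.

```shell
# On the cluster you would count real jobs with:
#   squeue --noheader -u linguist -p gpu-ms -t RUNNING | wc -l
# Below, the same counting pipeline runs on two illustrative lines
# shaped like squeue output (job id, partition, name, user, state):
sample="1001 gpu-ms train linguist RUNNING
1002 gpu-ms eval linguist RUNNING"
printf '%s\n' "$sample" | wc -l   # prints 2
```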
+ | |||
+ | ==== Cluster info ==== | ||
+ | |||
+ | The command '' | ||
+ | |||
+ | List available partitions(queues). The default partition is marked with '' | ||
+ | < | ||
+ | sinfo | ||
+ | </ | ||
+ | |||
+ | List detailed info about nodes: | ||
+ | < | ||
+ | sinfo -l -N | ||
+ | </ | ||
+ | |||
+ | List nodes with some custom format info: | ||
+ | < | ||
+ | sinfo -N -o "%N %P %.11T %.15f" | ||
+ | </ | ||
+ | |||
+ | === CPU core allocation === | ||
+ | |||
+ | The minimal computing resource in SLURM is one CPU core. However, CPU count advertised by SLURM corresponds to the number of CPU threads. | ||
+ | If you ask for 1 CPU core with < | ||
+ | |||
+ | For example '' | ||
+ | |||
+ | < | ||
+ | $> scontrol show node dll-8gpu1 | ||
+ | $ scontrol show node dll-8gpu1 | ||
+ | NodeName=dll-8gpu1 Arch=x86_64 CoresPerSocket=16 | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | </ | ||
+ | |||
+ | In the example above you can see comments at all lines relevant to CPU allocation. | ||
+ | |||
+ | |||
+ | |||
==== Interactive mode ====
This mode can be useful for testing, but you should use batch mode for any serious computation.
You can use **''srun''** to get an interactive shell on one of the compute nodes.
There are many more parameters available to use. For example:
**To get an interactive CPU job with 64GB of reserved memory:**

<code>
srun -p cpu-troja --mem=64G --pty bash
</code>
  * ''-p cpu-troja'' requests the ''cpu-troja'' partition
  * ''--mem=64G'' requests 64 GB of memory
  * ''--pty bash'' starts an interactive bash shell

**To get an interactive job with a single GPU of any kind:**

<code>
srun -p gpu-troja,gpu-ms --gres=gpu:1 --pty bash
</code>
  * ''-p gpu-troja,gpu-ms'' considers nodes from both GPU partitions
  * ''--gres=gpu:1'' requests a single GPU of any type
+ | |||
+ | < | ||
+ | * '' | ||
+ | * '' | ||
+ | * '' | ||
+ | |||
+ | < | ||
+ | * '' | ||
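Once an interactive session starts, you can check what SLURM actually allocated. The environment variables below are standard SLURM job variables; outside a job they are unset, so the fallbacks keep this snippet runnable anywhere.

```shell
# SLURM_JOB_ID, SLURM_CPUS_ON_NODE and CUDA_VISIBLE_DEVICES are set by
# SLURM inside an allocation; the :- fallbacks cover running this
# outside a job.
echo "Job ID:       ${SLURM_JOB_ID:-none}"
echo "CPUs on node: ${SLURM_CPUS_ON_NODE:-unknown}"
echo "GPUs:         ${CUDA_VISIBLE_DEVICES:-none}"
```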
+ | |||
+ | To see all the available options type: | ||
+ | |||
+ | < | ||
+ | |||

===== See also =====

https://