====== ÚFAL Grid Engine (LRC) ======
  
LRC (Linguistic Research Cluster) is the name of ÚFAL's computational grid/cluster. The cluster is built on top of [[https://slurm.schedmd.com/|SLURM]] and uses [[https://www.lustre.org/|Lustre]] for [[internal:linux-network#directory-structure|data storage]].
  
Currently the following partitions (queues) are available for computing:

| **Partition name** | **Nodes** | **Note** |
| cpu-troja          | 7x CPU    | default partition |
| gpu-troja          | 6x GPU    | features: gpuram48G,gpuram40G |
| gpu-ms             | 7x GPU    | features: gpuram48G,gpuram24G |
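
The features listed for the GPU partitions can be requested with SLURM's ''--constraint'' option. A minimal sketch, assuming the feature names from the table above are exposed as node features, with the lines placed in a submission script like the one under Basic usage below:

   #SBATCH -p gpu-troja               # one of the GPU partitions from the table above
   #SBATCH --gres=gpu:1               # request one GPU
   #SBATCH --constraint=gpuram48G     # only run on nodes carrying this feature
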
In order to submit a job you need to log in to one of the head nodes:

 +   lrc1.ufal.hide.ms.mff.cuni.cz
 +   lrc2.ufal.hide.ms.mff.cuni.cz
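
Once logged in, the stock SLURM commands can be used to inspect the cluster; a small sketch (''sinfo'' and ''squeue'' are standard SLURM tools, the hostname is one of the head nodes above):

   ssh lrc1.ufal.hide.ms.mff.cuni.cz   # log in with your cluster account
   sinfo                               # list partitions and the state of their nodes
   squeue -u $USER                     # list your own pending and running jobs
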
===== Basic usage =====
  
#!/bin/bash
#SBATCH -J helloWorld   # name of job
#SBATCH -p cpu-troja   # name of partition or queue (if not specified default partition is used)
#SBATCH -o helloWorld.out   # name of output file for this submission script
#SBATCH -e helloWorld.err   # name of error file for this submission script
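
Other frequently used #SBATCH options (the defaults noted in the comments apply when an option is left out):
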
#SBATCH -N 2                                  # number of nodes (default 1)
#SBATCH --nodelist=node1,node2...             # required node, or comma separated list of required nodes
#SBATCH --cpus-per-task=N                     # number of cores/threads per task (default 1)
#SBATCH --gres=gpu:N                          # number of GPUs to request (default 0)
#SBATCH --mem=10G                             # request 10 gigabytes memory (per node, default depends on node)
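
Assuming the script above is saved as helloWorld.sh (the file name only has to match what you pass on the command line), a typical submit-and-monitor session looks like this; ''sbatch'', ''squeue'' and ''scancel'' are standard SLURM commands:

   sbatch helloWorld.sh    # submit the script; prints the assigned job ID
   squeue -u $USER         # check the state of your queued and running jobs
   scancel JOBID           # cancel a job, using the ID printed by sbatch
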
