===== Servers with GPU units =====
GPU cluster ''gpu-ms.q'' at Malá Strana:

| machine | GPU type | GPU driver version | [[https://en.wikipedia.org/wiki/CUDA#GPUs_supported|cc]] | GPU cnt | GPU RAM (GB) | machine RAM (GB) |
| dll1 |  GeForce GTX 1080 |  396.24 |  6.1 |  8 |  8 |  249 |
| dll2 |  GeForce GTX 1080 |  396.24 |  6.1 |  8 |  8 |  249 |
| dll3 |  GeForce GTX 1080 Ti |  396.24 |  6.1 |  9 |  11 |  249 |
| dll4 |  GeForce GTX 1080 Ti |  396.24 |  6.1 |  10 |  11 |  249 |
| dll5 |  GeForce GTX 1080 Ti |  396.24 |  6.1 |  10 |  11 |  249 |
| dll6 |  GeForce GTX 1080 Ti |  396.24 |  6.1 |  9 |  11 |  123 |

To be migrated to the new cluster:

| machine | GPU type | GPU driver version | [[https://en.wikipedia.org/wiki/CUDA#GPUs_supported|cc]] | GPU cnt | GPU RAM (GB) | machine RAM (GB) |
| titan-gpu |  GeForce GTX TITAN Z |  381.22 |  3.5 |  2 |  6 |  31 |
| kronos |  GeForce GTX 1080 Ti |  384.81 |  6.1 |  1 |  11 |  125 |
| titan |  GeForce GTX 1080 |  381.22 |  6.1 |  1 |  8 |  31 |
| twister1 |  Tesla K40c |  ? |  3.5 |  1 |  11 |  47 |
| twister2 |  Tesla K40c |  384.81 |  3.5 |  1 |  11 |  47 |
  
Desktop machines:

===== Rules =====
  * First, read [[internal:Linux network]] and [[:Grid]].
  * All the rules from [[:Grid]] apply, even more strictly than for CPU, because there are too many GPU users and not as many GPUs available. So as a reminder: always use GPUs via ''qsub'' (or ''qrsh''), never via ''ssh''. You can ssh to any machine, e.g. to run ''nvidia-smi'' or ''htop'', but not to start computing on a GPU. Don't forget to specify your RAM requirements, e.g. with ''-l mem_free=8G,act_mem_free=8G,h_vmem=12G''.
  * Always specify the number of GPU cards (e.g. ''gpu=1''), the minimal Cuda capability you need (e.g. ''gpu_cc_min3.5=1'') and your GPU memory requirements (e.g. ''gpu_ram=2G''). Thus e.g. <code>qsub -q gpu.q -l gpu=1,gpu_cc_min3.5=1,gpu_ram=2G</code>
  * If you need more than one GPU card (on a single machine), always request as many CPU cores (''-pe smp X'') as GPU cards you need. E.g. <code>qsub -q gpu.q -l gpu=4,gpu_cc_min3.5=1,gpu_ram=7G -pe smp 4</code> **Warning**: currently, this does not work, so you can omit the ''-pe smp X'' part. Milan Fučík is working on a fix.
  * For interactive jobs, you can use ''qrsh'', but make sure to end your job as soon as you don't need the GPU (so don't use qrsh for long training). **Warning: ''-pty yes bash'' is necessary**, otherwise the variable ''$CUDA_VISIBLE_DEVICES'' will not be set correctly. E.g. <code>qrsh -q gpu.q -l gpu=1,gpu_ram=2G -pty yes bash</code> In general: don't reserve a GPU (as described above) without actually using it for a longer time. (E.g. try separating the steps which need a GPU from those which do not, and execute them separately on the GPU and CPU cluster, respectively.) Ondřej Bojar has a script ''/home/bojar/tools/servers/watch_gpus'' for watching reserved but unused GPUs on most machines, which will e-mail you, but don't rely on it alone.
  * Note that the dll machines typically have 10 cards, but "just" 250 GB RAM (DLL6 has only 128 GB). So the expected (maximal) ''mem_free'' requirement for jobs with 1 GPU is 25 GB. If your 1-GPU job takes e.g. 80 GB and you submit three such jobs on the same machine, you have effectively blocked the whole machine and seven GPUs remain unused. If you really need to submit more such high-memory jobs, send each one to a different machine, e.g. as sketched below.
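
A minimal sketch of spreading such jobs (''train.sh'' is a hypothetical job script; the ''queue@host'' syntax is the same one used for ''gpu.q@dll[256]'' below):

<code>
# hypothetical example: three 80GB one-GPU jobs, each pinned to a different dll machine
for host in dll1 dll2 dll3; do
  qsub -q gpu.q@$host -l gpu=1,gpu_ram=8G,mem_free=80G,act_mem_free=80G,h_vmem=90G train.sh
done
</code>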
  
===== How to use cluster =====
  
When not using Theano, just TensorFlow, this can be simplified to ''export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/cuda-8.0/cudnn/6.0/lib64:/opt/cuda-8.0/lib64''. Note that on some machines (dll*, twister*) this is the current default even without setting LD_LIBRARY_PATH, but on other machines (kronos, titan, titan-gpu, iridium) you need to set LD_LIBRARY_PATH explicitly.

TensorFlow 1.5 precompiled binaries need CUDA 9.0; for this you need:

  export LD_LIBRARY_PATH=/opt/cuda-9.0/lib64/:/opt/cuda/cudnn/7.0/lib64/

You also need to use ''qsub -q gpu.q@dll[256]'' because only those machines have drivers that support CUDA 9.
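
For instance, a complete CUDA 9 submission might look like this (a sketch; ''train_tf15.sh'' is a hypothetical job script that exports the ''LD_LIBRARY_PATH'' above before starting TensorFlow):

<code>
# quote the queue spec so the shell does not try to expand the [256] bracket
qsub -q 'gpu.q@dll[256]' -l gpu=1,gpu_ram=6G train_tf15.sh
</code>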

**THE NEW CLUSTER (SGE 8.1.9)**

Multiple versions of ''cuda'' can be accessed in ''/opt/cuda''. **Compared to the old cluster, there is a difference in setting the ''CUDA_DIR_OPT'' variable!**

You need to set the library path in your ''~/.bashrc'':

  # select the cuDNN and CUDA versions installed under /opt/cuda
  CUDNN_version=7.0
  CUDA_version=9.0
  CUDA_DIR_OPT=/opt/cuda/$CUDA_version
  if [ -d "$CUDA_DIR_OPT" ] ; then
    CUDA_DIR=$CUDA_DIR_OPT
    export CUDA_HOME=$CUDA_DIR
    export THEANO_FLAGS="cuda.root=$CUDA_HOME,device=gpu,floatX=float32"
    # make the CUDA compiler, its libraries and the matching cuDNN visible
    export PATH=$PATH:$CUDA_DIR/bin
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_DIR/cudnn/$CUDNN_version/lib64:$CUDA_DIR/lib64
    export CPATH=$CUDA_DIR/cudnn/$CUDNN_version/include:$CPATH
  fi
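
After re-login (or re-sourcing your ''~/.bashrc''), you can quickly check that the intended CUDA toolkit is picked up (assuming ''nvcc'' is installed under ''/opt/cuda/9.0/bin''):

<code>
which nvcc       # should print /opt/cuda/9.0/bin/nvcc
nvcc --version   # should report release 9.0
</code>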

  * When not using Theano, just TensorFlow, this can be simplified to ''export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/cuda/9.0/lib64:/opt/cuda/9.0/cudnn/7.0/lib64''.

  * There is no default, and you always need to set ''LD_LIBRARY_PATH'' explicitly.

  * Note that the ''cudnn'' library is compiled for a specific version of ''cuda''. If you need a specific version of ''cudnn'', look into ''/opt/cuda/$CUDA_version/cudnn/'' to see whether it is available for the given ''$CUDA_version'' (see the example after this list).
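
For example, to see which ''cudnn'' builds are installed for CUDA 9.0 (following the directory layout above):

<code>
# list the cuDNN versions available for CUDA 9.0
ls /opt/cuda/9.0/cudnn/
</code>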

==== TensorFlow Environment ====
  
And then you can activate your environment:
  
  source activate tf18
  source activate tf18cpu

This environment has TensorFlow 1.8.0 and all the necessary requirements for NeuralMonkey.
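
A quick sanity check of the activated environment (standard TensorFlow API):

<code>
source activate tf18
python -c 'import tensorflow as tf; print(tf.__version__)'   # should print 1.8.0
</code>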
  
==== Pytorch Environment ====
| titan      | GeForce GTX 1080 Ti                |   10:45:11 | (new result with correct CUDA version) |
| dll1       | (2 GPU) GeForce GTX 1080; cc6.1    |   12:34:34 | Probably only one GPU was used |
| dll2       | GeForce GTX 1080; cc6.1            |   13:01:05 | Only one GPU was used |
| twister2   | Quadro P5000                       |   13:19:00 |  |
| titan-gpu  | (2 GPU) GeForce GTX Titan Z; cc3.5 |   16:05:24 | Probably only one GPU was used |
The previous benchmark only compares the speed of the processing units within the GPUs and does not take into account the size of the memory. Therefore, I have conducted another benchmark: this time, for each graphics card I increased the batch size as much as possible while the model could still fit into the GPU (the previous benchmark model had batch size 20). This way the results should be more representative of the power of each GPU.
  
| GPU; Cuda capability       | GPU RAM |  Walltime | Batch size | Machine   |
| GeForce GTX 1080 Ti; cc6.1 |   11 GB |  00:55:56 |       2300 | dll5      |
| GeForce GTX 1080; cc6.1    |    8 GB |  01:10:57 |       1700 | dll1      |
| Quadro P5000               |   16 GB |  01:17:00 |       3400 | twister2  |
| GeForce GTX Titan Z; cc3.5 |    6 GB |  02:20:47 |       1100 | titan-gpu |
| Quadro K2000; cc3.0        |    2 GB |  28:15:26 |         50 | iridium   |
  
===== Links =====
