===== Servers with GPU units =====
GPU cluster ''gpu.q'' at Malá Strana:
  
| machine                    | GPU type            | [[https://en.wikipedia.org/wiki/CUDA#GPUs_supported|cc]] | GPUs | GPU RAM | RAM | Comment |
| iridium                    | Quadro K2000        | cc3.0 |  1 |  2 GB | 52 GB | driver(iridium)=367.48 |
| titan-gpu                  | GeForce GTX Titan Z | cc3.5 |  2 |  6 GB | 32 GB | driver(titan-gpu)=381.22 |
| twister1; twister2; kronos | Tesla K40c          | cc3.5 |  1 | 12 GB | 48 GB; 48 GB; 128 GB | driver(twister*)=367.48, driver(kronos)=384.81 |
| titan                      | GeForce GTX 1080    | cc6.1 |  1 |  8 GB | 32 GB | driver(titan)=381.22 |
| dll1; dll2                 | GeForce GTX 1080    | cc6.1 |  8 |  8 GB | 265 GB | driver(dll1)=375.66, driver(dll2)=387.26 |
| dll4; dll5                 | GeForce GTX 1080 Ti | cc6.1 | 10 | 11 GB | 256 GB | driver(dll4)=375.66, driver(dll5)=384.69 |
| dll3                       | GeForce GTX 1080 Ti | cc6.1 |  9 | 11 GB | 256 GB | driver(dll3)=375.66 |
| dll6                       | GeForce GTX 1080 Ti | cc6.1 |  9 | 11 GB | 128 GB | driver(dll6)=384.69 |
  
Desktop machines:
| machine       | GPU type         | [[https://en.wikipedia.org/wiki/CUDA#GPUs_supported|cc]] | GPUs | GPU RAM | Comment |
| victoria; arc | GeForce GT 630   | cc3.0 | 1 | 2 GB | desktop machine |
| athena        | GeForce GTX 1080 | cc6.1 | 1 | 8 GB | Tom's desktop machine |

Not used at the moment: GeForce GTX 570 (from twister2)
All machines have CUDA 8.0 and should support both Theano and TensorFlow.
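
A quick sanity check of the CUDA version on any of the machines (a minimal sketch, assuming the /opt/cuda-8.0 layout used in the set-up section below):

  /opt/cuda-8.0/bin/nvcc --version
    # should report "release 8.0"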
  
[[https://ufaladm2.ufal.hide.ms.mff.cuni.cz/munin/ufal.hide.ms.mff.cuni.cz/lrc-headnode.ufal.hide.ms.mff.cuni.cz/index.html#dll|GPU usage rolling graphs]]
  
  
===== Rules =====
  * First, read [[internal:Linux network]] and [[:Grid]].
  * All the rules from [[:Grid]] apply, even more strictly than for CPU, because there are too many GPU users and not as many GPUs available. So as a reminder: always use GPUs via ''qsub'' (or ''qrsh''), never via ''ssh''. You can ssh to any machine e.g. to run ''nvidia-smi'' or ''htop'', but not to start computing on a GPU. Don't forget to specify your RAM requirements, e.g. ''-l mem_free=8G,act_mem_free=8G,h_vmem=12G''.
  * Always specify the number of GPU cards (e.g. ''gpu=1''), the minimal CUDA capability you need (e.g. ''gpu_cc_min3.5=1'') and your GPU memory requirements (e.g. ''gpu_ram=2G''). Thus e.g. <code>qsub -q gpu.q -l gpu=1,gpu_cc_min3.5=1,gpu_ram=2G</code> (see also the combined example after this list).
  * If you need more than one GPU card (on a single machine), always require as many CPU cores (''-pe smp X'') as GPU cards. E.g. <code>qsub -q gpu.q -l gpu=4,gpu_cc_min3.5=1,gpu_ram=7G -pe smp 4</code> **Warning**: currently, this does not work, so you can omit the ''-pe smp X'' part. Milan Fučík is working on a fix.
  * For interactive jobs, you can use ''qrsh'', but make sure to end your job as soon as you don't need the GPU (so don't use qrsh for long training). **Warning: ''-pty yes bash'' is necessary**, otherwise the variable ''$CUDA_VISIBLE_DEVICES'' will not be set correctly. E.g. <code>qrsh -q gpu.q -l gpu=1,gpu_ram=2G -pty yes bash</code> In general: don't reserve a GPU (as described above) without actually using it for a longer time. (E.g. try separating the steps which need a GPU from those which do not, and execute them on our GPU and CPU cluster, respectively.) Ondřej Bojar has a script /home/bojar/tools/servers/watch_gpus which watches for reserved but unused GPUs on most machines and e-mails you, but don't rely on it alone.
  * Note that the dll machines typically have 10 cards, but "just" 250 GB RAM. So the expected (maximal) ''mem_free'' requirement for jobs with 1 GPU is 25 GB. If your 1-GPU job takes e.g. 80 GB and you submit three such jobs on the same machine, you have effectively blocked the whole machine and seven GPUs remain unused.
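
Putting the rules above together, a complete one-GPU submission might look like this (a sketch; ''train.sh'' is just a placeholder for your own job script): <code>qsub -q gpu.q -l gpu=1,gpu_cc_min3.5=1,gpu_ram=2G,mem_free=8G,act_mem_free=8G,h_vmem=12G train.sh</code>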
  
===== How to use cluster =====
  
==== Set-up CUDA and CUDNN ====
  
You should add the following commands into your ~/.bashrc:
  
  CUDNN_version=6.0
  CUDA_version=8.0
  CUDA_DIR_OPT=/opt/cuda-$CUDA_version
  if [ -d "$CUDA_DIR_OPT" ] ; then
    CUDA_DIR=$CUDA_DIR_OPT
    export CUDA_HOME=$CUDA_DIR
    export THEANO_FLAGS="cuda.root=$CUDA_HOME,device=gpu,floatX=float32"
    export PATH=$PATH:$CUDA_DIR/bin
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_DIR/cudnn/$CUDNN_version/lib64:$CUDA_DIR/lib64
    export CPATH=$CUDA_DIR/cudnn/$CUDNN_version/include:$CPATH
  fi
  
When not using Theano but just TensorFlow, this can be simplified to ''export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/cuda-8.0/cudnn/6.0/lib64:/opt/cuda-8.0/lib64''. Note that on some machines (dll*, twister*) this is the current default even without setting LD_LIBRARY_PATH, but on other machines (kronos, titan, titan-gpu, iridium) you need to set LD_LIBRARY_PATH explicitly.
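To check that the libraries are actually found, you can try importing TensorFlow (a minimal sketch; the GPU build typically fails right at import with a ''libcudart.so.8.0'' or ''libcudnn.so.6'' error if LD_LIBRARY_PATH is not set correctly):

  python -c 'import tensorflow as tf; print(tf.__version__)'
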
==== TensorFlow Environment ====
  
This environment has TensorFlow 1.0 and all the necessary requirements for NeuralMonkey.
  
==== Pytorch Environment ====

If you want to use pytorch, there is a ready-made environment in

  /home/hajicj/anaconda3/envs/pytorch/bin

It does rely on the CUDA and CuDNN setup above.
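
You can call its interpreter directly, e.g. to check that a GPU is visible (a minimal sketch; run it inside a job with a GPU allocated):

  /home/hajicj/anaconda3/envs/pytorch/bin/python -c 'import torch; print(torch.cuda.is_available())'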
  
==== Using cluster ====

As an alternative to ''qsub'', you can use /home/bojar/tools/shell/qsubmit:
  
  qsubmit --gpumem=2G --queue="gpu.q" WHAT_SHOULD_BE_RUN
      
It is recommended to use a priority lower than the default -100 if you are not rushing for the results and don't need to leap over your colleagues' jobs.
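
With plain ''qsub'', the standard SGE option for this is ''-p'' (a sketch; users can only lower their own priority, and ''qsubmit'' may have its own equivalent option, see its help):

  qsub -p -200 -q gpu.q -l gpu=1,gpu_ram=2G WHAT_SHOULD_BE_RUN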
==== Basic commands ====
  
   /usr/local/cuda/samples/1_Utilities/deviceQuery/deviceQuery   /usr/local/cuda/samples/1_Utilities/deviceQuery/deviceQuery
    # shows CUDA capability etc.
  ssh dll1; ~popel/bin/gpu_allocations
    # who occupies which card on a given machine
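
Checking the current GPU load over ssh is allowed by the rules above, e.g.:

  ssh dll3 nvidia-smi
    # utilization and memory usage of all cards on dll3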
          
=== Select GPU device ===
  
The variable CUDA_VISIBLE_DEVICES constrains TensorFlow and other toolkits to compute only on the selected GPUs. **Do not set this variable yourself** (unless debugging SGE); it is set for you automatically by SGE if you ask for some GPUs (see above).
  
To list available devices, use:
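
For example, with TensorFlow (one possible way; ''device_lib'' is part of TensorFlow's Python API and lists both CPU and GPU devices):

  python -c 'from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())'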
GPU specs for those GPUs we have:
  * [[http://www.nvidia.com/content/PDF/kepler/Tesla-K40-Active-Board-Spec-BD-06949-001_v03.pdf|Tesla K40c]]
 +
==== Individual acquisitions: NVIDIA Academic Hardware Grants ====

There is an easy way to get one high-end GPU: [[https://developer.nvidia.com/academic_gpu_seeding|ask NVIDIA for an Academic Hardware Grant]]. All it takes is writing a short grant application (at most ~2 hrs of work from scratch; if you have a GAUK, ~15 minutes of copy-pasting). Due to the GPU housing issues (mainly rack space and cooling), Milan F. said we should request the Tesla-line cards (as of 2017; check with Milan about this). If you want to have a look at an application, feel free to ask at hajicj@ufal.mff.cuni.cz :)

Take care, however, to coordinate the grant applications a little, so that not too many arrive from UFAL within a short time: these grants are explicitly //not// intended to build GPU clusters; they are "seeding" grants meant for researchers to try out GPUs (and fall in love with them, and buy a cluster later). If you are planning to submit the hardware grant, have submitted one, or have already been awarded one, please add yourself here.

Known NVIDIA Academic Hardware Grants:

  * Ondřej Plátek - granted (2015)
  * Jan Hajič jr. - granted (early 2016)
