
Institute of Formal and Applied Linguistics Wiki


  
===== Servers with GPU units =====
GPU cluster ''gpu-ms.q'' at Malá Strana:

| machine | GPU type | GPU driver version | [[https://en.wikipedia.org/wiki/CUDA#GPUs_supported|cc]] | GPU cnt | GPU RAM (GB) | machine RAM (GB) |
| dll1   | GeForce GTX 1080    |  396.24 |  6.1 |   8 |  8 | 249 |
| dll2   | GeForce GTX 1080    |  396.24 |  6.1 |   8 |  8 | 249 |
| dll3   | GeForce GTX 1080 Ti |  396.24 |  6.1 |  10 | 11 | 249 |
| dll4   | GeForce GTX 1080 Ti |  396.24 |  6.1 |  10 | 11 | 249 |
| dll5   | GeForce GTX 1080 Ti |  396.24 |  6.1 |  10 | 11 | 249 |
| dll6   | GeForce GTX 1080 Ti |  396.24 |  6.1 |  10 | 11 | 123 |
| dll7   | GeForce GTX 1080 Ti |  396.24 |  6.1 |   8 | 11 | 123 |
| kronos | GeForce GTX 1080 Ti |  396.24 |  6.1 |   1 | 11 | 123 |
| titan1 | GeForce GTX 1080    |  396.24 |  6.1 |     |  8 |  30 |
| titan2 | Tesla K40c          |  396.24 |  3.5 |   1 | 11 |  30 |

Desktop machines:
| machine       | GPU type         | [[https://en.wikipedia.org/wiki/CUDA#GPUs_supported|cc]] | GPUs | GPU RAM | Comment |
| victoria; arc | GeForce GT 630   | cc3.0 |  1 |  2 GB | desktop machine |
| athena        | GeForce GTX 1080 | cc6.1 |  1 |  8 GB | Tom's desktop machine |
  
Multiple versions of the CUDA library are accessible on each machine, together with cuDNN. Theano and TensorFlow are supported.
  
[[http://ufaladm2.ufal.hide.ms.mff.cuni.cz/munin/ufal.hide.ms.mff.cuni.cz/lrc-master.ufal.hide.ms.mff.cuni.cz/index.html#dll|GPU usage rolling graphs]]
  
===== Rules =====
  * First, read [[internal:Linux network]] and [[:Grid]].
  * All the rules from [[:Grid]] apply, even more strictly than for CPU, because there are too many GPU users and not as many GPUs available. So as a reminder: always use GPUs via ''qsub'' (or ''qrsh''), never via ''ssh''. You can ssh to any machine, e.g. to run ''nvidia-smi'' or ''htop'', but not to start computing on a GPU. Don't forget to specify your RAM requirements, e.g. with ''-l mem_free=8G,act_mem_free=8G,h_data=12G''.
    * **Note that you need to use ''h_data'' instead of ''h_vmem'' for GPU jobs.** The CUDA driver allocates a lot of "unused" virtual memory (tens of GB per card), which is counted in ''h_vmem'' but not in ''h_data''. All usual allocations (''malloc'', ''new'', Python allocations) seem to be included in ''h_data''.
  * Always specify the number of GPU cards (e.g. ''gpu=1''), the minimal CUDA capability you need (e.g. ''gpu_cc_min3.5=1'') and your GPU memory requirements (e.g. ''gpu_ram=2G''). Thus e.g. <code>qsub -q gpu-ms.q -l gpu=1,gpu_cc_min3.5=1,gpu_ram=2G</code>
  * If you need more than one GPU card (on a single machine), always require as many CPU cores (''-pe smp X'') as GPU cards you need, e.g. <code>qsub -q gpu-ms.q -l gpu=4,gpu_cc_min3.5=1,gpu_ram=7G -pe smp 4</code> A fuller submission sketch follows after this list.
  * For interactive jobs, you can use ''qrsh'', but make sure to end your job as soon as you don't need the GPU (so don't use qrsh for long training). **Warning: ''-pty yes bash -l'' is necessary**, otherwise the variable ''$CUDA_VISIBLE_DEVICES'' will not be set correctly. E.g. <code>qrsh -q gpu-ms.q -l gpu=1,gpu_ram=2G -pty yes bash -l</code> In general, don't reserve a GPU (as described above) without actually using it for a longer time. (E.g. try separating the steps which need a GPU from the steps which do not, and execute them separately on our GPU and CPU cluster, respectively.) Ondřej Bojar has a script /home/bojar/tools/servers/watch_gpus for watching reserved but unused GPUs on most machines; it will e-mail you, but don't rely on it alone.
  * Note that the dll machines typically have 10 cards, but "just" 250 GB RAM (DLL6 has only 128 GB). So the expected (maximal) ''mem_free'' requirement for jobs with 1 GPU is 25GB. If your 1-GPU job takes e.g. 80 GB and you submit three such jobs to the same machine, you have effectively blocked the whole machine and seven GPUs remain unused. If you really need to submit more high-memory jobs, send each to a different machine.
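A minimal batch-submission sketch combining the flags above. The script name ''train.sh'' and the concrete resource values are only placeholders to adjust for your job; ''-cwd'' and ''-j y'' are the usual SGE options for running in the current directory and merging stdout/stderr:

<code>
# submit your script (here the placeholder train.sh) with 2 GPUs, 2 CPU cores and explicit memory limits
qsub -q gpu-ms.q -cwd -j y \
  -l gpu=2,gpu_cc_min3.5=1,gpu_ram=7G,mem_free=16G,act_mem_free=16G,h_data=20G \
  -pe smp 2 \
  train.sh
</code>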
  
===== How to use the cluster =====
  
==== Set-up CUDA and CUDNN ====
  
Multiple versions of ''cuda'' can be accessed in ''/opt/cuda''.
  
You need to set the library path in your ''~/.bashrc'':
  
  CUDNN_version=7.0
  CUDA_version=9.0
  CUDA_DIR_OPT=/opt/cuda/$CUDA_version
  if [ -d "$CUDA_DIR_OPT" ] ; then
    CUDA_DIR=$CUDA_DIR_OPT
    export CUDA_HOME=$CUDA_DIR
    export THEANO_FLAGS="cuda.root=$CUDA_HOME,device=gpu,floatX=float32"
    export PATH=$PATH:$CUDA_DIR/bin
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_DIR/cudnn/$CUDNN_version/lib64:$CUDA_DIR/lib64
    export CPATH=$CUDA_DIR/cudnn/$CUDNN_version/include:$CPATH
  fi
  
  * When not using Theano but just TensorFlow, this can be simplified to ''export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/cuda/9.0/lib64:/opt/cuda/9.0/cudnn/7.0/lib64''.
  * Note that the ''cudnn'' library is compiled for a specific version of ''cuda''. If you need a specific version of ''cudnn'', look into ''/opt/cuda/$CUDA_version/cudnn/'' to see whether it is available for the given ''$CUDA_version''.
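An optional sanity check that your shell picked up the intended CUDA version after editing ''~/.bashrc'':

<code>
source ~/.bashrc
echo $CUDA_HOME                                    # should print /opt/cuda/9.0 (or your chosen version)
nvcc --version                                     # the compiler found via $CUDA_DIR/bin
echo $LD_LIBRARY_PATH | tr ':' '\n' | grep cuda    # cuda and cudnn lib64 directories should be listed
</code>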
  
==== TensorFlow Environment ====
  
Many people at UFAL use TensorFlow. To start using it, it is recommended to create a [[python|Python virtual environment]] (or use Anaconda for it). Into the environment you then install TensorFlow, which comes in either a CPU or a GPU version:
  
  pip install tensorflow
  pip install tensorflow-gpu

You can use a prepared environment by adding the following to your ''~/.bashrc'':
  
  export PATH=/a/merkur3/kocmanek/ANACONDA/bin:$PATH
  
And then you can activate your environment:
  
  source activate tf18
  source activate tf18cpu
  
This environment has TensorFlow 1.8.0 and all necessary requirements for Neural Monkey.
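To verify that TensorFlow inside the environment actually sees a GPU, you can run a short check. This is only a sketch; run it inside a job that has reserved a GPU (via ''qsub''/''qrsh''), not on a login shell:

<code>
source activate tf18
python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
</code>

If no GPU device is listed, you are most likely in the CPU environment or outside a GPU job.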
  
==== PyTorch Environment ====
  
If you want to use PyTorch, there is a ready-made environment in
  
  /home/hajicj/anaconda3/envs/pytorch/bin

It does rely on the CUDA and CuDNN setup above.
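A quick check that the environment works and sees the reserved GPU (a sketch only; run it inside a GPU job):

<code>
export PATH=/home/hajicj/anaconda3/envs/pytorch/bin:$PATH
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
</code>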
  
==== Using the cluster ====
  
As an alternative to ''qsub'', you can use ''/home/bojar/tools/shell/qsubmit'':
  
  qsubmit --gpumem=2G --queue="gpu-ms.q" WHAT_SHOULD_BE_RUN

It is recommended to use a priority lower than the default -100 if you are not rushing for the results and don't need to leap over your colleagues' jobs. Please do not use priorities between -99 and 0 for jobs taking longer than a few hours, unless it is absolutely necessary for your work.
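For example, a long-running job submitted with a politely low priority (the value -200 and the script name ''train.sh'' are just illustrations):

<code>
# -p sets the SGE job priority; train.sh is a placeholder for your own script
qsub -p -200 -q gpu-ms.q -l gpu=1,gpu_cc_min3.5=1,gpu_ram=4G train.sh
</code>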
==== Basic commands ====
  
  lspci
   /usr/local/cuda/samples/1_Utilities/deviceQuery/deviceQuery   /usr/local/cuda/samples/1_Utilities/deviceQuery/deviceQuery
     # shows CUDA capability etc.     # shows CUDA capability etc.
  ssh dll1; ~popel/bin/gpu_allocations
    # who occupies which card on a given machine
=== Select GPU device ===

The variable CUDA_VISIBLE_DEVICES constrains TensorFlow and other toolkits to compute only on the selected GPUs. **Do not set this variable yourself** (unless debugging SGE); it is set for you automatically by SGE if you ask for some GPUs (see above).

To list available devices, use:
  /opt/cuda/samples/1_Utilities/deviceQuery/deviceQuery | grep ^Device
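Inside a running job you can check which cards were assigned to you (read the variable, do not modify it):

<code>
echo $CUDA_VISIBLE_DEVICES   # e.g. "0,2"
nvidia-smi                   # utilization of all cards on the machine
</code>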
 +
===== Performance tests =====

  * [[http://www.trustedreviews.com/nvidia-geforce-gtx-1080-review-performance-benchmarks-and-conclusion-page-2|980 vs 1080 vs Titan X (not the Titan Z we have)]]

The following table shows an experiment conducted by Tom Kocmi. You can replicate it from /a/merkur3/kocmanek/Projects/GPUBenchmark (you will need to prepare an environment with TensorFlow11 or use my ANACONDA). The benchmark uses a 2GB seq2seq machine translation model in Neural Monkey (De > EN). If not specified otherwise, the benchmark had access to only one GPU.

| machine | Setup; CPU/GPU; [[https://en.wikipedia.org/wiki/CUDA#Supported_GPUs|Capability]] [cc] | Walltime | Note |
| athena     | GeForce GTX 1080; cc6.1            |    9:55:58 | Tom's desktop  |
| dll2       | GeForce GTX 1080; cc6.1            |   10:19:40 |  |
| titan      | GeForce GTX 1080 Ti                |   10:45:11 | (new result with correct CUDA version) |
| dll1       | (2 GPU) GeForce GTX 1080; cc6.1    |   12:34:34 | Probably only one GPU was used |
| twister2   | Quadro P5000                       |   13:19:00 |  |
| dll2       | GeForce GTX 1080; cc6.1            |   13:01:05 | Only one GPU was used |
| titan-gpu  | (2 GPU) GeForce GTX Titan Z; cc3.5 |   16:05:24 | Probably only one GPU was used |
| kronos-dev | Tesla K40c; cc3.5                  |   22:41:01 |  |
| twister2   | Tesla K40c; cc3.5                  |   22:43:10 |  |
| twister1   | Tesla K40c; cc3.5                  |   24:19:45 |  |
| helena1    | 16x CPU cores                      |   46:33:14 |  |
| belzebub   | 16x CPU cores                      |   52:36:56 |  |
| iridium    | Quadro K2000; cc3.0                |   59:47:58 |  |
| helena7    | 8x CPU cores                       |   60:39:17 |  |
| arc        | GeForce GT 630; cc3.0              |  103:42:30 | (approximated after 66 hours) |
| lucifer4   | 8x CPU cores                       |  134:41:22 |  |

=== Second Benchmark ===

The previous benchmark only compares the speed of the processing units within the GPUs and does not take into account the size of GPU memory. Therefore I have conducted another benchmark: this time, for each graphics card I increased the batch size as much as possible so that the model still fit into the GPU (the previous benchmark used batch size 20). This way the results should be more representative of the real power of each GPU.

| GPU; Cuda capability       | GPU RAM |  Walltime | Batch size | Machine   |
| GeForce GTX 1080 Ti; cc6.1 |   11 GB |  00:55:56 |       2300 | dll5      |
| GeForce GTX 1080; cc6.1    |    8 GB |  01:10:57 |       1700 | dll1      |
| Quadro P5000               |   16 GB |  01:17:00 |       3400 | twister2  |
| GeForce GTX Titan Z; cc3.5 |    6 GB |  02:20:47 |       1100 | titan-gpu |
| Quadro K2000; cc3.0        |    2 GB |  28:15:26 |         50 | iridium   |
  
===== Links =====

GPU specs for those GPUs we have:
  * [[http://www.nvidia.com/content/PDF/kepler/Tesla-K40-Active-Board-Spec-BD-06949-001_v03.pdf|Tesla K40c]]

==== Individual acquisitions: NVIDIA Academic Hardware Grants ====

There is an easy way to get one high-end GPU: [[https://developer.nvidia.com/academic_gpu_seeding|ask NVIDIA for an Academic Hardware Grant]]. All it takes is writing a short grant application (at most ~2 hrs of work from scratch; if you have a GAUK, ~15 minutes of copy-pasting). Due to the GPU housing issues (mainly rack space and cooling), Milan F. said we should request the Tesla-line cards (2017; check with Milan about this issue). If you want to have a look at an application, feel free to ask at hajicj@ufal.mff.cuni.cz :)

Take care, however, to coordinate the grant applications a little, so that not too many arrive from UFAL within a short time: these grants are explicitly //not// intended to build GPU clusters, they are "seeding" grants meant for researchers to try out GPUs (and fall in love with them, and buy a cluster later). If you are planning to submit the hardware grant, have submitted one, or have already been awarded one, please add yourself here.

Known NVIDIA Academic Hardware Grants:

  * Ondřej Plátek - granted (2015)
  * Jan Hajič jr. - granted (early 2016)
