^ machines                   ^ GPU type            ^ CUDA capability ^ # GPUs ^ GPU RAM ^ comment ^
| twister1; twister2; kronos | Tesla K40c          |  cc3.5 |   1|  12 GB |  |
| dll1; dll2                 | GeForce GTX 1080    |  cc6.1 |   8|   8 GB |  |
| titan                      | GeForce GTX 1080    |  cc6.1 |   1|   8 GB |  |
| dll3; dll4; dll5           | GeForce GTX 1080 Ti |  cc6.1 |  10|  11 GB | dll3 has only 9 GPUs since 2017/07 |
| dll6                       | GeForce GTX 1080 Ti |  cc6.1 |   3|  11 GB |  |
  
Desktop machines:
All machines have CUDA 8.0 and should support both Theano and TensorFlow.
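For a quick sanity check that the driver and cards are visible on a given machine, standard NVIDIA tooling works (per the rules below, ssh there only to inspect, never to compute):

  nvidia-smi
    # driver version, installed cards, current utilization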
  
===== Rules =====
  * First, read [[internal:Linux network]] and [[:Grid]].
  * All the rules from [[:Grid]] apply, even more strictly than for CPU, because there are many GPU users and not enough GPUs. So as a reminder: always use GPUs via ''qsub'' (or ''qrsh''), never via ''ssh''. You can ssh to any machine e.g. to run ''nvidia-smi'' or ''htop'', but not to start computing on a GPU. Don't forget to specify your RAM requirements, e.g. ''-l mem_free=8G,act_mem_free=8G,h_vmem=12G'' (a combined example with the GPU flags is sketched below this list).
  * Always specify the number of GPU cards (e.g. ''gpu=1''), the minimal CUDA capability you need (e.g. ''gpu_cc_min3.5=1'') and your GPU memory requirements (e.g. ''gpu_ram=2G''). Thus e.g. <code>qsub -q gpu.q -l gpu=1,gpu_cc_min3.5=1,gpu_ram=2G</code>
  * If you need more than one GPU card (on a single machine), always require as many CPU cores (''-pe smp X'') as GPU cards you need. E.g. <code>qsub -q gpu.q -l gpu=4,gpu_cc_min3.5=1,gpu_ram=7G -pe smp 4</code>
  * For interactive jobs, you can use ''qrsh'', but make sure to end your job as soon as you don't need the GPU (so don't use qrsh for long training). **Warning: ''-pty yes bash'' is necessary**, otherwise the variable ''$CUDA_VISIBLE_DEVICES'' will not be set correctly. E.g. <code>qrsh -q gpu.q -l gpu=1,gpu_ram=2G -pty yes bash</code> In general: don't reserve a GPU (as described above) without actually using it for a longer time. Ondřej Bojar has a script /home/bojar/tools/servers/watch_gpus that watches for reserved but unused GPUs on most machines and e-mails you, but don't rely on it alone.
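Putting the RAM and GPU requirements from the rules above together (a sketch; ''train.sh'' is a hypothetical stand-in for your experiment script):

<code>qsub -q gpu.q -l gpu=1,gpu_cc_min3.5=1,gpu_ram=2G,mem_free=8G,act_mem_free=8G,h_vmem=12G train.sh</code>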
  
===== How to use cluster =====
  
==== Set-up CUDA and CUDNN ====
    export THEANO_FLAGS="cuda.root=$CUDA_HOME,device=gpu,floatX=float32"
    export PATH=$PATH:$CUDA_DIR/bin
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_DIR/cudnn/$CUDNN_version/lib64:$CUDA_DIR/lib64
    export CPATH=$CUDA_DIR/cudnn/$CUDNN_version/include:$CPATH
  fi
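A quick check that the setup above took effect (a sketch; assumes the snippet has been sourced in your shell):

  which nvcc        # should resolve inside $CUDA_DIR/bin
  nvcc --version    # should report the CUDA 8.0 release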
This environment has TensorFlow 1.0 and all the necessary requirements for NeuralMonkey.
  
==== PyTorch Environment ====
  
If you want to use PyTorch, there is a ready-made environment in
  
  /home/hajicj/anaconda3/envs/pytorch/bin
      
It does rely on the CUDA and CuDNN setup above.
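A minimal usage sketch (the interpreter path is the one above; ''torch.cuda.is_available()'' is standard PyTorch):

  export PATH=/home/hajicj/anaconda3/envs/pytorch/bin:$PATH
  python -c 'import torch; print(torch.cuda.is_available())'
    # prints True inside a properly reserved GPU job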
  
==== Using cluster ====

As an alternative to ''qsub'', you can use /home/bojar/tools/shell/qsubmit
  
  qsubmit --gpumem=2G --queue="gpu.q" WHAT_SHOULD_BE_RUN
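''WHAT_SHOULD_BE_RUN'' can be any executable, e.g. a hypothetical wrapper script (''train.sh'' and ''train.py'' are placeholders, not real paths):

  #!/bin/bash
  # train.sh: hypothetical payload for qsubmit/qsub
  python train.py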
  /usr/local/cuda/samples/1_Utilities/deviceQuery/deviceQuery
    # shows CUDA capability etc.
  ssh dll1; ~popel/bin/gpu_allocations
    # who occupies which card on a given machine
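For a live view of GPU utilization and memory on a machine (standard tooling, a sketch):

  watch -n 1 nvidia-smi
    # refreshes every second; press Ctrl-C to quit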
          
=== Select GPU device ===
  
The variable CUDA_VISIBLE_DEVICES constrains TensorFlow and other toolkits to compute only on the selected GPUs. **Do not set this variable yourself** (unless debugging SGE); it is set for you automatically by SGE if you ask for some GPUs (see above).
  
To list available devices, use:
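For example (a sketch, assuming the TensorFlow environment mentioned above is active):

  python -c 'from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())'
    # lists the CPU and any GPU devices visible to TensorFlow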
GPU specs for those GPUs we have:
  * [[http://www.nvidia.com/content/PDF/kepler/Tesla-K40-Active-Board-Spec-BD-06949-001_v03.pdf|Tesla K40c]]

==== Individual acquisitions: NVIDIA Academic Hardware Grants ====

There is an easy way to get one high-end GPU: [[https://developer.nvidia.com/academic_gpu_seeding|ask NVIDIA for an Academic Hardware Grant]]. All it takes is writing a short grant application (at most ~2 hrs of work from scratch; if you have a GAUK, ~15 minutes of copy-pasting). Due to the GPU housing issues (mainly rack space and cooling), Milan F. said we should request the Tesla-line cards (2017: check with Milan about this issue). If you want to have a look at an application, feel free to ask at hajicj@ufal.mff.cuni.cz :)

Take care, however, to coordinate the grant applications a little, so that not too many arrive from UFAL within a short time: these grants are explicitly //not// intended to build GPU clusters, they are "seeding" grants meant for researchers to try out GPUs (and fall in love with them, and buy a cluster later). If you are planning to submit the hardware grant, have submitted one, or have already been awarded one, please add yourself here.

Known NVIDIA Academic Hardware Grants:

  * Ondřej Plátek - granted (2015)
  * Jan Hajič jr. - granted (early 2016)
