gpu [2017/03/16 16:30] kocmanek [Servers with GPU units] → gpu [2017/03/16 16:45] kocmanek
=== Individual acquisitions: NVIDIA Academic Hardware Grants ===

There is an easy way to get one high-end GPU: [[https://developer.nvidia.com/academic_gpu_seeding|ask NVIDIA for an Academic Hardware Grant]]. All it takes is writing a short grant application (at most ~2 hrs of work from scratch; if you have a GAUK, ~15 minutes of copy-pasting). Due to GPU housing issues (mainly rack space and cooling), Milan F. said we should request the Tesla-line cards (2017: check with Milan about this issue). If you want to have a look at an application, feel free to ask at hajicj@ufal.mff.cuni.cz :)

Take care, however, to coordinate the grant applications a little, so that not too many arrive from UFAL within a short time: these grants are explicitly //not// intended to build GPU clusters, they are "seeding" grants meant for researchers to try out GPUs (and fall in love with them, and buy a cluster later). If you are planning to submit the hardware grant, have submitted one, or have already been awarded one, please add yourself here.

The following table shows an experiment conducted by Tom Kocmi. You can replicate the experiment from /a/merkur3/kocmanek/Projects/GPUBenchmark (you will need to prepare an environment with TensorFlow11 or use my ANACONDA).

I am preparing a department-wide benchmark; meanwhile, here are the results of a different experiment:
  * Athena (GTX 1080) - 2 hours 32 minutes
  * Twister (Tesla K40c) - 6 hours 46 minutes
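As a sanity check on the two timings above, the implied speedup can be computed directly (this snippet is just illustrative arithmetic, not part of the benchmark scripts):

```python
# Speedup implied by the timings above.
def to_minutes(hours, minutes):
    return hours * 60 + minutes

athena = to_minutes(2, 32)    # GTX 1080: 2 h 32 min
twister = to_minutes(6, 46)   # Tesla K40c: 6 h 46 min

speedup = twister / athena
print(f"GTX 1080 vs Tesla K40c: {speedup:.2f}x faster")  # → 2.67x faster
```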
| machine | Setup; CPU/GPU; [[https://en.wikipedia.org/wiki/CUDA#Supported_GPUs|Capability]] [cc] | Walltime | Note |
| lucifer4 | 8x cores CPU | 134:41:22 | |
| victoria | GeForce GT 630; cc3.0 | --- | never run, same GPU as Arc |
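The Walltime column uses an H:MM:SS format; a minimal sketch of converting such an entry into hours for easier comparison (the helper name is my own, not part of the benchmark scripts):

```python
# Sketch: convert a Walltime entry like "134:41:22" (H:MM:SS) into hours.
def walltime_to_hours(walltime):
    hours, minutes, seconds = (int(part) for part in walltime.split(":"))
    return hours + minutes / 60 + seconds / 3600

print(round(walltime_to_hours("134:41:22"), 2))  # → 134.69
```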
A comparison with Ondrej's small data set:
  * dll2 (2x GPU) takes 13 minutes for one reporting period
  * achilles2 (4x CPU, with 8 CPUs reserved) takes 24 minutes for one reporting period
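The same kind of arithmetic applies to the per-period figures above (again only an illustrative check, not part of the benchmark):

```python
# Per-reporting-period comparison from the figures above.
dll2_minutes = 13       # 2x GPU
achilles2_minutes = 24  # CPU run with 8 CPUs reserved

ratio = achilles2_minutes / dll2_minutes
print(f"dll2 is {ratio:.2f}x faster per reporting period")  # → 1.85x
```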