grid [2017/09/27 18:56] popel
====== LRC ======

LRC (Linguistic Research Cluster) is the name of ÚFAL's computational cluster.

If you need GPU processing, see a special page about our [[:gpu|GPU cluster called DLL]].
===== List of Machines =====
Last update: 2017/09. All machines have Ubuntu 14.04.
Some machines are at Malá Strana (ground floor, new server room built from the Lindat budget), some are at Troja (5 km north-east).
If you need to quickly distinguish which machine is located where, you can use your knowledge of Greek mythology.
==== Troja (troja-all.q) ====
^ Name ^ CPU type ^ GHz ^ cores ^ RAM(GB) ^ note ^
| achilles[1-8] | | | | | |
| hector[1-8] | | | | | |
| helena[1-8] | | | | | |
| paris[1-8] | | | | | |
==== MS = Malá Strana (ms-all.q) ====
^ Name ^ CPU type ^ GHz ^ cores ^ RAM(GB) ^ note ^
| andromeda[1-13] | AMD 2xCore4 Opteron | 2.8 | 8 | 32 | |
| hydra[1-4] | AMD | 2.6 | 16 | 128 | |
| fireball[1-10] | | | | | |
| hyperion[1-9] | | | | | |
| lucifer[1-10] | | | | | |
| orion[1-6] | | | | | |
| orion[7-10] | | | | | |
| tauri[1-10] | | | | | |
| cosmos | | | | | |
| belzebub | | | | | |
| iridium | | | | | |
| twister[1,2] | Intel 2xCore4 Xeon | 2.4 | 8 | 48 | |

=== Outside LRC cluster (but located at MS) ===
^ Name ^ CPU type ^ GHz ^ cores ^ RAM(GB) ^ note ^
| lrc[1,2] | | | | | |
| pandora[1-10] | | | | | |
| sol[1-5] | | | | | |
| sol[6-8] | | | | | |
The two **lrc machines** are so-called heads of the cluster. **No computation is allowed here**, i.e. no CPU-intensive or memory-intensive tasks.
Alternatively, you can ssh to one of the **sol machines** and submit jobs from there. It is allowed to compute here, which is useful e.g. when you have a script which submits your jobs, but also collects statistics from the jobs' outputs (and possibly submits new jobs conditioned on the statistics). However, the sol machines are relatively slow and may be occupied by your colleagues, so for bigger (longer) tasks, always prefer submission as separate jobs.
The **pandora machines** form a special cluster outside LRC.
===== Installation =====
Add the following line into your ''~/.bash_profile'':
<code>
source /
</code>
This detects if you are on one of the cluster machines (including lrc and sol) and sets env variables accordingly. It also prints a status message.
Usually, this is the first line of your ''~/.bash_profile'' and the second line is:
<code>
[ -f ~/.bashrc ] && source ~/.bashrc
</code>
===== Basic usage =====
First, you need to ssh to the cluster head (lrc1 or lrc2) or to one of the sol machines. The full address is ''lrc1.ufal.hide.ms.mff.cuni.cz''.
<code>
ssh lrc1
echo 'hostname; pwd; echo The second parameter is $2' > script.sh
# prepare a shell script describing your task
qsub -cwd -j y script.sh Hello World
# This submits your job to the default queue.
# Usually, there is a free slot, so the job will be scheduled within a few seconds.
# We have used two handy qsub parameters:
# -cwd ... the script is executed in the current directory (the default is your home)
# -j y ... stdout and stderr outputs are merged and redirected to a file (script.sh.o121144 in our case)
# We have also provided two parameters for our script: "Hello" and "World".
# The qsub prints something like
# Your job 121144 ("script.sh") has been submitted
qstat
# This way we inspect all our jobs (both waiting in queue and scheduled, i.e. running).
qstat -u '*'
# This shows jobs of all users.
qstat -j 121144
# This shows detailed info about the job with this number (if it is still running).
less script.sh.o*
# We can inspect the job's output (in our case stored in script.sh.o121144).
# Hint: if the job is still running, press F in less to keep watching the output (like tail -f).
</code>
The output of our job should look like:
<code>
LRC:ubuntu 14.04: 8.1.7a Son of Grid Engine variables set...
lucifer5
/
The second parameter is World
======= EPILOG: Tue Sep 26 19:49:05 CEST 2017
== Limits:
== Usage:
== Duration: 00:00:02 (2 s)
</code>
Our admins configured SGE to print some extra info to stderr: the first line and then the epilog.
The ''Limits'' line reports the resource limits requested for the job.
The ''Usage'' line reports the resources the job actually consumed.
<code>
qdel 121144
# This way you can delete ("kill") your job.
qdel \*
# This way you can delete all your jobs. Don't be afraid - you cannot delete other users' jobs.
</code>
===== Rules =====
The purpose of these rules is to prevent your jobs from damaging the work of your colleagues and to divide the resources among users in a fair way.
  * Read the rules and tips on our internal wiki first.
  * While your jobs are running (or queued), check on them (esp. previously untested setups) and your email.
  * You can ssh to any cluster machine, which can be useful e.g. to diagnose what's happening there (using ''top'' etc.).
  * However, **never execute any computing manually** on a cluster machine where you are sshed, i.e. anything not run via ''qsub'' or ''qrsh''.
  * For interactive work, use ''qrsh''.
  * **Specify the memory and CPU requirements** (if higher than the defaults) and **don't exceed them**.
  * If your job needs more than one CPU (on a single machine) for most of the time, reserve the given number of CPU cores (and SGE slots) with ''-pe smp <number-of-cores>''.
  * If you are sure your job needs less than 1GB RAM, then you can skip this. Otherwise, if you need e.g. 8 GiB, you must always declare it with ''-l mem_free=8G'' (see the Memory section below).
  * Be kind to your colleagues. If you are going to submit jobs that effectively occupy more than one fifth of our cluster for more than several hours, check if the cluster is free (with ''qstat'').
=== Memory ===
  * There are three commonly used options for specifying memory requirements: ''mem_free'', ''act_mem_free'' and ''h_vmem''. All of them are passed to ''qsub -l''.
  * **mem_free** (or mf) specifies a //consumable resource// tracked by SGE and it affects job scheduling. Each machine has an initial value assigned (slightly lower than the real total physical RAM capacity). When you specify e.g. ''mem_free=4G'', the job is scheduled only to machines with at least this amount of mem_free left, and the machine's mem_free value is lowered by 4G for the lifetime of the job.
  * **act_mem_free** (or amf) is a ÚFAL-specific option, which specifies the real amount of free memory (at the time of scheduling). You can specify it when submitting a job and it will be scheduled to a machine with at least this amount of memory free. In an ideal world, where no jobs are exceeding their ''mem_free'' reservations, ''act_mem_free'' equals ''mem_free''.
  * **h_vmem** is equivalent to setting ''ulimit -v'': if the job exceeds this (virtual) memory limit, it is killed.
  * It is recommended to profile your task first, so you can estimate reasonable memory requirements before submitting many jobs with the same task (varying in parameters which do not affect memory consumption). So for the first time, declare mem_free with much more memory than expected, ssh to the given machine and check the real usage (e.g. with ''top'').
  * **s_vmem** is similar to ''h_vmem'', but the job first receives a catchable signal instead of being killed immediately.
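Putting the three options together, a job needing 8 GiB could be submitted as sketched below; the snippet only assembles and prints the command instead of executing it, and the chosen value is illustrative, not a recommended default:

```shell
# Assemble a qsub invocation that declares the memory requirements
# described above (mem_free, act_mem_free and h_vmem all set to the
# same illustrative value). The command is printed, not executed.
MEM=8G
CMD="qsub -cwd -j y -l mem_free=$MEM,act_mem_free=$MEM,h_vmem=$MEM script.sh"
echo "$CMD"
```

Several resources can be requested in one ''-l'' option separated by commas, which keeps the command line short.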
===== Advanced usage =====
<code>
qsub -o LOG.stdout -e LOG.stderr script.sh
# to redirect std{out,err} to separate files with the given names
qsub -S /bin/bash script.sh
# Choose the interpreter of your script. I think /bin/bash is now the default.
qsub -v PATH script.sh
# exports the given environment variable (here PATH) from the current shell to the job
qsub -V script.sh
# exports all environment variables to the job
man qsub qstat qhold queue_conf sge_types complex
# Find out all the details missing here. You'll have to do it one day anyway.
</code>
  * By default, all the resource requirements (specified with ''-l'') are //hard//, i.e. the job is scheduled only to machines satisfying all of them.
  * If you often run (ad-hoc) bash commands via ''qsub'', have a look at ''~bojar/tools/shell/qsubmit'': it creates the wrapper script for you and adds ''-cwd'' automatically (alternatively, use ''qsub -b y'' to submit a command without a script).
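The effect of such a helper can be sketched as a small shell function: it writes the given command into a generated script and submits that with the usual ''-cwd -j y'' options. This is only a sketch, and the ''DRY_RUN'' switch is our own addition so it can be tried outside the cluster:

```shell
# qsubmit-style helper: wrap an ad-hoc command in a generated script
# and submit it with the usual -cwd -j y options.
# With DRY_RUN=1 the qsub command is only printed (illustrative switch,
# not part of the real helper).
qsubmit() {
    local script
    script=$(mktemp ./qsubmit.XXXXXX)
    printf '#!/bin/bash\n%s\n' "$*" > "$script"
    chmod +x "$script"
    if [ "${DRY_RUN:-0}" = 1 ]; then
        echo "qsub -cwd -j y $script"
    else
        qsub -cwd -j y "$script"
    fi
}

DRY_RUN=1 qsubmit echo Hello World
```

The generated script files stay in the current directory, so the job's stdout/stderr logs land next to them.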
===== Monitoring jobs =====
  * One of the monitoring tools reports the memory state of each node:
    * mem_total: the total memory of the node
    * mem_free: how much memory is still free out of the node's memory quota
    * act_mem_free: how much memory is actually free
    * mem_used: how much memory is really used
  * Another tool prints an overall summary:
    * the total number of cores and the number of cores in use
    * the total RAM size, how much of it is physically unused, and how much is still unreserved
    * per (currently computing) user: how many jobs they are running, how many they have queued, and how many of those are in the hold state
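When you are sshed to a node, a rough equivalent of ''act_mem_free'' can be read directly from ''/proc/meminfo''. This is a sketch, not the ÚFAL monitoring tool, and ''MemAvailable'' requires a reasonably recent Linux kernel:

```shell
# Print the actually available memory of the current machine in GiB,
# similar in spirit to act_mem_free (this is not the ÚFAL script).
# MemAvailable is reported by the kernel in kB.
awk '/^MemAvailable:/ { printf "act_mem_free (approx): %.1f GiB\n", $2 / 1024 / 1024 }' /proc/meminfo
```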
===== Frequent and tricky problems =====
==== A submitted job may submit further jobs ====

Dan's older experience with a PBS cluster (not SGE) suggested that this is not possible. But it works, at least here: the ''qsub'' command is available on the compute nodes as well.
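A common use of this is a job that resubmits itself until some condition is met. The sketch below guards against infinite resubmission with an iteration counter passed through ''qsub -v''; to keep it runnable outside the cluster, ''qsub'' is replaced by a stub that merely prints the command, and ''my_job.sh'' is a hypothetical script name:

```shell
# Stub qsub so the example runs outside the cluster: it only prints
# the command it would have executed. Delete this stub on the cluster.
qsub() { echo "qsub $*"; }

# One step of a self-resubmitting job: it does its work, then, unless
# the iteration limit is reached, submits the next iteration, handing
# the counter to the next job via qsub's -v option.
job_step() {
    local iter=${1:-1} max_iter=3
    echo "iteration $iter"
    if [ "$iter" -lt "$max_iter" ]; then
        qsub -cwd -j y -v ITER=$((iter + 1)) my_job.sh
    fi
}

job_step 1
```

Without such a counter (or another termination condition), a job resubmitting itself would occupy a queue slot forever.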
==== How to find out what resources I requested for my job ====

<code>
qstat -j $JOB_ID | grep "hard resource_list"
</code>
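If you keep the ''qstat -j'' output, the individual values can be split out with standard tools. The sample line below mimics the ''hard resource_list'' format; the exact spacing and values on the cluster may differ:

```shell
# Split a "hard resource_list" line from qstat -j output into one
# resource per line. The sample input is illustrative.
qstat_line='hard resource_list:         mem_free=8G,act_mem_free=8G,h_vmem=8G'

echo "$qstat_line" | sed -n 's/^hard resource_list:[[:space:]]*//p' | tr ',' '\n'
```

This is handy when you want to compare the requested values against the job's actual usage from the epilog.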
==== How to reserve several cores on the same machine for one job ====

<code>
qsub -pe smp <number-of-cores> script.sh
</code>