Page ''grid'', revisions 2017/09/26 11:28 and 2017/09/27 19:10 (popel).
====== ÚFAL Grid Engine (LRC) ======
LRC (Linguistic Research Cluster) is the name of ÚFAL's computational cluster.
If you need GPU processing, see a special page about our [[:gpu|GPU cluster called DLL]] (which is actually a subsystem of LRC with an independent queue).

===== List of Machines =====
Last update: 2017/09. All machines have Ubuntu 14.04.
Some machines are located at Malá Strana, some at Troja.
If you need to quickly distinguish which machine is located where, you can use your knowledge of [[https://en.wikipedia.org/|mythology]] (the machines are named after mythological figures).
==== Troja (troja-all.q) ====

^ Name ^ CPU type ^ GHz ^ cores ^ RAM(GB) ^ note ^
| achilles[1-8] | | | | | |
| hector[1-8] | | | | | |
| helena[1-8] | Intel | 2.6 | 32 | 128 | |
| paris[1-8] | | | | | |
==== MS = Malá Strana ====

^ Name ^ CPU type ^ GHz ^ cores ^ RAM(GB) ^ note ^
| andromeda[1-13] | | | | | |
| hydra[1-4] | | | | | |
| fireball[1-10] | | | | | |
| hyperion[1-9] | | | | | |
| lucifer[1-10] | | | | | |
| orion[1-6] | | | | | |
| orion[7-10] | | | | | |
| tauri[1-10] | | | | | |
| cosmos | | | | | |
| belzebub | | | | | |
| iridium | | | | | |
| twister[1,…] | | | | | |

=== Outside LRC cluster (but located at MS) ===

^ Name ^ CPU type ^ GHz ^ cores ^ RAM(GB) ^ note ^
| lrc[1,2] | | | | | |
| pandora[1-10] | | | | | |
| sol[1-5] | | | | | |
| sol[6-8] | | | | | |
The two **lrc machines** are so-called heads of the cluster. **No computation is allowed here**, i.e. no CPU-intensive, memory-intensive or IO-intensive tasks; use them only for submitting and monitoring jobs.

Alternatively, you can ssh to one of the **sol machines** and submit jobs from there. It is allowed to compute here, which is useful e.g. when you have a script which submits your jobs, but also collects statistics from the jobs' outputs (and possibly submits new jobs conditioned on the statistics). However, the sol machines are relatively slow and may be occupied by your colleagues, so for bigger (longer) tasks, always prefer submission as separate jobs.

The **pandora machines** are in a special cluster (not accessible from lrc) with the queue **ms-guests.q**, available for our colleagues from KSVI and for students of their courses.
===== Installation =====

Add the following line into your ''~/.bash_profile'':

  source /…

This detects whether you are on one of the cluster machines (including lrc and sol) and sets environment variables accordingly. It also prints a status message.
Usually, this is the first line of your ''~/.bash_profile'' and the second line is

  [ -f ~/.bashrc ] && source ~/.bashrc
===== Basic usage =====

First, you need to ssh to the cluster head (lrc1 or lrc2) or to one of the sol machines. The full address is ''lrc1.ufal.hide.ms.mff.cuni.cz''.
<code>
ssh lrc1
echo 'hostname; pwd; echo The second parameter is $2' > script.sh
# prepare a shell script describing your task
qsub -cwd -j y script.sh Hello World
# This submits your job to the default queue.
# Usually, there is a free slot, so the job will be scheduled within a few seconds.
# We have used two handy qsub parameters:
#   -cwd ... the script is executed in the current directory
#   -j y ... stdout and stderr outputs are merged and redirected to a single file (script.sh.o$JOB_ID)
# We have also provided two parameters for our script: "Hello" and "World".
# The qsub prints something like
#   Your job 121144 ("script.sh") has been submitted
qstat
# This way we inspect all our jobs (both waiting in the queue and scheduled, i.e. running).
qstat -u '*'
# This shows jobs of all users.
qstat -j 121144
# This shows detailed info about the job with this number (if it is still running).
less script.sh.o*
# We can inspect the job's output (in our case stored in script.sh.o121144).
# Hint: if the job is still running, press F in less, so it refreshes the screen as the file grows (like tail -f).
</code>
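Scripts often need the job number from the confirmation message that qsub prints. Assuming the standard ''Your job NNN ("name") has been submitted'' format shown above, it can be captured like this:

```shell
#!/bin/bash
# Extract the numeric job ID from a qsub confirmation line.
# The sample message mimics the standard SGE output format.
msg='Your job 121144 ("script.sh") has been submitted'
job_id=$(echo "$msg" | sed -n 's/^Your job \([0-9]\+\).*/\1/p')
echo "$job_id"
```

In a real script you would pipe the output of ''qsub'' into the same ''sed'' call and then use the ID e.g. with ''qstat -j'' or ''qdel''.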
The output of our job should look like:
<code>
LRC:ubuntu 14.04: 8.1.7a Son of Grid Engine variables set...
lucifer5
/home/popel/tmp
The second parameter is World
======= EPILOG: Tue Sep 26 19:49:05 CEST 2017
== Limits:
== Usage:
== Duration: 00:00:02 (2 s)
</code>
Our admins configured SGE to print some extra info on stderr: the first line and the epilog.
The ''Limits'' line reports the resource limits requested for the job, and the ''Usage'' line the resources it actually consumed.
<code>
qdel 121144
# This way you can delete ("kill") your job.
qdel \*
# This way you can delete all your jobs. Don't be afraid - you cannot delete other users' jobs.
</code>
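''qdel \*'' is all-or-nothing; to delete only the jobs whose name matches a pattern, you can filter the ''qstat'' listing. A sketch, assuming the default qstat column layout (job-ID first, name third), shown here on a captured sample rather than a live queue:

```shell
#!/bin/bash
# Select job IDs of jobs with a given name from qstat-like output.
# The sample mimics the default SGE qstat layout (job-ID, prior, name, user, state).
sample='job-ID  prior    name       user   state
 121144 0.50000 script.sh  popel  r
 121145 0.50000 train.sh   popel  qw
 121146 0.50000 script.sh  popel  qw'
ids=$(echo "$sample" | awk 'NR>1 && $3=="script.sh" {print $1}')
echo $ids
# On a real cluster you would then run:  qdel $ids
```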
===== Rules =====
The purpose of these rules is to prevent your jobs from damaging the work of your colleagues and to divide the resources among users in a fair way.

  * Read our internal wiki documentation first.
  * While your jobs are running (or queued), check on them (esp. previously untested setups) and watch your email (esp. for error reports).
  * You can ssh to any cluster machine, which can be useful e.g. to diagnose what's happening there.
  * However, **never execute any computing manually** on a cluster machine where you are sshed; everything must go through the queuing system.
  * For interactive work, use ''qrsh'', which gives you a shell on one of the cluster machines through the queuing system.
  * Clean up your local data when your jobs finish; otherwise nobody else can use the local disk.
  * Avoid heavily parallel access to the shared (NFS) disks - it slows the server down for everybody, so distribute your data as well.
  * **Specify the memory and CPU requirements** (if higher than the defaults) and **don't exceed them**.
    * If your job needs more than one CPU (on a single machine) for most of the time, reserve the given number of CPU cores (and SGE slots) with ''qsub -pe smp <number-of-cores>''.
    * If you are sure your job needs less than 1GB RAM, then you can skip this. Otherwise, if you need e.g. 8 GiB, you must always request it, e.g. with ''qsub -l mem_free=8G''.
  * Be kind to your colleagues. If you are going to submit jobs that effectively occupy more than one fifth of our cluster for more than several hours, check first whether the cluster is free (with ''qstat'') and consider submitting gradually or with a lower priority.
=== Memory ===

  * There are three commonly used options for specifying memory requirements: ''mem_free'', ''act_mem_free'' and ''h_vmem''.
  * **mem_free** (or mf) specifies a //consumable resource//: SGE keeps track of how much memory the jobs on each machine have requested and schedules a new job only to a machine where enough unreserved memory remains. However, it does not check the job's real consumption.
  * **act_mem_free** (or amf) is a ÚFAL-specific option, which specifies the real amount of free memory (at the time of scheduling). You can specify it when submitting a job and it will be scheduled to a machine with at least this amount of memory free. In an ideal world, where no jobs exceed their ''mem_free'' requirements, this would be superfluous.
  * **h_vmem** is a hard limit on the (virtual) memory of the job: if the job exceeds it, it is killed.
  * It is recommended to **profile your task first**, so you can estimate reasonable memory requirements before submitting many jobs with the same task (varying in parameters which do not affect memory consumption). So for the first time, declare mem_free with much more memory than expected, ssh to the given machine and check the real usage with ''top''.
  * **s_vmem** is similar to ''h_vmem'', but it is a //soft// limit: instead of being killed immediately, the job first receives a catchable signal, so it can exit gracefully.
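The memory options are typically combined into a single ''-l'' argument. A small helper to build that string; the 2 GiB safety margin between ''mem_free'' and the hard ''h_vmem'' limit is an illustrative convention, not a site rule:

```shell
#!/bin/bash
# Build a qsub -l resource string for a job that needs <gb> GiB of RAM.
# Convention (illustrative): reserve the same amount as mem_free and
# act_mem_free, and allow a 2 GiB margin before h_vmem kills the job.
mem_opts() {
  local gb=$1
  local hard=$((gb + 2))
  echo "mem_free=${gb}G,act_mem_free=${gb}G,h_vmem=${hard}G"
}
echo "qsub -l $(mem_opts 8) script.sh"
```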
===== Advanced usage =====

<code>
qsub -o LOG.stdout -e LOG.stderr script.sh
# redirect stdout and stderr of the job to the given files
qsub -S /bin/bash
# choose the interpreter of your script
qsub -v PATH
# export the given environment variable from the current shell to the job
qsub -V
# export all environment variables
man qsub qstat qhold queue_conf sge_types complex
# Find out all the gory details which are missing here. You'll have to do it once anyway:-).
</code>

  * By default, all the resource requirements (specified with ''-l'') apply to each slot of a parallel job, not to the job as a whole.
  * If you often run (ad-hoc) bash commands via ''qsub'', you may like the ''qsubmit'' wrapper described below, which creates the helper script for you.

===== Tips and tricks =====

==== ~bojar/tools/shell/qsubmit ====

qsubmit is like qsub, but nicer:

  * you do not have to create the helper script yourself - qsubmit creates it for you
  * you do not have to retype the common options every time

<code>
~bojar/tools/shell/qsubmit "bash command line"
</code>

==== ~zeman/… ====

A similar wrapper, written in ''tcsh'':

<code tcsh>
# SCRIPTFILE must be set first; the path here is illustrative,
# the original definition was not preserved
set SCRIPTFILE = /tmp/qsub-script.$$
echo $* > $SCRIPTFILE
echo $*
echo qsub -cwd -V -S /bin/tcsh -m e $SCRIPTFILE
qsub -cwd -V -S /bin/tcsh -m e $SCRIPTFILE
qstat -u '*'
rm $SCRIPTFILE
</code>

(If apostrophes were used instead of double quotes in the submitted command, the variables would not be expanded. The first argument (the script name) could be inside the quotes together with the redirection; keeping it outside just makes the job easier to recognize in ''qstat''.)

==== TectoMT: devel/… ====

The script splits a given batch of files into several cluster jobs. The files can be specified e.g. via a filelist, and the output of each job goes to its own log.
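A qsub wrapper of this kind can be sketched in a few lines: write the command into a temporary script and hand it to ''qsub''. This is a simplified illustration, not qsubmit's actual behaviour; it only prints the qsub invocation instead of executing it, and the default options are examples:

```shell
#!/bin/bash
# Sketch of a qsub wrapper: turn an ad-hoc command line into a job script.
# For illustration it prints the qsub invocation instead of running it.
submit_cmd() {
  local script
  script=$(mktemp --suffix=.sh)
  printf '#!/bin/bash\n%s\n' "$*" > "$script"
  chmod +x "$script"
  echo qsub -cwd -j y "$script"   # drop the leading 'echo' to really submit
}
submit_cmd 'sort big.txt > big.sorted'
```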
===== Job monitoring =====
===== Job synchronization (in Perl) =====

If I want to parallelize part of a task from a Perl script, I can do it as follows:

  * Wrap the grid part in a small shell script (called ''run-on-grid.sh'' here for illustration; the original name was elided):
<code bash>
#!/bin/bash
. /…   # set up the SGE environment

qrsh -cwd -V -p -50 -l mf=5g -now no '…'
</code>
  * In my main script I then call it and collect the results:
<code perl>
use FileHandle;
use IPC::Open2;
use threads;
use threads::shared;

my @threads;
my @results;
share(@results);
for (@inputs) {
    my $t = async {
        my $reader; my $writer;
        # 'run-on-grid.sh' is the wrapper from above (its real name was elided)
        my $pid = open2($reader, $writer, './run-on-grid.sh');
        die "Failed to submit the job" if !$pid;
        $writer->autoflush(1);
        print $writer "$_\n";   # pass the input to the job (reconstructed line)
        $writer->close();
        for (<$reader>) {
            chomp;
            {
                lock @results;
                push @results, $_;
            }
        }
        waitpid $pid, 0;  # wait for the grid job to finish before ending the thread
        return $? >> 8;   # try to obtain the exit code (untested)
    };
    push @threads, $t;
}
for (@threads) {
    die "Child exited with non-zero exit code" if $_->join();
}
</code>

Notes:
  * If everything can be passed via parameters, you do not need a bidirectional pipe and the situation is simpler.
  * The whole example can be seen in CzEng by V.N.
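The same fan-out/collect pattern is also easy in plain bash: start one background process per input, let each write to its own file, and ''wait'' for all of them. Here quick local commands stand in for the ''qrsh'' calls (a sketch of the pattern, not of the grid setup itself):

```shell
#!/bin/bash
# Fan out one background worker per input, collect results after `wait`.
# `tr` stands in for the real per-input grid computation (qrsh ...).
set -e
inputs=(alpha beta gamma)
tmpdir=$(mktemp -d)
for i in "${!inputs[@]}"; do
  ( echo "${inputs[$i]}" | tr 'a-z' 'A-Z' > "$tmpdir/out.$i" ) &
done
wait   # block until every worker (grid job in the real setup) has finished
results=$(cat "$tmpdir"/out.*)
echo "$results"
```

Unlike the Perl version, this gives no per-worker exit codes back; for that, collect the PIDs and ''wait'' on each one individually.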