
Institute of Formal and Applied Linguistics Wiki



Differences

This shows you the differences between two versions of the page.

grid [2017/10/09 13:32] popel [Basic usage]
grid [2017/10/16 18:47] popel [Installation]
Line 55: Line 55:
  [ -f ~/.bashrc ] && source ~/.bashrc
  
 +Make sure you have a correctly configured locale (otherwise ''qrsh'' may not show accented letters in ''less'' and you may get errors when printing UTF-8 to stdout/stderr from your script in ''qsub''). For example, add the following line to your ''~/.bashrc'':
 +
 +  export LC_ALL=en_US.UTF-8
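 +
 +To check that the locale is set correctly on a compute node, something like this should work:
 +
 +  qrsh locale   # every LC_* line should now report en_US.UTF-8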
 ===== Basic usage =====
  
Line 175: Line 178:
 The hashbang (''#!/bin/bash'') in your ''script.sh'' is ignored, but you can change the interpreter with ''-S''. I think ''/bin/bash'' is now (2017/09) the default (but it used to be ''csh'').
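 
 For example, to force bash explicitly:
 
   qsub -S /bin/bash script.sh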
  
-''qsub **-v** PATH''
+''qsub **-v** PATH[=value]''
 Export a given environment variable from the current shell to the job.
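 
 For example (the variable name and value here are illustrative):
 
   qsub -v PATH,MYVAR=42 script.sh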
  
Line 244: Line 247:
   * ''-hold_jid_ad comma_separated_job_list'' array jobs that must finish before this job starts; task //i// of the current job depends only on task //i// of the specified jobs (see the sketch below)
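 
 A sketch of chaining two array jobs with ''-hold_jid_ad'' (the job and script names are illustrative); task //i// of ''step2'' starts as soon as task //i// of ''step1'' finishes:
 
   qsub -t 1-100 -N step1 preprocess.sh
   qsub -t 1-100 -N step2 -hold_jid_ad step1 process.sh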
  
 +=== SSH to a random sol ===
 +Ondřej Bojar suggests adding the following alias to your ''.bashrc'' (cf. [[#sshcwd]]):
 +<code>alias cluster='comp=$(($RANDOM /4095 +1)); ssh -o "StrictHostKeyChecking no" sol$comp'</code>
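 +Typing ''cluster'' then opens an ssh session on a pseudo-randomly chosen ''sol'' machine (''$RANDOM'' ranges over 0–32767, so ''$RANDOM/4095+1'' yields a number between 1 and 9).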
 ===== Job monitoring =====
  
Line 262: Line 268:
   * There is a **great course [[http://ufal.mff.cuni.cz/courses/npfl102|Data intensive computing]]**; see the 2016 handouts if you missed the course. It covers the usage of [[http://spark.apache.org/|Spark]] (a MapReduce/Hadoop alternative, but better) and HDFS (the Hadoop filesystem).
   * This course used a special **DLRC (Demo LRC) cluster** (students had to log in with ''ssh -p 11422 ufallab.ms.mff.cuni.cz'' and special NPFL102-only LDAP logins) with six virtual machines on one physical machine. In years when NPFL102 is not taught (e.g. 2017), the DLRC cluster has just one virtual machine.
 +  * **Note:** some Hadoop basics and a lot of NoSQL technologies are covered by [[https://is.cuni.cz/studium/predmety/index.php?do=predmet&kod=NDBI040|Big Data Management and NoSQL Databases]].
   * You can use the environment variables ''$JOB_ID'' and ''$JOB_NAME'' (see the sketch after this list).
   * One job can submit other jobs (but be careful with recursion :-)). A job submitted to the CPU cluster may submit GPU jobs (to the ''gpu.q'' queue).
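 
 A sketch combining the last two points (the log message and the ''gpu_step.sh'' script are illustrative):
 
   # inside a running CPU job: use the SGE-provided variables, then submit a follow-up GPU job
   echo "running $JOB_NAME as job $JOB_ID on $(hostname)"
   qsub -q gpu.q -N "$JOB_NAME-gpu" gpu_step.sh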
