Commits on Source (8)
@@ -58,8 +58,8 @@ It's possible to conflict with other modules. For example, if you were to load :
.. code-block:: shell
-[krause@master ~] module load mrtrix3
-[krause@master ~] module load mrtrix3tissue
+[krause@login ~] module load mrtrix3
+[krause@login ~] module load mrtrix3tissue
mrtrix3tissue/5.2.8(13):ERROR:150: Module 'mrtrix3tissue/5.2.8' conflicts with the currently loaded module(s) 'mrtrix3/rc3'
mrtrix3tissue/5.2.8(13):ERROR:102: Tcl command execution failed: conflict "mrtrix3"
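For reference, a conflict like this is usually resolved by unloading one module before loading the other, or by switching in a single step (a minimal sketch using the module names from the example above):

.. code-block:: shell

    # unload the conflicting module first, then load the one you need
    [krause@login ~] module unload mrtrix3
    [krause@login ~] module load mrtrix3tissue
    # or swap both in one step
    [krause@login ~] module switch mrtrix3 mrtrix3tissue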
@@ -140,23 +140,23 @@ Create a mount location (once):
.. code-block:: bash
-krause@master:~> $ mkdir NetworkFolders
+krause@login:~> $ mkdir NetworkFolders
Now mount a folder with this command (FB-LIP in this example):
.. code-block:: bash
-krause@master:~> $ udevil mount -t cifs smb://$USER@mpib-berlin.mpg.de/FB-LIP NetworkFolders/
+krause@login:~> $ udevil mount -t cifs smb://$USER@mpib-berlin.mpg.de/FB-LIP NetworkFolders/
It will ask you for your password:
.. code-block:: bash
-krause@master:~> $ udevil mount -t cifs smb://krause@mpib-berlin.mpg.de/fb-lip NetworkFolders/
+krause@login:~> $ udevil mount -t cifs smb://krause@mpib-berlin.mpg.de/fb-lip NetworkFolders/
Password:
Mounted //mpib-berlin.mpg.de/fb-lip at /home/mpib/krause/NetworkFolders
-krause@master:~> $ ls NetworkFolders/ConMEM
+krause@login:~> $ ls NetworkFolders/ConMEM
BEH EEG MRI Neurodaten Project STUDIES
@@ -164,8 +164,17 @@ If you want to mount another folder either create a different target directory o
.. code-block:: bash
-krause@master:~> $ udevil umount NetworkFolders/
-krause@master:~> $
+krause@login:~> $ udevil umount NetworkFolders/
+krause@login:~> $
+azure files
+^^^^^^^^^^^
+Mounting Azure Files is no different from mounting other network shares; you just have to use the username and password that IT provided for your project:
+.. code-block:: bash
+krause@login:~> $ udevil mount -t cifs smb://user@user.file.core.windows.net/project NetworkFolders/project
-azure files
-^^^^^^^^^^^
@@ -185,8 +194,7 @@ your username with :program:`mount | grep $USER`.
on the machine you issue the mount on**. Primarily for performance reasons,
they are not transparently handed down to the execution nodes and thus do
not work in your scripts out of the box. Usually you need to make a copy of
-your data on the login/master node and then have your jobs work on that
-copy.
+your data on the login node and then have your jobs work on that copy.
.. _osxfuse: https://osxfuse.github.io/
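Because such mounts only exist on the login node, a common pattern is to stage the data into your home directory first and point your jobs at that copy (a sketch; the target directory ``~/conmem_mri`` is just an example name):

.. code-block:: bash

    # copy the needed subfolder from the mounted share into the cluster-wide home
    krause@login:~> $ rsync -av NetworkFolders/ConMEM/MRI/ ~/conmem_mri/
    # jobs running on the compute nodes can then read the local copy
    krause@login:~> $ echo "ls ~/conmem_mri" | qsub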
@@ -16,16 +16,16 @@ Example:
.. code-block:: bash
# UTF-8, will probably produce a faulty job file
-root@master:$ file MRtrix3_TARDIS_0.3.sh
+krause@login:$ file MRtrix3_TARDIS_0.3.sh
MRtrix3_TARDIS_0.3.sh: Bourne-Again shell script, UTF-8 Unicode text executable
# ASCII, will probably work
-root@master:$ file MRtrix3_preproc.sh
+krause@login:$ file MRtrix3_preproc.sh
MRtrix3_preproc.sh: Bourne-Again shell script, ASCII text executable
# convert from utf-8 to ascii will never work, but tell you the position of
# the non-ascii character
-root@master:$ iconv -f utf-8 -t ascii MRtrix3_TARDIS_0.3.sh
+krause@login:$ iconv -f utf-8 -t ascii MRtrix3_TARDIS_0.3.sh
[...]
#PBS -j oe
#PBS iconv: illegal input sequence at position 3811
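If ``file`` flags a script as UTF-8, the offending characters can be stripped or transliterated before submission (a sketch; whether ``//TRANSLIT`` substitutes or drops a character depends on the iconv build):

.. code-block:: bash

    # write an ASCII-only copy, replacing non-ASCII characters where possible
    krause@login:$ iconv -f utf-8 -t ascii//TRANSLIT MRtrix3_TARDIS_0.3.sh > MRtrix3_TARDIS_0.3_ascii.sh
    # or silently drop anything that cannot be converted
    krause@login:$ iconv -f utf-8 -t ascii -c MRtrix3_TARDIS_0.3.sh > MRtrix3_TARDIS_0.3_ascii.sh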
@@ -46,7 +46,7 @@ example limits the output to jobs belonging to the user `krause`:
.. code-block:: bash
-[krause@master ~] squeue -u krause
+[krause@login ~] squeue -u krause
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
110996 short test krause R 0:12 1 ood-43
110997 gpu job.slur krause R 0:08 1 gpu-4
@@ -63,7 +63,7 @@ To look up historical (accounting) data there is ``sacct``. Again, all output co
.. code-block:: bash
-[krause@master ~] sacct -o JobID,ReqMEM,MaxRSS,CPU,Exit
+[krause@login ~] sacct -o JobID,ReqMEM,MaxRSS,CPU,Exit
JobID ReqMem MaxRSS CPUTime ExitCode
------------ ---------- ---------- ---------- --------
110973 4Gc 936K 00:00:08 0:0
@@ -83,7 +83,7 @@ we add a format hint ``%30`` to leave enough room for the complete name.
.. code-block:: bash
-[krause@master ~] sacct --starttime 12/06-08:00 --endtime 12/06-23:00 --state CD --parsable --user moneta \
+[krause@login ~] sacct --starttime 12/06-08:00 --endtime 12/06-23:00 --state CD --parsable --user moneta \
-o JobID,JobName%30,TotalCPU,Elapsed,ExitCode | grep MagicM41_3_3_1
2392298 MagicM41_3_3_1_1_1 00:35.463 00:00:36 0:0
@@ -114,9 +114,9 @@ using bashplotlib (must be installed with pip).
.. code-block:: bash
-[krause@master ~] ids=$(sacct --starttime 12/06-08:00 --endtime 12/06-23:00 --state CD --parsable \
+[krause@login ~] ids=$(sacct --starttime 12/06-08:00 --endtime 12/06-23:00 --state CD --parsable \
--user moneta -o JobID,JobName%30.batch | grep MagicM41 | cut -d"|" -f1)
-[krause@master ~] for id in $ids ; do sacct --parsable2 --noheader --units M -o MaxRSS \
+[krause@login ~] for id in $ids ; do sacct --parsable2 --noheader --units M -o MaxRSS \
-j $id.batch; done | tr -d "M" | grep -v "^0$" | hist -b 50 -x
10| o
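If bashplotlib is not installed, the same MaxRSS values can at least be summarised with plain awk (a sketch reusing the ``$ids`` collected above):

.. code-block:: bash

    # report the maximum and mean MaxRSS (in MB) over all batch steps
    [krause@login ~] for id in $ids ; do sacct --parsable2 --noheader --units M -o MaxRSS \
        -j $id.batch; done | tr -d "M" | grep -v "^0$" | \
        awk '{ sum += $1; if ($1 > max) max = $1 } END { printf "max %sM, mean %.1fM over %d jobs\n", max, sum/NR, NR }'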
@@ -48,16 +48,16 @@ Examples:
.. code-block:: bash
-krause@master:~/sleep> $ qsub job.pbs -l mem=32gb
-4294349.master.tardis.mpib-berlin.mpg.de
-krause@master:~/sleep> $
-krause@master:~> $ qsub -I -l nodes=1:ppn=16
-qsub: waiting for job 4294350.master.tardis.mpib-berlin.mpg.de to start
-qsub: job 4294350.master.tardis.mpib-berlin.mpg.de ready
+krause@login:~/sleep> $ qsub job.pbs -l mem=32gb
+4294349.login.tardis.mpib-berlin.mpg.de
+krause@login:~/sleep> $
+krause@login:~> $ qsub -I -l nodes=1:ppn=16
+qsub: waiting for job 4294350.login.tardis.mpib-berlin.mpg.de to start
+qsub: job 4294350.login.tardis.mpib-berlin.mpg.de ready
krause@ood-32:~> $
-krause@master:~some/path> $ qsub -I -d.
-qsub: waiting for job 4294351.master.tardis.mpib-berlin.mpg.de to start
-qsub: job 4294350.master.tardis.mpib-berlin.mpg.de ready
+krause@login:~some/path> $ qsub -I -d.
+qsub: waiting for job 4294351.login.tardis.mpib-berlin.mpg.de to start
+qsub: job 4294351.login.tardis.mpib-berlin.mpg.de ready
krause@ood-31:~some/path> $
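The same resource requests can also be written into the job file itself instead of the ``qsub`` command line (a minimal sketch; ``run_analysis.sh`` is a hypothetical script):

.. code-block:: bash

    #!/bin/bash
    # resource requests, equivalent to passing them with -l on the command line
    #PBS -l mem=32gb
    #PBS -l nodes=1:ppn=16
    #PBS -l walltime=10:0:0
    #PBS -j oe
    cd "$PBS_O_WORKDIR"   # start in the directory the job was submitted from
    ./run_analysis.sh     # hypothetical analysis script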
@@ -93,20 +93,20 @@ Examples:
.. code-block:: bash
-krause@master:~> $ echo "sleep 5m" | qsub
-4294351.master.tardis.mpib-berlin.mpg.de
-krause@master:~> $ qstat -a 4294351
+krause@login:~> $ echo "sleep 5m" | qsub
+4294351.login.tardis.mpib-berlin.mpg.de
+krause@login:~> $ qstat -a 4294351
-master.tardis.mpib-berlin.mpg.de:
+login.tardis.mpib-berlin.mpg.de:
Req'd Req'd Elap
Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time
-------------------- -------- -------- ---------------- ------ ----- --- ------ ----- - -----
-4294351.master.t krause default STDIN 5712 -- -- 4000mb 24:00 R 00:00
+4294351.login.t krause default STDIN 5712 -- -- 4000mb 24:00 R 00:00
-krause@master:~> $ qstat -q
+krause@login:~> $ qstat -q
-server: master
+server: login
Queue Memory CPU Time Walltime Node Run Que Lm State
---------------- ------ -------- -------- ---- --- --- -- -----
@@ -132,13 +132,13 @@ Examples:
.. code-block:: bash
-krause@master:~> $ qselect -s R
-4294352.master.tardis.mpib-berlin.mpg.de
-4294353.master.tardis.mpib-berlin.mpg.de
-4294354.master.tardis.mpib-berlin.mpg.de
-krause@master:~> $ qselect -N STDIN
-4294352.master.tardis.mpib-berlin.mpg.de
-krause@master:~> $ qselect -N STDIN | cut -d"." -f1
+krause@login:~> $ qselect -s R
+4294352.login.tardis.mpib-berlin.mpg.de
+4294353.login.tardis.mpib-berlin.mpg.de
+4294354.login.tardis.mpib-berlin.mpg.de
+krause@login:~> $ qselect -N STDIN
+4294352.login.tardis.mpib-berlin.mpg.de
+krause@login:~> $ qselect -N STDIN | cut -d"." -f1
4294352
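The job ids printed by ``qselect`` can be piped straight into other commands, which is handy for bulk operations (a sketch; note that this really deletes the selected jobs):

.. code-block:: bash

    # delete all of your jobs that are still queued (state Q)
    krause@login:~> $ qselect -u krause -s Q | xargs --no-run-if-empty qdel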
@@ -157,9 +157,9 @@ Examples:
.. code-block:: bash
-krause@master:~> $ qdel 4294357
-krause@master:~> $ qdel all
-krause@master:~> $ qdel all
+krause@login:~> $ qdel 4294357
+krause@login:~> $ qdel all
+krause@login:~> $ qdel all
qdel: cannot find any jobs to delete
``qdel`` cannot handle wildcards to remove jobs matching the name 'project-\*',
@@ -189,22 +189,24 @@ Examples:
-krause@master:~> $ qstat -a 4294516
+krause@login:~> $ qstat -a 4294516
-master.tardis.mpib-berlin.mpg.de:
+login.tardis.mpib-berlin.mpg.de:
Req'd Req'd Elap
Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time
-------------------- -------- -------- ---------------- ------ ----- --- ------ ----- - -----
-4294516.master.t krause default foo -- 1 0 4000mb 24:00 Q --
-krause@master:~> $ qalter -l walltime=10:0:0 4294516
-krause@master:~> $ qalter -l mem=32gb 4294516
-krause@master:~> $ qstat -a 4294516
-master.tardis.mpib-berlin.mpg.de:
+4294516.login.t krause default foo -- 1 0 4000mb 24:00 Q --
+krause@login:~> $ qalter -l walltime=10:0:0 4294516
+krause@login:~> $ qalter -l mem=32gb 4294516
+krause@login:~> $ qstat -a 4294516
+login.tardis.mpib-berlin.mpg.de:
Req'd Req'd Elap
Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time
-------------------- -------- -------- ---------------- ------ ----- --- ------ ----- - -----
-4294516.master.t krause default foo -- 1 0 32gb 10:00 Q --
+4294516.login.t krause default foo -- 1 0 32gb 10:00 Q --
@@ -56,7 +56,7 @@ Sometimes it may be useful to get a quick shell on one of the compute nodes.
Before submitting hundreds or thousands of jobs you might want to run some
simple checks to ensure all the paths are correct and the software is loading
-as expected. Although you can usually run these tests on the master itself there are
+as expected. Although you can usually run these tests on the login node itself there are
cases when this is dangerous, for example when your tests quickly require lots
of memory. In that case you should move those tests to one of the compute nodes:
@@ -68,9 +68,9 @@ This will submit a job that requests a shell. The submission will block until th
.. code-block:: bash
-krause@master:~> $ qsub -I -q testing
-qsub: waiting for job 4465022.master.tardis.mpib-berlin.mpg.de to start
-qsub: job 4465022.master.tardis.mpib-berlin.mpg.de ready
+krause@login:~> $ qsub -I -q testing
+qsub: waiting for job 4465022.login.tardis.mpib-berlin.mpg.de to start
+qsub: job 4465022.login.tardis.mpib-berlin.mpg.de ready
krause@ood-9:~> $
@@ -154,15 +154,15 @@ There are a number of **environment variables** available to each job, for insta
PBS_ARRAYID=1
PBS_ENVIRONMENT=PBS_BATCH
PBS_JOBCOOKIE=22E3D79E015B9F5EE9E205EBF8CB64E7
-PBS_JOBID=4294864-1.master.tardis.mpib-berlin.mpg.de
+PBS_JOBID=4294864-1.login.tardis.mpib-berlin.mpg.de
PBS_JOBNAME=STDIN-1
PBS_MOMPORT=15003
-PBS_NODEFILE=/var/spool/torque/aux//4294864-1.master.tardis.mpib-berlin.mpg.de
+PBS_NODEFILE=/var/spool/torque/aux//4294864-1.login.tardis.mpib-berlin.mpg.de
PBS_NODENUM=0
PBS_NUM_NODES=1
PBS_NUM_PPN=1
PBS_O_HOME=/home/mpib/krause
-PBS_O_HOST=master.tardis.mpib-berlin.mpg.de
+PBS_O_HOST=login.tardis.mpib-berlin.mpg.de
PBS_O_LANG=de_DE.UTF-8
PBS_O_LOGNAME=krause
PBS_O_MAIL=/var/mail/krause
@@ -171,7 +171,7 @@ There are a number of **environment variables** available to each job, for insta
PBS_O_SHELL=/bin/bash
PBS_O_WORKDIR=/home/mpib/krause
PBS_QUEUE=default
-PBS_SERVER=master
+PBS_SERVER=login
PBS_TASKNUM=1
PBS_VERSION=TORQUE-2.4.16
PBS_VNODENUM=0
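These variables are mostly useful inside job scripts, for example to return to the submission directory and to pick the input for an array task (a sketch; ``subjects.txt`` is a hypothetical input list):

.. code-block:: bash

    #!/bin/bash
    #PBS -t 1-20
    # run from the directory the job was submitted from
    cd "$PBS_O_WORKDIR"
    # one subject per array task, indexed by PBS_ARRAYID
    subject=$(sed -n "${PBS_ARRAYID}p" subjects.txt)
    echo "job $PBS_JOBID processing $subject on $(hostname)"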
@@ -190,7 +190,7 @@ Virtualenv(wrapper)
/home/mpib/krause/.virtualenvs/project/bin/python
pip --version
pip 9.0.1 from /home/mpib/krause/.virtualenvs/project/lib/python3.5/site-packages (python 3.5)
-(project) [krause@master ~] pip install numpy
+(project) [krause@login ~] pip install numpy
Collecting numpy
Using cached https://files.pythonhosted.org/packages/fe/94/7049fed8373c52839c8cde619acaf2c9b83082b935e5aa8c0fa27a4a8bcc/numpy-1.15.1-cp35-cp35m-manylinux1_x86_64.whl
Installing collected packages: numpy
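For completeness, such an environment can be created and activated before installing packages (a sketch; ``project`` is just an example name, and the first variant assumes virtualenvwrapper, matching the ``~/.virtualenvs`` prefix above):

.. code-block:: bash

    # with virtualenvwrapper
    [krause@login ~] mkvirtualenv project
    (project) [krause@login ~] pip install numpy

    # or with the built-in venv module
    [krause@login ~] python3 -m venv ~/venvs/project
    [krause@login ~] source ~/venvs/project/bin/activate
    (project) [krause@login ~] pip install numpy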
@@ -260,12 +260,12 @@ Tardis:
.. code-block:: bash
-[krause@master ~] module avail conda
+[krause@login ~] module avail conda
-------- /opt/environment/modules --------
conda/4.7.10
-[krause@master ~] module load conda
-[krause@master ~] conda -V
+[krause@login ~] module load conda
+[krause@login ~] conda -V
conda 4.7.10
Once loaded, just like with `pyvenv` or `virtualenv`, you can create and manage
@@ -281,20 +281,20 @@ install Theano (or other conda-only packages) you can create a new environment:
.. code-block:: bash
-[krause@master ~] module load conda # activate conda itself
-[krause@master ~] conda create --yes --name theano
+[krause@login ~] module load conda # activate conda itself
+[krause@login ~] conda create --yes --name theano
Collecting package metadata (current_repodata.json): done
Solving environment: done
[...]
-[krause@master ~] conda activate theano # activate a conda env
-(theano) [krause@master ~] # now you can install packages into the env
-(theano) [krause@master ~] conda install --yes numpy scipy mkl
+[krause@login ~] conda activate theano # activate a conda env
+(theano) [krause@login ~] # now you can install packages into the env
+(theano) [krause@login ~] conda install --yes numpy scipy mkl
[...]
-(theano) [krause@master ~] conda install --yes theano pygpu
-(theano) [krause@master ~] which python
+(theano) [krause@login ~] conda install --yes theano pygpu
+(theano) [krause@login ~] which python
/home/beegfs/krause/.conda/envs/theano/bin/python
-(theano) [krause@master ~] python
+(theano) [krause@login ~] python
Python 3.7.4 (default, Aug 13 2019, 20:35:49)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
@@ -307,10 +307,10 @@ To deactivate (and possibly remove) an existing conda environment, run:
.. code-block:: bash
-(theano) [krause@master ~] conda deactivate
-[krause@master ~] # deactivated, safe to remove
+(theano) [krause@login ~] conda deactivate
+[krause@login ~] # deactivated, safe to remove
-[krause@master ~] conda remove --yes --name theano --all
+[krause@login ~] conda remove --yes --name theano --all
Remove all packages in environment /home/beegfs/krause/.conda/envs/theano:
[...]
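To use such an environment inside a batch job, load conda and activate the environment at the top of the job script (a sketch; ``theano`` and ``model.py`` are example names, and depending on how the module initialises conda you may need ``source activate theano`` instead):

.. code-block:: bash

    #!/bin/bash
    #PBS -l mem=8gb
    # make the conda environment available inside the job
    module load conda
    conda activate theano
    cd "$PBS_O_WORKDIR"
    python model.py   # hypothetical analysis script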
@@ -8,7 +8,7 @@ R available through the :doc:`../environment/modules` system.
.. code-block:: none
-[krause@master ~] module avail R
+[krause@login ~] module avail R
--- /opt/environment/modules ---
R/2 R/3.3 R/3.5 R/3.6 Rlibs/3.3-fix
R/2.15 R/3.4 R/3.5.1 R/3.6.0
@@ -21,17 +21,17 @@ Packages
Some packages are already installed globally, for every other dependency you
can just go ahead and install them on your own (Answer `yes` if you are asked
`Would you like to use a personal library instead?`).
-All the nodes and the master node share your home directory so you only need to
+All the nodes and the login node share your home directory so you only need to
install the packages once and they'll be available with your jobs. Note that
-you should install the packages on the master as development files are usually
+you should install the packages on the login node as development files are usually
not distributed to the execution nodes. Also, if you change the minor or major
version of R (3.5.x to 3.6.x or 3.x ->
4.x) it's necessary to rebuild your packages.
.. code-block:: r
-[krause@master] module load R/3.6
-[krause@master] R --quiet
+[krause@login] module load R/3.6
+[krause@login] R --quiet
> install.packages('rmarkdown', repos='http://ftp5.gwdg.de/pub/misc/cran/')
Installing package into ‘/mnt/beegfs/home/krause/R/x86_64-pc-linux-gnu-library/3.6’
(as ‘lib’ is unspecified)
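The installation can also be scripted, which helps when rebuilding your personal library after switching R versions (a sketch; the repository URL is the one used above):

.. code-block:: bash

    [krause@login ~] module load R/3.6
    [krause@login ~] Rscript -e "install.packages('rmarkdown', repos='http://ftp5.gwdg.de/pub/misc/cran/')"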