Matlab is a bit of a problem child on the Tardis. While the `MATLAB Distributed
Computing Server`_ product aims to provide a compatibility layer for a number
of PBS-based clusters, it just doesn't work reliably, for a number of reasons.
Because only a limited number of shared licenses is available, it's
generally not feasible to run an arbitrary number of Matlab sessions in the
form of jobs. A workaround is to "compile" a script and create a standalone
redistributable environment, which does not require a license to run.
.. important::

   You can check https://lipsupport.mpib-berlin.mpg.de/licstat for the current
   license usage at the institute. Keep in mind that *one* license is necessary
   for each user on each unique host. So 3 users, each submitting jobs to
   any 4 hosts, will result in 12 licenses being used.
Different Matlab versions are available via environment modules. You can list
them with :program:`module avail matlab` and activate a specific version with
:program:`module load matlab/<version>`.
**If** there are free licenses available and you just need a quick way to spawn
a single Matlab session, there is nothing wrong with running Matlab as is.
This can be especially useful if you simply need a node with lots of memory
or if you want to test your code. In an interactive job you can simply enter
``matlab``; it will warn you that no display is available and start in
command-line mode.
.. code-block:: bash

   srun --mem 100gb -p test --pty /bin/bash  # run an interactive job
   module load matlab/R2012a                 # in that job, run matlab
   matlab

                               < M A T L A B (R) >
                     Copyright 1984-2012 The MathWorks, Inc.
                       R2012a (7.14.0.739) 64-bit (glnxa64)
                                 February 9, 2012

   To get started, type one of these: helpwin, helpdesk, or demo.
   For product information, visit www.mathworks.com.
In a job context you would just run :program:`matlab -r main`, with
:file:`main.m` containing your script:

.. code-block:: bash

   sbatch --wrap ". /etc/profile ; module load matlab/R2014b; matlab -r main"
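If the command line grows too long, the same submission can live in a small
job script instead of a ``--wrap`` one-liner. A minimal sketch (the resource
limits, module version and script name are illustrative, adjust them to your
project):

```shell
# generate a minimal Slurm job script (illustrative values)
cat > run_matlab.sh <<'EOF'
#!/bin/bash
#SBATCH --mem 8gb
#SBATCH --time 1:00:00
. /etc/profile
module load matlab/R2014b
matlab -nodisplay -r main
EOF
chmod +x run_matlab.sh
# submit with: sbatch run_matlab.sh
```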
Compiling
---------
Once you leave the testing stage and would like to spawn an arbitrary number of
Matlab jobs/processes you have to compile your script with :program:`mcc`.
A reliable pattern is to create a main file :file:`project.m` that contains
a function with the same name and expects some arguments you would like to loop
over. A little like this maybe:
.. code-block:: matlab

   function project(subject_id, sigma)
   %
   % my main program implementing foo
   %
   % arguments
   % ---------
   %
   % subject_id: a string encoding the subject id
   % sigma: a string encoding values for sigma

   sigma = str2num(sigma);
   repmat(cellstr(subject_id), 1, sigma)
Running :program:`mcc -m project.m` will then "compile" (or rather encrypt and
package) your function and output a system-dependent binary named
:file:`project` and a wrapper script :file:`run_project.sh`. To run it you
now have to combine the wrapper script, the location of a Matlab Compiler
Runtime (MCR) or the local installation path of the Matlab version that was
used by mcc, and a sufficient number of arguments for the function project():
.. code-block:: bash

   mcc -m project.m
   ./run_project.sh /opt/matlab/interactive 42 5
   ------------------------------------------
   Setting up environment variables
   ---
   LD_LIBRARY_PATH is .:/opt/matlab/interactive/runtime/glnxa64:/opt/matlab/interactive/bin/glnxa64:/opt/matlab/interactive/sys/os/glnxa64:/opt/matlab/interactive/sys/java/jre/glnxa64/jre/lib/amd64/native_threads:/opt/matlab/interactive/sys/java/jre/glnxa64/jre/lib/amd64/server:/opt/matlab/interactive/sys/java/jre/glnxa64/jre/lib/amd64/client:/opt/matlab/interactive/sys/java/jre/glnxa64/jre/lib/amd64
   Warning: No display specified. You will not be able to display graphics on the screen.

   ans =
To include toolboxes in your script you have to add them during the compile
step so they get included in your package. Matlab built-in toolboxes such as
signal processing or statistics are detected automatically by scanning the
functions used in your script and don't need to be added explicitly. Compiled
scripts can't call :program:`addpath()` at runtime. You can, however, guard
those calls with the function :program:`isdeployed()`, which returns 1 when
Matlab detects that it runs as a compiled script and 0 otherwise.
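Such a guard could look like this (the path is purely illustrative):

.. code-block:: matlab

   if ~isdeployed
       % only runs in a regular, interactive Matlab session;
       % in the compiled package the toolbox is already bundled
       addpath('/home/mpib/krause/matlab/tools/project')  % illustrative path
   end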
Example: Suppose you collect your project library in a toolbox called project,
which in turn uses the function :program:`normrnd()` from the statistics
package:
.. code-block:: bash

   cat matlab/tools/project/myrnd.m
   X = normrnd(0, 1, arg, arg);
You can then use either the ``-a`` or the ``-I`` switch of mcc to add your own
toolbox:

+ **-a** adds the functions or directories listed directly to the compiled package/archive
+ **-I** (uppercase i) adds the location to the mcc search path so it gets included implicitly

Both options should work fine. The example below uses mcc from Matlab R2014b,
but you can use any version. The important part is to use the same Matlab
version as the MCR when invoking the script with :program:`run_project.sh`.
.. code-block:: bash

   module load matlab/R2014b
   cat project.m
   function project(arg1)
   myrnd(str2num(arg1))

   mcc -m project.m -a matlab/tools/project
   ./run_project.sh /opt/matlab/R2014b 3
   ------------------------------------------
   Setting up environment variables
   ---
   LD_LIBRARY_PATH is .:/opt/matlab/R2014b/runtime/glnxa64:/opt/matlab/R2014b/bin/glnxa64:/opt/matlab/R2014b/sys/os/glnxa64:/opt/matlab/R2014b/sys/opengl/lib/glnxa64

       0.5377    0.8622   -0.4336
       1.8339    0.3188    0.3426
      -2.2588   -1.3077    3.5784
You only have to compile your project once and can then use it any number
of times. Matlab extracts your package to a shared hidden folder called
:file:`.mcrCache<Version-Number>`. Those folders sometimes get corrupted by
Matlab, especially when multiple jobs start at exactly the same time. The
only workaround so far is to add a short sleep (e.g. 1 second) between
qsub/sbatch calls and hope there is no collision. It also makes sense to
remove those directories regularly, but make sure all your jobs have finished
before removing them with :program:`rm -rf .mcrCache*`.
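A staggered submission loop along these lines avoids simultaneous cache
extraction. It is shown here as a dry run that only echoes the sbatch
commands; the subject IDs and paths are illustrative:

```shell
# dry run: echo the commands instead of submitting them.
# Replace 'echo sbatch' with 'sbatch' for real submissions.
SUBMIT="echo sbatch"
for subject in 01 02 03; do
    $SUBMIT --wrap ". /etc/profile; ./run_project.sh /opt/matlab/R2014b $subject 5"
    sleep 1   # stagger job starts to avoid .mcrCache collisions
done
```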
SPM12
-----

There are environment modules available to choose from and activate SPM12. At
the time of writing these are:

.. code-block:: bash

   -------------- /opt/environment/modules ---------------
   spm12/6225 spm12/6470 spm12/6685 spm12/6906 spm12/7219
Usually users export a number of batch files with the SPM GUI on their
local machine, change the paths to reflect the names on the Tardis and then
call :program:`run_spm12.sh` with the **batch** parameter for each batch file.

Example: segmentation for a number of NIfTI images. The file
:file:`batch.template` contains the string :file:`%%IMAGE%%` as a placeholder,
so we can easily replace it with the current image path and create a number of
new batches from a single template:
.. code-block:: bash

   i=0
   for image in tp2/Old/*.nii ; do
       fullpath=$PWD/$image
       sed "s#%%IMAGE%%#$fullpath#" batch.template > batch_${i}.m
       sbatch --wrap ". /etc/profile ; module load spm12 ; run_spm12.sh /mcr batch $PWD/batch_${i}.m"
       i=$((i+1))
   done
**SPM Memory**

By default, SPM uses very conservative memory settings. Combined with large
time-series data and a networked file system this can quickly result in
inefficient I/O patterns due to unnecessary data chunking and hasty saving of
intermediate results (thank you, Nir Moneta, for finding and reporting this!).
For that reason, our pre-compiled SPM modules now contain these new default
values:
.. code-block:: matlab

   global defaults;
   defaults.stats.maxmem = 2^32; % increases chunk size
   defaults.stats.resmem = true; % do not save intermediate steps
If you need different defaults (possibly up to ``2^38``, i.e. 256GB), you need
to include a custom :file:`spm_my_defaults.m` and re-compile SPM12.
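A minimal :file:`spm_my_defaults.m` might look like the sketch below; the
values are purely illustrative and should be picked to match your data and the
memory of the nodes you request:

.. code-block:: matlab

   % spm_my_defaults.m -- illustrative overrides, adjust to your needs
   global defaults;
   defaults.stats.maxmem = 2^34;   % allow 16 GB chunks
   defaults.stats.resmem = true;   % keep temporary results in memory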
**(Re-)Compiling**

Sometimes it *might be* necessary to recompile the SPM toolbox yourself,
for instance if you need a specific version, if you want to add external
toolboxes to SPM (e.g. cat12) or if you need to change some SPM defaults.
.. code-block:: matlab

   Warning: No display specified. You will not be able to display graphics on the screen.

                               < M A T L A B (R) >
                     Copyright 1984-2012 The MathWorks, Inc.
                       R2012a (7.14.0.739) 64-bit (glnxa64)
                                 February 9, 2012

   To get started, type one of these: helpwin, helpdesk, or demo.
   For product information, visit www.mathworks.com.

   >> %addpath(genpath('/home/mpib/krause/matlab/tools/spm12')) % overkill
   >> addpath('/home/mpib/krause/matlab/tools/spm12')
   >> addpath('/home/mpib/krause/matlab/tools/spm12/config')
   >> spm_make_standalone()
   [... lots of output and warnings ...]
   Processing /opt/matlab/R2012a/toolbox/matlab/mcc.enc
   [... lots of output and warnings ...]
   Generating file "/home/mpib/krause/matlab/tools/spm_exec/readme.txt".
   Generating file "/home/mpib/krause/matlab/tools/spm12/../spm_exec/run_spm12.sh".
   >>
This should create a folder :file:`spm_exec` or :file:`standalone` below the
spm toolbox location containing the fresh :program:`spm12` and
:program:`run_spm12.sh` which you can then use in your jobs just like above.
Fieldtrip
---------

To properly add the fieldtrip toolbox we have to jump through some more hoops.
For now the only reliable and flexible way is to run :program:`mcc()` from
within a Matlab session and make sure to run :program:`ft_defaults()` first.
Also, some of the provided mex files won't work out of the box, so we have to
recompile them using :program:`ft_compile_mex()`. This, however, stumbles over
an external C file called :file:`CalcMD5.c`, which uses non-standard comments.
The following Matlab script has been successfully used to create a compiled
script from a :file:`main.m` file which relies on internal fieldtrip
functions.
.. code-block:: matlab

   % setup path
   basepath='/home/mpib/krause/matlab/tools/ConMemEEGTools/'
   addpath([basepath, '/fieldtrip-20150930'])
   ft_defaults()

   % re-compile mex functions (this has to be done only once per fieldtrip version)
   % "fix" the CalcMD5.c file
   system(['sed -i ''s#//.*##g'' ', basepath, '/fieldtrip-20150930/external/fileexchange/CalcMD5.c'])
   % and compile
   ft_compile_mex(true)

   % build the runtime environment
   mcc('-m', 'main.m')
Efficient Saving
----------------
We have noticed a couple of times now that Matlab's :program:`save()` function
can lead to undesirable performance issues. This occurs especially when a large
number of jobs try to save big objects at the same time. This is a somewhat
complex issue, and I will try to go through it in this section using some
examples. You are free to adopt some of the things highlighted here in your
code, but you don't have to.
There are two aspects to consider when saving larger objects with Matlab:
1. Shared input/output

   Your home directories are connected via a network file system (NFS). While
   NFS bandwidth, storage size and IO performance are matched to our cluster
   size, its capacity can be saturated quickly. We recommend to carefully watch
   the saving and loading operations in your scripts, especially when a large
   number of jobs access files at the same time. An indication of bad IO
   performance is a job CPU efficiency value considerably less than
   1.0 (check the `Tardis Website`_ for that value).

   The worst case for a networked file system is a huge number of tiny file
   operations, like reading or writing a few bytes or constantly checking for
   changed file attributes (size, modification time and so on). Try to avoid
   these situations by only writing large chunks of data at a time.
2. Compression

   Sometimes it's much faster to compress large chunks of data **before**
   sending it over the wire (i.e. saving an object). Also, storage size is
   a valuable resource and you should therefore consider compressing data with
   Matlab.
Unfortunately, with objects larger than 2GB people usually resort to saving
with the Matlab file format v7.3. The default behaviour of Matlab's
:program:`save(..., '-v7.3')` is suboptimal and you might want to save your
objects differently. The following examples highlight why this is necessary and
also why it's not trivial to just recommend a better alternative.
Consider two extreme variants of data, a truly random matrix and a matrix full
of zeros. Both with a size of 5GB:
.. code-block:: matlab

   R = randn(128,1024,1024,5);
   Z = zeros(128,1024,1024,5);
On a drained Tardis, saving these two objects naively takes **forever**:
.. code-block:: matlab

   tic; save('~/test.mat', 'R', '-v7.3'); toc;
   Elapsed time is 346.156656 seconds.
   tic; save('~/test.mat', 'Z', '-v7.3'); toc;
   Elapsed time is 145.100863 seconds.
There are two reasons for this. Firstly, the compression algorithm Matlab uses
appears to be quite slow: even with perfectly compressible data (Z can be
easily expressed with 5 bytes), Matlab needs more than two minutes to save the
object, while **still** creating a considerable amount of IO operations.
There is an option to disable compression (which is generally not advisable
anyway), but even then saving the "R" object takes more than 3 minutes:
.. code-block:: matlab

   tic; save('~/test.mat', 'R', '-v7.3', '-nocompression'); toc
   Elapsed time is 220.320252 seconds.
With version 7.3 Matlab changed the binary format for object serialization to
something based on HDF5. Luckily, the lower-level functions for HDF5 file
manipulation are available to you. For this simple case, a matrix of numbers,
saving directly to HDF5 could look like this:
.. code-block:: matlab

   >> h5create('test.h5', '/R', size(R), 'ChunkSize', [128,1024,1024,1]);
   >> tic; h5write('test.h5', '/R', R); toc;
   Elapsed time is 15.225184 seconds.
In comparison to the naive :program:`save()`, we are faster by a factor of 22.
Unfortunately, this is not the whole story. One downside to this approach is
connected to the internal structure of HDF5 objects: saving a struct with
multiple, nested objects of different types (think string annotations, integer
arrays and float matrices) is much more tedious. Timothy E. Holy wrote
a wrapper that automatically creates the necessary structures and published
the function :program:`savefast` on `Matlab's fileexchange`_. It has
a similar interface to the original save and can be used as a drop-in
replacement in many cases. You need to add the function to your search path,
of course.
.. code-block:: matlab

   >> tic; savefast('test.mat', 'R'); toc
   Elapsed time is 16.242634 seconds.
Unfortunately, we can't just stop here, because savefast by default does not
compress anything, and the whole point of using HDF5 is that we need to
store **large** matrices, bigger than 2GB. Blindly writing them out will
waste storage space and saturate the disk arrays with only 4 concurrent jobs
(based on the 15s benchmark above).

In other words, using some kind of compression is necessary, unless you
**know** that you generated random or close-to-random data. To highlight the
differences that come with the compression level, let's look at the 5GB of
zeros stored in Z again. Matlab is a bit faster in compressing and saving Z
with :program:`save()`, but it still takes 145 seconds. You might be tempted to
combine the savefast() approach with something fast like
:program:`gzip` afterwards. This would actually speed things up *and* save
a lot of space:
.. code-block:: matlab

   >> Z = zeros(128,1024,1024,5);
   >> tic; save('~/test.mat', 'Z', '-v7.3'); toc;
   Elapsed time is 145.100863 seconds.
   >> tic; savefast('~/test2.mat', 'Z'); system('gzip test2.mat'); toc;
   Elapsed time is 53.527696 seconds.

   -rw-r--r-- 1 krause domain users  31M May 16 17:57 test.mat
   -rw-r--r-- 1 krause domain users 5.0M May 16 18:00 test2.mat.gz
At first glance this appears to be faster and more efficient. But this
approach does not scale well to multiple jobs, as it saves the whole 5GB
uncompressed, then reads it all in again (from the NFS cache, but still over
the network) and then, after compression, saves it back to disk.

Using HDF5's low-level functions you can fine-tune the compression level from
0 (lowest and fastest) to 9 (highest and slowest). If you set the compression
level, however, you also need to set a chunk size, probably because compression
is done chunk-wise. Recommending a generic compression level is hard and
depends very much on your data. Of course you don't want to waste time by
maximizing the compression ratio, gaining only a couple of megabytes, but you
also don't want to waste bandwidth by saving overhead data. Consider our zeros
again:
.. code-block:: matlab

   >> h5create('test.h5', '/Z', size(Z), 'ChunkSize', [128,1024,1024,1], 'Deflate', 9);
   >> tic; h5write('test.h5', '/Z', Z); toc;
   Elapsed time is 35.333514 seconds.
   >> ls -lh test.h5
   -rw-r--r-- 1 krause domain users 5.0M May 16 18:35 test.h5

   >> h5create('test.h5', '/Z', size(Z), 'ChunkSize', [128,1024,1024,1], 'Deflate', 6);
   >> tic; h5write('test.h5', '/Z', Z); toc;
   Elapsed time is 36.646509 seconds.
   >> ls -lh test.h5
   -rw-r--r-- 1 krause domain users 5.0M May 16 18:36 test.h5

   >> h5create('test.h5', '/Z', size(Z), 'ChunkSize', [128,1024,1024,1], 'Deflate', 3);
   >> tic; h5write('test.h5', '/Z', Z); toc;
   Elapsed time is 20.002455 seconds.
   >> ls -lh test.h5
   -rw-r--r-- 1 krause domain users 23M May 16 18:37 test.h5

   >> h5create('test.h5', '/Z', size(Z), 'ChunkSize', [128,1024,1024,1], 'Deflate', 0);
   >> tic; h5write('test.h5', '/Z', Z); toc;
   Elapsed time is 50.847998 seconds.
   >> ls -lh test.h5
   -rw-r--r-- 1 krause domain users 5.1G May 16 18:38 test.h5
Here I picked a chunk size of 1GB and compressed with levels 9, 6, 3, and 0.
Not surprisingly, the optimal value in this group is somewhere in the middle
(3): it takes only 20 seconds to save the data, while still reducing the 5GB
file to 23MB.
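HDF5's Deflate filter is zlib-based, so the same trade-off can be sketched
outside Matlab with plain :program:`gzip` on highly compressible data. The
file names and the 1MB size below are just an illustration:

```shell
# create 1MB of zeros and compress it at different levels
head -c 1000000 /dev/zero > zeros.bin
for level in 1 6 9; do
    gzip -c -"$level" zeros.bin > "zeros.$level.gz"
done
ls -l zeros.bin zeros.*.gz   # compare sizes and pick your trade-off
```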
.. _`MATLAB Distributed Computing Server`: http://de.mathworks.com/help/mdce/index.html
.. _`Tardis Website`: https://tardis.mpib-berlin.mpg.de/nodes
.. _`Matlab's fileexchange`: https://de.mathworks.com/matlabcentral/fileexchange/39721-save-mat-files-more-quickly