FreeSurfer
==========
Harvard's Freesurfer_ pipeline is one of the most straightforward pipelines to
parallelize. The time-consuming :program:`recon-all` command can take up to
20 hours (around 6 hours with more recent versions) to complete and often needs
to be run for a large number of subjects. Obviously you can run all those
processes independently of each other and in parallel. In addition, some parts
of the FreeSurfer code can be parallelized with OpenMP. To use that you need to
set some flags (see example) and request a job with a matching number of CPUs.

You can currently choose between different versions of FreeSurfer located in
the folder :file:`/opt/freesurfer/` by activating a FreeSurfer module with
:program:`module load freesurfer[/version]`. See :doc:`../environment/modules`
for details.
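
For example, to see which versions are installed and activate one (a sketch,
assuming the module environment described there):

.. code-block:: bash

   # List the FreeSurfer versions available as modules on this cluster
   module avail freesurfer

   # Activate a specific version for the current shell session
   module load freesurfer/7.4.1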

Example
-------

The simple subject loop shown in section :ref:`slurm_job_wrappers` can be adapted like this:

.. code-block:: bash

   IN=bids_dataset/
   OUT=bids_dataset/derivatives/freesurfer/
   OPTS="-openmp 6 -threads 6 -all"

   mkdir -p $OUT

   for sub in $(tail -n+2 $IN/participants.tsv | cut -f1) ; do
       echo '#!/bin/bash' > jobfile
       echo "#SBATCH --output $HOME/logs/slurm-%j.out" >> jobfile
       echo "#SBATCH --cpus-per-task 6" >> jobfile
       echo "#SBATCH --mem 4GB" >> jobfile
       echo "module load freesurfer/7.4.1" >> jobfile
       echo "recon-all $OPTS -i $IN/$sub/anat/*T1w.nii.gz -subjid ${sub} -sd $OUT" >> jobfile
       sbatch jobfile
   done
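
The loop pulls the subject IDs from the dataset's :file:`participants.tsv`.
As a quick illustration of that idiom (the two-subject file here is made up
for the example):

.. code-block:: bash

   # Create a minimal BIDS-style participants.tsv (tab-separated, header row)
   printf 'participant_id\tage\nsub-01\t34\nsub-02\t29\n' > participants.tsv

   # Skip the header line and keep the first column -- the subject IDs
   tail -n +2 participants.tsv | cut -f1
   # prints:
   # sub-01
   # sub-02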

A simple benchmark of :program:`recon-all` (7.4.1) on a single 192x256x256 T1w
image can be found in the table below. The numbers suggest there is minimal
gain in using more than 4 to 6 CPU threads. A memory reservation of 4 GB should
be more than enough for images of this kind.

.. list-table:: recon-all benchmarks
   :header-rows: 1

   * - Number of cores (HT)
     - Memory usage (MaxRSS)
     - Elapsed time (H:M:S)
   * - 1
     - 2141 MB
     - 05:38:55
   * - 2
     - 2143 MB
     - 05:00:08
   * - 4
     - 2144 MB
     - 03:16:23
   * - 6
     - 2111 MB
     - 03:33:03
   * - 8
     - 2144 MB
     - 03:09:30
   * - 10
     - 2048 MB
     - 02:56:38

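
Converting the elapsed times above to seconds makes the diminishing returns
easy to read off; a minimal sketch using shell arithmetic (values copied from
the table):

.. code-block:: bash

   # Speedup relative to a single thread, in percent (integer arithmetic)
   t1=$((5*3600 + 38*60 + 55))    # 1 thread:   05:38:55 -> 20335 s
   t4=$((3*3600 + 16*60 + 23))    # 4 threads:  03:16:23 -> 11783 s
   t10=$((2*3600 + 56*60 + 38))   # 10 threads: 02:56:38 -> 10598 s
   echo "4 threads:  $((t1*100/t4))%"    # prints: 4 threads:  172%
   echo "10 threads: $((t1*100/t10))%"   # prints: 10 threads: 191%
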
.. _Freesurfer: https://surfer.nmr.mgh.harvard.edu/fswiki