Harvard's Freesurfer_ pipeline is one of the most straightforward workloads to
parallelize. The time-consuming :program:`recon-all` command can take up to
20 hours to complete and often needs to be run for a large number of subjects.
Since each subject is processed independently, all of these runs can be
executed in parallel.

The simple subject loop shown in section :ref:`job_wrappers` can be adapted like this:
.. code-block:: bash

   SUBJECTS_IDS=$(echo SOME_ID{1..99})  # SOME_ID1, SOME_ID2, ... , SOME_ID99
   SUBJECTS_DIR="./subjects"            # the FreeSurfer import location

   for subject in $SUBJECTS_IDS ; do
       echo "#PBS -m n"            > tmp.pbs
       echo "#PBS -o $HOME/logs/" >> tmp.pbs
       echo "#PBS -j oe"          >> tmp.pbs
       echo "#PBS -d ."           >> tmp.pbs
       echo "module load freesurfer/6.0.0"       >> tmp.pbs
       echo "export SUBJECTS_DIR=$SUBJECTS_DIR"  >> tmp.pbs
       echo "recon-all -all -i ${subject}.nii -subjid ${subject}" >> tmp.pbs
       qsub tmp.pbs  # submit one job per subject
   done
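A Bash subtlety worth noting when building the subject list: brace expansion is
not performed on the right-hand side of a plain variable assignment, only in
word lists (such as a ``for`` loop) or via ``$(echo ...)``. The IDs below are
placeholders, not real subject names:

.. code-block:: bash

   # Brace expansion is NOT performed in a plain assignment (bash):
   IDS=SOME_ID{1..3}
   echo "$IDS"    # prints the literal string: SOME_ID{1..3}

   # It IS performed inside $(echo ...), or directly in a for-loop word list:
   IDS=$(echo SOME_ID{1..3})
   echo "$IDS"    # prints: SOME_ID1 SOME_ID2 SOME_ID3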
You can currently choose between different versions of FreeSurfer located in
the folder :file:`/opt/freesurfer/` by activating a freesurfer module with
:program:`module load freesurfer[/version]`. See :doc:`../environment/modules`
for details.
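Assuming a standard Environment Modules (or Lmod) setup, you can list the
installed FreeSurfer versions before loading one; the exact version names
shown here are examples and depend on what the cluster provides:

.. code-block:: bash

   # list the FreeSurfer versions available on the cluster
   module avail freesurfer

   # load a specific version, then verify recon-all is on the PATH
   module load freesurfer/6.0.0
   which recon-all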
.. _Freesurfer: https://surfer.nmr.mgh.harvard.edu/fswiki