Commit c26bec3e authored by Michael Krause

software: new ashs

parent 74854f4d
......@@ -56,9 +56,9 @@ author = 'Michael'
# built documents.
#
# The short X.Y version.
version = '3.4.4'
version = '3.4.5'
# The full version, including alpha/beta/rc tags.
release = '3.4.4'
release = '3.4.5'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
......
......@@ -4,6 +4,9 @@ Welcome to Tardis's documentation!
Changelog
=========
3.4.5 (09.02.2018)
+ major changes with ASHS
3.4.4 (19.12.2017)
+ updated Tardis specs
+ added FSL feat example
......@@ -59,8 +62,8 @@ List of Contents
:maxdepth: 1
:caption: Software
software/ashs
software/ants
software/ashs
software/freesurfer
software/fsl
software/matlab
......
......@@ -7,31 +7,97 @@ Segmentation of Hippocampal Subfields) tool available on the Tardis.
In most cases you are probably going to run this tool with a pre-installed
atlas and a pair of T1 and T2 Nifti images to start the automatic segmentation
pipeline. To do this you have to tell ASHS where its root folder is located by
exporting a variable called ``ASHS_ROOT`` like this:
setting a variable called ``ASHS_ROOT``. A number of environment modules will
help with that.
.. code-block:: bash
export ASHS_ROOT=/opt/ashs
$ module load ashs
$ echo $ASHS_ROOT
/opt/software/ashs/1.0.0-mpib0
You could also get your own copy (about 300 MB) of ASHS, for example from our
gitlab server https://gitlab.mpib-berlin.mpg.de/krause/ashs/tree/mpib, and
place it in your home directory.
**Note** that there are two major, incompatible versions available right now
(February 2018): the legacy version (0.1.x) and a new version (1.0.0), which
the authors also refer to as *fastashs*. Versions suffixed with *mpibN* contain
a set of patches developed at MPIB to work with Torque.
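If you need one of these versions specifically, the module system can select
it. A quick sketch (the exact module names are assumptions based on the
install path shown above):
.. code-block:: bash
$ module avail ashs              # list the installed ASHS versions
$ module load ashs/1.0.0-mpib0   # e.g. pick the fastashs branch explicitly
$ echo $ASHS_ROOT
/opt/software/ashs/1.0.0-mpib0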
Atlases
-------
The available atlases need to match the ASHS version to function properly.
Check out the folder :file:`/opt/software/ashs/data` to see the available
atlases or download your own. Right now there are:
.. code-block:: bash
mkdir ~/src/ && cd ~/src
git clone --depth 1 --branch mpib https://gitlab.mpib-berlin.mpg.de/krause/ashs.git
export ASHS_ROOT=$HOME/src/ashs
/opt/software/ashs/data/
├── 1.0
│   ├── ashs_atlas_mpib_20180208
│   └── ashs_atlas_upennpmc_20170810
└── legacy
├── ashs_atlas_mpib_2016
├── ashs_atlas_paul_2012
└── ashs_atlas_upennpmc_20140416
The atlases in *legacy* are supposed to work with the 0.1.x branch of ASHS and
the ones in *1.0* have been created with the newer branch.
Parallelization
---------------
The legacy branch of ASHS had to rely internally on qsub: segmentation, even
for a single subject, was almost prohibitively slow on a single core. The
patched version should distribute nicely over the cluster when using the
parameter `-Q` (see below for examples).
**However**, the newer ASHS branch is able to use multiple cores for each
subject. That means you now have the choice to either distribute a segmentation
across the whole cluster or to submit a single, multi-threaded job per subject.
The latter should be the preferred approach, as it is much more robust and
closer to the original scripts.
The table below shows a comparison for a test segmentation run using the **1.0** branch:
========== ===========
cores      real time
========== ===========
800 (pbs)  20m
20 (par)   15m
8          23m
4          35m
2          55m
1          110m
========== ===========
Due to defensive inter-stage delays, the first approach (cluster-wide
distribution with `-Q`) was even slower than the 20-core parallel version on a
single node using `-P`. Even single-threaded performance is at an acceptable
speed now. With recent workstations, a batch segmentation for a small number
of subjects could be done overnight, even without a cluster.
Segmentation
-------------
Given your files are already in place a typical invocation would then be:
Given your files are already in place a typical invocation for a single subject
would then be **either**:
old approach
^^^^^^^^^^^^
(using internal qsub)
.. code-block:: bash
$ASHS_ROOT/bin/ashs_main.sh -a /opt/ashs/data/atlas_upennpmc/ -g T1.nii -f T2.nii -w wdir -T -Q
[krause@master ~] module load ashs
[krause@master ~] $ASHS_ROOT/bin/ashs_main.sh \
-a /opt/software/ashs/data/1.0/ashs_atlas_mpib_20180208 \
-g T1.nii -f T2.nii -w wdir -T -Q
This will run the ASHS pipeline in the foreground, submitting jobs and waiting
for them to finish. Results will be put in the specified folder *wdir*. You can
......@@ -47,7 +113,8 @@ keep the shell open use the ampersand (*&*) and shell redirection mechanism like
$ASHS_ROOT/bin/ashs_main.sh [..opts..] >ashs.log 2>&1 &
For a number of image pairs you can use a simple for loop.
For a number of image pairs you could use a simple for loop.
.. code-block:: bash
......@@ -56,12 +123,52 @@ For a number of image pairs you can use a simple for loop.
done
Be careful with looping over a large number (>30) of image pairs in this way as
some intermediate steps will generate a lot of jobs on its own. Also check ``qstat`` and the log file you specified to track ASHS' progress.
some intermediate steps will generate a lot of jobs on their own. Also check
``qstat`` and the log file you specified to track ASHS' progress.
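For example, to keep an eye on the queue and the log at the same time
(assuming you redirected the output to :file:`ashs.log` as shown above):
.. code-block:: bash
$ qstat -u $USER    # list only your jobs in the queue
$ tail -f ashs.log  # follow the ASHS log as it is written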
new approach
^^^^^^^^^^^^
Since ASHS version 1.0, this is the preferred way to submit segmentations for a number of subjects.
.. code-block:: bash
[krause@master ~] qsub -l nodes=1:ppn=8,mem=10gb ashs_job.pbs
with :file:`ashs_job.pbs` containing:
.. code-block:: bash
module load ashs
export SUBJECT_ID=23
$ASHS_ROOT/bin/ashs_main.sh \
-a /opt/software/ashs/data/1.0/ashs_atlas_mpib_20180208 \
-g ${SUBJECT_ID}/T1.nii -f ${SUBJECT_ID}/T2.nii -w ${SUBJECT_ID}_wdir -T -P
Note the difference in the last parameter here (`-Q` vs `-P`).
Looping over a number of subject IDs would look something like this.
.. code-block:: bash
for id in 01 03 10 32 ; do
export SUBJECT_ID=$id
qsub -l nodes=1:ppn=8,mem=10gb -V ashs_job.pbs
done
In this case the option `-V` instructs qsub to pass all currently exported
environment variables on to the job. This way the variable `SUBJECT_ID` is
accessible in the job context.
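Alternatively, qsub's lowercase `-v` option passes an explicit list of
variables instead of the whole environment, which makes the export
unnecessary; a minimal sketch:
.. code-block:: bash
for id in 01 03 10 32 ; do
# hand only SUBJECT_ID to the job instead of the full environment
qsub -l nodes=1:ppn=8,mem=10gb -v SUBJECT_ID=$id ashs_job.pbs
done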
Known Issues
------------
.. ATTENTION::
This only refers to the old submission approach. When using fastashs and `-P`
you can ignore this issue.
Due to the very large number of intermediate jobs, script submission may become
unreliable when submitting more than 20 or 30 images at once.
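One possible workaround (a sketch, not part of the official scripts;
``$subject_ids`` is a placeholder for your own list of IDs) is to submit in
smaller batches and wait for each batch to finish:
.. code-block:: bash
n=0
for id in $subject_ids ; do
$ASHS_ROOT/bin/ashs_main.sh [..opts..] >ashs_${id}.log 2>&1 &
n=$((n + 1))
# after every 20 subjects, wait for the running batch to drain
if [ $((n % 20)) -eq 0 ]; then wait; fi
done
wait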
......@@ -135,18 +242,14 @@ Let's assume your niftis are in a folder called niftis/ and the segmentations in
1105 niftis/1105/T1*gz niftis/1105/high*gz Segmentations/1105_*left.nii.gz Segmentations/1105_*right.nii.gz
...
At the moment you need to either download a copy of ASHS from gitlab as shown above or use the one in ``/opt/ashs-git/``:
.. code-block:: bash
export ASHS_ROOT=/opt/ashs-git
Note that unless Torque support is added to ASHS natively, you need at least version *0.1.0-mpib1* from the legacy branch or *1.0.0-mpib0* from the fastashs branch.
With all the files in place you can run ``ashs_train.sh`` like this:
.. code-block:: bash
module load ashs
$ASHS_ROOT/bin/ashs_train.sh -D manifest.txt -L labels.txt -w atlas_wdir
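Training touches every subject in the manifest at once, so on the Tardis it
makes sense to wrap this call in a job script as well, analogous to the
segmentation example above. A sketch (the file name :file:`ashs_train.pbs` and
the resource request are assumptions):
.. code-block:: bash
[krause@master ~] qsub -l nodes=1:ppn=8,mem=10gb ashs_train.pbs
with :file:`ashs_train.pbs` containing:
.. code-block:: bash
module load ashs
# Torque jobs start in $HOME; change back to the submission directory first
cd $PBS_O_WORKDIR
$ASHS_ROOT/bin/ashs_train.sh -D manifest.txt -L labels.txt -w atlas_wdir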
.. _ASHS: https://sites.google.com/site/hipposubfields/