Introduction
============

This manual describes what the TARDIS is and provides basic examples and useful code snippets for various software packages that can be used in LIP's cluster environment. We try to update the documentation every time we successfully test a new use case. It is meant as a guideline, and if you need help at any point we will be glad to assist you. We appreciate all of your feedback, so please contact us: `grid-admin@mpib-berlin.mpg.de <mailto:grid-admin@mpib-berlin.mpg.de>`_

Specs
-----

Tardis: **T**\ ardis, **A** **R**\ apid **D**\ istributed **I**\ nformation **S**\ ystem

.. image:: ../img/tardis_real.png
   :width: 60%

Some technical facts:

+ **832** Intel® Xeon® E5-2670 CPU cores (no HT) inside 48 Dell m6x0 blade servers
+ **R**\ :sub:`max` = 9.9 TFlops, **R**\ :sub:`peak` = 14.8 TFlops
+ **10.6 TB** total amount of memory
+ **747 TB** of attached BeeGFS storage
+ **10 Gbit/s** fully-connected Ethernet

Workflows
---------

**Sequential**

The simplest and most typical processing workflow consists of three steps (a minimal sketch is shown at the end of this section):

+ data download from the file servers
+ sequential data processing
+ result upload to the file servers

.. image:: ../img/WF-seq.svg
   :width: 80%

Obviously this is not very efficient, and for data processing on the Tardis we want to do better. This approach also requires desktop machines in a shared working environment to be always on, which produces excessive noise and heat and blocks machines that colleagues might want to use as well. But most importantly, it is **slow**.

**Parallel**

With the Tardis you can log in from your laptop or workstation via SSH (see: :doc:`login`) to a single head node called ``tardis``. On that node users can prepare and test their code and analyses and then submit them to a queue (see: :ref:`torque`); a sketch of a job submission follows below. Jobs will then **eventually** be dispatched to one of the compute nodes, where they receive a guaranteed set of processing resources. Afterwards users can collect the results and copy them back to the file servers.

.. image:: ../img/WF-par.svg
   :width: 80%
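To make the sequential workflow concrete, here is a minimal Python sketch of its three steps. The file server host name, the remote paths, and the pass-through "analysis" are hypothetical placeholders, not actual LIP infrastructure:

.. code-block:: python

   #!/usr/bin/env python3
   """Minimal sketch of the sequential workflow: download, process, upload."""
   import subprocess

   FILESERVER = "fileserver.example.org"    # hypothetical file server
   REMOTE_IN = "/data/project/input.csv"    # hypothetical input path
   REMOTE_OUT = "/data/project/results/"    # hypothetical output path

   # Step 1: download the data from the file server.
   subprocess.run(["scp", f"{FILESERVER}:{REMOTE_IN}", "input.csv"], check=True)

   # Step 2: process the data sequentially on the local machine.
   with open("input.csv") as src, open("result.csv", "w") as dst:
       for line in src:
           dst.write(line)  # placeholder "analysis": copy the data unchanged

   # Step 3: upload the result back to the file server.
   subprocess.run(["scp", "result.csv", f"{FILESERVER}:{REMOTE_OUT}"], check=True)

Every step here runs on a single machine and blocks until it finishes, which is exactly why this pattern does not scale.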
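For the parallel workflow, the following is a rough sketch of generating and submitting a Torque job from the head node. The job name, the requested resources, and the ``my_analysis`` command are illustrative assumptions, not a recommended configuration; see :ref:`torque` for the actual queue documentation:

.. code-block:: python

   #!/usr/bin/env python3
   """Sketch: write a Torque job script and submit it with qsub."""
   import subprocess

   # A minimal Torque job script; all values below are illustrative.
   job_script = """#!/bin/bash
   #PBS -N example_job
   #PBS -l nodes=1:ppn=4
   #PBS -l walltime=01:00:00

   # Run from the directory the job was submitted from.
   cd "$PBS_O_WORKDIR"
   ./my_analysis input.csv   # hypothetical analysis command
   """

   with open("example_job.pbs", "w") as f:
       f.write(job_script)

   # qsub hands the script to the queue and prints the assigned job ID.
   result = subprocess.run(["qsub", "example_job.pbs"],
                           capture_output=True, text=True, check=True)
   print("Submitted job:", result.stdout.strip())

Once submitted, the state of the job can be followed with ``qstat`` while it waits in the queue and runs on a compute node.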