---
output: github_document
---

```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```

# Connect to GitLab

If you haven't used Git at the MPI yet, you should generate SSH keys and upload the public key to GitLab (see the [GitLab docs](https://docs.gitlab.com/ee/gitlab-basics/create-your-ssh-keys.html)).

(Skip this step if you already have SSH keys.) Open a terminal and run:

```{bash, eval=FALSE}
ssh-keygen -t ed25519 -C "peikert@mpib-berlin.mpg.de"
```

to create the new keys. Open the newly generated public key (or your existing one), copy it, and add it here: https://git.mpib-berlin.mpg.de/profile/keys.

# Connect to Tardis

For the cluster you have to set up two additional protocols, namely VPN and SSH; these are enough for public Git projects. For private GitLab projects you need yet another mechanism (GitLab tokens).

## Connect to VPN

VPN gives you access to MPI internal resources (i.e., internal IP addresses) like printers or TARDIS, the beloved cluster. Follow the steps in the wiki: https://wiki.mpib-berlin.mpg.de/books/it/page/vpn

## Connect to Tardis with SSH

To connect to Tardis you have to copy your SSH keys to it so that it recognizes you without a password.

### Using ssh-copy-id

The `ssh-copy-id` command does this for you:

```{bash, eval=FALSE}
ssh-copy-id peikert@tardis.mpib-berlin.mpg.de
```

Test the connection:

```{bash, eval=FALSE}
ssh peikert@tardis.mpib-berlin.mpg.de
```

If you can log in without a password, everything works. If not, ask Michael Krause.

### If ssh-copy-id does not work

However, `ssh-copy-id` is not available everywhere. You can also log in to Tardis and paste the public key (usually `~/.ssh/id_rsa.pub`) as a new line in the file `~/.ssh/authorized_keys` (if the file does not exist, don't worry, just create it):

```{bash, eval=FALSE}
ssh peikert@tardis.mpib-berlin.mpg.de
# this command edits/creates the file
# paste the content of ~/.ssh/id_rsa.pub on its own line
nano ~/.ssh/authorized_keys
```

Test the connection:

```{bash, eval=FALSE}
ssh peikert@tardis.mpib-berlin.mpg.de
```

If you can log in without a password, everything works. If not, ask Michael Krause.

## Your first future, brought to you by tardis

Remember to activate the VPN. You should now be able to connect to Tardis:

```{r}
tardis <- parallelly::makeClusterPSOCK(
  "tardis.mpib-berlin.mpg.de",
  port = "random",
  user = "peikert",
  # this R version does not matter;
  # it only requires that future.batchtools is there
  rscript = c("/opt/software/R/4.0.3/bin/Rscript"),
  homogeneous = TRUE
)
```

And you can use futures to evaluate stuff on Tardis:

```{r}
library(future)
# set option to ignore a warning
options(future.rng.onMisuse = "ignore")
plan(tweak(cluster, workers = tardis))
value(future(4 * 3)) # 4*3 is calculated on the tardis login node
```

While this works, you are not yet using the full power of Tardis, because everything is evaluated on the login node (and you should never do that for compute-intensive tasks). You need to use Slurm to submit jobs and request resources:

```{r}
library(future.batchtools)
plan(list(
  tweak(cluster, workers = tardis),
  tweak(batchtools_slurm,
        workers = 1,
        resources = list(ncpus = 1,
                         memory = "200m",
                         walltime = 600,
                         partition = c("quick")))
))
```

Then you use a second layer of futures to evaluate things on Tardis' workers:

```{r, eval=FALSE}
local <- 2 * 4
login_node <- value(future(2 * 4))
worker <- value(future(value(future(2 * 4))))
```

It may be that this does not work for you, because the package and R versions on Tardis, its workers, and your computer have to match. To solve this we use Docker and Singularity.
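You can check for such a mismatch yourself before moving on. This is a minimal sketch, assuming the nested plan from above is still active, that compares your local R version with the one on a Slurm worker:

```{r, eval=FALSE}
# compare R versions between the local machine and a Slurm worker
# (assumes the nested plan(list(...)) from above is active)
local_r  <- as.character(getRversion())
worker_r <- value(future(value(future(as.character(getRversion())))))
if (local_r != worker_r) {
  message("R version mismatch: local ", local_r, " vs. worker ", worker_r)
}
```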
# Align your R with Tardis

## Docker on GitLab

Docker provides a container with an operating system and software that is portable across machines, which is really handy if you want to ensure the same software everywhere. The great thing is that you do not even need Docker installed for this to work: Tardis just needs the container image, and you do not have to build it yourself. The image is built by GitLab from the `Dockerfile`, which you will create in a minute.

To align Tardis with the software you use yourself, first update all your packages, then edit the [Dockerfile provided in this repo](https://git.mpib-berlin.mpg.de/peikert/a-future-for-tardis/-/blob/master/Dockerfile). Change the version number in line 1 to the R version you use, change the date to today, and add packages at will to the list. Commit the Dockerfile to Git/GitLab.

While you now have a Dockerfile, GitLab does not yet know that it should build the image for you. Just copy and commit the [`.gitlab-ci.yml` file provided in this repo](https://git.mpib-berlin.mpg.de/peikert/a-future-for-tardis/-/blob/master/.gitlab-ci.yml). When you have pushed it, you can see that GitLab is working on your repo's page, in the left-hand sidebar under "CI/CD". When it is finished, you will find the image under "Packages & Registries" -> "Docker Registry".

## Singularity on Tardis

If you have made the repository public, things are a little easier, because you do not need to authenticate to GitLab from Tardis to download the Docker image.

Log in on Tardis:

```{bash, eval=FALSE}
ssh peikert@tardis.mpib-berlin.mpg.de
```

```{bash, eval=FALSE}
# navigate to your project
cd project
# pull the image from GitLab
singularity pull docker://registry.git.mpib-berlin.mpg.de/peikert/a-future-for-tardis:latest
```

Now batchtools has to know that you want to use Singularity. For that, copy [`.batchtools.slurm.singularity.tmpl`](https://git.mpib-berlin.mpg.de/peikert/a-future-for-tardis/-/blob/master/.batchtools.slurm.singularity.tmpl) to Tardis and change the path to the Singularity image in this template.

You have to change the plan again to include the Slurm template:

```{r}
plan(list(
  tweak(cluster, workers = tardis),
  tweak(batchtools_slurm,
        workers = 1, # number of tasks that may run in parallel on tardis
        # the R/package versions of the singularity image have to match
        # the local machine, therefore check the Dockerfile for the R version
        template = "/home/mpib/peikert/.batchtools.slurm.singularity.tmpl",
        resources = list(ncpus = 1,
                         memory = "200m",
                         walltime = 600,
                         partition = c("quick")))
))
```

```{r}
local <- 2 * 4
login_node <- value(future(2 * 4))
worker <- value(future(value(future(2 * 4))))
```

For this workflow only `local` and `worker` need to be aligned. Notice that the login node runs Debian but the workers run Ubuntu, because the containers are based on Ubuntu.

```{r}
session <- function() with(sessionInfo(),
                           list(rversion = paste0(R.version$major, ".", R.version$minor),
                                os = running))
list(local = session(),
     login_node = value(future(session())),
     worker = value(future(value(future(session())))))
```
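When you are done, you can switch back to local evaluation and close the SSH connection to the login node. This is a minimal sketch, assuming the `tardis` cluster object created above:

```{r, eval=FALSE}
# evaluate futures locally again and shut down the connection to the login node
plan(sequential)
parallel::stopCluster(tardis)
```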