From ce570725a10aa11a91fe1506c598c018a647de1e Mon Sep 17 00:00:00 2001
From: Thomas
Date: Fri, 16 Aug 2024 12:14:16 +0200
Subject: [PATCH] recommend cpus-per-task instead of ntasks (#170)

---
 bih-cluster/docs/hpc-tutorial/episode-0.md | 8 ++++----
 bih-cluster/docs/hpc-tutorial/episode-1.md | 2 +-
 bih-cluster/docs/hpc-tutorial/episode-2.md | 8 ++++----
 bih-cluster/docs/hpc-tutorial/episode-4.md | 4 ++--
 bih-cluster/docs/slurm/commands-sbatch.md  | 4 ++--
 5 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/bih-cluster/docs/hpc-tutorial/episode-0.md b/bih-cluster/docs/hpc-tutorial/episode-0.md
index 96dcfdf05..4045a77ce 100644
--- a/bih-cluster/docs/hpc-tutorial/episode-0.md
+++ b/bih-cluster/docs/hpc-tutorial/episode-0.md
@@ -28,11 +28,11 @@ While file paths are highlighted like this: `/data/cephfs-1/work/projects/cubit/
 
 ## Instant Gratification
 After connecting to the cluster, you are located on a login node.
-To get to your first compute node, type `srun --time 7-00 --mem=8G --ntasks=8 --pty bash -i` which will launch an interactive Bash session on a free remote node running up to 7 days, enabling you to use 8 cores and 8 Gb memory. Typing `exit` will you bring back to the login node.
+To get to your first compute node, type `srun --time 7-00 --mem=8G --cpus-per-task=8 --pty bash -i` which will launch an interactive Bash session on a free remote node running up to 7 days, enabling you to use 8 cores and 8 Gb memory. Typing `exit` will you bring back to the login node.
 
 ```terminal
-$ srun -p long --time 7-00 --mem=8G --ntasks=8 --pty bash -i
-med0107 $ exit
+hpc-login-1$ srun -p long --time 7-00 --mem=8G --cpus-per-task=8 --pty bash -i
+hpc-cpu-1$ exit
 $
 ```
 
@@ -45,7 +45,7 @@ In preparation for our first steps tutorial series, we would like you to install
 In general the users on the cluster will manage their own software with the help of conda.
 If you haven't done so so far, please [follow the instructions in installing conda](../best-practice/software-installation-with-conda.md) first.
 The only premise is that you are able to [log into the cluster](../connecting/advanced-ssh/linux.md).
-Make also sure that you are logged in to a computation node using `srun -p medium --time 1-00 --mem=4G --ntasks=1 --pty bash -i`.
+Make also sure that you are logged in to a computation node using `srun -p medium --time 1-00 --mem=4G --cpus-per-task=1 --pty bash -i`.
 
 Now we will create a new environment, so as to not interfere with your current
 or planned software stack, and install into it all the
diff --git a/bih-cluster/docs/hpc-tutorial/episode-1.md b/bih-cluster/docs/hpc-tutorial/episode-1.md
index abc55d669..b840872c0 100644
--- a/bih-cluster/docs/hpc-tutorial/episode-1.md
+++ b/bih-cluster/docs/hpc-tutorial/episode-1.md
@@ -14,7 +14,7 @@ The premise is that you have the tools installed as described in [Episode 0](epi
 are on a compute node. As a reminder, the command to access a compute node with the required resources is
 
 ```
-$ srun --time 7-00 --mem=8G --ntasks=8 --pty bash -i
+$ srun --time 7-00 --mem=8G --cpus-per-task=8 --pty bash -i
 ```
 
 ## Tutorial Input Files
diff --git a/bih-cluster/docs/hpc-tutorial/episode-2.md b/bih-cluster/docs/hpc-tutorial/episode-2.md
index a21b4964e..8d13600ae 100644
--- a/bih-cluster/docs/hpc-tutorial/episode-2.md
+++ b/bih-cluster/docs/hpc-tutorial/episode-2.md
@@ -46,8 +46,8 @@ The content of the file:
 # Set the file to write the stdout and stderr to (if -e is not set; -o or --output).
 #SBATCH --output=logs/%x-%j.log
 
-# Set the number of cores (-n or --ntasks).
-#SBATCH --ntasks=8
+# Set the number of cores (-c or --cpus-per-task).
+#SBATCH --cpus-per-task=8
 
 # Force allocation of the two cores on ONE node.
 #SBATCH --nodes=1
@@ -100,8 +100,8 @@ Your file should look something like this:
 # Set the file to write the stdout and stderr to (if -e is not set; -o or --output).
 #SBATCH --output=logs/%x-%j.log
 
-# Set the number of cores (-n or --ntasks).
-#SBATCH --ntasks=8
+# Set the number of cores (-c or --cpus-per-task).
+#SBATCH --cpus-per-task=8
 
 # Force allocation of the two cores on ONE node.
 #SBATCH --nodes=1
diff --git a/bih-cluster/docs/hpc-tutorial/episode-4.md b/bih-cluster/docs/hpc-tutorial/episode-4.md
index a132ef1ec..72508505e 100644
--- a/bih-cluster/docs/hpc-tutorial/episode-4.md
+++ b/bih-cluster/docs/hpc-tutorial/episode-4.md
@@ -43,8 +43,8 @@ The `Snakefile` is already known to you but let me explain the wrapper script `s
 # Set the file to write the stdout and stderr to (if -e is not set; -o or --output).
 #SBATCH --output=logs/%x-%j.log
 
-# Set the number of cores (-n or --ntasks).
-#SBATCH --ntasks=2
+# Set the number of cores (-c or --cpus-per-task).
+#SBATCH --cpus-per-task=2
 
 # Force allocation of the two cores on ONE node.
 #SBATCH --nodes=1
diff --git a/bih-cluster/docs/slurm/commands-sbatch.md b/bih-cluster/docs/slurm/commands-sbatch.md
index a48242628..1406f786a 100644
--- a/bih-cluster/docs/slurm/commands-sbatch.md
+++ b/bih-cluster/docs/slurm/commands-sbatch.md
@@ -27,8 +27,8 @@ The command will create a batch job and add it to the queue to be executed at a
     This is only given here as an important argument as the maximum number of nodes allocatable to any partition but `mpi` is set to one (1).
     This is done as there are few users on the BIH HPC that actually use multi-node paralleilsm.
     Rather, most users will use multi-core parallelism and might forget to limit the number of nodes which causes inefficient allocation of resources.
-- `--ntasks`
-    -- This corresponds to the number of threads allocated to each node.
+- `--cpus-per-task`
+    -- This corresponds to the number of CPU cores allocated to each task.
 - `--mem`
     -- The memory to allocate for the job.
     As you can define minimal and maximal number of tasks/CPUs/cores, you could also specify `--mem-per-cpu` and get more flexible scheduling of your job.
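
For reference, a minimal sketch of a batch script header that follows the recommendation in this patch. The job name, resource values, and `my_tool` are illustrative placeholders, not taken from the BIH tutorial files; the `#SBATCH` flags themselves are standard Slurm options.

```bash
#!/bin/bash
# Give the job a recognizable name and a log file (placeholder values).
#SBATCH --job-name=example-job
#SBATCH --output=logs/%x-%j.log

# Request 8 CPU cores for a single task rather than 8 separate tasks.
#SBATCH --cpus-per-task=8

# Keep all cores on one node and set memory and runtime limits.
#SBATCH --nodes=1
#SBATCH --mem=8G
#SBATCH --time=1-00

# Slurm exports the granted core count as SLURM_CPUS_PER_TASK;
# pass it on to a multi-threaded program (`my_tool` is a placeholder).
my_tool --threads "$SLURM_CPUS_PER_TASK"
```

Using `--cpus-per-task` for multi-threaded programs keeps all requested cores within one task on one node, whereas `--ntasks` is intended for launching multiple independent (e.g. MPI) processes.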