recommend cpus-per-task instead of ntasks (#170)
sellth authored Aug 16, 2024
1 parent 2c43287 commit ce57072
Showing 5 changed files with 13 additions and 13 deletions.
8 changes: 4 additions & 4 deletions bih-cluster/docs/hpc-tutorial/episode-0.md
@@ -28,11 +28,11 @@ While file paths are highlighted like this: `/data/cephfs-1/work/projects/cubit/
## Instant Gratification

After connecting to the cluster, you are located on a login node.
To get to your first compute node, type `srun --time 7-00 --mem=8G --ntasks=8 --pty bash -i`, which will launch an interactive Bash session on a free remote node running for up to 7 days, enabling you to use 8 cores and 8 GB of memory. Typing `exit` will bring you back to the login node.
To get to your first compute node, type `srun --time 7-00 --mem=8G --cpus-per-task=8 --pty bash -i`, which will launch an interactive Bash session on a free remote node running for up to 7 days, enabling you to use 8 cores and 8 GB of memory. Typing `exit` will bring you back to the login node.

```terminal
$ srun -p long --time 7-00 --mem=8G --ntasks=8 --pty bash -i
med0107 $ exit
hpc-login-1$ srun -p long --time 7-00 --mem=8G --cpus-per-task=8 --pty bash -i
hpc-cpu-1$ exit
$
```

@@ -45,7 +45,7 @@ In preparation for our first steps tutorial series, we would like you to install
In general, users on the cluster will manage their own software with the help of conda.
If you haven't done so already, please [follow the instructions on installing conda](../best-practice/software-installation-with-conda.md) first.
The only premise is that you are able to [log into the cluster](../connecting/advanced-ssh/linux.md).
Also make sure that you are logged in to a compute node using `srun -p medium --time 1-00 --mem=4G --ntasks=1 --pty bash -i`.
Also make sure that you are logged in to a compute node using `srun -p medium --time 1-00 --mem=4G --cpus-per-task=1 --pty bash -i`.

Now we will create a new environment, so as to not interfere
with your current or planned software stack, and install into it all the
2 changes: 1 addition & 1 deletion bih-cluster/docs/hpc-tutorial/episode-1.md
@@ -14,7 +14,7 @@ The premise is that you have the tools installed as described in [Episode 0](epi
are on a compute node. As a reminder, the command to access a compute node with the required resources is

```
$ srun --time 7-00 --mem=8G --ntasks=8 --pty bash -i
$ srun --time 7-00 --mem=8G --cpus-per-task=8 --pty bash -i
```

## Tutorial Input Files
8 changes: 4 additions & 4 deletions bih-cluster/docs/hpc-tutorial/episode-2.md
@@ -46,8 +46,8 @@ The content of the file:
# Set the file to write the stdout and stderr to (if -e is not set; -o or --output).
#SBATCH --output=logs/%x-%j.log

# Set the number of cores (-n or --ntasks).
#SBATCH --ntasks=8
# Set the number of cores (-c or --cpus-per-task).
#SBATCH --cpus-per-task=8

# Force allocation of the eight cores on ONE node.
#SBATCH --nodes=1
@@ -100,8 +100,8 @@ Your file should look something like this:
# Set the file to write the stdout and stderr to (if -e is not set; -o or --output).
#SBATCH --output=logs/%x-%j.log

# Set the number of cores (-n or --ntasks).
#SBATCH --ntasks=8
# Set the number of cores (-c or --cpus-per-task).
#SBATCH --cpus-per-task=8

# Force allocation of the eight cores on ONE node.
#SBATCH --nodes=1
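
For orientation, such a script is typically submitted from a login node with `sbatch` and monitored with `squeue`. A minimal sketch, where the script name `submit_job.sh` and the job ID are placeholders:

```terminal
hpc-login-1$ sbatch submit_job.sh   # hypothetical script name
Submitted batch job 1234567
hpc-login-1$ squeue --me
```
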
4 changes: 2 additions & 2 deletions bih-cluster/docs/hpc-tutorial/episode-4.md
@@ -43,8 +43,8 @@ The `Snakefile` is already known to you but let me explain the wrapper script `s
# Set the file to write the stdout and stderr to (if -e is not set; -o or --output).
#SBATCH --output=logs/%x-%j.log

# Set the number of cores (-n or --ntasks).
#SBATCH --ntasks=2
# Set the number of cores (-c or --cpus-per-task).
#SBATCH --cpus-per-task=2

# Force allocation of the two cores on ONE node.
#SBATCH --nodes=1
4 changes: 2 additions & 2 deletions bih-cluster/docs/slurm/commands-sbatch.md
@@ -27,8 +27,8 @@ The command will create a batch job and add it to the queue to be executed at a
This is only given here as an important argument because the maximum number of nodes allocatable in any partition but `mpi` is set to one (1).
This is done because there are few users on the BIH HPC who actually use multi-node parallelism.
Rather, most users will use multi-core parallelism and might forget to limit the number of nodes, which causes inefficient allocation of resources.
- `--ntasks`
-- This corresponds to the number of threads allocated to each node.
- `--cpus-per-task`
-- This corresponds to the number of CPU cores allocated to each task.
- `--mem`
-- The memory to allocate for the job.
As you can define minimal and maximal numbers of tasks/CPUs/cores, you could also specify `--mem-per-cpu` to get more flexible scheduling of your job.
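
Putting these options together, a job-script header along the recommended lines might look like the following sketch; the job name, time limit, and resource values are illustrative placeholders rather than recommendations for any particular workload:

```
#!/bin/bash
#
# Illustrative sketch only; adjust the values to your actual workload.
#SBATCH --job-name=example-job
#SBATCH --output=logs/%x-%j.log
#SBATCH --nodes=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=8G            # or --mem-per-cpu=... for per-core memory
#SBATCH --time=1-00

# Placeholder for the actual command; it should use the allocated cores.
my_pipeline --threads 8
```
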
