Fix wrong parameter: --memory > --mem
mkuhring authored Jul 25, 2024
1 parent cddecc7 commit 5b0305c
Showing 2 changed files with 4 additions and 4 deletions.
bih-cluster/docs/how-to/connect/high-memory.md (4 changes: 2 additions & 2 deletions)
@@ -23,14 +23,14 @@ highmem up 14-00:00:0 3 idle med040[1-4]
To connect to one of them, simply allocate more than 200GB of RAM in your job.

```
-hpc-login-1:~$ srun --pty --memory=300GB bash -i
+hpc-login-1:~$ srun --pty --mem=300GB bash -i
med0401:~$
```

You can also pick one of the hostnames:

```
-hpc-login-1:~$ srun --pty --memory=300GB --nodelist=med0403 bash -i
+hpc-login-1:~$ srun --pty --mem=300GB --nodelist=med0403 bash -i
med0403:~$
```
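
For batch jobs, the same allocation can be requested inside the job script. A minimal sketch (the job name, script file name, and final command are placeholders, not part of the original documentation):

```
#!/bin/bash
# Hypothetical batch script: requesting more than 200GB of RAM means
# Slurm will schedule the job onto one of the high-memory nodes.
#SBATCH --job-name=highmem-example
#SBATCH --mem=300G

# Replace with your actual memory-hungry command.
hostname
```

You would then submit it as usual, e.g. with `sbatch highmem-example.sh`.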

bih-cluster/docs/slurm/memory-allocation.md (4 changes: 2 additions & 2 deletions)
@@ -118,13 +118,13 @@ In the BIH HPC context, the following is recommended to:

!!! note "Memory Allocation in Slurm Summary"

-- most users will simply use `--memory=<size>` (e.g., `<size>=3G`) to allocate memory per node
+- most users will simply use `--mem=<size>` (e.g., `<size>=3G`) to allocate memory per node
- both interactive `srun` and batch `sbatch` jobs are governed by Slurm memory allocation
- the total memory used by all processes started by your job may not exceed the job reservation
- please don't over-allocate memory, see "Memory Accounting in Slurm" below for details

Our Slurm configuration uses Linux cgroups to enforce a maximum amount of resident memory.
-You simply specify it using `--memory=<size>` in your `srun` and `sbatch` command.
+You simply specify it using `--mem=<size>` in your `srun` and `sbatch` command.

In the (rare) case that you provide a more flexible number of threads (Slurm tasks) or GPUs, you could also look into `--mem-per-cpu` and `--mem-per-gpu`.
The [official Slurm sbatch](https://slurm.schedmd.com/sbatch.html) manual is quite helpful, as is `man sbatch` on the cluster command line.
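
As an illustration (the CPU count and per-CPU size below are hypothetical examples, not site recommendations), allocating memory per CPU rather than per node could look like this:

```
hpc-login-1:~$ srun --pty --cpus-per-task=4 --mem-per-cpu=2G bash -i
```

In this sketch Slurm would reserve 4 x 2GB = 8GB on the node, so the reservation scales with the number of CPUs instead of being a fixed per-node amount.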
