diff --git a/bih-cluster/docs/how-to/connect/high-memory.md b/bih-cluster/docs/how-to/connect/high-memory.md
index 26ed4f7e3..8ed4465ff 100644
--- a/bih-cluster/docs/how-to/connect/high-memory.md
+++ b/bih-cluster/docs/how-to/connect/high-memory.md
@@ -23,14 +23,14 @@ highmem up 14-00:00:0 3 idle med040[1-4]
 To connect to one of them, simply allocate more than 200GB of RAM in your job.
 
 ```
-hpc-login-1:~$ srun --pty --memory=300GB bash -i
+hpc-login-1:~$ srun --pty --mem=300GB bash -i
 med0401:~$
 ```
 
 You can also pick one of the hostnames:
 
 ```
-hpc-login-1:~$ srun --pty --memory=300GB --nodelist=med0403 bash -i
+hpc-login-1:~$ srun --pty --mem=300GB --nodelist=med0403 bash -i
 med0403:~$
 ```
 
diff --git a/bih-cluster/docs/slurm/memory-allocation.md b/bih-cluster/docs/slurm/memory-allocation.md
index 42a9a1c35..ae1f8a221 100644
--- a/bih-cluster/docs/slurm/memory-allocation.md
+++ b/bih-cluster/docs/slurm/memory-allocation.md
@@ -118,13 +118,13 @@ In the BIH HPC context, the following is recommended to:
 
 !!! note "Memory Allocation in Slurm Summary"
 
-    - most user will simply use `--memory=` (e.g., `=3G`) to allocate memory per node
+    - most users will simply use `--mem=` (e.g., `=3G`) to allocate memory per node
     - both interactive `srun` and batch `sbatch` jobs are governed by Slurm memory allocation
     - the sum of all memory of all processes started by your job may not exceed the job reservation.
     - please don't over-allocate memory, see "Memory Accounting in Slurm" below for details
 
 Our Slurm configuration uses Linux cgroups to enforce a maximum amount of resident memory.
-You simply specify it using `--memory=` in your `srun` and `sbatch` command.
+You simply specify it using `--mem=` in your `srun` and `sbatch` commands.
 In the (rare) case that you provide more flexible number of threads (Slurm tasks) or GPUs, you could also look into `--mem-per-cpu` and `--mem-per-gpu`.
 The [official Slurm sbatch](https://slurm.schedmd.com/sbatch.html) manual is quite helpful, as is `man sbatch` on the cluster command line.
 
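
As an illustration of the renamed option, the sketch below shows how `--mem=` would appear in a batch script. It is a minimal, hypothetical example: the job name, resource values, and `my_program` are placeholders and are not taken from the docs changed above.

```
#!/bin/bash
#
# Hypothetical sbatch script illustrating the renamed option.
# `--mem=` reserves memory per node for the whole job, as recommended in
# the summary note above; Slurm enforces the limit via Linux cgroups.
#SBATCH --job-name=mem-example
#SBATCH --mem=3G
#SBATCH --cpus-per-task=2
#SBATCH --time=01:00:00

# `my_program` is a placeholder; the combined memory of all processes
# started here must stay within the 3G reservation.
srun my_program --threads "$SLURM_CPUS_PER_TASK"
```

Submitting the script with `sbatch` queues the job; where memory should scale with tasks or GPUs instead, `--mem-per-cpu=` or `--mem-per-gpu=` (mentioned above) would take the place of `--mem=`.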