From 5a0c34d4c00d21e4b7746d2f0a3bbad766d7c24e Mon Sep 17 00:00:00 2001
From: zjwegert <60646897+zjwegert@users.noreply.github.com>
Date: Fri, 5 Jul 2024 11:19:27 +1000
Subject: [PATCH] update

---
 scripts/Benchmarks/hpc-enviroments-setonix/README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/scripts/Benchmarks/hpc-enviroments-setonix/README.md b/scripts/Benchmarks/hpc-enviroments-setonix/README.md
index fa1a890e..0220d630 100644
--- a/scripts/Benchmarks/hpc-enviroments-setonix/README.md
+++ b/scripts/Benchmarks/hpc-enviroments-setonix/README.md
@@ -40,6 +40,7 @@ where `SLURM_JOB_NUM_NODES` and `SLURM_NTASKS` are set via
 #SBATCH --ntasks-per-node=128
 #SBATCH --exclusive
 ```
+Note that `SLURM_JOB_NUM_NODES` is automatically populated at job runtime.
 Here we exclusively use full nodes.
 There is another example of shared nodes in `TestEnvironment/job.sh`.
 See this [link](https://pawsey.atlassian.net/wiki/spaces/US/pages/51927426/Example+Slurm+Batch+Scripts+for+Setonix+on+CPU+Compute+Nodes#ExampleSlurmBatchScriptsforSetonixonCPUComputeNodes-Exclusiveaccesstothenode.1) for further info.
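For context, a minimal exclusive-node job script using the directives and variables referenced in this patch might look like the following. This is a sketch only: the script name, node count, and the `srun` target (`./my_app`) are placeholders, not the repository's actual `job.sh`.

```shell
#!/bin/bash
#SBATCH --ntasks-per-node=128
#SBATCH --exclusive
#SBATCH --nodes=2          # placeholder node count

# SLURM_JOB_NUM_NODES and SLURM_NTASKS are populated by Slurm at job
# runtime; they do not need to be set or exported manually.
echo "Running on ${SLURM_JOB_NUM_NODES} nodes with ${SLURM_NTASKS} tasks"

srun ./my_app   # hypothetical executable
```

Because the `SLURM_*` variables only exist inside a running Slurm allocation, the script is meaningful only when submitted via `sbatch`, not when executed directly.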