From 971f61d7bc117fc58375557c7584c297e96cda3d Mon Sep 17 00:00:00 2001
From: Thomas
Date: Thu, 29 Aug 2024 10:34:35 +0200
Subject: [PATCH] fix gpu command list (#175)

---
 bih-cluster/docs/how-to/connect/gpu-nodes.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/bih-cluster/docs/how-to/connect/gpu-nodes.md b/bih-cluster/docs/how-to/connect/gpu-nodes.md
index f2cc3d3bd..23d9c36ec 100644
--- a/bih-cluster/docs/how-to/connect/gpu-nodes.md
+++ b/bih-cluster/docs/how-to/connect/gpu-nodes.md
@@ -4,8 +4,9 @@ The cluster has seven nodes with four Tesla V100 GPUs each: `hpc-gpu-{1..7}` and
 
 Connecting to a node with GPUs is easy.
 You request one or more GPU cores by adding a generic resources flag to your Slurm job submission via `srun` or `sbatch`.
+
 - `--gres=gpu:tesla:COUNT` will request NVIDIA V100 cores.
-- `--gres=gpu:tesla:COUNT` will request NVIDIA A40 cores.
+- `--gres=gpu:a40:COUNT` will request NVIDIA A40 cores.
 - `--gres=gpu:COUNT` will request any available GPU cores.
 
 Your job will be automatically placed in the Slurm `gpu` partition and allocated a number of `COUNT` GPUs.
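
For context, a minimal sketch of how the corrected flags from this patch are used on the command line; the GPU counts and the interactive `bash` session are illustrative assumptions, not part of the patch:

    # interactive shell with one A40 GPU (count chosen for illustration)
    srun --gres=gpu:a40:1 --pty bash -i

    # inside a batch script, request two V100 GPUs instead
    #SBATCH --gres=gpu:tesla:2

Both forms rely on the gres names fixed above (`tesla` for V100, `a40` for A40); the generic `--gres=gpu:COUNT` form works the same way without naming a GPU type.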