
Commit

Hide Slurm/srun example, because it's still non-working - needs more testing with real Slurm cases
HenrikBengtsson committed Jun 30, 2024
1 parent 64db244 commit 8498184
Showing 3 changed files with 32 additions and 68 deletions.
32 changes: 32 additions & 0 deletions incl/makeClusterPSOCK,Slurm.R
@@ -0,0 +1,32 @@
## EXAMPLE: 'Slurm' is a high-performance compute (HPC) job scheduler
## where one can request compute resources on multiple nodes, each
## running multiple cores. Here is an example:
##
## ## Request 18 slots on one or more hosts
## sbatch --ntasks=18 script.sh
##
## This will launch the job script 'script.sh' on one host, while
## having reserved a total of 18 slots (CPU cores) on this host and
## possibly other hosts.
##
## This example shows how to use the Slurm command 'srun' to launch
## 18 parallel workers from R, which is assumed to have been launched
## by 'script.sh'. The `srun` command does not take a hostname; instead
## it will automatically launch the work on the next allotted host.
## The number of workers we want to launch can be inferred from
## length(availableWorkers()). We will use a dummy hostname for the
## workers to avoid them being interpreted as localhost workers.
##
## The parallel workers are launched as:
## 'srun' --ntasks=1 ...
## 'srun' --ntasks=1 ...
## ...
## 'srun' --ntasks=1 ...
workers <- sub(".*", "<dummy>", availableWorkers())
rshcmd_fcn <- function(rshopts, worker) paste(shQuote("srun"), rshopts)
cl <- makeClusterPSOCK(
workers,
rshcmd = rshcmd_fcn, rshopts = c("--ntasks=1"),
rscript_sh = c("auto", "none"),
dryrun = TRUE, quiet = TRUE
)
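As background, the launch pattern in the comments above amounts to issuing `srun --ntasks=1` once per requested worker. A rough shell sketch of that pattern follows; this is a dry-run illustration only, and the loop, the variable names, and the elided `Rscript` arguments are assumptions for illustration, not part of the package:

```shell
#!/bin/sh
# Sketch of the per-worker launch pattern: one 'srun --ntasks=1'
# invocation per requested task.  Inside a job, Slurm exports
# SLURM_NTASKS with the number of requested tasks; default to 18
# here to mirror the 'sbatch --ntasks=18' example above.
ntasks="${SLURM_NTASKS:-18}"
i=1
while [ "$i" -le "$ntasks" ]; do
  # Dry run: print the command instead of launching a worker.
  echo "worker ${i}: srun --ntasks=1 Rscript ..."
  i=$((i + 1))
done
```

In a real job, each `srun --ntasks=1` call consumes one of the allotted slots, which is why Slurm places successive workers on the next host with free slots without being given a hostname.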
34 changes: 0 additions & 34 deletions incl/makeClusterPSOCK.R
@@ -199,40 +199,6 @@ cl <- makeClusterPSOCK(



## EXAMPLE: 'Slurm' is a high-performance compute (HPC) job scheduler
## where one can request compute resources on multiple nodes, each
## running multiple cores. Here is an example:
##
## ## Request 18 slots on one or more hosts
## sbatch --ntasks=18 script.sh
##
## This will launch the job script 'script.sh' on one host, while
## having reserved a total of 18 slots (CPU cores) on this host and
## possibly other hosts.
##
## This example shows how to use the Slurm command 'srun' to launch
## 18 parallel workers from R, which is assumed to have been launched
## by 'script.sh'. The `srun` command does not take a hostname; instead
## it will automatically launch the work on the next allotted host.
## The number of workers we want to launch can be inferred from
## length(availableWorkers()). We will use a dummy hostname for the
## workers to avoid them being interpreted as localhost workers.
##
## The parallel workers are launched as:
## 'srun' --ntasks=1 ...
## 'srun' --ntasks=1 ...
## ...
## 'srun' --ntasks=1 ...
workers <- sub(".*", "<dummy>", availableWorkers())
rshcmd_fcn <- function(rshopts, worker) paste(shQuote("srun"), rshopts)
cl <- makeClusterPSOCK(
workers,
rshcmd = rshcmd_fcn, rshopts = c("--ntasks=1"),
rscript_sh = c("auto", "none"),
dryrun = TRUE, quiet = TRUE
)


## ---------------------------------------------------------------
## Section 4. Setting up remote parallel workers in the cloud
## ---------------------------------------------------------------
34 changes: 0 additions & 34 deletions man/makeClusterPSOCK.Rd

Some generated files are not rendered by default.
