This profile configures Snakemake to run on the SLURM Workload Manager.
To deploy this profile, run

```bash
mkdir -p ~/.config/snakemake
cd ~/.config/snakemake
cookiecutter https://github.com/tdayris-perso/slurm.git
```

Then you can run Snakemake with

```bash
snakemake --profile slurm ...
```
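The cookiecutter generates a `config.yaml` in the profile directory that maps Snakemake command-line options to values. A minimal sketch might look like the following; the exact keys and values depend on the answers given to the template's prompts, so treat this as illustrative rather than the template's actual output:

```yaml
# ~/.config/snakemake/slurm/config.yaml (illustrative)
cluster: "slurm-submit.py"        # the submission script described below
cluster-status: "slurm-status.py" # lets Snakemake poll SLURM job state
jobs: 500                         # maximum number of concurrently submitted jobs
restart-times: 3                  # resubmit failed jobs up to 3 times
```

Every key corresponds to a regular Snakemake long option, so `jobs: 500` is equivalent to passing `--jobs 500` on the command line.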
There are two scripts that configure jobs for submission. The default, `slurm-submit.py`, passes the supplied parameters through untouched, making no attempt to adjust the job submission to comply with the cluster partition configuration. In contrast, `slurm-submit-advanced.py` will attempt to adjust memory, time, and number of tasks if these exceed the limits specified by the SLURM configuration. If possible, the number of tasks will be increased when the requested memory is larger than the available memory, defined as the requested number of tasks times the memory per CPU.
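That task adjustment can be sketched as follows. This is a minimal illustration of the idea described above, with hypothetical function and parameter names, not the actual code of `slurm-submit-advanced.py`:

```python
import math

def adjust_ntasks(mem_mb: int, ntasks: int, mem_per_cpu_mb: int) -> int:
    """Raise the task count until ntasks * mem_per_cpu covers the
    requested memory, mirroring the behaviour described above."""
    if mem_mb > ntasks * mem_per_cpu_mb:
        # Available memory (ntasks * mem_per_cpu) is too small:
        # grow ntasks to the smallest count that satisfies the request.
        ntasks = math.ceil(mem_mb / mem_per_cpu_mb)
    return ntasks
```

For example, a job asking for 8000 MB with a single task on a partition offering 2000 MB per CPU would be resubmitted with four tasks, while a request already within the limit is left untouched.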
The following resources are supported on a per-rule basis:
- mem, mem_mb: set the memory resource request in MB.
- walltime, runtime, time_min: set the time resource in minutes.
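For example, a rule can request these resources in its `resources` section; the rule name and commands here are hypothetical:

```
rule sort_bam:
    input: "mapped/{sample}.bam"
    output: "sorted/{sample}.bam"
    resources:
        mem_mb=8000,   # memory request in MB, as described above
        time_min=120   # time request in minutes
    shell: "samtools sort -o {output} {input}"
```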
As of 2018-10-08, the profile supports setting options in the cluster configuration file. The options must be named according to sbatch's long option names. Note that setting memory or time in the cluster configuration will override the resources specified by rules. Therefore, setting time or memory in the `__default__` section effectively overrides all resource specifications that set these parameters.
As an example, you can specify constraints in the cluster configuration file:

```yaml
__default__:
  constraint: mem500MB

large_memory_requirement_job:
  constraint: mem2000MB
```
Test-driven development is enabled through the tests folder. Provided that the user has installed Docker and enabled Docker swarm, the SLURM tests will download two images:
In addition, testing of the cookiecutter template itself is enabled through the pytest plugin for Cookiecutters. You can run the tests by issuing

```bash
pytest -v -s
```