Merge branch 'Snakemake-Profiles:master' into master
Christian-Heyer authored Jun 9, 2023
2 parents 71d3d82 + d9116a2 commit bd52864
Showing 13 changed files with 344 additions and 57 deletions.
6 changes: 3 additions & 3 deletions .github/workflows/ci.yaml
@@ -14,7 +14,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [ 3.6, 3.7, 3.8 ]
python-version: [ 3.7, 3.8, 3.9, '3.10' ]
steps:
- uses: actions/checkout@v2

@@ -32,7 +32,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [ 3.5, 3.6, 3.7, 3.8 ]
python-version: [ 3.7, 3.8, 3.9, '3.10' ]
steps:
- uses: actions/checkout@v2

@@ -52,7 +52,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [ 3.5, 3.6, 3.7, 3.8 ]
python-version: [ 3.7, 3.8, 3.9, '3.10' ]
steps:
- uses: actions/checkout@v2

38 changes: 37 additions & 1 deletion CHANGELOG.md
@@ -6,6 +6,37 @@ This document tracks changes to the `master` branch of the profile.

## [Unreleased]

### Added

- `jobscript_timeout` cookiecutter variable that sets the number of seconds to wait for the jobscript to exist before submitting the job

## [0.3.0] - 13/07/2022

### Added
- Exposed `max_status_checks` and `wait_between_tries` for status checker [[#48][48]]

### Changed
- Cluster cancel is now a script instead of the `bkill` command in order to handle the log file paths that come with the job ID [[#55][55]]

## [0.2.0] - 28/05/2022

### Added

- Default project in cookiecutter
- Cluster cancel (`--cluster-cancel`) command (`bkill`)

### Removed

- Default threads in cookiecutter

### Changed

- Default project and queue will be removed from the submission command if they are present in the `lsf.yaml`

### Fixed

- Support quoted jobid from `snakemake>=v7.1.1` [[#45][45]]

## [0.1.2] - 01/04/2021

### Added
@@ -71,8 +102,13 @@ This document tracks changes to the `master` branch of the profile.
[9]: https://github.com/Snakemake-Profiles/lsf/pull/9
[36]: https://github.com/Snakemake-Profiles/lsf/issues/36
[39]: https://github.com/Snakemake-Profiles/lsf/issues/39
[45]: https://github.com/Snakemake-Profiles/lsf/issues/45
[48]: https://github.com/Snakemake-Profiles/lsf/issues/48
[55]: https://github.com/Snakemake-Profiles/lsf/issues/55
[0.1.0]: https://github.com/Snakemake-Profiles/lsf/releases/tag/0.1.0
[0.1.1]: https://github.com/Snakemake-Profiles/lsf/releases/tag/0.1.1
[0.1.2]: https://github.com/Snakemake-Profiles/lsf/releases/tag/0.1.2
[Unreleased]: https://github.com/Snakemake-Profiles/lsf/compare/v0.1.2...HEAD
[0.2.0]: https://github.com/Snakemake-Profiles/lsf/releases/tag/0.2.0
[0.3.0]: https://github.com/Snakemake-Profiles/lsf/compare/0.2.0...0.3.0
[Unreleased]: https://github.com/Snakemake-Profiles/lsf/compare/0.3.0...HEAD

38 changes: 25 additions & 13 deletions README.md
@@ -6,6 +6,8 @@
![Python versions](https://img.shields.io/badge/Python%20versions->=3.5-blue)
![License](https://img.shields.io/github/license/Snakemake-Profiles/lsf)

> 📢 **NOTICE: We are seeking volunteers to maintain this repository as the current maintainers no longer use LSF. See [this issue](https://github.com/Snakemake-Profiles/lsf/issues/57).** 📢

[Snakemake profile][profile] for running jobs on an [LSF][lsf] cluster.

[TOC]: #
@@ -187,16 +189,6 @@ without `mem_mb` set under `resources`.
See [below](#standard-rule-specific-cluster-resource-settings) for how to overwrite this
in a `rule`.

#### `default_threads`

**Default**: `1`

This sets the default number of threads for a `rule` being submitted to the cluster
without the `threads` variable set.

See [below](#standard-rule-specific-cluster-resource-settings) for how to overwrite this
in a `rule`.

#### `default_cluster_logdir`

**Default**: `"logs/cluster"`
@@ -229,6 +221,15 @@ The default queue on the cluster to submit jobs to. If left unset, then the default on
your cluster will be used.
The `bsub` parameter that this controls is [`-q`][bsub-q].

#### `default_project`

**Default**: None

The default project on the cluster to submit jobs with. If left unset, then the default on
your cluster will be used.

The `bsub` parameter that this controls is [`-P`][bsub-P].

#### `max_status_checks_per_second`

**Default**: `10`
@@ -255,6 +256,18 @@ From the `snakemake --help` menu
default is 10, fractions allowed.
```

#### `max_status_checks`

**Default**: `1`

How many times to check the status of a job.

#### `wait_between_tries`

**Default**: `0.001`

How many seconds to wait before checking the status of a job again (only relevant if `max_status_checks` is greater than 1).
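Together, these two settings describe a simple bounded retry loop around the status query. A minimal sketch of that behaviour (the helper name `check_status_with_retries` and the `query_fn` callback are illustrative, not the profile's actual API):

```python
import time


def check_status_with_retries(query_fn, max_status_checks=1, wait_between_tries=0.001):
    """Call query_fn up to max_status_checks times, sleeping between attempts.

    query_fn should return the job's status string, or raise if the
    query (e.g. an LSF lookup) transiently fails.
    """
    last_error = None
    for attempt in range(max_status_checks):
        try:
            return query_fn()
        except Exception as err:
            last_error = err
            # Only sleep if another attempt remains
            if attempt + 1 < max_status_checks:
                time.sleep(wait_between_tries)
    raise last_error
```

With the defaults (`max_status_checks=1`), a single failed query propagates immediately; raising either value trades a slower status check for more tolerance of transient cluster hiccups.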

#### `profile_name`

**Default**: `lsf`
@@ -314,7 +327,7 @@ rule foo:
output: "bar.txt"
shell:
"grep 'bar' {input} > {output}"

rule bar:
input: "bar.txt"
output: "file.out"
@@ -358,7 +371,7 @@ The above is also a valid form of the previous example but **not recommended**.

#### Quote-escaping

Some LSF commands require multiple levels of quote-escaping.
For example, to exclude a node from job submission which has non-alphabetic characters
in its name ([docs](https://www.ibm.com/support/knowledgecenter/SSWRJV_10.1.0/lsf_command_ref/bsub.__r.1.html?view=embed)): `bsub -R "select[hname!='node-name']"`.

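One way this requirement might be passed through a per-rule `lsf.yaml` is by escaping the inner double quotes inside a double-quoted YAML string (a hypothetical sketch; verify the escaping against your shell and LSF version):

```yaml
__default__:
  - "-R \"select[hname!='node-name']\""
```

Here the outer double quotes belong to YAML, the backslash-escaped quotes are passed to `bsub`, and the single quotes around `node-name` are consumed by LSF's resource-requirement parser.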
@@ -412,4 +425,3 @@ Please refer to [`CONTRIBUTING.md`](CONTRIBUTING.md).
[yaml-collections]: https://yaml.org/spec/1.2/spec.html#id2759963
[leandro]: https://github.com/leoisl
[snakemake_params]: https://snakemake.readthedocs.io/en/stable/executable.html#all-options

27 changes: 22 additions & 5 deletions cookiecutter.json
@@ -1,18 +1,35 @@
{
"LSF_UNIT_FOR_LIMITS": ["KB", "MB", "GB", "TB", "PB", "EB", "ZB"],
"UNKWN_behaviour": ["wait", "kill"],
"ZOMBI_behaviour": ["ignore", "kill"],
"LSF_UNIT_FOR_LIMITS": [
"KB",
"MB",
"GB",
"TB",
"PB",
"EB",
"ZB"
],
"UNKWN_behaviour": [
"wait",
"kill"
],
"ZOMBI_behaviour": [
"ignore",
"kill"
],
"latency_wait": 5,
"use_conda": false,
"use_singularity": false,
"restart_times": 0,
"print_shell_commands": false,
"jobs": 500,
"default_mem_mb": 1024,
"default_threads": 1,
"default_cluster_logdir": "logs/cluster",
"default_queue": "",
"default_project": "",
"max_status_checks_per_second": 10,
"max_jobs_per_second": 10,
"max_status_checks": 1,
"wait_between_tries": 0.001,
"jobscript_timeout": 10,
"profile_name": "lsf"
}
}
136 changes: 136 additions & 0 deletions tests/test_lsf_cancel.py
@@ -0,0 +1,136 @@
import unittest
from unittest.mock import patch

from tests.src.OSLayer import OSLayer
from tests.src.lsf_cancel import kill_jobs, parse_input, KILL


class TestParseInput(unittest.TestCase):
script = "lsf_cancel.py"

def test_parse_input_no_args(self):
fake_args = [self.script]
with patch("sys.argv", fake_args):
actual = parse_input()

assert not actual

def test_parse_input_one_job_no_log(self):
fake_args = [self.script, "1234"]
with patch("sys.argv", fake_args):
actual = parse_input()

expected = fake_args[1:]
assert actual == expected

def test_parse_input_one_job_and_log(self):
fake_args = [self.script, "1234", "log/file.out"]
with patch("sys.argv", fake_args):
actual = parse_input()

expected = [fake_args[1]]
assert actual == expected

def test_parse_input_two_jobs_and_log(self):
fake_args = [self.script, "1234", "log/file.out", "9090", "log/other.out"]
with patch("sys.argv", fake_args):
actual = parse_input()

expected = [fake_args[1], fake_args[3]]
assert actual == expected

def test_parse_input_two_jobs_and_digits_in_log(self):
fake_args = [self.script, "1234", "log/file.out", "9090", "log/123"]
with patch("sys.argv", fake_args):
actual = parse_input()

expected = [fake_args[1], fake_args[3]]
assert actual == expected

def test_parse_input_multiple_args_but_no_jobs(self):
fake_args = [self.script, "log/file.out", "log/123"]
with patch("sys.argv", fake_args):
actual = parse_input()

assert not actual


class TestKillJobs(unittest.TestCase):
@patch.object(
OSLayer,
OSLayer.run_process.__name__,
return_value=(
"Job <123> is being terminated",
"",
),
)
def test_kill_jobs_one_job(
self,
run_process_mock,
):
jobids = ["123"]
expected_kill_cmd = "{} {}".format(KILL, " ".join(jobids))

kill_jobs(jobids)

run_process_mock.assert_called_once_with(expected_kill_cmd, check=False)

@patch.object(
OSLayer,
OSLayer.run_process.__name__,
return_value=(
"Job <123> is being terminated\nJob <456> is being terminated",
"",
),
)
def test_kill_jobs_two_jobs(
self,
run_process_mock,
):
jobids = ["123", "456"]
expected_kill_cmd = "{} {}".format(KILL, " ".join(jobids))

kill_jobs(jobids)

run_process_mock.assert_called_once_with(expected_kill_cmd, check=False)

@patch.object(
OSLayer,
OSLayer.run_process.__name__,
return_value=("", ""),
)
def test_kill_jobs_no_jobs(
self,
run_process_mock,
):
jobids = []

kill_jobs(jobids)

run_process_mock.assert_not_called()

@patch.object(
OSLayer,
OSLayer.run_process.__name__,
return_value=("", ""),
)
def test_kill_jobs_empty_jobs(self, run_process_mock):
jobids = ["", ""]

kill_jobs(jobids)

run_process_mock.assert_not_called()

@patch.object(
OSLayer,
OSLayer.run_process.__name__,
return_value=("", ""),
)
def test_kill_jobs_empty_job_and_non_empty_job(self, run_process_mock):
jobids = ["", "123"]

expected_kill_cmd = "{} {}".format(KILL, " ".join(jobids))

kill_jobs(jobids)

run_process_mock.assert_called_once_with(expected_kill_cmd, check=False)
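The behaviour these tests pin down can be read off directly: `parse_input` keeps only the purely numeric CLI arguments (the job IDs), skipping the interleaved log-file paths, and `kill_jobs` issues a single `bkill` over all IDs, doing nothing when every ID is empty. A minimal sketch consistent with the tests (the real module runs the command through `OSLayer.run_process`; `subprocess` is used here only to keep the sketch self-contained):

```python
import subprocess
import sys

KILL = "bkill"


def parse_input(args=None):
    """Return the job IDs among the CLI arguments.

    Snakemake passes each job ID followed by its log file path;
    only the purely numeric arguments are job IDs.
    """
    if args is None:
        args = sys.argv[1:]
    return [arg for arg in args if arg.isdigit()]


def kill_jobs(jobids):
    """Issue one bkill over all job IDs; skip if every ID is empty."""
    if any(jobids):
        cmd = "{} {}".format(KILL, " ".join(jobids))
        subprocess.run(cmd.split(), check=False)
```

Filtering on `str.isdigit()` is what makes `log/123` safe: the slash disqualifies it, so a log path that happens to contain digits is never mistaken for a job ID.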