refactor(tests): import fixtures/utils from modflow-devtools
wpbonelli committed Nov 15, 2022
1 parent c6aa66a commit c1ca09c
Showing 65 changed files with 1,286 additions and 1,785 deletions.
3 changes: 3 additions & 0 deletions .gitignore
@@ -91,3 +91,6 @@ autotest/.noseids
.docs/_notebooks

*.bak

# environment files
**.env
172 changes: 12 additions & 160 deletions DEVELOPER.md
@@ -161,10 +161,14 @@ Like the scripts and tutorials, each notebook is configured to create and (attem

## Tests

To run the tests you will need `pytest` and a few plugins, including [`pytest-xdist`](https://pytest-xdist.readthedocs.io/en/latest/) and [`pytest-benchmark`](https://pytest-benchmark.readthedocs.io/en/latest/index.html). Test dependencies are specified in the `test` extras group in `setup.cfg` (with pip, use `pip install ".[test]"`). Test dependencies are included in the Conda environment `etc/environment`.
To run the tests you will need `pytest` and a few plugins, including [`pytest-xdist`](https://pytest-xdist.readthedocs.io/en/latest/), [`pytest-dotenv`](https://github.com/quiqua/pytest-dotenv), and [`pytest-benchmark`](https://pytest-benchmark.readthedocs.io/en/latest/index.html). Test dependencies are specified in the `test` extras group in `setup.cfg` (with pip, use `pip install ".[test]"`). Test dependencies are included in the Conda environment `etc/environment`.

**Note:** to prepare your code for a pull request, you will need a few more packages specified in the `lint` extras group in `setup.cfg` (also included by default for Conda). See the docs on [submitting a pull request](CONTRIBUTING.md) for more info.

### Configuring tests

Some of the tests (namely those for the [`get-modflow`](docs/get_modflow.md) utility) invoke the GitHub API. Unauthenticated requests to the GitHub API are subject to strict rate limits, so it can help to provide a personal access token via a `GITHUB_TOKEN` environment variable, for example in a `.env` file loaded by `pytest-dotenv` (such files are excluded from version control by the `**.env` pattern in `.gitignore`).
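
For example, a hypothetical `.env` file holding such a token (the filename, location, and value are placeholders; `pytest-dotenv` must be configured to load it):

```shell
# .env -- excluded from version control by the **.env pattern in .gitignore
GITHUB_TOKEN="<your personal access token>"
```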

### Running tests

Tests must be run from the `autotest` directory. To run a single test script in verbose mode:
@@ -209,15 +213,19 @@ This should complete in under a minute on most machines. Smoke testing aims to c

**Note:** most of the `regression` and `example` tests are `slow`, but there are some other slow tests, e.g. in `test_export.py`, and some regression tests and examples are fast.
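
For instance, slow tests can be deselected with a standard `pytest` marker expression (assuming the test dependencies above are installed and tests are run from `autotest`):

```shell
pytest -v -m "not slow"
```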

### Writing tests

Test functions and files should be named informatively, with related tests grouped in the same file. The test suite runs on GitHub Actions in parallel, so tests should not access the working space of other tests, example scripts, tutorials or notebooks. A number of shared test fixtures are [imported](conftest.py) from [`modflow-devtools`](https://github.com/MODFLOW-USGS/modflow-devtools). These include keepable temporary directory fixtures and miscellaneous utilities (see the `modflow-devtools` README for more information on fixture usage). New tests should use these facilities where possible. See also the [contribution guidelines](CONTRIBUTING.md) before submitting a pull request.
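
For instance, a minimal sketch of a test using the imported `function_tmpdir` fixture (the file written here is purely illustrative):

```python
from pathlib import Path

def test_function_scoped_tmpdir(function_tmpdir):
    # provided by modflow-devtools via autotest/conftest.py
    assert isinstance(function_tmpdir, Path)
    assert function_tmpdir.is_dir()

    # write outputs here so --keep / --keep-failed can preserve them
    (function_tmpdir / "hello.txt").write_text("hello")
```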

### Debugging tests

To debug a failed test it can be helpful to inspect its output, which is cleaned up automatically by default. To run a failing test and keep its output, use the `--keep` option to provide a save location:
To debug a failed test it can be helpful to inspect its output, which is cleaned up automatically by default. `modflow-devtools` provides temporary directory fixtures that allow optionally keeping test outputs in a specified location. To run a test and keep its output, use the `--keep` option to provide a save location:

pytest test_export.py --keep exports_scratch

This will retain the test directories created by the test, which allows files to be evaluated for errors. Any tests using the function-scoped `tmpdir` and related fixtures (e.g. `class_tmpdir`, `module_tmpdir`) defined in `conftest.py` are compatible with this mechanism.
This will retain any files created by the test in `exports_scratch` in the current working directory. Any tests using the function-scoped `function_tmpdir` and related fixtures (e.g. `class_tmpdir`, `module_tmpdir`) defined in `modflow_devtools/fixtures` are compatible with this mechanism.

There is also a `--keep-failed <dir>` option which preserves the outputs of failed tests in the given location, however this option is only compatible with function-scoped temporary directories (the `tmpdir` fixture defined in `conftest.py`).
There is also a `--keep-failed <dir>` option which preserves the outputs of failed tests in the given location, however this option is only compatible with function-scoped temporary directories (the `function_tmpdir` fixture).
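
For example (the `.failed` directory name is arbitrary):

```shell
pytest test_export.py --keep-failed .failed
```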

### Performance testing

@@ -281,159 +289,3 @@ Benchmark results are only printed to `stdout` by default. To save results to a
Profiling is [distinct](https://stackoverflow.com/a/39381805/6514033) from benchmarking: profiling evaluates a program's call stack in detail, while benchmarking just invokes a function repeatedly and computes summary statistics. Profiling is also accomplished with `pytest-benchmark`: use the `--benchmark-cprofile` option when running tests which use the `benchmark` fixture described above. The option's value is the column to sort results by. For instance, to sort by total time, use `--benchmark-cprofile="tottime"`. See the `pytest-benchmark` [docs](https://pytest-benchmark.readthedocs.io/en/stable/usage.html#commandline-options) for more information.

By default, `pytest-benchmark` will only print profiling results to `stdout`. If the `--benchmark-autosave` flag is provided, performance profile data will be included in the JSON files written to the `.benchmarks` save directory as described in the benchmarking section above.
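
As a point of reference, a minimal sketch of a test using the `benchmark` fixture (the function timed here is just a stand-in, not FloPy code):

```python
def test_benchmark_sum(benchmark):
    # benchmark() invokes the callable repeatedly and records timing statistics
    result = benchmark(lambda: sum(range(100_000)))
    assert result == sum(range(100_000))
```

The `--benchmark-autosave` and `--benchmark-cprofile` options described above apply to such tests like any other.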

### Writing tests

Test functions and files should be named informatively, with related tests grouped in the same file. The test suite runs on GitHub Actions in parallel, so tests must not pollute the working space of other tests, example scripts, tutorials or notebooks. A number of shared test fixtures are provided in `autotest/conftest.py`. New tests should use these facilities where possible, to standardize conventions, help keep maintenance minimal, and prevent shared test state and proliferation of untracked files. See also the [contribution guidelines](CONTRIBUTING.md) before submitting a pull request.

#### Keepable temporary directories

The `tmpdir` fixtures defined in `conftest.py` provide a path to a temporary directory which is automatically created before test code runs and automatically removed afterwards. (The built-in `pytest` `tmp_path` fixture can also be used, but is not compatible with the `--keep` command line argument detailed above.)

For instance, using temporary directory fixtures for various scopes:

```python
from pathlib import Path
import inspect

def test_tmpdirs(tmpdir, module_tmpdir):
    # function-scoped temporary directory
    assert isinstance(tmpdir, Path)
    assert tmpdir.is_dir()
    assert inspect.currentframe().f_code.co_name in tmpdir.stem

    # module-scoped temp dir (accessible to other tests in the script)
    assert module_tmpdir.is_dir()
    assert "autotest" in module_tmpdir.stem
```

These fixtures can be substituted transparently for `pytest`'s built-in `tmp_path`, with the additional benefit that when `pytest` is invoked with the `--keep` argument, e.g. `pytest --keep temp`, outputs will automatically be saved to subdirectories of `temp` named according to the test case, class or module. (As described above, this is useful for debugging a failed test by inspecting its outputs, which would otherwise be cleaned up.)

#### Locating example data

Shared fixtures and utility functions are also provided for locating example data on disk. The `example_data_path` fixture resolves to `examples/data` relative to the project root, regardless of the location of the test script (as long as it's somewhere in the `autotest` directory).

```python
def test_with_data(tmpdir, example_data_path):
    model_path = example_data_path / "freyberg"
    # load model...
```

This is preferable to manually handling relative paths: if the location of the example data changes in the future, only a single fixture in `conftest.py` needs to be updated rather than every test case individually.

An equivalent function `get_example_data_path()` is also provided in `conftest.py`. This is useful to dynamically generate data for test parametrization. (Due to a [longstanding `pytest` limitation](https://github.com/pytest-dev/pytest/issues/349), fixtures cannot be used to generate test parameters.)
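
For example, a sketch of parametrizing over example models with this function (the `*.nam` glob is only illustrative):

```python
import pytest
from autotest.conftest import get_example_data_path

# fixtures can't supply parameters, so call the function at collection time
example_namfiles = sorted(get_example_data_path().rglob("*.nam"))

@pytest.mark.parametrize("namfile", example_namfiles)
def test_example_namfile_exists(namfile):
    assert namfile.is_file()
```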

#### Locating the project root

A similar `get_project_root_path()` function is also provided, doing what it says on the tin:

```python
from autotest.conftest import get_project_root_path, get_example_data_path

def test_get_paths():
    example_data = get_example_data_path()
    project_root = get_project_root_path()

    assert example_data.parent.parent == project_root
```

Note that this function expects tests to be run from the `autotest` directory, as mentioned above.

#### Conditionally skipping tests

Several `pytest` markers are provided to conditionally skip tests based on executable availability, Python package environment or operating system.

To skip tests if one or more executables are not available on the path:

```python
from shutil import which
from autotest.conftest import requires_exe

@requires_exe("mf6")
def test_mf6():
assert which("mf6")

@requires_exe("mf6", "mp7")
def test_mf6_and_mp7():
assert which("mf6")
assert which("mp7")
```

To skip tests if one or more Python packages are not available:

```python
from autotest.conftest import requires_pkg

@requires_pkg("pandas")
def test_needs_pandas():
import pandas as pd

@requires_pkg("pandas", "shapefile")
def test_needs_pandas():
import pandas as pd
from shapefile import Reader
```

To mark tests requiring or incompatible with particular operating systems:

```python
import os
import platform
from autotest.conftest import requires_platform, excludes_platform

@requires_platform("Windows")
def test_needs_windows():
assert platform.system() == "Windows"

@excludes_platform("Darwin", ci_only=True)
def test_breaks_osx_ci():
if "CI" in os.environ:
assert platform.system() != "Darwin"
```

Platforms must be specified as returned by `platform.system()`.

Both these markers accept a `ci_only` flag, which indicates whether the policy should only apply when the test is running on GitHub Actions CI.

There is also a `@requires_github` marker, which will skip decorated tests if the GitHub API is unreachable.
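
For instance, a sketch assuming the marker is applied as a bare decorator like those above:

```python
from autotest.conftest import requires_github

@requires_github
def test_calls_github_api():
    ...  # skipped if github.com is unreachable
```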

### Miscellaneous

A few other useful tools for FloPy development include:

- [`doctoc`](https://www.npmjs.com/package/doctoc): automatically generate table of contents sections for markdown files
- [`act`](https://github.com/nektos/act): test GitHub Actions workflows locally (requires Docker)

#### Generating TOCs with `doctoc`

The [`doctoc`](https://www.npmjs.com/package/doctoc) tool can be used to automatically generate table of contents sections for markdown files. `doctoc` is distributed with the [Node Package Manager](https://docs.npmjs.com/cli/v7/configuring-npm/install). With Node installed, use `npm install -g doctoc` to install `doctoc` globally. Then just run `doctoc <file>`, e.g.:

```shell
doctoc DEVELOPER.md
```

This will insert HTML comments surrounding an automatically edited region, scanning for headers and creating an appropriately indented TOC tree. Subsequent runs are idempotent, updating if the file has changed or leaving it untouched if not.

To run `doctoc` for all markdown files in a particular directory (recursive), use `doctoc some/path`.

#### Testing CI workflows with `act`

The [`act`](https://github.com/nektos/act) tool uses Docker to run containerized CI workflows in a simulated GitHub Actions environment. [Docker Desktop](https://www.docker.com/products/docker-desktop/) is required on Mac or Windows, and [Docker Engine](https://docs.docker.com/engine/) on Linux.

With Docker installed and running, run `act -l` from the project root to see available CI workflows. To run all workflows and jobs, just run `act`. To run a particular workflow use `-W`:

```shell
act -W .github/workflows/commit.yml
```

To run a particular job within a workflow, add the `-j` option:

```shell
act -W .github/workflows/commit.yml -j build
```

**Note:** GitHub API rate limits are easy to exceed, especially with job matrices. Authenticated GitHub users have a much higher rate limit: use `-s GITHUB_TOKEN=<your token>` when invoking `act` to provide a personal access token. Note that passing the token on the command line will record it in shell history; leave the value blank to be prompted for it more securely.
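
For example, to be prompted for the token instead of passing it on the command line:

```shell
act -W .github/workflows/commit.yml -s GITHUB_TOKEN
```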

The `-n` flag can be used to execute a dry run, which doesn't run anything but only evaluates workflow, job and step definitions. See the [docs](https://github.com/nektos/act#example-commands) for more.

**Note:** `act` can only run Linux-based container definitions, so Mac or Windows workflows or matrix OS entries will be skipped.