Contributing

continuiti aims to be a repository of architectures and benchmarks for operator learning with neural networks and its applications.

Contributions are welcome from anyone in the form of pull requests, bug reports and feature requests.

Local development

In order to contribute to the library, you will need to set up your local development environment. First, clone the repository:

git clone https://github.com/aai-institute/continuiti.git
cd continuiti

Setting up your environment

We strongly suggest using some form of virtual environment for working with the library, e.g., with venv:

python3 -m venv ./venv
source venv/bin/activate

Installing in editable mode

A convenient way of working with the library during development is to install it in editable mode into your environment by running:

pip install -e .[dev]

The [dev] extra installs all dependencies needed for development, including testing, documentation and benchmarking.
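Note that depending on your shell (zsh, for example, which treats square brackets as glob characters), you may need to quote the extras specifier:

pip install -e ".[dev]"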

Pre-commit hooks

This project uses black to format code and pre-commit to invoke it as a git pre-commit hook.

Run the following to set up the git hook so that it runs before each commit:

pre-commit install
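If you want to run the hooks manually against the whole code base, for example after adding or updating a hook, you can use:

pre-commit run --all-files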

Build documentation

API documentation is built with mkdocs. Notebooks are an integral part of the documentation as well.

You can use this command to continuously rebuild the documentation on changes to the docs and src folders:

mkdocs serve

This will rebuild the documentation whenever .md files inside docs, notebooks, or Python source files change.
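For a one-off build without the live-reload server, e.g. to check for build errors before pushing, you can run:

mkdocs build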

Testing

Automated builds, tests, documentation generation, and publishing are handled by CI pipelines. Before pushing your changes to the remote, we recommend running pytest locally to catch mistakes early and avoid failing pipelines.

To run all tests, use:

pytest

To run specific tests, use:

pytest -k test_pattern

Slow tests (> 5 s) are marked with the @pytest.mark.slow decorator. To run all tests except the slow ones, use:

pytest -m "not slow"

Notebooks

We use notebooks both as documentation and as integration tests. Because we want the documentation to include results on the full datasets, we commit the notebooks to the repository together with the outputs of full-dataset runs.

Hiding cells in notebooks

You can isolate boilerplate code into separate cells which are then hidden in the documentation. To do this, mark the relevant cells with tags understood by the mkdocs plugin mkdocs-jupyter by adding the following to their metadata:

"tags": [
  "hide"
]

to hide both the cell's input and output. Use

"tags": [
  "hide-input"
]

to hide only the input, and

"tags": [
  "hide-output"
]

to hide only the output.

If a cell should be skipped in CI (e.g. because the full dataset is missing), you can use:

"tags": [
  "skip-execution"
]
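For reference, these tags live in the metadata of the individual cell in the raw .ipynb JSON. A minimal (illustrative) code cell with a tag might look like this:

{
  "cell_type": "code",
  "execution_count": null,
  "metadata": {
    "tags": ["hide-input"]
  },
  "outputs": [],
  "source": ["# boilerplate setup code, hidden in the rendered docs"]
}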

Plots in Notebooks

If you add a plot to a notebook that should also render nicely in the browser's dark mode, add the tag invertible-output, i.e.

"tags": [
  "invertible-output"
]

This applies a simple CSS filter to the output image of the cell.

Release process

In order to create a new release, make sure that the project's venv is active and the repository is clean and on the main branch.

Create a new release using the script build_scripts/release.sh. This script will create a release tag on the repository and bump the version number:

./build_scripts/release.sh

Afterwards, create a GitHub release for that tag. This will trigger a CI pipeline that automatically builds the package and publishes it to PyPI.