Address PR comments
- Removed several new and old environment files to lower the maintenance
  burden:
  - environment.yaml
  - envs/py3.10-tests.yaml
  - envs/py3.11-tests.yaml
  - envs/py3.12-tests.yaml
- Removing the environment files required updating README.md to describe
  how users can generate an environment file themselves.
- Added the generated environment.yaml file to .gitignore.
- Removed requirements.txt, as all requirements are now handled by the
  pyproject.toml file (see the sketch after this list).
- Updated docs to fix URL links.
- Also updated comments in untested code to use correct URLs.
- Refactored the test.yaml workflow.
- Refactored the following to remove extraneous line characters:
  - tests/test_basic.py (also renamed the test method and module to match
    the method under test.)
  - src/holoseq/exceptions.py
  - pyproject.toml
  - .github/workflows/test.yaml
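
A minimal, hypothetical sketch of how the consolidated dependency metadata can be inspected — not part of this commit, and it assumes a standard PEP 621 `[project]` table in `pyproject.toml`:

```python
# Hypothetical check (not from this commit): list what pyproject.toml declares
# now that requirements.txt is gone. tomllib is stdlib on Python 3.11+.
import tomllib

with open("pyproject.toml", "rb") as fh:
    meta = tomllib.load(fh)

project = meta["project"]
print("dependencies:", project.get("dependencies", []))
print("optional groups:", sorted(project.get("optional-dependencies", {})))
```

If the table lists the expected runtime dependencies and optional groups, nothing from the removed requirements.txt has been lost.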
amaloney committed Dec 12, 2024
1 parent 6c579a3 commit 442e37f
Showing 17 changed files with 174 additions and 412 deletions.
184 changes: 29 additions & 155 deletions .github/workflows/test.yaml
@@ -1,164 +1,38 @@
-name: tests
+name: Run tests

on:
-  push:
-    tags:
-      - 'v[0-9]+.[0-9]+.[0-9]+'
-      - 'v[0-9]+.[0-9]+.[0-9]+a[0-9]+'
-      - 'v[0-9]+.[0-9]+.[0-9]+b[0-9]+'
-      - 'v[0-9]+.[0-9]+.[0-9]+rc[0-9]+'
  pull_request:
    branches:
      - '*'
-  workflow_dispatch:
-    inputs:
-      target:
-        description: "How much of the test suite to run"
-        type: choice
-        default: default
-        options:
-          - default
-          - full
-          - downstream
-      cache:
-        description: "Use cache"
-        type: boolean
-        default: true
-  schedule:
-    - cron: '0 15 * * SUN'

-concurrency:
-  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
-  cancel-in-progress: true
+  push:
+    branches: [main]
+    paths-ignore:
+      - "docs/"
+      - "notebooks/"
+      - "scripts/"

jobs:
-  pre_commit:
-    name: Run pre-commit
-    runs-on: 'ubuntu-latest'
-    steps:
-      - uses: holoviz-dev/holoviz_tasks/[email protected]
-  setup:
-    name: Setup workflow
-    runs-on: ubuntu-latest
-    outputs:
-      matrix: ${{ env.MATRIX }}
-      matrix_option: ${{ env.MATRIX_OPTION }}
-    steps:
-      - name: Set matrix option
-        run: |
-          if [[ '${{ github.event_name }}' == 'workflow_dispatch' ]]; then
-            OPTION=${{ github.event.inputs.target }}
-          elif [[ '${{ github.event_name }}' == 'schedule' ]]; then
-            OPTION="full"
-          elif [[ '${{ github.event_name }}' == 'push' && '${{ github.ref_type }}' == 'tag' ]]; then
-            OPTION="full"
-          else
-            OPTION="default"
-          fi
-          echo "MATRIX_OPTION=$OPTION" >> $GITHUB_ENV
-      - name: Set test matrix with 'default' option
-        if: env.MATRIX_OPTION == 'default'
-        run: |
-          MATRIX=$(jq -nsc '{
-            "os": ["ubuntu-latest", "macos-latest", "windows-latest"],
-            "python-version": ["3.10", "3.12"],
-            "exclude": [
-              {
-                "python-version": "3.10",
-                "os": "macos-latest"
-              }
-            ]
-          }')
-          echo "MATRIX=$MATRIX" >> $GITHUB_ENV
-      - name: Set test matrix with 'full' option
-        if: env.MATRIX_OPTION == 'full'
-        run: |
-          MATRIX=$(jq -nsc '{
-            "os": ["ubuntu-latest", "macos-latest", "windows-latest"],
-            "python-version": ["3.10", "3.12"],
-            "include": [
-              {
-                "python-version": "3.10",
-                "os": "ubuntu-latest"
-              },
-              {
-                "python-version": "3.11",
-                "os": "ubuntu-latest"
-              },
-              ,
-              {
-                "python-version": "3.12",
-                "os": "ubuntu-latest"
-              }
-            ]
-          }')
-          echo "MATRIX=$MATRIX" >> $GITHUB_ENV
-      - name: Set test matrix with 'downstream' option
-        if: env.MATRIX_OPTION == 'downstream'
-        run: |
-          MATRIX=$(jq -nsc '{
-            "os": ["ubuntu-latest"],
-            "python-version": ["3.12"]
-          }')
-          echo "MATRIX=$MATRIX" >> $GITHUB_ENV
-  conda_suite:
-    name: conda tests:${{ matrix.os }}:${{ matrix.python-version }}
-    needs: [pre_commit, setup]
-    if: needs.setup.outputs.matrix_option != 'default'
-    runs-on: ${{ matrix.os }}
+  test:
    strategy:
      fail-fast: false
-      matrix: ${{ fromJson(needs.setup.outputs.matrix) }}
-    timeout-minutes: 90
-    defaults:
-      run:
-        shell: bash -el {0}
-    steps:
-      - uses: actions/checkout@v4
-        with:
-          fetch-depth: 0
-      - uses: conda-incubator/setup-miniconda@v3
-        with:
-          auto-update-conda: true
-          environment-file: envs/py${{ matrix.python-version }}-tests.yaml
-          activate-environment: holoseqtests
-      - name: conda info
-        run: conda info
-      - name: conda list
-        run: conda list
-      - name: unit tests
-        run: pytest -v holoseq --cov=holoseq --cov-append
-  pip_test:
-    name: pip tests:${{ matrix.os }}:${{ matrix.python-version }}
-    needs: [pre_commit, setup]
-    timeout-minutes: 90
+      matrix:
+        os: [ubuntu-latest]
+        version: ["3.10", "3.11", "3.12"]

    runs-on: ${{ matrix.os }}
-    strategy:
-      fail-fast: false
-      matrix: ${{ fromJson(needs.setup.outputs.matrix) }}
-    defaults:
-      run:
-        shell: bash -e {0}
    steps:
-      - uses: actions/checkout@v4
-        with:
-          fetch-depth: 0
-      - uses: actions/setup-python@v5
+      - name: Checkout
+        uses: actions/checkout@v4

+      - name: Set up Python ${{ matrix.version }}
+        uses: actions/setup-python@v5
        with:
-          python-version: ${{ matrix.python-version }}
-      - name: install with geo
-        run: python -m pip install -v --prefer-binary -e '.[tests, examples-tests, geo, hvdev, hvdev-geo, dev-extras]'
-      - name: python version and pip list
+          python-version: ${{ matrix.version }}

+      - name: Install dependencies
        run: |
-          python --version --version
-          python -m pip list
-      - name: unit tests
-        run: pytest -v hvplot --cov=hvplot --cov-append
-      - name: Upload coverage reports to Codecov
-        if: github.event_name == 'push' || github.event_name == 'pull_request'
-        uses: codecov/codecov-action@v4
-        with:
-          fail_ci_if_error: false
-          verbose: false
-        env:
-          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
+          python -m pip install --upgrade pip
+          pip install .[dev,test]
+      - name: Run ruff
+        uses: astral-sh/ruff-action@v2

+      - name: Run tests
+        run: |
+          pytest -v tests --cov=src/holoseq --cov-append
1 change: 1 addition & 0 deletions .gitignore
@@ -25,6 +25,7 @@ share/python-wheels/
.installed.cfg
*.egg
MANIFEST
+environment.yaml

# PyInstaller
# Usually these files are written by a python script from a template
46 changes: 29 additions & 17 deletions README.md
@@ -26,7 +26,7 @@ down to individual points and back.*
This is new work in progress.

Development started in late October 2024. A draft framework
-[description and specification is here.](https://github.com/holoviz-topics/holoSeq/blob/main/HoloSeqOverview.md).
+[description and specification is here.](https://github.com/holoviz-topics/holoSeq/blob/main/docs/HoloSeqOverview.md)

## Core idea: Features on intervals arranged along linear axes for browsing

@@ -35,7 +35,7 @@ and plotted using rasterize and datashader, with each tap converted into contig
in an IPython notebook, or if the dependencies are available, can be served from this repository's
root, as:

-`panel serve holoSeq_random.py --show`
+`panel serve scripts/holoSeq_random.py --show`

Edit the default 10000 xmax value to get a sense of scale capacity - 10M is not a problem. There is
very little code needed for plotting. Most of the code is needed to create some sample contigs of
@@ -196,15 +196,15 @@ the plot.
- Only pairs involving H1 contigs (H1 cis) are used in the demonstration.

Briefly, the framework creates the
-[minimum data required](https://github.com/holoviz-topics/holoSeq/blob/main/HoloSeqOverview.md) to create a
-plot. A genome lengths file is required, and the named contigs can be reordered by name or length.
-The axes are defined by the ordering. The lengths are cumulated to give an offset to the first
-nucleotide of each contig, so the track can be read and feature locations converted into the plot
-coordinate system, and stored as a compressed intermediate file. The display application reads these
-pre-computed plot coordinate files, with enough metadata about the reference sequence to add tic
-marks to the axes and to back-calculate the stream of user tap coordinates. A converter for PAF to
-compressed hseq format for input is available and was used to generate the demonstration. Bigwig is
-working and other common genomic annotation formats, such as gff and vcf will follow.
+[minimum data required](https://github.com/holoviz-topics/holoSeq/blob/main/docs/HoloSeqOverview.md)
+to create a plot. A genome lengths file is required, and the named contigs can be reordered by name
+or length. The axes are defined by the ordering. The lengths are cumulated to give an offset to the
+first nucleotide of each contig, so the track can be read and feature locations converted into the
+plot coordinate system, and stored as a compressed intermediate file. The display application reads
+these pre-computed plot coordinate files, with enough metadata about the reference sequence to add
+tic marks to the axes and to back-calculate the stream of user tap coordinates. A converter for PAF
+to compressed hseq format for input is available and was used to generate the demonstration. Bigwig
+is working and other common genomic annotation formats, such as gff and vcf will follow.
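
(Editor's aside: the offset arithmetic described above, sketched in Python. The lengths-file format and the function names are assumptions for illustration, not the holoSeq implementation.)

```python
# Illustrative only: map (contig, position) onto a single linear plot axis.
# Assumes a two-column "name<whitespace>length" lengths file.
from itertools import accumulate


def contig_offsets(lengths_path, order_by="name"):
    """Return {contig: offset of its first nucleotide on the plot axis}."""
    with open(lengths_path) as fh:
        rows = [line.split() for line in fh if line.strip()]
    pairs = [(name, int(length)) for name, length in rows]
    pairs.sort(key=(lambda p: p[0]) if order_by == "name" else (lambda p: -p[1]))
    starts = [0, *accumulate(length for _, length in pairs)]
    return {name: start for (name, _), start in zip(pairs, starts)}


def to_plot_x(offsets, contig, position):
    """Convert a 1-based position on a contig into a global plot coordinate."""
    return offsets[contig] + position
```

With offsets like these, each track for the same reference can be converted once and written out as the pre-computed coordinate file the display application reads.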

Multiple input files will produce a stack of plots that work independently:

@@ -255,8 +255,8 @@ else
```
This repository includes a python script conversion utility for PAF inputs,
-`scripts/holoSeq_prepare_paf.py`, that works with the awk PAF output and converts it into a compressed
-coordinate file. The compressed demonstration plotting data were prepared using:
+`scripts/holoSeq_prepare_paf.py`, that works with the awk PAF output and converts it into a
+compressed coordinate file. The compressed demonstration plotting data were prepared using:
```bash
python scripts/holoSeq_prepare_paf.py \
@@ -393,19 +393,31 @@ cd holoSeq
```
Create a virtual environment using your favorite method, _e.g._ `conda`, `venv`, `poetry`, `pixi`
-_etc_. We will use `conda` as an example.
+_etc_. We will use `conda` as an example. No `conda` environment file is supplied with the repo;
+however, we can generate one using
+[`pyproject2conda`](https://github.com/usnistgov/pyproject2conda). See the `pyproject2conda`
+documentation for how to install it on your system.
```bash
+pyproject2conda yaml --file pyproject.toml \
+    --no-header \
+    --name holoseq-dev \
+    --channel conda-forge \
+    --python-include infer \
+    --extra dev --extra notebooks --extra test \
+    --output environment.yaml
conda env create --file environment.yaml
conda activate holoseq-dev
```
Next install `holoSeq` into the virtual environment, and install the pre-commit hooks. If you would
-like to contribute to work with Jupyter notebooks, install `notebooks` along with the `dev` and
-`tests` flags.
+like to work with Jupyter notebooks, be sure to include the `notebooks` extra in the `pip`
+command.
```bash
-pip install --editable .[dev,test] # Include notebooks if you would like to install Juptyer.
+pip install --editable .
+# To install the notebook dependencies as well, use this command instead of the one above:
+# pip install --editable .[notebooks]
pre-commit install
```