Pytest framework implementation with Task Runner GitHub Workflow #1133

Merged: 20 commits, Nov 12, 2024
95 changes: 95 additions & 0 deletions .github/workflows/task_runner_e2e.yml
@@ -0,0 +1,95 @@
#---------------------------------------------------------------------------
# Workflow to run Task Runner end to end tests
# Authors - Noopur, Payal Chaurasiya
#---------------------------------------------------------------------------
name: Task Runner E2E

on:
  schedule:
    - cron: '0 0 * * *' # Run every day at midnight
  workflow_dispatch:
    inputs:
      num_rounds:
        description: 'Number of rounds to train'
        required: false
        default: "5"
        type: string
      num_collaborators:
        description: 'Number of collaborators'
        required: false
        default: "2"
        type: string

permissions:
  contents: read

# Environment variables common for all the jobs
env:
  NUM_ROUNDS: ${{ inputs.num_rounds || '5' }}
  NUM_COLLABORATORS: ${{ inputs.num_collaborators || '2' }}

jobs:
  test_run:
    name: test
    runs-on: ubuntu-22.04
    timeout-minutes: 120 # 2 hours
    strategy:
      matrix:
        # There are open issues for some of the models, so excluding them for now:
        # model_name: [ "torch_cnn_mnist", "keras_cnn_mnist", "torch_cnn_histology" ]
        model_name: [ "torch_cnn_mnist", "keras_cnn_mnist" ]
        python_version: [ "3.8", "3.9", "3.10" ]
      fail-fast: false # do not immediately fail if one of the combinations fails

    env:
      MODEL_NAME: ${{ matrix.model_name }}
      PYTHON_VERSION: ${{ matrix.python_version }}

    steps:
      - name: Checkout OpenFL repository
        id: checkout_openfl
        uses: actions/[email protected]
        with:
          fetch-depth: 2 # needed for detecting changes
          submodules: "true"
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Set up Python
        id: setup_python
        uses: actions/setup-python@v3
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        id: install_dependencies
        run: |
          python -m pip install --upgrade pip
          pip install .
          pip install -r test-requirements.txt

      - name: Run Task Runner E2E tests
        id: run_task_runner_tests
        run: |
          python -m pytest -s tests/end_to_end/test_suites/task_runner_tests.py -m ${{ env.MODEL_NAME }} --num_rounds $NUM_ROUNDS --num_collaborators $NUM_COLLABORATORS --model_name ${{ env.MODEL_NAME }}
          echo "Task runner end to end test run completed"

      - name: Print test summary # Print the test summary only if the tests were run
        id: print_test_summary
        if: always() && (steps.run_task_runner_tests.outcome == 'success' || steps.run_task_runner_tests.outcome == 'failure')
        run: |
          export PYTHONPATH="$PYTHONPATH:."
          python tests/end_to_end/utils/xml_helper.py
          echo "Test summary printed"

      - name: Tar files # Tar the test results only if the tests were run
        id: tar_files
        if: always() && (steps.run_task_runner_tests.outcome == 'success' || steps.run_task_runner_tests.outcome == 'failure')
        run: tar -cvf result.tar results

      - name: Upload Artifacts # Upload the test results only if the tar was created
        id: upload_artifacts
        uses: actions/upload-artifact@v4
        if: always() && steps.tar_files.outcome == 'success'
        with:
          name: task_runner_${{ env.MODEL_NAME }}_python${{ env.PYTHON_VERSION }}_${{ github.run_id }}
          path: result.tar
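Because the workflow defines a `workflow_dispatch` trigger, it can also be started manually; on scheduled runs `inputs` is empty, which is why the `env` block falls back to the defaults via `||`. A minimal sketch using the GitHub CLI (assuming `gh` is installed and authenticated against the repository; `<run-id>` is a placeholder):

```sh
# Trigger the Task Runner E2E workflow with non-default inputs
gh workflow run task_runner_e2e.yml -f num_rounds=10 -f num_collaborators=3

# After the run finishes, download the uploaded result.tar artifacts
gh run download <run-id>
```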
1 change: 1 addition & 0 deletions .gitignore
@@ -15,3 +15,4 @@
venv/*
.eggs
eggs/*
*.pyi
results/*
1 change: 1 addition & 0 deletions test-requirements.txt
@@ -1,3 +1,4 @@
lxml==5.3.0
pytest==8.3.3
pytest-asyncio==0.24.0
pytest-mock==3.14.0
60 changes: 60 additions & 0 deletions tests/end_to_end/README.md
@@ -0,0 +1,60 @@
# End-to-end Pytest Framework

This project provides integration tests for `openfl-workspace` models using the pytest framework.

## Test Structure

```
tests/end_to_end
├── models              # Central location for all model-related code for testing purposes
├── test_suites # Folder containing test files
├── utils # Folder containing helper files
├── conftest.py # Pytest framework configuration file
├── pytest.ini # Pytest initialisation file
└── README.md # Readme file
```

**Note:** The file `sample_tests.py` under `test_suites` acts as a reference for adding a new test case.

## Pre-requisites

1. Set up a virtual environment and install OpenFL by following the [online documentation](https://openfl.readthedocs.io/en/latest/get_started/installation.html).
2. Ensure that the OpenFL workspace (inside `openfl-workspace`) is present for the model being tested. If not, create it first (see the sketch below).
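A minimal sketch of creating a workspace manually, assuming the `fx` CLI from the OpenFL installation above; the template name matches a folder under `openfl-workspace`, and `~/my_workspace` is an arbitrary destination path:

```sh
# Create a workspace from the torch_cnn_mnist template
fx workspace create --prefix ~/my_workspace --template torch_cnn_mnist
```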

## Installation

To install the required dependencies into the virtual environment created above, run:

```sh
pip install -r test-requirements.txt
```

## Usage

### Running Tests

To run a specific test case, use the command below:

```sh
python -m pytest tests/end_to_end/test_suites/<test_case_filename> -m <marker> -s
```

**Note:** `-s` prints all logs to the screen; omit it if not required.

To modify the number of collaborators, the number of rounds to train, and/or the model name, use the parameters below (an example follows the list):
1. `--num_collaborators`
2. `--num_rounds`
3. `--model_name`
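For example, to run the Torch CNN MNIST suite for 10 rounds with 3 collaborators, mirroring the invocation used by the workflow above:

```sh
python -m pytest -s tests/end_to_end/test_suites/task_runner_tests.py \
  -m torch_cnn_mnist --num_rounds 10 --num_collaborators 3 --model_name torch_cnn_mnist
```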

### Output Structure

```
results
├── <workspace_name>    # Based on the workspace name provided during the test run.
├── results.xml         # Test results in JUnit XML format.
└── deployment.log      # Log file containing step-by-step test progress.
```
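The JUnit XML report can also be summarized locally, reproducing the workflow's "Print test summary" step (run from the repository root):

```sh
# Make repository-root modules importable for the helper script
export PYTHONPATH="$PYTHONPATH:."
python tests/end_to_end/utils/xml_helper.py
```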

## Contribution

See the [contribution guidelines](https://github.com/securefederatedai/openfl/blob/develop/CONTRIBUTING.md).