Merge branch 'securefederatedai:develop' into 12-Nov-24
Showing 16 changed files with 1,411 additions and 1 deletion.
@@ -0,0 +1,95 @@

```yaml
#---------------------------------------------------------------------------
# Workflow to run Task Runner end to end tests
# Authors - Noopur, Payal Chaurasiya
#---------------------------------------------------------------------------
name: Task Runner E2E

on:
  schedule:
    - cron: '0 0 * * *' # Run every day at midnight
  workflow_dispatch:
    inputs:
      num_rounds:
        description: 'Number of rounds to train'
        required: false
        default: "5"
        type: string
      num_collaborators:
        description: 'Number of collaborators'
        required: false
        default: "2"
        type: string

permissions:
  contents: read

# Environment variables common for all the jobs
env:
  NUM_ROUNDS: ${{ inputs.num_rounds || '5' }}
  NUM_COLLABORATORS: ${{ inputs.num_collaborators || '2' }}

jobs:
  test_run:
    name: test
    runs-on: ubuntu-22.04
    timeout-minutes: 120 # 2 hours
    strategy:
      matrix:
        # There are open issues for some of the models, so excluding them for now:
        # model_name: [ "torch_cnn_mnist", "keras_cnn_mnist", "torch_cnn_histology" ]
        model_name: [ "torch_cnn_mnist", "keras_cnn_mnist" ]
        python_version: [ "3.8", "3.9", "3.10" ]
      fail-fast: false # do not immediately fail if one of the combinations fails

    env:
      MODEL_NAME: ${{ matrix.model_name }}
      PYTHON_VERSION: ${{ matrix.python_version }}

    steps:
      - name: Checkout OpenFL repository
        id: checkout_openfl
        uses: actions/[email protected]
        with:
          fetch-depth: 2 # needed for detecting changes
          submodules: "true"
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Set up Python
        id: setup_python
        uses: actions/setup-python@v3
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        id: install_dependencies
        run: |
          python -m pip install --upgrade pip
          pip install .
          pip install -r test-requirements.txt

      - name: Run Task Runner E2E tests
        id: run_task_runner_tests
        run: |
          python -m pytest -s tests/end_to_end/test_suites/task_runner_tests.py -m ${{ env.MODEL_NAME }} --num_rounds $NUM_ROUNDS --num_collaborators $NUM_COLLABORATORS --model_name ${{ env.MODEL_NAME }}
          echo "Task runner end to end test run completed"

      - name: Print test summary # Print the test summary only if the tests were run
        id: print_test_summary
        if: steps.run_task_runner_tests.outcome == 'success' || steps.run_task_runner_tests.outcome == 'failure'
        run: |
          export PYTHONPATH="$PYTHONPATH:."
          python tests/end_to_end/utils/xml_helper.py
          echo "Test summary printed"

      - name: Tar files # Tar the test results only if the tests were run
        id: tar_files
        if: steps.run_task_runner_tests.outcome == 'success' || steps.run_task_runner_tests.outcome == 'failure'
        run: tar -cvf result.tar results

      - name: Upload Artifacts # Upload the test results only if the tar was created
        id: upload_artifacts
        uses: actions/upload-artifact@v4
        if: steps.tar_files.outcome == 'success'
        with:
          name: task_runner_${{ env.MODEL_NAME }}_python${{ env.PYTHON_VERSION }}_${{ github.run_id }}
          path: result.tar
```
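The summary, tar, and upload steps are gated on the test step's `outcome`, so results are packaged even when the tests fail. As a local dry-run of the packaging step (the `results` contents below are invented for illustration; the real directory is produced by the test run):

```shell
# Stand-in for the workflow's "Tar files" step, run locally.
# The results directory and its file are fabricated for illustration.
mkdir -p results
printf '<testsuite tests="1" failures="0"/>\n' > results/results.xml
tar -cvf result.tar results   # same command the workflow runs
tar -tf result.tar            # list archive contents to verify
```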
```diff
@@ -15,3 +15,4 @@ venv/*
 .eggs
 eggs/*
 *.pyi
+results/*
```
```diff
@@ -1,3 +1,4 @@
+lxml==5.3.0
 pytest==8.3.3
 pytest-asyncio==0.24.0
 pytest-mock==3.14.0
```
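The `lxml` pin is new here and plausibly supports XML handling in the test utilities: the workflow's summary step runs `tests/end_to_end/utils/xml_helper.py` against the JUnit report. That script's source is not shown in this diff, so the following is only a sketch of the idea, parsing a JUnit-style `results.xml` and tallying outcomes. The report content is invented; the stdlib `xml.etree.ElementTree` is used so the sketch runs without third-party packages (`lxml.etree` exposes a largely compatible API).

```python
# Sketch only: the real xml_helper.py is not shown in this diff.
# Stdlib parser used for portability; the project itself pins lxml.
import xml.etree.ElementTree as ET

# Invented JUnit-style report; the real one is written by pytest
# under results/results.xml.
junit_xml = b"""
<testsuite name="task_runner" tests="2" failures="1" errors="0" time="12.3">
  <testcase classname="e2e" name="test_train" time="10.0"/>
  <testcase classname="e2e" name="test_convergence" time="2.3">
    <failure message="accuracy below threshold"/>
  </testcase>
</testsuite>
"""

suite = ET.fromstring(junit_xml)
total = int(suite.get("tests"))
failed = int(suite.get("failures")) + int(suite.get("errors"))
passed = total - failed
print(f"ran {total}, failed {failed}, passed {passed}")
```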
@@ -0,0 +1,60 @@

# End-to-end Pytest Framework

This project provides integration testing of `openfl-workspace` using the pytest framework.

## Test Structure

```
tests/end_to_end
├── models          # Central location for all model-related code for testing purposes
├── test_suites     # Folder containing test files
├── utils           # Folder containing helper files
├── conftest.py     # Pytest framework configuration file
├── pytest.ini      # Pytest initialisation file
└── README.md       # Readme file
```

**Note:** The file `sample_tests.py` under `test_suites` acts as a reference for adding a new test case.

## Prerequisites

1. Set up a virtual environment and install OpenFL by following the [online documentation](https://openfl.readthedocs.io/en/latest/get_started/installation.html).
2. Ensure that the OpenFL workspace (inside `openfl-workspace`) exists for the model being tested. If not, create it first.

## Installation

To install the required dependencies in the virtual environment above, run:

```sh
pip install -r test-requirements.txt
```

## Usage

### Running Tests

To run a specific test case, use the command below:

```sh
python -m pytest tests/end_to_end/test_suites/<test_case_filename> -m <marker> -s
```

**Note:** `-m <marker>` selects tests by pytest marker; `-s` prints all logs to the screen and can be omitted if not required.

To modify the number of collaborators, the number of rounds to train, and/or the model name, use the parameters below:

1. `--num_collaborators`
2. `--num_rounds`
3. `--model_name`

### Output Structure

```
results
├── <workspace_name>   # Based on the workspace name provided during the test run.
├── results.xml        # Output file in JUnit XML format.
└── deployment.log     # Log file containing step-by-step test progress.
```

## Contribution

See the contribution guidelines: https://github.com/securefederatedai/openfl/blob/develop/CONTRIBUTING.md
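The marker-based selection described above is plain pytest behavior. Below is a self-contained, generic illustration (the marker names and tests are invented, not taken from the OpenFL suite): it writes a tiny test file and `pytest.ini` to a temporary directory, then runs pytest with a marker filter, which deselects the non-matching test.

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# Invented test file with two marked tests; only one matches the filter below.
test_src = textwrap.dedent("""
    import pytest

    @pytest.mark.torch_cnn_mnist
    def test_model_a():
        assert True

    @pytest.mark.keras_cnn_mnist
    def test_model_b():
        assert True
""")
# Register the markers so pytest does not warn about unknown marks.
ini_src = "[pytest]\nmarkers =\n    torch_cnn_mnist\n    keras_cnn_mnist\n"

with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "test_sample.py"), "w") as f:
        f.write(test_src)
    with open(os.path.join(tmp, "pytest.ini"), "w") as f:
        f.write(ini_src)
    # Equivalent of: python -m pytest <dir> -m <marker>
    result = subprocess.run(
        [sys.executable, "-m", "pytest", "-m", "torch_cnn_mnist", tmp],
        capture_output=True, text=True,
    )
    print(result.stdout)
```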