Pytest framework implementation with Task Runner GitHub Workflow #1133

Merged · 20 commits · Nov 12, 2024
104 changes: 104 additions & 0 deletions .github/workflows/task_runner_e2e.yml
@@ -0,0 +1,104 @@
#---------------------------------------------------------------------------
# Workflow to run Task Runner end-to-end tests
# Authors - Noopur, Payal Chaurasiya
#---------------------------------------------------------------------------
name: Task Runner E2E

on:
  schedule:
    - cron: '0 0 * * *' # Run every day at midnight
  workflow_dispatch:
    inputs:
      num_rounds:
        description: 'Number of rounds to train'
        required: false
        default: "5"
        type: string
      num_collaborators:
        description: 'Number of collaborators'
        required: false
        default: "2"
        type: string

permissions:
  contents: read

# Environment variables common to all the jobs
env:
  NUM_ROUNDS: ${{ inputs.num_rounds || '5' }}
  NUM_COLLABORATORS: ${{ inputs.num_collaborators || '2' }}

jobs:
  test_run:
    name: test
    runs-on: ubuntu-22.04

    strategy:
      matrix:
        # There are open issues for some of the models, so excluding them for now:
        # 1. https://github.com/securefederatedai/openfl/issues/1126
        # 2. https://github.com/securefederatedai/openfl/issues/1127
        # model_name: [ "torch_cnn_mnist", "keras_cnn_mnist", "torch_cnn_histology", "tf_2dunet", "tf_cnn_histology" ]
        model_name: [ "torch_cnn_mnist", "keras_cnn_mnist" ]
        python_version: [ "3.8", "3.9", "3.10" ]
      fail-fast: false # do not immediately fail if one of the combinations fails

    env:
      MODEL_NAME: ${{ matrix.model_name }}
      PYTHON_VERSION: ${{ matrix.python_version }}

    steps:
      - name: Checkout OpenFL repository
        id: checkout_openfl
        uses: actions/checkout@v4
        with:
          fetch-depth: 2 # needed for detecting changes
          submodules: "true"
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Set up Python
        id: setup_python
        uses: actions/setup-python@v3
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        id: install_dependencies
        run: |
          python -m pip install --upgrade pip
          pip install .
          pip install -r test-requirements.txt

      - name: Add runner IP to /etc/hosts
        id: add_runner_ip
        run: |
          echo "127.0.0.1 aggregator" | sudo tee -a /etc/hosts
          echo "Added runner IP to /etc/hosts"

      - name: Run Task Runner E2E tests
        id: run_task_runner_tests
        run: |
          pytest -v tests/end_to_end/test_suites/task_runner_tests.py -m ${{ env.MODEL_NAME }} -s --num_rounds $NUM_ROUNDS --num_collaborators $NUM_COLLABORATORS --model_name ${{ env.MODEL_NAME }}
          echo "Task runner end-to-end test run completed"
        env:
          NO_PROXY: localhost,127.0.0.1,aggregator

      - name: Print test summary # Print the test summary only if the tests were run
        id: print_test_summary
        if: steps.run_task_runner_tests.outcome == 'success' || steps.run_task_runner_tests.outcome == 'failure'
        run: |
          python tests/end_to_end/utils/xml_helper.py
          echo "Test summary printed"

      - name: Tar files # Tar the test results only if the tests were run
        id: tar_files
        if: steps.run_task_runner_tests.outcome == 'success' || steps.run_task_runner_tests.outcome == 'failure'
        run: tar -cvf result.tar results

      - name: Upload Artifacts # Upload the test results only if the tar was created
        id: upload_artifacts
        uses: actions/upload-artifact@v4
        if: steps.tar_files.outcome == 'success'
        with:
          name: task_runner_${{ env.MODEL_NAME }}_python${{ env.PYTHON_VERSION }}_${{ github.run_id }}
          path: result.tar
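
Beyond the nightly cron schedule, the `workflow_dispatch` inputs above allow manual runs with custom values. As a minimal sketch (assuming the GitHub CLI `gh` is installed and authenticated against the repository), a manual dispatch might look like:

```sh
# Manually trigger the workflow with non-default inputs via the GitHub CLI.
# The input names match the workflow_dispatch inputs declared above.
gh workflow run "Task Runner E2E" -f num_rounds=10 -f num_collaborators=3
```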
1 change: 1 addition & 0 deletions test-requirements.txt
@@ -1,3 +1,4 @@
pytest==8.3.3
pytest-asyncio==0.24.0
pytest-mock==3.14.0
ruamel.yaml
54 changes: 54 additions & 0 deletions tests/end_to_end/README.md
@@ -0,0 +1,54 @@
# Task Runner End-to-End Tests

This project is a pytest-based end-to-end test workspace for OpenFL's Task Runner. It bundles model-specific code and test suites, and is structured to facilitate the development, testing, and validation of federated learning models.

## Project Structure

```
end_to_end
├── models          # Central location for all model-related code, for testing purposes
├── test_suites     # Folder containing test files
├── utils           # Folder containing helper files
├── __init__.py     # Marks the test directory as a Python package
├── conftest.py     # Pytest framework configuration file
├── pytest.ini      # Pytest initialisation file
└── README.md       # This readme file
```

## Pre-requisites

Set up a virtual environment and install OpenFL by following the [online documentation](https://openfl.readthedocs.io/en/latest/get_started/installation.html).
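
For example, a minimal setup might look like the following (a sketch assuming a pip-based install; see the linked guide for the authoritative steps):

```sh
# Create and activate a virtual environment, then install OpenFL from PyPI.
python3 -m venv ~/openfl-venv
source ~/openfl-venv/bin/activate
pip install --upgrade pip
pip install openfl
```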

## Installation

To install the required dependencies in the virtual environment created above, run:

```sh
pip install -r test-requirements.txt
```

## Usage

### Running Tests

To run all the test cases under `test_suites`, use the following command:

```sh
python -m pytest -s
```

To run a specific test case, use the command below:

```sh
python -m pytest test_suites/<test_case_filename> -m <marker> -s
```

**Note:** the `-s` flag ensures all logs are printed on screen; omit it if not required.
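
For example, to mirror the CI invocation from the workflow above for a single model (the marker name and option values are taken from that workflow; adjust them to your setup):

```sh
# Run the torch_cnn_mnist suite with the same options the CI workflow passes.
python -m pytest -v test_suites/task_runner_tests.py -m torch_cnn_mnist -s \
    --num_rounds 5 --num_collaborators 2 --model_name torch_cnn_mnist
```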

### Output Structure

```
end_to_end
├── results
    ├── <workspace_name>    # Based on the workspace name provided during the test run
    ├── results.xml         # Output file in JUnit XML format
    ├── deployment.log      # Log file containing step-by-step test progress
```
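
The summary printed by the CI workflow's "Print test summary" step can be reproduced locally once `results.xml` exists (path as invoked from the repository root):

```sh
# Summarize the JUnit XML results, as the CI workflow does after a test run.
python tests/end_to_end/utils/xml_helper.py
```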

## Contribution
Please ensure that you have tested your changes thoroughly before submitting a pull request.

## License
This project is licensed under [Apache License Version 2.0](LICENSE). By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
3 changes: 3 additions & 0 deletions tests/end_to_end/__init__.py
@@ -0,0 +1,3 @@
# Copyright (C) 2024-2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""Tests package."""