
Performance tests #597

Open · wants to merge 18 commits into master
Conversation

@renatav (Collaborator) commented Mar 4, 2025

Description

Integrate benchmark tests into CI:

  • We now run benchmark tests after every push. A JSON configuration file can be used to set different performance thresholds when necessary. This is especially useful when new features are added that might slow down the system because they do more work. The default threshold is 10%.
  • Every run compares the current results with the master branch's performance. This meant that some custom download code was necessary, as we want to download from a different branch.
  • This custom JSON file also helps us keep track of why and when we needed to change these thresholds.
  • When changes are merged into the master branch, we generate new benchmark results and upload the corresponding artifacts.
  • Add scheduled builds set to run every 90 days to prevent our artifacts from being automatically deleted due to GitHub's cleanup policies.
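As a rough illustration of the per-branch threshold lookup described above (a minimal sketch; the function name `threshold_for_branch` and the inline config are hypothetical, not code from this PR):

```python
import json

# Default regression threshold in percent, as described in the PR.
DEFAULT_THRESHOLD = 10

def threshold_for_branch(config_text: str, branch: str) -> int:
    """Return the allowed slowdown (in percent) for a branch.

    Branches listed in the JSON config override the default; every
    other branch falls back to DEFAULT_THRESHOLD.
    """
    overrides = json.loads(config_text) if config_text.strip() else {}
    return overrides.get(branch, DEFAULT_THRESHOLD)

config = '{"renatav/performance-tests": 15}'
print(threshold_for_branch(config, "renatav/performance-tests"))  # 15
print(threshold_for_branch(config, "some/other-branch"))          # 10
```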

There are probably many ways in which we can improve this whole logic, but I think this is a good starting point, and we'll have some basic performance tests. I think that we can close the issue and open new ones if we think of any improvements.

Closes #571

@renatav renatav force-pushed the renatav/performance-tests branch from 14dbb31 to 846c842 Compare March 5, 2025 01:01
@renatav renatav force-pushed the renatav/performance-tests branch from 9d2ccb6 to d7af1ff Compare March 5, 2025 07:00
@renatav renatav force-pushed the renatav/performance-tests branch from 1c8604d to 1845b19 Compare March 5, 2025 12:19
@renatav renatav force-pushed the renatav/performance-tests branch from 239cf57 to a648792 Compare March 5, 2025 12:38
@renatav renatav marked this pull request as ready for review March 5, 2025 15:19
@renatav renatav requested review from sale3 and n-dusan March 5, 2025 15:19
@renatav renatav self-assigned this Mar 5, 2025
@n-dusan (Contributor) left a comment

Looks awesome. Left some minor suggestions

Comment on lines 28 to 32
```json
{
  "feature/cool-feature": 20,
  "bugfix/urgent-fix": 15
}
```
Contributor:
suggestion: A sentence or two explaining what the threshold number means would be useful

Collaborator (Author):
I removed this file and merged everything with the testing docs that I asked Cal to write

@@ -0,0 +1,3 @@
```json
{
  "renatav/performance-tests": 15
}
```
Contributor:
question: can we bring this threshold to be lower? Is 15 times slower before it errors out too much?

Collaborator (Author):
That's 15%. The default is 10%; this is just an example of how to override the default value.
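A minimal sketch of what this percentage comparison amounts to (the function name `exceeds_threshold` is hypothetical, not code from this PR):

```python
def exceeds_threshold(master_seconds: float, current_seconds: float,
                      threshold_percent: float) -> bool:
    """True if the current run is more than threshold_percent slower than master."""
    slowdown_percent = (current_seconds - master_seconds) / master_seconds * 100
    return slowdown_percent > threshold_percent

print(exceeds_threshold(1.0, 1.12, 10))  # True: ~12% slower than master, over the 10% default
print(exceeds_threshold(1.0, 1.12, 15))  # False: within the 15% override
```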


```yaml
env:
  ARTIFACT_NAME: benchmark-results${{ matrix.python-version }}
  BRANCH_NAME: 'master' # needs to be changed to master before merging
```
Contributor:
Suggested change
```diff
- BRANCH_NAME: 'master' # needs to be changed to master before merging
+ BRANCH_NAME: 'master'
```

Comment on lines 90 to 118
```yaml
script: |
  const artifacts = await github.rest.actions.listArtifactsForRepo({
    owner: context.repo.owner,
    repo: context.repo.repo,
  });
  const filteredArtifacts = artifacts.data.artifacts.filter(artifact => artifact.name === process.env.ARTIFACT_NAME);
  let latestArtifact = null;

  for (const artifact of filteredArtifacts) {
    const run = await github.rest.actions.getWorkflowRun({
      owner: context.repo.owner,
      repo: context.repo.repo,
      run_id: artifact.workflow_run.id,
    });

    if (run.data.head_branch === process.env.BRANCH_NAME) {
      if (!latestArtifact || new Date(artifact.created_at) > new Date(latestArtifact.created_at)) {
        latestArtifact = artifact;
      }
    }
  }

  if (latestArtifact) {
    console.log(`Found latest artifact: ${latestArtifact.id}`);
    core.setOutput('artifact_id', latestArtifact.id.toString());
  } else {
    console.log('No matching artifacts found.');
    core.setOutput('artifact_id', '');
  }
```
Contributor:
question: could we check this javascript (and other) code into the .github directory (e.g. .github/scripts) and have GitHub Actions require the file? The benefit is that we'd have syntax highlighting and it would be easier to debug [1].

[1] - https://github.com/orgs/community/discussions/26248
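For reference, the pattern the linked discussion describes looks roughly like this when applied here (a sketch; the script path `.github/scripts/latest-artifact.js` is a hypothetical example, not a file in this PR):

```yaml
- name: Find latest master artifact
  uses: actions/github-script@v7
  with:
    script: |
      const findLatestArtifact = require('./.github/scripts/latest-artifact.js');
      return await findLatestArtifact({ github, context, core });
```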

@renatav renatav force-pushed the renatav/performance-tests branch from a1fd500 to a7f863f Compare March 12, 2025 05:29
@renatav renatav requested a review from n-dusan March 12, 2025 05:32
@renatav renatav force-pushed the renatav/performance-tests branch from 19a4aca to a7f863f Compare March 12, 2025 06:27
Comment on lines 59 to 70
```yaml
- name: Run pre-commit and test with pytest
# - name: Run pre-commit and test with pytest
#   run: |
#     pre-commit run --all-files
#     pytest taf/tests
```
Contributor:
question: is running pytest taf/tests/ and pre-commit supposed to be commented out?

```yaml
# Use a step to extract the branch name from github.ref
- name: Extract Branch Name
  id: extract_branch
  run: echo "::set-output name=branch::$(echo ${GITHUB_REF#refs/heads/})"
```
Contributor:
suggestion: set-output is deprecated. Could we please use the new approach? The deprecation notice is linked at [1].

[1] - https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
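With the replacement `$GITHUB_OUTPUT` mechanism, the step above would look roughly like this:

```yaml
- name: Extract Branch Name
  id: extract_branch
  run: echo "branch=${GITHUB_REF#refs/heads/}" >> "$GITHUB_OUTPUT"
```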

## Artifact Handling

- **Uploading Artifacts**: After benchmarks are run, the resulting `0001_output.json` file is uploaded as an artifact to GitHub Actions, allowing it to be used in future benchmark comparisons.
Contributor:
suggestion: adding link to the already uploaded artifacts (from some older GitHub Actions build as an example) would be very useful here
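For illustration, an upload step along the lines the bullet above describes might look like this (a sketch using `actions/upload-artifact`; the exact step is not shown in this excerpt):

```yaml
- name: Upload benchmark results
  uses: actions/upload-artifact@v4
  with:
    name: benchmark-results${{ matrix.python-version }}
    path: 0001_output.json
```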

Successfully merging this pull request may close these issues.

Implement performance regression updater tests
3 participants