GitHub Action for Continuous Benchmarking with zBench #47

Open · hendriknielaender opened this issue Feb 7, 2024 · 5 comments
Labels: enhancement (New feature or request)

@hendriknielaender (Owner) commented:

Summary

Introduce a GitHub Action specifically designed for continuous benchmarking using zBench. This action will automate the process of running benchmarks on every commit/pull request, collecting performance data, and visualizing the results over time. This feature aims to help maintainers and contributors monitor and improve the performance of their Zig projects consistently.

Background

Performance is a critical aspect of software development, and zBench is a robust benchmarking tool for the Zig programming language. However, there is currently no automated way to monitor benchmark results over the development cycle. Integrating zBench with GitHub Actions fills this gap, making it easier to track performance changes and prevent regressions.

Proposed Features

  • Automated Benchmark Runs: Automatically run zBench benchmarks on every commit to the main branch and on every pull request.
  • Results Visualization: Store benchmark results in a GitHub Pages branch and generate visual charts to track performance over time.
  • Regression Alerts: Compare new benchmark results with historical data. If performance degrades beyond a specified threshold, raise alerts through commit comments or fail the workflow, drawing immediate attention to potential issues (a sketch of this comparison logic follows this list).
  • Flexible Configuration: Allow configuration of benchmark commands, comparison thresholds, and other settings to fit the project's needs.
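
To make the regression-alert idea concrete, here is a minimal sketch of the comparison step in TypeScript (a common implementation language for GitHub Actions). The BenchmarkResult shape, field names, and threshold semantics are assumptions for this sketch, not an existing zBench or action API:

// Hypothetical result shape; zBench's actual JSON export may differ.
interface BenchmarkResult {
  name: string;
  meanNs: number; // mean runtime in nanoseconds
}

// Flag benchmarks whose mean runtime grew beyond `threshold` times the
// historical baseline (e.g. threshold = 1.5 means "fail at 50% slower").
function findRegressions(
  baseline: BenchmarkResult[],
  current: BenchmarkResult[],
  threshold: number,
): string[] {
  const base = new Map(baseline.map((r) => [r.name, r.meanNs]));
  return current
    .filter((r) => {
      const old = base.get(r.name);
      return old !== undefined && r.meanNs > old * threshold;
    })
    .map((r) => r.name);
}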

Implementation Notes

  • The GitHub Action should be containerized to ensure consistency across runs.
  • Benchmark data should be stored in a structured format (e.g., JSON) to facilitate easy parsing and visualization (a sketch of one possible record shape follows this list).
  • The action should include templates or easy setup options for users to integrate it into their workflows with minimal configuration.
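
As an illustration of the structured-format note above, each run could append one record like the following to a results file on the GitHub Pages branch. This is a sketch only; the field names are assumptions, not an existing zBench schema:

// Hypothetical per-run record; field names are illustrative.
interface BenchmarkRun {
  commit: string;       // SHA the benchmarks ran against
  timestamp: string;    // ISO 8601 time of the run
  results: Array<{
    name: string;       // benchmark name
    meanNs: number;     // mean runtime in nanoseconds
    stddevNs: number;   // standard deviation in nanoseconds
    iterations: number; // measured iteration count
  }>;
}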

Potential Challenges

  • Ensuring the action is easy to set up and use, minimizing barriers to adoption.
  • Handling large benchmark datasets and generating visualizations efficiently.
  • Providing meaningful and actionable alerts without overwhelming users with false positives.

Setup

  1. Add the GitHub Action to Your Workflow: Add a workflow file that uses the action to your .github/workflows directory. An example workflow file is provided below.
  2. Configure GitHub Pages: Ensure that GitHub Pages is set up for your repository to host the benchmark results.
  3. Adjust Action Settings as Needed: Customize the action settings, such as the performance regression threshold, to suit your project's needs.

Example Workflow

Create a file .github/workflows/zbench-benchmark.yml with the following content:

name: zBench Benchmark

on:
  push:
    branches: [main]
  pull_request:

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - name: Setup Zig
      uses: goto-bus-stop/setup-zig@v2  # one option for installing Zig on the runner
    - name: Run zBench benchmarks
      run: zig build bench  # placeholder: replace with the command that runs your benchmarks
    - name: zBench Continuous Benchmarking
      uses: your-username/your-action-repo@main
      # Uncomment and fill in once the action exists; input names are illustrative:
      # with:
      #   gh-pages-branch: gh-pages
      #   alert-threshold: "150%"

Conclusion

Integrating continuous benchmarking into the development process can significantly enhance the performance awareness and optimization efforts of Zig projects. By developing a dedicated GitHub Action for zBench, we can automate this process, providing developers with the tools they need to monitor and improve performance continuously.

@hendriknielaender added the enhancement label on Feb 7, 2024
@bens (Collaborator) commented Mar 27, 2024:

One possible continuous benchmarking system that might be suitable is https://bencher.dev/. I've been poking around trying to get it running as a self-hosted service on a VPS; I'll see how it goes. It also has a paid cloud service.

@hendriknielaender (Owner, Author) commented:

That looks cool, I'll have a look at the docs 👍

@hendriknielaender (Owner, Author) commented:

I think what we can do here is enhance the current JSON export to support different JSON benchmark report formats, like the BMF JSON from https://bencher.dev/docs/explanation/adapters/#-json.
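
For reference, a zBench result translated into BMF might look roughly like the sketch below. The benchmark name and numbers are made up, and the exact keys should be checked against the Bencher docs linked above:

// Illustrative BMF-style report: benchmark names map to measures such as
// "latency", each with a value and optional lower/upper bounds (here in ns).
const bmfReport = {
  bubble_sort: {
    latency: {
      value: 88_450.0,       // e.g. mean runtime
      lower_value: 85_120.0, // e.g. mean - stddev
      upper_value: 91_780.0, // e.g. mean + stddev
    },
  },
};

console.log(JSON.stringify(bmfReport, null, 2));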

The benchmark-action GitHub Action also supports custom JSON.
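
For comparison, benchmark-action's custom JSON is a flat array of named values. A sketch with made-up numbers, following that action's documented "customSmallerIsBetter" shape:

// Illustrative export for benchmark-action's custom JSON tools.
const customReport = [
  { name: "bubble_sort", unit: "ns", value: 88_450, range: "±3.7%" },
];

console.log(JSON.stringify(customReport, null, 2));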

Maybe we can also think about contributing to these projects so that they support our reports natively, if there are clear benefits.

@hendriknielaender (Owner, Author) commented:

Another one I found, though they do not offer a JSON spec: https://codspeed.io/

@bens (Collaborator) commented May 4, 2024:

It looks like CodSpeed currently only supports Python, Rust, and Node, but I guess they'll add more languages as time goes on.

The way they benchmark seems a bit weird to me: they appear to focus exclusively on CPU time, so something that hardly uses the CPU but spends lots of unnecessary time waiting on IO would benchmark better than something that does a little more work on the CPU but reduces the IO load and the overall run time. Unless I've misunderstood, of course.
