
[ACTION] Create Falco benchmark workflow #121

Open
3 of 5 tasks
Tracked by #117
rossf7 opened this issue Aug 28, 2024 · 11 comments

@rossf7 (Contributor) commented Aug 28, 2024

Part of Proposal-002 - Run the CNCF project benchmark tests as part of the automated pipeline

Task Description

This issue tracks the implementation part of proposal 2: create a GitHub Actions workflow in this repo that runs the 3 benchmark tests created by the Falco team.

The 3 manifest files can be found here: https://github.com/falcosecurity/cncf-green-review-testing/tree/main/benchmark-tests

See the Design Details section of proposal 2 for more detail on the benchmark workflow and jobs.

#118 is related: it covers the pipeline changes needed to call this benchmark workflow.

Tasks

  • Set the [benchmark_path](https://github.com/cncf-tags/green-reviews-tooling/blob/main/projects/projects.json#L6)
  • Add test steps to run 3 jobs that apply each manifest and wait 15 mins
  • Ensure that the workflow also deletes the Kubernetes resources at the end of the pipeline run
  • Update Proposal 2 with any changes
  • Add documentation
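A minimal sketch of what one of these benchmark jobs could do, assuming `kubectl` is available on the runner. The manifest base URL comes from the linked Falco repo and the 15-minute wait from the tasks above; the helper name `run_benchmark` and the injectable `runner`/`sleep` hooks are illustrative, not part of the proposal.

```python
import subprocess
import time

# Base URL of the Falco benchmark manifests (from the linked repo).
BASE = ("https://raw.githubusercontent.com/falcosecurity/"
        "cncf-green-review-testing/main/benchmark-tests")

def run_benchmark(manifest: str, duration_mins: int = 15,
                  runner=subprocess.run, sleep=time.sleep) -> None:
    """Apply one benchmark manifest, wait for the run, then clean up."""
    url = f"{BASE}/{manifest}"
    runner(["kubectl", "apply", "-f", url], check=True)
    sleep(duration_mins * 60)  # the issue specifies a 15 minute wait
    runner(["kubectl", "delete", "-f", url, "--ignore-not-found"], check=True)
```

The `runner`/`sleep` parameters exist only so the apply/wait/delete sequence can be exercised without a cluster; a real workflow step would call the function with its defaults.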
@rossf7 rossf7 changed the title Create Falco benchmark workflow [ACTION] Create Falco benchmark workflow Aug 28, 2024
@nikimanoledaki nikimanoledaki self-assigned this Sep 25, 2024
@locomundo (Contributor):

Let's work together on this @nikimanoledaki

@nikimanoledaki (Contributor):

Completing this should resolve this error: #122 (comment)

@nikimanoledaki (Contributor) commented Oct 8, 2024

TODO:

@nikimanoledaki (Contributor) commented Oct 11, 2024

Hi folks, @locomundo, @rossf7 👋

We've come across a missing feature on GitHub Actions 😬

Problem

A reusable workflow that is in a separate repository can only be referenced in the top level uses, defined right after the job name:

benchmark-job:
  uses: nikimanoledaki/cncf-green-review-testing/.github/workflows/workflow.yml@nm/add-bench-workflow

However, this top-level uses does not evaluate expressions. So it is not possible to set the benchmark path as ${{ inputs.benchmark_path }}. Error: the 'uses' attribute must be a path, a Docker image, or owner/repo@ref - failed workflow

It is possible to evaluate expressions when using a nested uses, like below:

benchmark-job:
  uses: jenseng/dynamic-uses@v1
  with:
    uses: ${{ inputs.benchmark_path }}

However, unfortunately, a nested uses cannot reference a workflow that is in a different repository. It fails with the error: invalid value workflow reference: references to workflows must be rooted in '.github/workflows' - failed workflow

I tried to set only part of it as an expression, but this is not possible either; see the error:

error parsing called workflow
".github/workflows/benchmark-pipeline.yaml"
-> "nikimanoledaki/cncf-green-review-testing/.github/workflows/workflow.yml@${{inputs.benchmark_path}}"
: failed to fetch workflow: reference to workflow should be either a valid branch, tag, or commit

I found multiple issues referencing this problem, so it seems to be a missing feature, e.g. https://github.com/orgs/community/discussions/9050

I'm going to look into some elegant workarounds and report back.

@locomundo (Contributor):

What if we just keep the benchmark workflows in the green reviews repository? The CNCF project repo would hold the k8s manifests that need to be applied for a benchmark test (in the case of Falco: stress-ng.yaml, redis.yaml, and falco-event-generator.yaml), and in the GR repo workflow we would kubectl apply -f them and delete them.
It would definitely simplify things and solve this, right?

Maybe the downside would be that the CNCF project maintainers would have less flexibility and would have to ask us if there are new tests to be included. But we could decide that each file in the directory they specify is a test, and then make the GitHub Action iterate over the files and run one test for each? Not sure if that's easy to do in GitHub Actions.

Does this make any sense?
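The "each file in the directory is a test" idea above can be sketched in a few lines; the function name `discover_tests` and the `*.yaml` naming convention are assumptions for illustration:

```python
from pathlib import Path

def discover_tests(directory: str) -> list[str]:
    """Treat every YAML manifest in the given directory as one benchmark test."""
    return sorted(str(p) for p in Path(directory).glob("*.yaml"))
```

The workflow would then run one apply/wait/delete cycle per returned file.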

@nikimanoledaki (Contributor) commented Oct 21, 2024

This makes sense! I think that we can keep the benchmark workflows in the Green Reviews repository and point to a script in the run step. This script can live in the Falco repository and apply, wait, and delete each of the tests. That would give Falco maintainers the ability to do additional checks, for example, to validate that they have reached the optimal kernel rate, etc. What do you think?

POC here: #124 -- @locomundo could you review and merge it if you agree?

@dipankardas011 (Contributor) commented Nov 14, 2024

[Proposal] to change the current Implementation

Description

We can update projects.json to this new schema. With this change we can pass the benchmark_gh_workflow one of these entries, and in jobs.benchmark_job it can perform kubectl apply -f . to start the benchmarking setup. After this we wait for some time, and once it's done we decommission the resources in the delete step, or in yet another job in sequence.

{
  "projects": [
    {
      "name": "falco",
      "organization": "falcosecurity",
      "project_configs": {
        "ebpf": {
          "benchmark_job": {
            "manifest_location": "https://raw.githubusercontent.com/falcosecurity/falco/main/.github/workflows/benchmark-ebpf.yml",
            "duration": {  // just added for no reason
              "unit": "minutes",
              "value": 10
            }
          }
        },
        "modern-ebpf": {// ...},
        "kmod": {// ...}
      }
    }
  ]
}

Changes are needed in script-trigger.sh and jobs.benchmark_jobs.

@nikimanoledaki @locomundo @rossf7

@rossf7 (Contributor, Author) commented Nov 14, 2024

with this change we can pass the benchmark_gh_workflow some of these and in the jobs.benchmark_job it can perform kubectl apply -f .

@dipankardas011 I'm not sure we're aligned here. I thought rather than running a workflow we would store the manifest URL in projects.json?

This is less flexible, as a workflow can contain logic, but it's simpler. We just kubectl apply -f rather than having to fetch the workflow, inject it, and run it dynamically. Later we could even add support for running shell scripts based on the file extension.

@nikimanoledaki @locomundo I realise this is quite a big change from the proposal so please chime in if I got this wrong or you disagree.

The other worry I have is that for Falco we run the same benchmarks for all 3 configurations. Maybe a future CNCF project will need per-config benchmarks, but it's hard to predict those requirements. So I would keep projects.json simple for now.

// just added for no reason

😂 We don't need it for this change but I like this idea! Rather than hardcoding the 15 mins for Falco in the pipeline it makes it clear how long the benchmarks run for.

I took your structure and adapted it with these changes. WDYT?

{
    "projects": [
        {
            "name": "falco",
            "organization": "falcosecurity",
            "benchmark": {
                "manifest_url": "https://raw.githubusercontent.com/falcosecurity/cncf-green-review-testing/refs/heads/main/benchmark-tests/deployments.yaml",
                "duration_mins": 15
            },
            "configs": [
                "ebpf",
                "modern-ebpf",
                "kmod"
            ]
        }
    ]
}
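A pipeline step could consume this schema as follows; this Python lookup is purely illustrative (the actual pipeline might use jq or a shell one-liner instead), with the schema inlined verbatim from the example above:

```python
import json

# Proposed projects.json schema, inlined for illustration.
PROJECTS_JSON = """
{
    "projects": [
        {
            "name": "falco",
            "organization": "falcosecurity",
            "benchmark": {
                "manifest_url": "https://raw.githubusercontent.com/falcosecurity/cncf-green-review-testing/refs/heads/main/benchmark-tests/deployments.yaml",
                "duration_mins": 15
            },
            "configs": ["ebpf", "modern-ebpf", "kmod"]
        }
    ]
}
"""

# Look up the entry for one project and pull out the benchmark fields.
falco = next(p for p in json.loads(PROJECTS_JSON)["projects"]
             if p["name"] == "falco")
manifest_url = falco["benchmark"]["manifest_url"]
duration_mins = falco["benchmark"]["duration_mins"]
```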

@dipankardas011 (Contributor) commented Nov 15, 2024

@dipankardas011 I'm not sure we're aligned here. I thought rather than running a workflow we would store the manifest URL in projects.json?

Yes, I had used the manifest location URL 🤔

I think it is less obvious from the JSON what kind of file it should be:

{
  "projects": [
    {
      "name": "falco",
      "organization": "falcosecurity",
      "benchmark": {
-       "manifest_url": "https://raw.githubusercontent.com/falcosecurity/cncf-green-review-testing/refs/heads/main/benchmark-tests/deployments.yaml",
+       "k8s_manifest_url": "https://raw.githubusercontent.com/falcosecurity/cncf-green-review-testing/refs/heads/main/benchmark-tests/deployments.yaml",
        "duration_mins": 15
      },
      "configs": [
        "ebpf",
        "modern-ebpf",
        "kmod"
      ]
    }
  ]
}

We could even choose this format instead:

- https://raw.githubusercontent.com/falcosecurity/cncf-green-review-testing/refs/heads/main/benchmark-tests/deployments.yaml
+ github.com/falcosecurity/cncf-green-review-testing@main:benchmark-tests/deployments.yaml

Yet another approach: instead of telling users to assemble the URL, we can give them fields which they fill in, and we assemble the URL independently:

{
  "projects": [
    {
      "name": "falco",
      "organization": "falcosecurity",
      "benchmark": {
-        "manifest_url": "https://raw.githubusercontent.com/falcosecurity/cncf-green-review-testing/refs/heads/main/benchmark-tests/deployments.yaml",
+        "manifest_ref": {
+          "git_provider": "github",
+          "organization": "falcosecurity",
+          "repository": "cncf-green-review-testing",
+          "branch": "main",
+          "path": "benchmark-tests/deployments.yaml"
+       },
        "duration_mins": 15
      },
      "configs": [
        "ebpf",
        "modern-ebpf",
        "kmod"
      ]
    }
  ]
}

wdyt
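Assembling the URL from the manifest_ref fields above would be a few lines of code. This sketch mirrors the field names from the example and only handles the github provider; the function name is hypothetical:

```python
def raw_url(ref: dict) -> str:
    """Assemble a raw-content URL from the proposed manifest_ref fields."""
    if ref["git_provider"] != "github":
        raise ValueError("only the github provider is sketched here")
    return ("https://raw.githubusercontent.com/"
            f"{ref['organization']}/{ref['repository']}/"
            f"refs/heads/{ref['branch']}/{ref['path']}")
```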

@rossf7 (Contributor, Author) commented Nov 15, 2024

@dipankardas011 I'm fine with renaming to k8s_manifest_url. I don't think the other changes are needed.

@dipankardas011 (Contributor):

Updated the schema
