
Move proposal 2 to docs/proposals
nikimanoledaki committed Jun 28, 2024
1 parent 9e5daa4 commit 599bf88
Showing 5 changed files with 26 additions and 53 deletions.
File renamed without changes
File renamed without changes
79 changes: 26 additions & 53 deletions docs/proposals/proposal-002-run.md
@@ -32,7 +32,7 @@ Draft
- [Design Details](#design-details)
- [Definitions](#definitions)
- [How the project workflow calls the benchmark job](#how-the-project-workflow-calls-the-benchmark-job)
- [Running \& Defining Benchmark Jobs](#running--defining-benchmark-jobs)
- [Running \& Defining Benchmark Workflow](#running--defining-benchmark-workflow)
- [Drawbacks (Optional)](#drawbacks-optional)
- [Alternatives](#alternatives)
- [Infrastructure Needed (Optional)](#infrastructure-needed-optional)
@@ -185,7 +185,7 @@ The Green Reviews automated pipeline relies on putting together different reusable

There are different components defined here and shown in the following diagram.

![Green Reviews pipeline components](green-reviews-pipeline-components.png "Green Reviews pipeline components")
![Green Reviews pipeline components](diagrams/green-reviews-pipeline-components.png "Green Reviews pipeline components")

1. **Green Reviews pipeline**: the Continuous Integration pipeline which deploys a CNCF project to a test cluster, runs a set of benchmarks while measuring carbon emissions and stores the results. It is implemented by the workflows listed below.
2. **Cron workflow**: This refers to the initial GitHub Actions workflow (described in proposal 1), which dispatches a project workflow (see next definition); a hedged sketch of such a dispatch follows this list.
@@ -194,63 +194,33 @@ There are different components defined here and shown in the following diagram.
5. **[new] Benchmark workflow**: This is a separate manifest containing the Benchmark Instructions.
6. **[new] Benchmark Instructions**: This refers to the actual benchmark test steps that should run on the cluster. These are usually related to the tool's Functional Unit as defined by the SCI. They are described in Running & Defining Benchmark Workflow.
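
For illustration, a minimal sketch of how the cron workflow might dispatch a project workflow. The file names, schedule, and use of `gh workflow run` are assumptions for this sketch, not details fixed by the proposal:

```yaml
# cron-workflow.yaml: hypothetical dispatcher, names are illustrative
name: cron-workflow
on:
  schedule:
    - cron: "0 0 * * 1" # assumed weekly trigger
jobs:
  dispatch_project_workflow:
    runs-on: ubuntu-latest
    steps:
      # dispatch the project workflow of one CNCF project
      # (requires that workflow to declare a workflow_dispatch trigger)
      - run: gh workflow run falco-ebpf-project-workflow.yaml --ref main
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```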


### How the project workflow calls the benchmark job

When the project workflow starts, it deploys the project on the test environment and then runs the benchmark job. For modularity and/or clarity, the benchmark test instructions could be defined in two different ways:

As a Job that calls another GitHub Actions workflow (yes, yet another workflow 🙂) that contains the instructions. The workflow can be either:
1. In the Green Reviews WG repository
2. In a separate repository

The two options for defining a benchmark test are illustrated below.
1. In the Green Reviews WG repository (**preferred**)
2. In a separate repository, such as an upstream CNCF project repository

![Calling the benchmark job](calling-benchmark-job.png "Calling the Benchmark job")
The two use cases for defining a benchmark test are illustrated below.

### Running & Defining Benchmark Jobs
![Calling the benchmark job](diagrams/calling-benchmark-job.png "Calling the Benchmark job")

This section defines Benchmark Jobs (referred to as Jobs) and benchmark instructions. It describes how to run them from the Project workflow. It dives deeper into the following:
### Running & Defining Benchmark Workflow

* How a Job should be called from the project workflow
* What a Job must contain in order to run on the cluster
* How a Job is related to benchmark instructions
This section defines the _benchmark workflow_ and _benchmark test instructions_. It describes how to run them from the _project workflow_. It dives deeper into the following:

* How a benchmark workflow should be called from the project workflow
* What a benchmark workflow must contain in order to run on the cluster
* How a benchmark workflow is related to benchmark instructions

At a bare minimum, the benchmark test must contain test instructions describing what should run in the Kubernetes cluster. For example, the Falco project maintainers have identified that one way to test the Falco project is through a test that runs `stress-ng` for a given period of time. The test instructions are contained in a Deployment manifest which can be applied directly to the benchmark test cluster using `kubectl`.
At a bare minimum, the benchmark workflow must contain test instructions describing what should run in the Kubernetes cluster. For example, the Falco project maintainers have identified that one way to test the Falco project is through a test that runs `stress-ng` for a given period of time. The test instructions are contained in a Deployment manifest which can be applied directly to the benchmark test cluster using `kubectl`.
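
For illustration, a hedged sketch of what such a Deployment manifest might contain. The real manifest lives in the `falcosecurity/cncf-green-review-testing` repository; the image, namespace, and stressor settings below are assumptions:

```yaml
# hypothetical stress-ng Deployment; all field values are illustrative
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stress-ng
  namespace: benchmark
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stress-ng
  template:
    metadata:
      labels:
        app: stress-ng
    spec:
      containers:
        - name: stress-ng
          image: example.org/stress-ng:latest # any image bundling stress-ng
          # loop a CPU stressor so the project under test observes sustained activity
          command: ["sh", "-c", "while true; do stress-ng --cpu 4 --timeout 60s; done"]
```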

Below are 3 benchmark test use cases and their implementations.
Below are two use cases: the benchmark workflow may be defined in the Green Reviews repository or in a separate repository.

**Use Case 1: The Benchmark Job contains the test instructions in-line**
**Use Case 1: A GitHub Action job using a workflow defined in the _same_ repository**

Below is an example of a self-contained benchmark job with instructions defined in-line.

```yaml
# falco-ebpf-project-workflow.yaml
jobs:
  # first, must authenticate to the Kubernetes cluster
  example_falco_test:
    runs-on: ubuntu-latest
    steps:
      - run: |
          # depends on the Functional Unit of the CNCF project
          # apply manifest with stress-ng loop
          kubectl apply -f https://raw.githubusercontent.com/falcosecurity/cncf-green-review-testing/main/kustomize/falco-driver/ebpf/stress-ng.yaml
          sleep 15m
          kubectl delete -f https://raw.githubusercontent.com/falcosecurity/cncf-green-review-testing/main/kustomize/falco-driver/ebpf/stress-ng.yaml
```

We assume that the workflow already contains a secret with a kubeconfig to authenticate with the test cluster (see for example here) and that Falco has already been deployed to it. The pipeline must authenticate with the Kubernetes cluster before running the job with the test.

The job applies the upstream Kubernetes manifest. The Kubernetes manifest contains the test instructions for Falco. It contains a while loop that runs stress-ng. The functional unit test is time-bound in this case and scoped to 15 minutes. Therefore, we deploy this test, wait for 15 minutes, then delete the manifest to end the loop. The action to take in the Job depends on the Functional Unit of the CNCF project.

For other CNCF projects, the test instructions could look different. For example, for Flux and ArgoCD, the benchmark tests defined here test these CNCF projects by using their CLIs and performing CRUD operations on a demo workload. The functional units of Flux and ArgoCD are different from Falco's. For these GitOps tools, the functional unit depends on whether there are changes detected that they need to reconcile. In other words, the GitHub Actions job could do different things depending on how the CNCF project should be tested.

Lastly, it is important that the test runs in a way that is highly configurable. For example, the job can configure parameters to indicate where the test should run, e.g. which namespace to use.


**Use Case 2: The Benchmark Job contains a Benchmark workflow with instructions defined in the same repository**

The project workflow’s benchmark job calls a separate workflow which is located in the Green Reviews repository.
In this use case, the project workflow’s benchmark job calls a separate workflow which is located in the Green Reviews repository.

```yaml
# falco-ebpf-project-workflow.yaml
jobs:
  # … (job definition collapsed in the diff view)
    uses: octo-org/current-repo/.github/workflows/workflow.yml@v1 # refers to Job contained in the current repository
```
The separate workflow contains the benchmark test instructions.
First, we assume that the workflow already has access to a secret containing a kubeconfig for authenticating with the test cluster (see for example here) and that Falco has already been deployed to it. The pipeline must authenticate with the Kubernetes cluster before running the job with the test.
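
A minimal sketch of such an authentication step, assuming a hypothetical repository secret named `KUBECONFIG` (the secret name and step layout are illustrative):

```yaml
# hypothetical authentication step in the project workflow
steps:
  - name: Configure cluster access
    run: |
      # write the kubeconfig stored in the repository secret to disk
      mkdir -p ~/.kube
      echo "${{ secrets.KUBECONFIG }}" > ~/.kube/config
      # confirm the cluster is reachable before running the benchmark
      kubectl cluster-info
```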
The workflow containing the benchmark test instructions is defined separately. In this case, it lives in the Green Reviews repository.
```yaml
# stress-ng-falco-test.yaml
jobs:
  # … (job definition and earlier steps collapsed in the diff view)
      kubectl delete -f https://raw.githubusercontent.com/falcosecurity/cncf-green-review-testing/main/kustomize/falco-driver/ebpf/stress-ng.yaml
```
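
Since the middle of `stress-ng-falco-test.yaml` is collapsed in the diff view, below is a hedged reconstruction of the full file, based on the in-line instructions this commit removes; the `workflow_call` trigger and job name are assumptions:

```yaml
# stress-ng-falco-test.yaml: hedged reconstruction, not the verbatim file
name: stress-ng-falco-test
on:
  workflow_call: # lets other workflows call this one as a reusable workflow
jobs:
  stress_ng_test:
    runs-on: ubuntu-latest
    steps:
      # assumes cluster access has already been configured as described above
      - run: |
          # apply the manifest with the stress-ng loop, let it run, then clean up
          kubectl apply -f https://raw.githubusercontent.com/falcosecurity/cncf-green-review-testing/main/kustomize/falco-driver/ebpf/stress-ng.yaml
          sleep 15m
          kubectl delete -f https://raw.githubusercontent.com/falcosecurity/cncf-green-review-testing/main/kustomize/falco-driver/ebpf/stress-ng.yaml
```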
In the example above, the Kubernetes manifest is located in a different repository. This is due to unique circumstances and limitations within the Falco project. To work around this, the Job points to the location of the `falcosecurity/cncf-green-review-testing` repository. The pipeline should accommodate different CNCF projects. If CNCF project maintainers are able to contribute to green-reviews-tooling, then they are welcome to store test artefacts in the green-reviews-tooling repository. There is also nothing stopping us from having a Kubernetes manifest inline.
The job applies the upstream Kubernetes manifest. The Kubernetes manifest contains the test instructions for Falco. It contains a while loop that runs stress-ng. The functional unit test is time-bound in this case and scoped to 15 minutes. Therefore, we deploy this test, wait for 15 minutes, then delete the manifest to end the loop. The action to take in the Job depends on the Functional Unit of the CNCF project.
In the example above, the Kubernetes manifest is located in a different repository. This may be the case when CNCF project maintainers prefer to keep their own repository with configuration related to the Green Reviews pipeline, as the Falco project does. To accommodate this, the Job points to the location of the `falcosecurity/cncf-green-review-testing` repository. The pipeline should accommodate different CNCF projects. If CNCF project maintainers are able to contribute to green-reviews-tooling, then they are welcome to store test artefacts in the green-reviews-tooling repository. There is also nothing stopping us from having a Kubernetes manifest inline.

For other CNCF projects, the test instructions could look different. For example, for Flux and ArgoCD, the benchmark tests defined here test these CNCF projects by using their CLIs and performing CRUD operations on a demo workload. The functional units of Flux and ArgoCD are different from Falco's. For these GitOps tools, the functional unit depends on whether there are changes detected that they need to reconcile. In other words, the GitHub Actions job could do different things depending on how the CNCF project should be tested.
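
As an illustration of such CLI-driven instructions, here is a hedged sketch for Flux. The demo workload (`podinfo`) and timing are assumptions; the commands follow the standard Flux CLI:

```yaml
# hypothetical benchmark instructions for Flux: CRUD operations on a demo workload
jobs:
  flux_crud_test:
    runs-on: ubuntu-latest
    steps:
      - run: |
          # create: register a demo source and kustomization for Flux to reconcile
          flux create source git podinfo --url=https://github.com/stefanprodan/podinfo --branch=master
          flux create kustomization podinfo --source=GitRepository/podinfo --path=./kustomize --prune=true --interval=1m
          # let Flux reconcile changes for the measurement window
          sleep 15m
          # delete: clean up so the reconciliation work ends
          flux delete kustomization podinfo --silent
          flux delete source git podinfo --silent
```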

Lastly, it is important that the test runs in a way that is highly configurable. For example, the job can configure parameters to indicate where the test should run, e.g. which namespace to use.
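
One way to achieve this, sketched with assumed input names, is to declare `workflow_call` inputs on the benchmark workflow and pass values from the project workflow:

```yaml
# hypothetical inputs on the reusable benchmark workflow
on:
  workflow_call:
    inputs:
      namespace:
        type: string
        default: benchmark # where the test workload should run
      duration:
        type: string
        default: 15m # how long the functional unit test runs
```

The project workflow would then supply these values via a `with:` block when calling the workflow.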

**Use Case 3: The Benchmark Job contains a Benchmark workflow with instructions defined in a different repository**
**Use Case 2: A GitHub Action job using a workflow defined in a _different_ repository**

We want to accommodate different methods of setting up the tests depending on the CNCF project. Given this, the Job containing the test instructions could be defined in a different repository. In this case, the pipeline could call the job like this:

@@ -293,9 +270,7 @@ jobs:

A Job can be fetched from other GitHub organizations and repositories using the `jobs.<job_id>.uses` syntax defined [here](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_iduses). This syntax can be configured to use `@main` or `@another-branch`, which is useful for versioning and for testing specific releases.
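
Because the calling block above is collapsed in the diff view, here is a hedged sketch of what such a cross-repository call might look like; `octo-org/another-repo` and the workflow file name are placeholders:

```yaml
# falco-ebpf-project-workflow.yaml: hypothetical cross-repository call
jobs:
  benchmark_job:
    name: benchmark-job
    # reusable workflow maintained in a different repository;
    # pin to @main, a branch, or a tag for versioning
    uses: octo-org/another-repo/.github/workflows/stress-ng-falco-test.yaml@main
```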

![Pipeline run](pipeline-run.png "An example pipeline run")


![Pipeline run](diagrams/pipeline-run.png "An example pipeline run")

## Drawbacks (Optional)

@@ -323,8 +298,6 @@ SIG to get the process for these resources started right away.
-->

TODO:
* remove option 1 - too confusing
* explain that we want project contribs to contribute PRs in their repo
* From the trigger pipeline, go to the Project Pipeline to run the tests of a specific CNCF project. → I suppose this is covered between Proposal 1 and this. The only thing we didn't dig into is the subcomponents.
* Directory structure for CNCF projects, their pipeline/Jobs, etc. Collecting the Jobs & categorising them in "libraries"? → We should probably specify a standard way to store the tests in directories: where they go and what the directory structure is, something in line with the subcomponents in Falco.
* Cleaning up test artefacts → Oops, not yet there either 😛
