---
title: Analyzer - Testing Rules
authors:
- "@pranavgaikwad"
reviewers:
- "@djzager"
- "@fabianvf"
- "@jmle"
- "@shawn-hurley"
approvers:
- "@djzager"
- "@fabianvf"
- "@jmle"
- "@shawn-hurley"
creation-date: 2024-01-22
last-updated: 2024-01-22
status: implementable
---

# Analyzer - Testing Rules

As of today, testing an analyzer rule involves creating a provider config file, running analysis using the rule against a sample application, and finally verifying the results by inspecting the output file manually.

This enhancement proposes a new test runner for YAML rules. It will allow rule authors to write tests for rules declaratively, run analysis using the rules against sample data, verify that the expected output in the test matches the actual output, and finally produce a result detailing passed / failed tests.

## Motivation

The biggest drawback with testing rules right now is that the output must be verified by the rule author manually. They have to go through all incidents / violations and verify that they see the desired result. Once they do verify the output, the only way to share this expected output is to share the entire _output.yaml_ file generated post analysis. Comparing the expected output with the actual results is done by computing a _diff_ between the expected and actual output files. While this works great in end-to-end CI tests that run on a single rules file, there are limitations. First, it's very hard to understand what exactly is failing by only looking at the diff output. And second, you have to generate that very first "expected output" file by running the analyzer. You cannot expect someone to write that file by hand.

Assume that you're a rule author creating rules for a certain technology. You write your first rule, run it against a sample application, and manually verify that your rule matched in the `output.yaml` file. You count the number of incidents and check whether the produced messages are correct. You check for any tags created, if applicable. All good! You move on to writing your second rule, run the rules again, and get another `output.yaml` file. This time, you ensure that the first rule still produces the same output as before in addition to verifying your new rule. In every iteration, you manually verify every rule in the ruleset. Ideally, we want the rule author to write a test first and then iteratively work towards perfecting their rule until it produces the desired output. It's necessary that they can write the test by hand, and on top of that, they should be able to run a command to verify the results.

In addition to the UX concern above, there is another important concern. As analyzer-lsp adds more and more providers, contributions to rulesets will increase as a result. For the long-term health of the project, it is absolutely essential that any new rules added to rulesets have associated tests and sample data. An arduous manual testing process will only discourage rule authors from submitting tests.

Lastly, the project has mainly been relying on rules from the Windup project, which all have associated test cases. At some point, we will discontinue the Windup shim and move on to the new YAML format for good. What happens to the existing test cases then? We need to make sure we can convert the existing tests into this new format and continue to have coverage for those old rules.

### Goals

- Propose a declarative way of writing a test for a rule

- Propose a CLI tool to run the tests

- Discuss general concerns around requiring tests in the konveyor/rulesets repo and CI

### Non-Goals

- Discuss specific implementation details of the test runner

### Proposal

#### Writing a test for a rule

Rule authors should be able to "declare" a test for a rule in a YAML file:

```yaml
- ruleID: same-rule-id-as-the-rule-to-be-tested
  data: /path/to/sample/data/
  testCases:
    - description: In source-only mode, should match 10 times
      analysisParams:
        mode: source-only
        depLabelSelector: "!konveyor.io/source=open-source"
      hasIncidents:
        count: 10
        messageContains: "produced message should contain this text in all 10 incidents"
        codeSnipContains: "removedFunctionCall()"
      hasTags:
        - Category=tag1
```
* ruleID: ID of the rule this test applies to.
* data: Optional path to a directory containing sample data for the test. By default, the path declared at the ruleset level will be used (see the next section for details).
* testCases: A list of different test cases for this test, with each case containing the following fields:
  * description: This is what will be printed as text next to the PASS / FAIL result.
  * analysisParams: Additional _optional_ parameters to configure analysis when running this test. By default, these won't be set.
    * mode: Analysis mode, defaulted by analyzer-lsp to the provider default.
    * depLabelSelector: Dependency label selector, defaulted to nil.
  * hasIncidents: The actual content of the incidents which will be verified.
    * count: Desired count of incidents. The test will FAIL when the total number of incidents in the output doesn't match this number.
    * messageContains: String to search for in the message. The test will FAIL when one or more incidents don't have this string in their message.
    * codeSnipContains: String to search for in the code snippet. The test will FAIL when one or more incidents don't have this string in their code snippets.
  * hasTags: A list of tags to verify. The test will FAIL when one or more tags in the list are not found in the output.

Notice that a test can contain more than one entry in _testCases_, each with different _analysisParams_. This is useful when you want to test the same rule under different analysis parameters.

Note that the above example contains a test for one rule. The YAML file will contain a list of such tests, each item being a test for a distinct rule.

Having a schema for the tests would be nice. That would allow users to auto-generate the boilerplate.

#### Organizing test files in a ruleset

The YAML file we described in the previous section will be saved in the same directory as its rules file. The name of the test file will be the same as the rules file's, with a `_test` suffix appended. For instance, tests for a rules file `rules.yaml` will be stored in `rules_test.yaml` in the same directory. The analyzer-lsp engine will ignore any files ending in `_test.yaml` during rules parsing.

Each rule test above has an optional field `data` that points to the sample data directory to use for the test. In most cases, rule authors will end up re-using the same sample data for a ruleset. Therefore, a new field `testData` will be introduced in the `ruleset.yaml` file:

```yaml
name: my-ruleset
testData: ./data/
```

This sample data will be used as input whenever a test doesn't explicitly specify the `data` field.

#### Running the tests

We are proposing a new CLI that takes the tests as input.

To run a single test file `rules_test.yaml`:

```sh
konveyor-analyzer-test --provider-settings provider_settings.json ./rules_test.yaml
```

The test runner will run the rules in `rules.yaml` and use `provider_settings.json` to set up the providers.

To run more than one test file:

```sh
konveyor-analyzer-test --provider-settings provider_settings.json ./removals_rules_test.yaml ./dep_rules_test.yaml
```

To run all tests in a directory of rulesets:

```sh
konveyor-analyzer-test --provider-settings provider_settings.json ./rulesets/
```

The test runner will recursively find and run all tests in `./rulesets/`. In this case, it is assumed that every directory has a `ruleset.yaml` file defined; the runner will error otherwise.

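As a rough illustration of the discovery step, the sketch below walks a rulesets directory, collects `*_test.yaml` files, and errors when a directory containing tests has no `ruleset.yaml`. This is only an assumption about how the runner could be implemented, not a prescribed design:

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"strings"
)

// discoverTests walks root and returns every *_test.yaml file, grouped by the
// directory that contains it. It errors if a directory holding test files has
// no ruleset.yaml, mirroring the assumption described above.
func discoverTests(root string) (map[string][]string, error) {
	tests := map[string][]string{}
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() || !strings.HasSuffix(d.Name(), "_test.yaml") {
			return nil
		}
		dir := filepath.Dir(path)
		if _, statErr := os.Stat(filepath.Join(dir, "ruleset.yaml")); statErr != nil {
			return fmt.Errorf("no ruleset.yaml found in %s: %w", dir, statErr)
		}
		tests[dir] = append(tests[dir], path)
		return nil
	})
	return tests, err
}

func main() {
	tests, err := discoverTests("./rulesets/")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for dir, files := range tests {
		fmt.Println(dir, "->", files)
	}
}
```
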
> The `provider_settings.json` file here is only required to set up the providers correctly. The location field will be mutated based on the actual sample data used and the _analysisParams_ in the test case.

#### Test Output

The output for a test should look something like:

```sh
--- rules_test.yaml  1/2 PASSED, 1/2 FAILED
  - rule-1000  2/2 PASSED
    - testCase[0] PASS
    - testCase[1] PASS
  - rule-1001  1/2 PASSED, 1/2 FAILED
    - testCase[0] PASS
    - testCase[1] FAIL
      - Unexpected message string in 8/10 incidents
      - Log created in ./tmp/analyzer-test-099912/analysis.log
      - Output created in ./tmp/analyzer-test-099912/output.yaml
Total: 2, 1/2 PASSED, 1/2 FAILED, Pass Rate: 50%
```

The output is produced per test file, aggregating the total number of passed / failed rules at the top.

Within a test file, it's further broken down into results for individual rules, with the results of each test case for a rule.

For any failed test case, the log and output locations are printed so that users can debug.

Finally, a summary line is printed showing the overall statistics.

Additionally, a YAML output file will be generated for later use by the CI:

```yaml
tests:
  total: 100
  passed: 99
  failed: 1
rules:
  total: 102
  tested: 100
  untested: 2
```

When a new PR is opened in the _konveyor/rulesets_ repo, a GitHub action will run all `*_test.yaml` files in the repo and comment the above output on the PR. The reviewers can compare the numbers with the existing numbers on the main branch and determine whether to merge the PR or ask for more coverage.

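To sketch how a CI job might consume this report, the snippet below parses the summary into Go structs and computes a pass rate. The `test-summary.yaml` file name and the idea of failing the check when coverage drops are assumptions for illustration only:

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// Summary mirrors the YAML report sketched above. Field names follow that
// example; the final report format may differ.
type Summary struct {
	Tests struct {
		Total  int `yaml:"total"`
		Passed int `yaml:"passed"`
		Failed int `yaml:"failed"`
	} `yaml:"tests"`
	Rules struct {
		Total    int `yaml:"total"`
		Tested   int `yaml:"tested"`
		Untested int `yaml:"untested"`
	} `yaml:"rules"`
}

func main() {
	raw, err := os.ReadFile("test-summary.yaml") // hypothetical report path
	if err != nil {
		panic(err)
	}
	var s Summary
	if err := yaml.Unmarshal(raw, &s); err != nil {
		panic(err)
	}
	passRate := 0.0
	if s.Tests.Total > 0 {
		passRate = float64(s.Tests.Passed) / float64(s.Tests.Total) * 100
	}
	fmt.Printf("pass rate: %.1f%%, untested rules: %d\n", passRate, s.Rules.Untested)
	// A CI job could fail the check when the pass rate or rule coverage drops
	// below the numbers currently on the main branch.
}
```
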
### Technical details

#### Test Case Structure

The Go structs are defined below:

```go
type RuleTest struct {
	RuleID    string     `yaml:"ruleID"`
	Data      string     `yaml:"data,omitempty"`
	TestCases []TestCase `yaml:"testCases"`
}

type TestCase struct {
	Description    string        `yaml:"description"`
	AnalysisParams `yaml:",inline"`
	HasTags        []string      `yaml:"hasTags,omitempty"`
	HasIncidents   IncidentsTest `yaml:"hasIncidents,omitempty"`
}

type AnalysisParams struct {
	Mode             *provider.AnalysisMode `yaml:"mode,omitempty"`
	DepLabelSelector *string                `yaml:"depLabelSelector"`
}

type IncidentsTest struct {
	Count            int     `yaml:"count"`
	MessageContains  *string `yaml:"messageContains"`
	CodeSnipContains *string `yaml:"codeSnipContains"`
}
```

_Mode_ and _DepLabelSelector_ are nil by default. So when they are not defined in the test, the provider / engine defaults will be used.

Also notice that the _MessageContains_ and _CodeSnipContains_ checks in the test case are nil by default. The only required field is _Count_. So at a minimum, the test will check the number of incidents for a rule. If any of these optional fields are defined, the test case will add those additional checks.

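As a rough sketch of how these checks could be evaluated, the helper below mirrors the _IncidentsTest_ fields. The `Incident` type is a simplified stand-in for an incident in the analyzer output, and `checkIncidents` is a hypothetical helper, not the runner's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// Incident is a simplified stand-in for an incident in the analyzer output;
// the real output type carries more fields.
type Incident struct {
	Message  string
	CodeSnip string
}

// checkIncidents mirrors the IncidentsTest fields: count is always checked,
// while the two optional string checks run only when they are non-nil.
func checkIncidents(count int, messageContains, codeSnipContains *string, got []Incident) []string {
	var failures []string
	if len(got) != count {
		failures = append(failures, fmt.Sprintf("expected %d incidents, got %d", count, len(got)))
	}
	for i, inc := range got {
		if messageContains != nil && !strings.Contains(inc.Message, *messageContains) {
			failures = append(failures, fmt.Sprintf("incident %d: message missing %q", i, *messageContains))
		}
		if codeSnipContains != nil && !strings.Contains(inc.CodeSnip, *codeSnipContains) {
			failures = append(failures, fmt.Sprintf("incident %d: code snippet missing %q", i, *codeSnipContains))
		}
	}
	return failures
}

func main() {
	msg := "removed in Java 11" // hypothetical expected message fragment
	incidents := []Incident{{Message: "method removed in Java 11", CodeSnip: "removedFunctionCall()"}}
	fmt.Println(checkIncidents(1, &msg, nil, incidents)) // empty slice means the test case passes
}
```
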
#### CLI Options

Here are the options for the test runner CLI:

* --provider-settings: Required. Points to the provider configuration file.
* --output: Optional. Generate output in a YAML file in addition to stdout. Defaults to stdout only.
* --tmp-dir: Optional. Temporary directory for storing logs & output. Defaults to the system's temp dir.
* --quiet: Optional. Suppress output from passed tests.

#### Running Tests

Depending on the `analysisParams` used, how many rulesets are given, and which distinct data directories are present, there will be different combinations of tests with overlapping _analysisParams_ and _data_ directories. It makes sense for the test runner to first group tests by data directory, and then group them further by analysis params. This is faster than running a separate analysis for every test file. One drawback of this is that the output of a certain rule _may_ change in the presence of another rule in the system. The analyzer-lsp engine will eventually run all rules at once anyway, so _maybe_ it's better to surface these problems early on.

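A minimal sketch of the grouping idea, using hypothetical helper types (the field names and the `testRef` / `analysisGroup` shapes are illustrative only, not the runner's real data model):

```go
package main

import "fmt"

// analysisGroup identifies one analysis run: a data directory plus the
// analysis parameters shared by the tests in that group.
type analysisGroup struct {
	DataDir          string
	Mode             string
	DepLabelSelector string
}

// testRef points at one rule test inside a test file.
type testRef struct {
	File   string
	RuleID string
}

// groupTests buckets tests so that every bucket can be satisfied by a single
// analysis run over one data directory with one set of analysis params.
func groupTests(tests map[testRef]analysisGroup) map[analysisGroup][]testRef {
	groups := map[analysisGroup][]testRef{}
	for ref, params := range tests {
		groups[params] = append(groups[params], ref)
	}
	return groups
}

func main() {
	tests := map[testRef]analysisGroup{
		{File: "kubernetes_rules_test.yaml", RuleID: "rule-1000"}: {DataDir: "data-1", Mode: "source-only"},
		{File: "storage_rules_test.yaml", RuleID: "rule-2000"}:    {DataDir: "data-1", Mode: "source-only"},
		{File: "networking_rules_test.yaml", RuleID: "rule-3000"}: {DataDir: "data-2", Mode: "full"},
	}
	for group, refs := range groupTests(tests) {
		fmt.Printf("%+v -> %d test(s)\n", group, len(refs))
	}
}
```
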
With 4 instances of different tests running at a time, the groups of tests will look something like:

```sh
.
├── data-1
│   ├── analyzer-params-1
│   │   ├── kubernetes_rules_test.yaml
│   │   └── storage_rules_test.yaml
│   └── analyzer-params-2
│       ├── kubernetes_rules_test.yaml
│       └── networking_rules_test.yaml
└── data-2
    ├── analyzer-params-1
    │   ├── kubernetes_rules_test.yaml
    │   └── storage_rules_test.yaml
    └── analyzer-params-2
        ├── kubernetes_rules_test.yaml
        └── networking_rules_test.yaml
```

For this system to work correctly, it's necessary that every test specifies the correct _ruleID_ so that the runner can pick the correct rules.

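For illustration, matching tests to rules could be as simple as filtering parsed rules by the ruleIDs referenced in the test file. `rawRule` below is a hypothetical, loosely-typed view of a parsed YAML rule, not the engine's actual rule type:

```go
package main

import "fmt"

// rawRule is a loosely-typed view of a parsed YAML rule; only the ruleID key
// matters here.
type rawRule map[string]interface{}

// selectRules returns the rules referenced by the given tests, plus the IDs
// of tests that point at rules not present in the rules file.
func selectRules(rules []rawRule, testedIDs []string) (selected []rawRule, missing []string) {
	byID := map[string]rawRule{}
	for _, r := range rules {
		if id, ok := r["ruleID"].(string); ok {
			byID[id] = r
		}
	}
	for _, id := range testedIDs {
		if r, ok := byID[id]; ok {
			selected = append(selected, r)
		} else {
			missing = append(missing, id)
		}
	}
	return selected, missing
}

func main() {
	rules := []rawRule{{"ruleID": "rule-1000"}, {"ruleID": "rule-1001"}}
	selected, missing := selectRules(rules, []string{"rule-1000", "rule-9999"})
	fmt.Println(len(selected), missing) // 1 [rule-9999]
}
```
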
The test output and logs will be generated in a temp directory per analysis run:

```sh
.
└── tmp
    ├── analyzer-test-999021
    │   ├── analysis.log
    │   ├── output.yaml
    │   ├── provider_settings.json
    │   └── rules
    │       ├── kubernetes_rules.yaml
    │       └── storage_rules.yaml
    └── analyzer-test-3321234
        ├── analysis.log
        ├── output.yaml
        ├── provider_settings.json
        └── rules
            ├── kubernetes_rules.yaml
            └── storage_rules.yaml
```

The `provider_settings.json` file will be created by mutating the required fields per analysis run.

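A sketch of that mutation, treating the settings as generic JSON. The assumption that the file is a JSON list of provider configs, and the `initConfig`, `location`, and `analysisMode` keys, reflect a common provider settings layout but are not a confirmed schema here:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// rewriteSettings loads a provider settings file, points every provider's
// init config at the test group's data directory, and writes a per-run copy.
// It assumes the file is a JSON list of provider configs; the "initConfig",
// "location", and "analysisMode" keys are assumptions about that layout.
func rewriteSettings(inPath, outPath, dataDir, mode string) error {
	raw, err := os.ReadFile(inPath)
	if err != nil {
		return err
	}
	var providers []map[string]interface{}
	if err := json.Unmarshal(raw, &providers); err != nil {
		return err
	}
	for _, p := range providers {
		initConfigs, ok := p["initConfig"].([]interface{})
		if !ok {
			continue
		}
		for _, ic := range initConfigs {
			if cfg, ok := ic.(map[string]interface{}); ok {
				cfg["location"] = dataDir
				if mode != "" {
					cfg["analysisMode"] = mode
				}
			}
		}
	}
	out, err := json.MarshalIndent(providers, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(outPath, out, 0o644)
}

func main() {
	// Hypothetical per-run copy inside the runner's temp directory.
	err := rewriteSettings("provider_settings.json",
		"./tmp/analyzer-test-999021/provider_settings.json", "./data-1", "source-only")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```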