docs(testing-strategy): add testing strategy and test plan documents drafts #306

Merged · 6 commits · Jul 16, 2024
Binary file added docs/contributor/assets/v-model.png
71 changes: 71 additions & 0 deletions docs/contributor/test-plan-template.md
@@ -0,0 +1,71 @@
# Test Plan Template

## Purpose

Explains the context to which the test plan applies. A visual representation is strongly recommended to illustrate the documentation structure.

## Scope

Identifies the boundaries in terms of project life cycle and test levels, and describes any inclusions, exclusions, assumptions, and/or limitations. A visual representation or a matrix is strongly suggested as best practice to convey one or more of these details, but it’s not mandatory.

## Definitions

Provides a lexicon for the terms, abbreviations, and acronyms.

## Audience

The primary audience for this document is Engineering Leads and Software Engineers in Test involved in and responsible for the testing management operations. The secondary audience is Toyota Motor Corporation (TMC) and its suppliers when they are involved in the testing activities.

## References

Lists other referenced documents or records and identifies repositories for system, software, and test information. The references may be separated into “external” references that are imposed from outside the organization and “internal” references that are imposed from within the organization.

## Context of testing

### Objective

Identifies what problems this testing aims to solve and summarizes how those problems could be solved. The current testing strategy can be mentioned here if it is assumed to be a possible cause of the problem.

### Test scope

Summarizes the features of each test item to be tested. Also identifies any features of a test item that are specifically excluded from testing and the rationale for their exclusion.

### Assumptions and constraints

Describes any assumptions and constraints for the testing covered by this plan. These may include regulatory standards, requirements in the test policy and the organizational test practices, contractual requirements, project time and cost constraints, and availability of appropriately skilled staff, tools and/or environments.

## Test strategy

Describes the approach to testing the particular aspects of the system. The use of visual representations, and the grouping of some of the information from the subsections below into one matrix, are recommended.

### Test levels

For a master test plan or a type test plan, this identifies the strategy adopted for each test item at specific levels of testing. The strategy should mention prioritization criteria when needed, as well as acceptance criteria or countermeasures for cases where some test levels cannot be tested according to the identified ideal plan.

### Test types

For a master test plan or a level test plan, this identifies the strategy adopted for each test item for a specific type of test (e.g. safety level assignment and how that level will be fulfilled). The strategy should mention prioritization criteria when needed, as well as acceptance criteria or countermeasures for cases where some test types cannot be tested according to the identified ideal plan.

### Test deliverables

Identifies all test documentation that is to be delivered from the testing activity or equivalent information to be recorded electronically, for example in spreadsheets, databases, or dedicated test tools. It can also specify the format of the deliverables, certain common characteristics that the deliverables must possess and the tools to be used.

### Entry and exit criteria for each phase

Specifies when test processes for a given test phase can start and stop. This is to tailor the required quality to the appropriate phase of a project. A visual representation is highly recommended to convey this information.

### Test completion criteria

Specifies when test execution for a given test level, test type, or test technique can be considered complete and gives the rationale for a defined level of coverage (e.g. less than 100%). A representation in matrix form is recommended to list the criteria for all the given levels, types, or techniques.

### Metrics to be collected

Describes the metrics for which values are to be collected during the test activities. These metrics can be product- or process-related if they are monitored at the project level to track the status of the development. The time at which the metrics are reviewed is also stated here.

### Test environment requirements

At a high level, specifies the necessary, current, and/or desired characteristics of the test environment, their interconnections, and their usage policies (including which personnel use them). Information regarding the selection, evaluation, acquisition, and support of each component of the test environment may be included. A visual representation is highly recommended to convey some of this information.

## Appendix A. Quality Evidence

Provides a list of reports created during the test plan execution and all relevant quality evidence artifacts.
128 changes: 128 additions & 0 deletions docs/contributor/test-plan-unittesting.md
@@ -0,0 +1,128 @@
# Unit Testing Test Plan

## Purpose

The unit testing phase focuses on the maintainability of the code. It involves testing the logic of the smallest grains of the code in isolation, and it aims to prevent regressions resulting from code changes.

## Scope

Unit test suites are the lowest-level test suites. They test the smallest grains of logic, mainly methods, in isolation to prevent regressions resulting from code changes. Unit tests should be able to run in isolation, independent of external systems such as databases, external APIs, etc.

When the code uses any such dependencies, they should be mocked with a piece of logic that runs in the local process space.

## Definitions

Dependency Injection — a technique whereby the dependencies of a piece of logic are passed to it instead of being hard-coded in the logic itself (see the sketch after these definitions).

Code Coverage — a percentage measure of the degree to which the source code is executed when all unit tests are run.

Test Driven Development (TDD) — in the unit test context, writing the tests for a method's expected logic as soon as the method's signature is clarified. This can be done by the same developer who implements the method or by another developer.
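
As a minimal sketch of dependency injection in Go (the `Clock` and `Reporter` names are hypothetical and only illustrate the pattern):

```go
package example

import "time"

// Clock is the dependency, expressed as an interface so that a test
// can substitute a fake implementation running in the local process.
type Clock interface {
	Now() time.Time
}

// Reporter receives its Clock through the constructor (dependency
// injection) instead of calling time.Now() directly.
type Reporter struct {
	clock Clock
}

func NewReporter(c Clock) *Reporter {
	return &Reporter{clock: c}
}

// Timestamp uses the injected dependency, so a unit test can pin the
// returned time to a fixed value.
func (r *Reporter) Timestamp() time.Time {
	return r.clock.Now()
}
```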

## Audience

The primary audience for this document is `Team Leads` and `Software engineers`.

## References

- https://github.com/stretchr/testify
- https://github.com/vektra/mockery
- https://github.com/kubernetes-sigs/controller-runtime/tree/main/pkg/client/fake

## Context of testing

Unit testing is part of the build, and PRs have a hard dependency on all unit tests passing.

When different versions of the code or different feature flags can impact the logic of a given unit, each context must have its own dedicated unit test, or the same unit test must be run in all the identified contexts.

### Objective

The objective is to automatically validate the functionality of small, individual code blocks for correct behavior on every build. This ensures that new changes to the system do not break existing code logic.

### Test scope

All units of code in the `cloud-manager` repository, and all its features, are in scope.

### Assumptions and constraints

- All the unit tests must pass or be skipped as part of the build.

- Code coverage must remain in an expected predefined threshold for the PR check to succeed.

- For any given source file, the percentage of the code coverage should not drop as part of the PR. (Good to have)

- To make the code testable, dependency injection, as a set of design patterns, must be used at all times. Having a hard dependency on a specific object in the code makes unit testing it in isolation extremely difficult or even impossible.

- Developers are recommended to follow a TDD approach.

- For assertions, use `testify`.

- For mocking, developers have two options:

  - Write their own fake/mock objects
  - Use `testify/mock` and/or `mockery` if needed

- For mocking an HTTP server (e.g. a call to GCP), `net/http/httptest` is to be used (see the HTTP mock sketch in the Test strategy section below).

- To test in isolation a unit that calls k8s APIs in its logic, pass a fake k8s client instead, as in the sketch below.
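
A minimal sketch of the fake-client approach, assuming the standard `sigs.k8s.io/controller-runtime` module referenced above; the ConfigMap name and the test are hypothetical illustrations, not code from this repository:

```go
package example

import (
	"context"
	"testing"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

func TestReadConfig(t *testing.T) {
	// Seed the fake client with the object the unit under test expects;
	// no real cluster is contacted at any point.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "test-config", Namespace: "default"},
		Data:       map[string]string{"key": "value"},
	}
	k8sClient := fake.NewClientBuilder().WithObjects(cm).Build()

	// The unit under test receives the client via dependency injection,
	// so the fake client is passed in place of a real one.
	got := &corev1.ConfigMap{}
	err := k8sClient.Get(context.Background(),
		types.NamespacedName{Name: "test-config", Namespace: "default"}, got)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if got.Data["key"] != "value" {
		t.Errorf(`expected "value", got %q`, got.Data["key"])
	}
}
```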

### Code style and naming convention

- A test should follow this signature: `func Test<MethodName>(t *testing.T)`. `<MethodName>` should be replaced by the name of the source code method to be unit tested.
- When unit testing a method needs more than one logical test, because the method has a conditional execution path depending on the input, one of the following approaches must be taken (see the sketch after this list):
  - A `testify` `suite.Suite` is created with as many methods as needed to test all execution paths.
  - A series of subtests is executed with this signature: `t.Run("NameOfTheTest", func(t *testing.T) {...})`.
- The naming of the methods/subtests for both options above should be in upper camel case and should convey the scenario/path being tested.
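
For illustration, a minimal sketch of both options; the `Classify` function and all test names are hypothetical:

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/suite"
)

// Classify is a hypothetical method under test with a conditional path.
func Classify(n int) string {
	if n < 0 {
		return "negative"
	}
	return "non-negative"
}

// Option 1: subtests named in upper camel case after the scenario covered.
func TestClassify(t *testing.T) {
	t.Run("NegativeInputReturnsNegative", func(t *testing.T) {
		assert.Equal(t, "negative", Classify(-1))
	})
	t.Run("ZeroInputReturnsNonNegative", func(t *testing.T) {
		assert.Equal(t, "non-negative", Classify(0))
	})
}

// Option 2: a testify suite with one method per execution path.
type ClassifySuite struct {
	suite.Suite
}

func (s *ClassifySuite) TestNegativeInput() {
	s.Equal("negative", Classify(-1))
}

func (s *ClassifySuite) TestZeroInput() {
	s.Equal("non-negative", Classify(0))
}

func TestClassifySuite(t *testing.T) {
	suite.Run(t, new(ClassifySuite))
}
```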

## Test strategy

The unit tests will be run as part of each build process, and they will run in isolation, independently of any external dependencies. The cloud manager unit tests use:

* The fake client library to mock the Kubernetes environment
* Mocks to simulate the GCP and AWS APIs, or any other third-party API (see the sketch below)
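
A minimal sketch of mocking a third-party HTTP API with the standard library's `net/http/httptest`; the endpoint path and response body are hypothetical:

```go
package example

import (
	"io"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestCallThirdPartyAPI(t *testing.T) {
	// Start an in-process HTTP server standing in for the real API
	// (e.g. a GCP endpoint); nothing leaves the local process space.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		_, _ = w.Write([]byte(`{"status":"ok"}`))
	}))
	defer srv.Close()

	// The unit under test would receive srv.URL via dependency injection
	// in place of the real API endpoint.
	resp, err := http.Get(srv.URL + "/v1/resource")
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		t.Fatalf("unexpected error reading body: %v", err)
	}
	if string(body) != `{"status":"ok"}` {
		t.Errorf("unexpected body: %s", body)
	}
}
```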

### Test levels

Unit tests are level 1.

### Test types

Level 1 tests, i.e. unit tests, must always pass, and since they run in isolation, there is no scenario in which they cannot be executed.

### Test deliverables

A code coverage report that shows the coverage percentage for each separate source file, preferably in a format that provides a visual representation of the covered code.

Also, an aggregate coverage percentage for the entire source code.
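
As a sketch, assuming the standard Go toolchain, such deliverables could be produced with:

```sh
go test ./... -coverprofile=cover.out        # run the tests and record coverage
go tool cover -html=cover.out -o cover.html  # visual representation of covered code
go tool cover -func=cover.out                # per-function and total coverage percentages
```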

### Entry and exit criteria for each phase

As long as the build does not fail, all the unit tests can be executed. Exit criteria:

- All the unit tests should complete successfully.
- The average coverage for each package should be greater than or equal to 85%.
- Preferably, it should be ensured that the coverage percentage does not drop as part of a PR for any given source file.

### Test completion criteria

The unit test phase is complete when all the unit test suites are completed and the coverage and passed/skipped/failed reports are generated.

### Metrics to be collected

The following metrics need to be collected as part of unit test suite execution:

* Aggregate code coverage for the entire cloud-manager repo.
* Aggregate code coverage for a package unit test suite.
* Code coverage for all source files.
* Aggregate number of all passed, skipped, and failed tests for each suite.
* For any given failure, the name and the path of the failed test.

### Test environment requirements

To prevent false positives, unit tests must be executed in an environment with sufficient resources. The minimum required resources are 2 CPU cores and 8 GB of memory, which is less than the resources available to GitHub Actions runners.

## Appendix A. Quality Evidence

- For any given PR, the unit test results can be found at https://github.com/kyma-project/cloud-manager/pull/<PR>/checks (replace <PR> with the PR number).
- For each build under the checks, the results of the tests can be found in the "Build and test" step.

## Appendix B. Supplementary References
71 changes: 71 additions & 0 deletions docs/contributor/test-plan-validation.md
@@ -0,0 +1,71 @@
# Validation Test Plan

## Purpose

The validation phase focuses on functionally testing the features requested by customers to ensure they work correctly and are complete. This involves eliciting the requirements and defining them in a human-readable format. Later, this format is used to create an automated acceptance test suite, making the validation process an automated step in the CI/CD pipeline.

## Scope

The validation test suite is the highest-level test suite, which tests the fully integrated system end-to-end to ensure it conforms to user requirements.

## Definitions

Functional testing — a type of software testing where the basic functionalities of an application are tested against a predetermined set of specifications.

Gherkin — non-technical and human-readable language for writing software requirements.

Black box testing — a form of testing that is performed with no knowledge of a system's internals.

SDLC (software development life cycle) — a process for planning, creating, testing, and deploying an information system.

## Audience

The key stakeholder for the validation testing is the `Product Owner` who elicits and formalizes the requirements in the `Gherkin` format.

`Software engineers` implement a test suite to automatically test the fully integrated system against the acceptance criteria defined by the product owner.

## Context of testing

### Objective

The objective of validation testing is to ensure that the system provides all functionality requested by a user in a consistent and predictable manner.

### Test scope

The system is tested end-to-end, in full integration, as a black box.

<THE DIAGRAM OF THE CLOUD MANAGER COMPONENT IN THE KYMA LANDSCAPE>

### Requirements

Capturing the software requirements is part of the SDLC and should happen at the very beginning of new feature development.

<SDLC DIAGRAM FOR REQUIREMENTS DEFINITION AND VALIDATION>

The software requirements are written in [Gherkin syntax](https://cucumber.io/docs/gherkin/reference/); a sketch follows below.

The software requirements are located in the issue-tracking system.
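
As an illustration, a hypothetical requirement in Gherkin; the feature and resource names are invented for this sketch and do not come from the issue tracker:

```gherkin
Feature: NFS volume provisioning

  Scenario: Customer requests an NFS volume
    Given a Kyma cluster with the cloud-manager module enabled
    When the customer applies an NfsVolume custom resource
    Then an NFS volume is provisioned in the underlying cloud provider
    And the NfsVolume resource status reports Ready
```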

### Test deliverables

The validation test suite provides a report listing all defined user requirements and the current status of the system's conformity to each of them.

### Environment description

Description goes here...

### Entry and exit criteria for each phase

The entry criterion for the testing phase is the list of software requirements, in the Gherkin format, collected from the repository.

The exit criterion is the validation report with a status for each test step in the software requirements.

## Appendix A. Quality evidence

* Link to the filter for the features without requirements.
* Link to the validation report.
* Link to the DoD for writing the requirement.

## Appendix B. Technical gap remediation

<The technical gap identification and the remediation plan for addressing it>