# Contributing to Vault Helm

**Please note:** We take Vault's security and our users' trust very seriously.
If you believe you have found a security issue in Vault, please responsibly
disclose by contacting us at [email protected].

**First:** if you're unsure or afraid of _anything_, just ask or submit the
issue or pull request anyways. You won't be yelled at for giving it your best
effort. The worst that can happen is that you'll be politely asked to change
something. We appreciate any sort of contributions, and don't want a wall of
rules to get in the way of that.

That said, if you want to ensure that a pull request is likely to be merged,
talk to us! You can find out our thoughts and ensure that your contribution
won't clash or be obviated by Vault's normal direction. A great way to do this
is via the [Vault Google Group][2]. Sometimes Vault devs are in `#vault-tool`
on Freenode, too.

This document covers what we're looking for when you report an issue.
Addressing all of these points raises the chances that we can quickly merge or
address your contributions.

## Issues

### Reporting an Issue

* Make sure you test against the latest released version. It is possible
  we already fixed the bug you're experiencing. Even better, test against
  `main` if you can, as bugs are fixed regularly but new versions are only
  released every few months.

* Provide steps to reproduce the issue, and if possible include the expected
  results as well as the actual results. Please provide text, not screenshots!

* Respond as promptly as possible to any questions from the Vault team about
  your issue. Stale issues will be closed periodically.

### Issue Lifecycle

1. The issue is reported.

2. The issue is verified and categorized by a Vault Helm collaborator.
   Categorization is done via tags. For example, bugs are marked as "bugs".

3. Unless it is critical, the issue may be left for a period of time (sometimes
   many weeks), giving outside contributors -- maybe you!? -- a chance to
   address the issue.

4. The issue is addressed in a pull request or commit. The issue will be
   referenced in the commit message so that the code that fixes it is clearly
   linked.

5. The issue is closed. Sometimes, valid issues will be closed to keep
   the issue tracker clean. The issue is still indexed and available for
   future viewers, or can be re-opened if necessary.

## Testing

The Helm chart ships with both unit and acceptance tests.

The unit tests don't require any active Kubernetes cluster and complete
very quickly. These should be used for fast feedback during development.
The acceptance tests require a Kubernetes cluster with a configured `kubectl`.
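
Before running the acceptance tests it can help to confirm which cluster
`kubectl` currently points at. This is just a sanity check using standard
`kubectl` commands, not part of the chart's tooling:

```shell
# Show the context and cluster the acceptance tests would run against.
kubectl config current-context
kubectl cluster-info
```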

### Test Using Docker Container

The following are the instructions for running bats tests using a Docker container.

#### Prerequisites

* Docker installed
* `vault-helm` checked out locally

#### Test

**Note:** the following commands should be run from the `vault-helm` directory.

First, build the Docker image for running the tests:

```shell
docker build -f ${PWD}/test/docker/Test.dockerfile ${PWD}/test/docker/ -t vault-helm-test
```

Next, execute the unit tests with the following command:

```shell
docker run -it --rm -v "${PWD}:/test" vault-helm-test bats /test/test/unit
```

It's possible to run only specific bats tests using regular expressions.
For example, the following will run only tests with "injector" in the name:

```shell
docker run -it --rm -v "${PWD}:/test" vault-helm-test bats /test/test/unit -f "injector"
```
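
You can also point `bats` at a single test file instead of a whole directory.
The filename below is only an example; substitute one that actually exists
under `test/unit/`:

```shell
docker run -it --rm -v "${PWD}:/test" vault-helm-test bats /test/test/unit/injector-deployment.bats
```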

### Test Manually

The following are the instructions for running bats tests on your workstation.

#### Prerequisites

* [Bats](https://github.com/bats-core/bats-core)
  ```bash
  brew install bats-core
  ```
* [yq](https://pypi.org/project/yq/)
  ```bash
  brew install python-yq
  ```
* [helm](https://helm.sh)
  ```bash
  brew install kubernetes-helm
  ```
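
To confirm the tools are installed and on your `PATH`, check their versions
(exact output will vary by install method):

```bash
bats --version
yq --version
helm version
```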

#### Test

To run the unit tests:

    bats ./test/unit

To run the acceptance tests:

    bats ./test/acceptance

If the acceptance tests fail, deployed resources in the Kubernetes cluster
may not be properly cleaned up. We recommend recycling the Kubernetes cluster to
start from a clean slate.
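
If you'd rather try to clean up in place first, something like the following
can help you find and remove leftovers; the release and namespace names below
are only examples, so adjust them to whatever the failed run created:

```shell
# List any Helm releases the failed run left behind.
helm list --all-namespaces

# Delete a leftover release and its namespace (names are examples).
helm uninstall vault --namespace acceptance-test
kubectl delete namespace acceptance-test
```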

**Note:** There is a Terraform configuration in the
[`test/terraform/`](https://github.com/hashicorp/vault-helm/tree/main/test/terraform) directory
that can be used to quickly bring up a GKE cluster and configure
`kubectl` and `helm` locally, making it easy to spin up a test
cluster for acceptance tests. Unit tests _do not_ require a running Kubernetes
cluster.
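
A minimal sketch of using that configuration, assuming you have `terraform`
installed and a GCP project to point it at (check the files under
`test/terraform/` for the variables it actually expects):

```shell
cd test/terraform
terraform init
# The variable name below is an assumption; see the .tf files for the real inputs.
terraform apply -var "project=<your-gcp-project>"
```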

### Writing Unit Tests

Changes to the Helm chart should be accompanied by appropriate unit tests.

#### Formatting

- Put tests in the test file in the same order as the variables appear in `values.yaml`.
- Start tests for a chart value with a header that says what is being tested, like this:
  ```
  #--------------------------------------------------------------------
  # annotations
  ```

- Name the test based on what it's testing in the following format (this will be its first line):
  ```
  @test "<section being tested>: <short description of the test case>" {
  ```

  When adding tests to an existing file, the first section will be the same as the other tests in the file.
  A combined sketch of these conventions follows below.
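
For example (the section name, chart value, and test name here are only
illustrative, not a prescription for actual chart values):

```
#--------------------------------------------------------------------
# extraLabels

@test "server/StatefulSet: extraLabels can be set" {
  # assertions for the chart value go here
}
```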

#### Test Details

[Bats](https://github.com/bats-core/bats-core) provides a way to run commands in a shell and inspect the output in an automated way.
In all of the tests in this repo, the base command being run is [helm template](https://docs.helm.sh/helm/#helm-template), which turns the templated files into straight yaml output.
In this way, we're able to test that the various conditionals in the templates render as we would expect.

Each test defines the files that should be rendered using the `--show-only` flag, then it might adjust chart values by adding `--set` flags as well.
The output from this `helm template` command is then piped to [yq](https://pypi.org/project/yq/).
`yq` allows us to pull out just the information we're interested in, either by referencing its position in the yaml file directly or by giving information about it (like its length).
The `-r` flag can be used with `yq` to return a raw string instead of a quoted one, which is especially useful when looking for an exact match.
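
For example, querying a service type with and without `-r` (the value shown is
illustrative):

```shell
echo '{"spec": {"type": "LoadBalancer"}}' | yq '.spec.type'     # "LoadBalancer"
echo '{"spec": {"type": "LoadBalancer"}}' | yq -r '.spec.type'  # LoadBalancer
```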

The test passes or fails based on the conditional in square brackets at the end, which compares our expected value with the output of `helm template` piped to `yq`.

The `| tee /dev/stderr` pieces copy the output of the `helm template` and `yq` commands to stderr so that it is visible for debugging, while still passing it along the pipeline for the comparison.

#### Test Examples

Here are some examples of common test patterns:

- Check that a value is disabled by default

  ```
  @test "ui/Service: no type by default" {
    cd `chart_dir`
    local actual=$(helm template \
        --show-only templates/ui-service.yaml \
        . | tee /dev/stderr |
        yq -r '.spec.type' | tee /dev/stderr)
    [ "${actual}" = "null" ]
  }
  ```

  In this example, nothing is changed from the default templates (no `--set` flags), then we use `yq` to retrieve the value we're checking, `.spec.type`.
  This output is then compared against our expected value (`null` in this case) in the assertion `[ "${actual}" = "null" ]`.

- Check that a template value is rendered to a specific value

  ```
  @test "ui/Service: specified type" {
    cd `chart_dir`
    local actual=$(helm template \
        --show-only templates/ui-service.yaml \
        --set 'ui.serviceType=LoadBalancer' \
        . | tee /dev/stderr |
        yq -r '.spec.type' | tee /dev/stderr)
    [ "${actual}" = "LoadBalancer" ]
  }
  ```

  This is very similar to the last example, except we've changed a default value with the `--set` flag and correspondingly changed the expected value.

- Check that a template value contains several values

  ```
  @test "server/standalone-StatefulSet: custom resources" {
    cd `chart_dir`
    local actual=$(helm template \
        --show-only templates/server-statefulset.yaml \
        --set 'server.standalone.enabled=true' \
        --set 'server.resources.requests.memory=256Mi' \
        --set 'server.resources.requests.cpu=250m' \
        . | tee /dev/stderr |
        yq -r '.spec.template.spec.containers[0].resources.requests.memory' | tee /dev/stderr)
    [ "${actual}" = "256Mi" ]

    local actual=$(helm template \
        --show-only templates/server-statefulset.yaml \
        --set 'server.standalone.enabled=true' \
        --set 'server.resources.limits.memory=256Mi' \
        --set 'server.resources.limits.cpu=250m' \
        . | tee /dev/stderr |
        yq -r '.spec.template.spec.containers[0].resources.limits.memory' | tee /dev/stderr)
    [ "${actual}" = "256Mi" ]
  }
  ```

  *Note:* If testing more than two conditions, it would be good to separate the `helm template` part of the command from the `yq` sections to reduce redundant work, as in the sketch below.
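
  A minimal sketch of that pattern, rendering the template once and querying it
  twice (the test name, second path, and expected values are illustrative):

  ```
  @test "server/standalone-StatefulSet: custom resources (single render)" {
    cd `chart_dir`
    local object=$(helm template \
        --show-only templates/server-statefulset.yaml \
        --set 'server.standalone.enabled=true' \
        --set 'server.resources.requests.memory=256Mi' \
        --set 'server.resources.requests.cpu=250m' \
        . | tee /dev/stderr)

    local actual=$(echo "$object" |
        yq -r '.spec.template.spec.containers[0].resources.requests.memory' | tee /dev/stderr)
    [ "${actual}" = "256Mi" ]

    local actual=$(echo "$object" |
        yq -r '.spec.template.spec.containers[0].resources.requests.cpu' | tee /dev/stderr)
    [ "${actual}" = "250m" ]
  }
  ```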

- Check that an entire template file is not rendered

  ```
  @test "server/StatefulSet: disabled when global.enabled=false" {
    cd `chart_dir`
    local actual=$( (helm template \
        --show-only templates/server-statefulset.yaml \
        --set 'global.enabled=false' \
        . || echo "---") | tee /dev/stderr |
        yq 'length > 0' | tee /dev/stderr)
    [ "${actual}" = "false" ]
  }
  ```

  Here we check the length of the command output to see if anything is rendered.
  This style can easily be switched to check that a file is rendered instead, as sketched below.
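
  For instance, a hypothetical test asserting that the file is rendered when
  `global.enabled` is left at its default would just flip the expected value:

  ```
  @test "server/StatefulSet: enabled by default" {
    cd `chart_dir`
    local actual=$( (helm template \
        --show-only templates/server-statefulset.yaml \
        . || echo "---") | tee /dev/stderr |
        yq 'length > 0' | tee /dev/stderr)
    [ "${actual}" = "true" ]
  }
  ```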