From 2e098751a14cb6a6122f7d7368f3c2eb54cf7f8d Mon Sep 17 00:00:00 2001
From: Nina Hingerl <76950046+NHingerl@users.noreply.github.com>
Date: Mon, 24 Jul 2023 15:58:31 +0200
Subject: [PATCH] review README (#87)

* review README

* add Grzegorz's suggestions

Co-authored-by: Grzegorz Karaluch

---------

Co-authored-by: Grzegorz Karaluch
---
 README.md | 274 +++++++++++++++++++++++++++++++-----------------------
 1 file changed, 156 insertions(+), 118 deletions(-)

diff --git a/README.md b/README.md
index d2dd2c4f..28b4d5fa 100644
--- a/README.md
+++ b/README.md
@@ -1,74 +1,86 @@
 # NATS Manager
+
 Manages the lifecycle of a NATS JetStream deployment.
 
 ## Description
+
-It is a standard Kubernetes operator which observes the state of NATS JetStream deployment and reconciles its state according to desired state.
+NATS Manager is a standard Kubernetes operator that observes the state of a NATS JetStream deployment and reconciles it according to the desired state.
 
 ### How it works
-This project aims to follow the Kubernetes [Operator pattern](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/).
+
+This project aims to follow the [Kubernetes Operator pattern](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/).
 
-It uses [Controllers](https://kubernetes.io/docs/concepts/architecture/controller/),
-which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.
+It uses [Controllers](https://kubernetes.io/docs/concepts/architecture/controller/), which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.
 
-This project is scaffolded using [Kubebuilder](https://book.kubebuilder.io), and all the Kubebuilder `makefile` helpers mentioned [here](https://book.kubebuilder.io/reference/makefile-helpers.html) can be used.
+This project is scaffolded using [Kubebuilder](https://book.kubebuilder.io), and all the Kubebuilder [`makefile` helpers](https://book.kubebuilder.io/reference/makefile-helpers.html) can be used.
 
 ## Development
 
-### Pre-requisites
+### Prerequisites
 
 - [Go](https://go.dev/)
 - [Docker](https://www.docker.com/)
 - [kubectl](https://kubernetes.io/docs/tasks/tools/)
-- [kubebuilder](https://book.kubebuilder.io/)
+- [Kubebuilder](https://book.kubebuilder.io/)
 - [kustomize](https://kustomize.io/)
-- Access to Kubernetes cluster ([k3d](https://k3d.io/) / k8s)
+- Access to a Kubernetes cluster ([k3d](https://k3d.io/) / Kubernetes)
+
+### Run NATS Manager locally
 
-### Running locally
 1. Download Go packages:
+
    ```sh
   go mod vendor && go mod tidy
   ```
+
 2. Install the CRDs into the cluster (a verification sketch follows this section):
-   ```sh
-   make install
-   ```
-3. Run your controller (this will run in the foreground, so if you want to leave it running, switch to a new terminal).
-   ```sh
-   make run
-   ```
+
+   ```sh
+   make install
+   ```
+
+3. Run the NATS Manager (it runs in the foreground; to leave it running, switch to a new terminal):
+
+   ```sh
+   make run
+   ```
 
 **NOTE:** You can also run this in one step by running: `make install run`
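+Before starting the manager, you can verify that `make install` registered the CRD. This is a quick sanity check; the exact CRD name can vary between versions, so the filter is deliberately loose:
+
+```sh
+# Expect a CRD for the NATS kind in the operator.kyma-project.io API group.
+kubectl get crds | grep -i nats
+```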
-### Running tests
+### Run tests
+
 Run the unit and integration tests:
+
 ```sh
 make test-only
 ```
 
 ### Linting
+
 1. Fix common lint issues:
-   ```sh
-   make imports-local
-   make fmt-local
-   ```
+
+   ```sh
+   make imports-local
+   make fmt-local
+   ```
 
 2. Run lint check:
-   ```sh
-   make lint-thoroughly
-   ```
+
+   ```sh
+   make lint-thoroughly
+   ```
 
-### Modifying the API definitions
+### Modify the API definitions
+
 If you are editing the API definitions, generate the manifests such as CRs or CRDs using:
 
 ```sh
 make manifests
 ```
 
-**NOTE:** Run `make --help` for more information on all potential `make` targets
+> **NOTE:** Run `make --help` for more information on all potential `make` targets.
 
-More information can be found via the [Kubebuilder Documentation](https://book.kubebuilder.io/introduction.html)
+For more information, see the [Kubebuilder documentation](https://book.kubebuilder.io/introduction.html).
 
 ### Build container images
 
@@ -78,26 +90,32 @@ Build and push your image to the location specified by `IMG`:
 make docker-build docker-push IMG=<container-registry>/nats-manager:<tag> # If using Docker Hub, <container-registry> is your username.
 ```
 
-**NOTE**: Run the following for MacBook M1 devices:
-```sh
-make docker-buildx IMG=<container-registry>/nats-manager:<tag>
-```
+> **NOTE:** Run the following for MacBook M1 devices:
+>
+> ```sh
+> make docker-buildx IMG=<container-registry>/nats-manager:<tag>
+> ```
 
 ## Deployment
+
 You’ll need a Kubernetes cluster to run against. You can use [k3d](https://k3d.io/) to get a local cluster for testing, or run against a remote cluster.
-**Note:** Your controller will automatically use the current context in your kubeconfig file (i.e. whatever cluster `kubectl cluster-info` shows).
 
-### Deploying on the cluster
+> **NOTE:** Your NATS Manager automatically uses the current context in your kubeconfig file, that is, whatever cluster `kubectl cluster-info` shows.
+
+### Deploy on the cluster
 
 1. Download Go packages:
+
    ```sh
   go mod vendor && go mod tidy
   ```
 
 2. Install the CRDs to the cluster:
-   ```sh
-   make install
-   ```
+
+   ```sh
+   make install
+   ```
+
 3. Build and push your image to the location specified by `IMG`:
 
    ```sh
@@ -105,9 +123,10 @@ You’ll need a Kubernetes cluster to run against. You can use [k3d](https://k3d
   ```
 
 4. Deploy the `nats-manager` controller to the cluster:
+
-   ```sh
-   make deploy IMG=<container-registry>/nats-manager:<tag>
-   ```
+   ```sh
+   make deploy IMG=<container-registry>/nats-manager:<tag>
+   ```
 
 5. [Optional] Install the NATS custom resource (a sketch of the manifest follows this list):
 
    ```sh
   kubectl apply -f config/samples/eventing-nats-eval.yaml
   ```
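+The optional sample above creates a NATS custom resource roughly like the following sketch. The field names follow this repository's `v1alpha1` API at the time of writing, but the values (cluster size, storage sizes, storage class) are purely illustrative; treat `config/samples/eventing-nats-eval.yaml` as the authoritative reference:
+
+```yaml
+apiVersion: operator.kyma-project.io/v1alpha1
+kind: NATS
+metadata:
+  name: eventing-nats
+  namespace: kyma-system
+spec:
+  cluster:
+    size: 3                    # number of NATS server replicas
+  jetStream:
+    memStorage:
+      enabled: true
+      size: 1Gi                # in-memory stream storage per replica
+    fileStorage:
+      storageClassName: default
+      size: 1Gi                # size of the PVC backing JetStream file storage
+```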
-**Undeploy controller**
+### Undeploy NATS Manager
 
-Undeploy the controller from the cluster:
+Undeploy the NATS Manager from the cluster:
 
 ```sh
 make undeploy
 ```
 
-**Uninstall CRDs**
+### Uninstall CRDs
 
 To delete the CRDs from the cluster:
 
@@ -131,93 +150,110 @@
 make uninstall
 ```
 
-### Deploying using [Kyma Lifecycle Manager](https://github.com/kyma-project/lifecycle-manager/tree/main)
-1. Deploy the Lifecycle Manager & Module Manager to the Control Plane cluster with:
+### Deploy NATS Manager with [Kyma Lifecycle Manager](https://github.com/kyma-project/lifecycle-manager/tree/main)
 
-```shell
-kyma alpha deploy
-```
+1. Deploy the Lifecycle Manager and Module Manager to the Control Plane cluster:
 
-**NOTE**: For single-cluster mode edit the lifecycle manager role to give access to all resources with `kubectl edit clusterrole lifecycle-manager-manager-role` and have the following under `rules`:
-```shell
-- apiGroups:
-  - "*"
-  resources:
-  - "*"
-  verbs:
-  - "*"
-```
+   ```shell
+   kyma alpha deploy
+   ```
 
-2. Prepare OCI container registry:
+2. For single-cluster mode:
+   1. Edit the Lifecycle Manager role to give access to all resources: `kubectl edit clusterrole lifecycle-manager-manager-role`
+   2. Put the following sequence under `rules`:
 
-It can be Github, DockerHub, GCP or local registry.
-The following resources worth having a look to set up a container registry unless you have one:
-* Lifecycle manager [provision-cluster-and-registry](https://github.com/kyma-project/lifecycle-manager/blob/main/docs/developer/provision-cluster-and-registry.md) documentation
-* [Github container registry documentation](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry). Change the visibility of a GH package to public if you don't provide a registry secret.
+      ```shell
+      - apiGroups:
+        - "*"
+        resources:
+        - "*"
+        verbs:
+        - "*"
+      ```
 
-3. Generate module template and push container image by running the following command in the project root director:
-```sh
-kyma alpha create module -n kyma-project.io/module/nats --version 0.0.1 --registry ghcr.io/{GH_USERNAME}/nats-manager -c {REGISTRY_USER_NAME}:{REGISTRY_AUTH_TOKEN} -w
-```
-In the command GH container registry sample is used. Replace GH_USERNAME=REGISTRY_USER_NAME and REGISTRY_AUTH_TOKEN with the GH username and token/password respectively.
+3. Prepare the OCI container registry.
 
-The command generates a ModuleTemplate `template.yaml` file in the project folder.
+   It can be GitHub, DockerHub, GCP, or a local registry.
+   If you haven't set up a container registry yet, refer to the following documentation:
 
-**NOTE:** Change `template.yaml` content with `spec.target=remote` to `spec.target=control-plane` for **single-cluster** mode as it follows:
+   - Lifecycle Manager: [Provision a cluster and OCI registry](https://github.com/kyma-project/lifecycle-manager/blob/main/docs/developer-tutorials/provision-cluster-and-registry.md)
+
+   - GitHub: [Working with the Container registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry). If you don't provide a registry Secret, change the visibility of a GitHub package to `public`.
 
-```yaml
-spec:
-  target: control-plane
-  channel: regular
-```
+4. Generate the module template and push the container image. In the project root directory, run:
 
-4. Apply the module template to the K8s cluster:
+   ```sh
+   kyma alpha create module -n kyma-project.io/module/nats --version 0.0.1 --registry ghcr.io/{GH_USERNAME}/nats-manager -c {REGISTRY_USER_NAME}:{REGISTRY_AUTH_TOKEN} -w
+   ```
 
-```sh
-kubectl apply -f template.yaml
-```
+   The command uses the GitHub container registry as an example. Replace the placeholders `{GH_USERNAME}` and `{REGISTRY_USER_NAME}` with your GitHub username, and `{REGISTRY_AUTH_TOKEN}` with your registry token or password.
 
-5. Deploy the `nats` module by adding it to `kyma` custom resource `spec.modules`:
+   The command generates a ModuleTemplate `template.yaml` file in the project directory.
 
-```sh
-kubectl edit -n kyma-system kyma default-kyma
-```
-The spec part should have the following:
-```yaml
-...
-spec:
-  modules:
-  - name: nats
-...
-```
+5. For single-cluster mode, change `spec.target=remote` to `spec.target=control-plane` in `template.yaml` as follows:
 
-6. Check whether your modules is deployed properly:
+   ```yaml
+   spec:
+     target: control-plane
+     channel: regular
+   ```
 
-Check nats resource if it has ready state:
-```shell
-kubectl get -n kyma-system nats
-```
+6. Apply the module template to the Kubernetes cluster:
 
-Check Kyma resource if it has ready state:
-```shell
-kubectl get -n kyma-system kyma
-```
-If they don't have ready state, one can troubleshoot it by checking the pods under `nats-manager-system` namespace where the module is installed:
-```shell
-kubectl get pods -n nats-manager-system
-```
+   ```sh
+   kubectl apply -f template.yaml
+   ```
+
+7. Deploy the `nats` module by adding it to the `kyma` custom resource's `spec.modules`:
+
+   ```sh
+   kubectl edit -n kyma-system kyma default-kyma
+   ```
+
+   Now, the **spec** section should show the following:
+
+   ```yaml
+   ...
+   spec:
+     modules:
+     - name: nats
+   ...
+   ```
+
+8. Check whether your NATS module is deployed properly (a wait-based alternative is sketched after this list):
+
+   - Check if the NATS resource is in the ready state:
+
+     ```shell
+     kubectl get -n kyma-system nats
+     ```
+
+   - Check if the Kyma resource is in the ready state:
+
+     ```shell
+     kubectl get -n kyma-system kyma
+     ```
+
+   > **TIP:** If they don't reach the ready state, check the Pods in the `nats-manager-system` Namespace, where the module is installed:
+   >
+   > ```shell
+   > kubectl get pods -n nats-manager-system
+   > ```
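+If you prefer not to poll these resources manually, you can block until the NATS CR reports readiness. This is a convenience sketch; it assumes the CR exposes a `status.state` field that reaches the value `Ready`, which may differ between versions, and it requires kubectl v1.23 or newer for JSONPath waits:
+
+```sh
+# Wait up to 5 minutes for all NATS CRs in kyma-system to reach state Ready.
+kubectl wait nats --all -n kyma-system \
+  --for=jsonpath='{.status.state}'=Ready --timeout=300s
+```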
-**Uninstalling controller with [Kyma Lifecycle Manager](https://github.com/kyma-project/lifecycle-manager/tree/main)**
+### Uninstall the controller with [Kyma Lifecycle Manager](https://github.com/kyma-project/lifecycle-manager/tree/main)
 
-1. Delete nats from `kyma` resource `spec.modules` `kubectl edit -n kyma-system kyma default-kyma`:
+1. Delete NATS from the `kyma` resource's `spec.modules`: `kubectl edit -n kyma-system kyma default-kyma`
 
-2. Check `nats` resource and module namespace whether they are deleted
+2. Check whether the `nats` resource and the module Namespace are deleted:
 
-```shell
-kubectl get -n kyma-system nats
-```
+   ```shell
+   kubectl get -n kyma-system nats
+   ```
 
 ## E2E tests
+
+> **NOTE:** Because the E2E tests need a Kubernetes cluster to run on, they are separated from the remaining tests and are only executed if the `e2e` build tags are passed.
+
 For the E2E tests, provide a Kubernetes cluster and run:
 
 ```shell
@@ -225,42 +261,48 @@
 make e2e IMG=<container-registry>/nats-manager:<tag>
 ```
 
-If you already have deployed the NATS-Manager on your cluster, you can simply run:
+If you have already deployed the NATS Manager on your cluster, you can simply run:
+
 ```shell
 make e2e-only
 ```
 
-To adjust the log-level, use the environment variable `E2E_LOG_LEVEL`. It accepts the values `debug`, `info`, `warn` and `error`. The default value is `debug`. To set the level, enter:
+To adjust the log level, use the environment variable `E2E_LOG_LEVEL`. It accepts the values `debug`, `info`, `warn`, and `error`. The default value is `debug`. To set the level, enter:
+
 ```shell
 export E2E_LOG_LEVEL="error"
 ```
 
-The e2e test consist of four consecutive steps. You can run them individually as well.
+The E2E test consists of four consecutive steps. You can run them individually as well.
 
 1. To set up a NATS CR and check that it and all correlated resources like Pods, Services and PVCs are set up as expected, run:
+
    ```shell
   make e2e-setup
   ```
 
 2. To execute a [bench test](https://docs.nats.io/using-nats/nats-tools/nats_cli/natsbench) on the NATS-Server, run:
+
    ```shell
   make e2e-bench
   ```
+
   This relies on the setup from `make e2e-setup`.
 
   > **NOTE:** Running this on slow hardware like CI systems or k3d clusters results in poor performance. However, this is still a great tool to simply show that NATS JetStream is in an operational configuration. A manual variant is sketched after this list.
 
3. To check that the internals of the NATS-Server are healthy and configured as expected, run:
+
   ```shell
   make e2e-nats-server
   ```
 
-   This will rely on the setup from `make e2e-setup`.
+   This relies on the setup from `make e2e-setup`.
 
4. To clean up the test environment and to check that all resources correlated to the NATS CR are removed, run:
+
   ```shell
   make e2e-cleanup
   ```
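+Besides the scripted steps, you can also inspect the running NATS server manually with the [nats CLI](https://github.com/nats-io/natscli). The following is a rough sketch, not part of the make targets: it assumes the NATS Service is called `eventing-nats` in the `kyma-system` Namespace (verify the name with `kubectl get svc -n kyma-system`) and that the `nats` CLI is installed locally:
+
+```sh
+# Forward the NATS client port to localhost, then run a small benchmark.
+kubectl port-forward -n kyma-system svc/eventing-nats 4222:4222 &
+nats bench testsubject --pub 1 --sub 1 --msgs 10000
+```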
-> **NOTE:** Because the e2e-tests need a Kubernetes Cluster to run on, they are separated from the remaining tests and will only be executed if the `e2e` build tags are passed.
 
 ## License
 
@@ -272,9 +314,5 @@
 You may obtain a copy of the License at
 
     http://www.apache.org/licenses/LICENSE-2.0
 
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-
+Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and limitations under the License.