diff --git a/event-email-service/README.md b/event-email-service/README.md index eb86d2bb..59e2d8d3 100644 --- a/event-email-service/README.md +++ b/event-email-service/README.md @@ -28,17 +28,17 @@ This example illustrates how to write a service in `NodeJS` that listens for Eve 1. Export your Namespace as a variable by replacing the **{namespace}** placeholder in the following command and running it: ```bash - export KYMA_EXAMPLE_NS="{namespace}" + export K8S_NAMESPACE="{namespace}" ``` 2. Deploy the service: ```bash - kubectl apply -f deployment -n $KYMA_EXAMPLE_NS + kubectl apply -f deployment -n $K8S_NAMESPACE ``` 3. Expose the service endpoint: ```bash - kubectl port-forward -n $KYMA_EXAMPLE_NS $(kubectl get pod -n $KYMA_EXAMPLE_NS -l example=event-email-service | grep event-email-service | awk '{print $1}') 3000 + kubectl port-forward -n $K8S_NAMESPACE $(kubectl get pod -n $K8S_NAMESPACE -l example=event-email-service | grep event-email-service | awk '{print $1}') 3000 ``` ### Test the service @@ -56,5 +56,5 @@ After sending the Event, you should see a log entry either in your terminal (if Clean all deployed example resources from Kyma with the following command: ```bash -kubectl delete all -l example=event-email-service -n $KYMA_EXAMPLE_NS +kubectl delete all -l example=event-email-service -n $K8S_NAMESPACE ``` diff --git a/gateway/README.md b/gateway/README.md index f4253132..f9b2965a 100644 --- a/gateway/README.md +++ b/gateway/README.md @@ -117,7 +117,7 @@ There are additional prerequisites to exposing a service manually using kubectl: 1. Export your Namespace as a variable. Replace the `{namespace}` placeholder in the following command and run it: ```bash - export KYMA_EXAMPLE_NS="{namespace}" + export K8S_NAMESPACE="{namespace}" ``` 2. Export your Kyma domain as a variable. Replace the `{domain}` placeholder in the following command and run it: @@ -129,7 +129,7 @@ There are additional prerequisites to exposing a service manually using kubectl: 3. Apply the `deployment.yaml` file from the `service` directory in this example. ```bash - kubectl apply -f ./service/deployment.yaml -n $KYMA_EXAMPLE_NS + kubectl apply -f ./service/deployment.yaml -n $K8S_NAMESPACE ``` #### Expose a service without authentication @@ -137,7 +137,7 @@ There are additional prerequisites to exposing a service manually using kubectl: Run this command to create an API Rule: ```bash -kubectl apply -f ./service/api-without-auth.yaml -n $KYMA_EXAMPLE_NS +kubectl apply -f ./service/api-without-auth.yaml -n $K8S_NAMESPACE ``` #### Test the API Rules without authentication @@ -163,7 +163,7 @@ metadata: labels: example: gateway-service name: http-db-service - namespace: $KYMA_EXAMPLE_NS + namespace: $K8S_NAMESPACE spec: gateway: kyma-gateway.kyma-system.svc.cluster.local rules: @@ -190,7 +190,7 @@ EOF You can also manually adjust the `https://dex.kyma.local` domain in the `trusted_issuers` section of the `api-with-jwt.yaml` file to fit your setup. 
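For orientation, the JWT access strategy in such an APIRule carries the issuer under **trusted_issuers**, roughly like the following sketch (an illustrative snippet under the assumption of the default Dex issuer, not a verbatim excerpt of `api-with-jwt.yaml`):

```yaml
accessStrategies:
  - handler: jwt
    config:
      trusted_issuers:
        - https://dex.kyma.local   # replace with the issuer URL of your identity provider
```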
After adjusting the domain, run: ```bash -kubectl apply -f ./service/api-with-jwt.yaml -n $KYMA_EXAMPLE_NS +kubectl apply -f ./service/api-with-jwt.yaml -n $K8S_NAMESPACE ``` #### Test the APIs with JWT authentication @@ -211,7 +211,7 @@ curl -ik https://http-db-service.$KYMA_EXAMPLE_DOMAIN/orders -H 'Authorization: Create an API Rule with the OAuth2 authentication settings: ```bash -kubectl apply -f ./service/api-with-oauth2.yaml -n $KYMA_EXAMPLE_NS +kubectl apply -f ./service/api-with-oauth2.yaml -n $K8S_NAMESPACE ``` #### Fetch the OAuth2 token @@ -219,7 +219,7 @@ kubectl apply -f ./service/api-with-oauth2.yaml -n $KYMA_EXAMPLE_NS 1. Create an OAuth2 client: ```bash - kubectl apply -f ./service/oauth2client.yaml -n $KYMA_EXAMPLE_NS + kubectl apply -f ./service/oauth2client.yaml -n $K8S_NAMESPACE ``` 2. Fetch the access token with the required scopes. The access token in the response is referred to as the **\{oauth2-token\}**. Run: @@ -245,9 +245,9 @@ curl -ik https://http-db-service.$KYMA_EXAMPLE_DOMAIN/orders -H 'Authorization: Run the following command to completely remove the example and all its resources from the cluster: ```bash -kubectl delete all -l example=gateway-service -n $KYMA_EXAMPLE_NS -kubectl delete apirules.gateway.kyma-project.io -l example=gateway-service -n $KYMA_EXAMPLE_NS -kubectl delete oauth2clients.hydra.ory.sh -l example=gateway-service -n $KYMA_EXAMPLE_NS +kubectl delete all -l example=gateway-service -n $K8S_NAMESPACE +kubectl delete apirules.gateway.kyma-project.io -l example=gateway-service -n $K8S_NAMESPACE +kubectl delete oauth2clients.hydra.ory.sh -l example=gateway-service -n $K8S_NAMESPACE ``` ## Troubleshooting @@ -257,7 +257,7 @@ kubectl delete oauth2clients.hydra.ory.sh -l example=gateway-service -n $KYMA_EX **No healthy upstream:** Check if the Pod you created is running. Run: ```bash -kubectl get pods -n $KYMA_EXAMPLE_NS +kubectl get pods -n $K8S_NAMESPACE ``` Wait until all containers of the Pod are running. @@ -267,7 +267,7 @@ Wait until all containers of the Pod are running. **Upstream connect error or disconnect/reset before headers:** Check if the Pod you created has the istio-proxy container injected. Run this command: ```bash -kubectl get pods -l example=gateway-service -n $KYMA_EXAMPLE_NS -o json | jq '.items[].spec.containers[].name' +kubectl get pods -l example=gateway-service -n $K8S_NAMESPACE -o json | jq '.items[].spec.containers[].name' ``` One of the returned strings should be the istio-proxy. If there is no such string, the Namespace probably does not have Istio injection enabled. For more information, read the document about the [sidecar proxy injection](https://kyma-project.io/#/04-operation-guides/operations/smsh-01-istio-enable-sidecar-injection). diff --git a/http-db-service/README.md b/http-db-service/README.md index 8bf6d901..ffe6988e 100755 --- a/http-db-service/README.md +++ b/http-db-service/README.md @@ -29,12 +29,12 @@ Run the following commands to deploy the published service to Kyma: 1. Export your Namespace as variable by replacing the `{namespace}` placeholder in the following command and running it: ```bash - export KYMA_EXAMPLE_NS="{namespace}" + export K8S_NAMESPACE="{namespace}" ``` 2. 
Deploy the service: ```bash - kubectl create namespace $KYMA_EXAMPLE_NS - kubectl apply -f deployment/deployment.yaml -n $KYMA_EXAMPLE_NS + kubectl create namespace $K8S_NAMESPACE + kubectl apply -f deployment/deployment.yaml -n $K8S_NAMESPACE ``` ### MSSQL Database tests @@ -58,5 +58,5 @@ The command runs the specific unit tests for MSSQL databases with the environmen Run the following command to completely remove the example and all its resources from the cluster: ```bash -kubectl delete all -l example=http-db-service -n $KYMA_EXAMPLE_NS +kubectl delete all -l example=http-db-service -n $K8S_NAMESPACE ``` diff --git a/jaeger/README.md b/jaeger/README.md index 34756847..237db94e 100644 --- a/jaeger/README.md +++ b/jaeger/README.md @@ -18,7 +18,7 @@ The following instructions outline how to use [`Jaeger`](https://github.com/jaeg 1. Export your Namespace as a variable. Replace the `{namespace}` placeholder in the following command and run it: ```bash - export KYMA_NS="{namespace}" + export K8S_NAMESPACE="{namespace}" ``` 1. Export the Helm release name that you want to use. The release name must be unique for the chosen Namespace. Be aware that all resources in the cluster will be prefixed with that name. Run the following command: @@ -39,7 +39,7 @@ The following instructions outline how to use [`Jaeger`](https://github.com/jaeg Run the Helm upgrade command, which installs the chart if not present yet. ```bash -helm upgrade --install --create-namespace -n $KYMA_NS $HELM_JAEGER_RELEASE jaegertracing/jaeger -f https://raw.githubusercontent.com/kyma-project/examples/main/jaeger/values.yaml +helm upgrade --install --create-namespace -n $K8S_NAMESPACE $HELM_JAEGER_RELEASE jaegertracing/jaeger -f https://raw.githubusercontent.com/kyma-project/examples/main/jaeger/values.yaml ``` You can either use the [`values.yaml`](./values.yaml) provided in this `jaeger` folder, which contains customized settings deviating from the default settings, or create your own `values.yaml` file. @@ -48,7 +48,7 @@ You can either use the [`values.yaml`](./values.yaml) provided in this `jaeger` Check if the `jaeger` Pod was successfully created in the Namespace and is in the `Running` state: ```bash -kubectl -n $KYMA_NS rollout status deploy $HELM_JAEGER_RELEASE +kubectl -n $K8S_NAMESPACE rollout status deploy $HELM_JAEGER_RELEASE ``` ### Activate a TracePipeline @@ -56,7 +56,7 @@ kubectl -n $KYMA_NS rollout status deploy $HELM_JAEGER_RELEASE To configure the Kyma trace collector with the deployed Jaeger instance as the backend. To create a new [TracePipeline](https://kyma-project.io/#/telemetry-manager/user/03-traces), execute the following command: ```bash - cat <
@@ -104,7 +104,7 @@ Apply the LogPipeline: output: custom: | name loki - host ${HELM_LOKI_RELEASE}-headless.${KYMA_NS}.svc.cluster.local + host ${HELM_LOKI_RELEASE}-headless.${K8S_NAMESPACE}.svc.cluster.local port 3100 auto_kubernetes_labels off labels job=fluentbit, container=\$kubernetes['container_name'], namespace=\$kubernetes['namespace_name'], pod=\$kubernetes['pod_name'], node=\$kubernetes['host'], app=\$kubernetes['labels']['app'],app=\$kubernetes['labels']['app.kubernetes.io/name'] @@ -123,7 +123,7 @@ When the status of the applied LogPipeline resource turns into `Running`, the un 1. To access the Loki API, use kubectl port forwarding. Run: ```bash - kubectl -n ${KYMA_NS} port-forward svc/$(kubectl get svc -n ${KYMA_NS} -l app.kubernetes.io/name=loki -ojsonpath='{.items[0].metadata.name}') 3100 + kubectl -n ${K8S_NAMESPACE} port-forward svc/$(kubectl get svc -n ${K8S_NAMESPACE} -l app.kubernetes.io/name=loki -ojsonpath='{.items[0].metadata.name}') 3100 ``` 1. Loki queries need a query parameter **time**, provided in nanoseconds. To get the current nanoseconds in Linux or macOS, run: @@ -151,12 +151,12 @@ Because Grafana provides a very good Loki integration, you might want to install 1. To deploy Grafana, run: ```bash - helm upgrade --install --create-namespace -n ${KYMA_NS} grafana grafana/grafana -f https://raw.githubusercontent.com/kyma-project/examples/main/loki/grafana-values.yaml + helm upgrade --install --create-namespace -n ${K8S_NAMESPACE} grafana grafana/grafana -f https://raw.githubusercontent.com/kyma-project/examples/main/loki/grafana-values.yaml ``` 1. To enable Loki as Grafana data source, run: ```bash - cat < **TIP:** It's recommended that you provide tokens using a Secret. To achieve that, you can mount the relevant attributes of your secret via the `extraEnvs` parameter and use placeholders for referencing the actual environment values. Take a look at the provided sample [secret-values.yaml](./secret-values.yaml) file, adjust it to your Secret, and run: ```bash - helm upgrade metrics-gateway open-telemetry/opentelemetry-collector --version 0.62.2 --install --namespace $KYMA_NS \ + helm upgrade metrics-gateway open-telemetry/opentelemetry-collector --version 0.62.2 --install --namespace $K8S_NAMESPACE \ -f https://raw.githubusercontent.com/kyma-project/examples/main/metrics-otlp/metrics-gateway-values.yaml \ -f secret-values.yaml ``` @@ -64,7 +64,7 @@ Furthermore, the setup brings an OpenTelemetry Collector DaemonSet acting as age 1. Verify the deployment. Check that the related Pod has been created in the Namespace and is in the `Running` state: ```bash - kubectl -n $KYMA_NS rollout status deploy metrics-gateway + kubectl -n $K8S_NAMESPACE rollout status deploy metrics-gateway ``` ## Install the agent @@ -76,15 +76,15 @@ Furthermore, the setup brings an OpenTelemetry Collector DaemonSet acting as age Deploy an Otel Collector using the upstream Helm chart with a prepared [values](./metrics-agent-values.yaml). The values file defines a metrics pipeline for scraping workload by annotation and pushing them to the gateway. It also defines a second metrics pipeline determining node-specific metrics for your workload from the nodes kubelet and the nodes filesystem itself. 
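Purely for orientation, a values file like the one referenced above boils down to an OpenTelemetry Collector configuration with two metrics pipelines, roughly along these lines. This is a simplified sketch with assumed receiver names and pipeline layout, not the literal content of `metrics-agent-values.yaml`; the Helm command that follows pulls in the real file.

```yaml
# Illustrative sketch only; the actual metrics-agent-values.yaml may differ in names and details.
config:
  receivers:
    prometheus:                      # scrape workloads that opt in via annotation
      config:
        scrape_configs:
          - job_name: annotated-pods
            kubernetes_sd_configs:
              - role: pod
            relabel_configs:
              - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
                action: keep
                regex: "true"
    kubeletstats:                    # node-specific metrics from the kubelet
      auth_type: serviceAccount
    hostmetrics:                     # host-level metrics, for example filesystem usage
      scrapers:
        filesystem: {}
  exporters:
    otlp:
      endpoint: metrics-gateway.example-namespace:4317   # overridden via --set in the command below
  service:
    pipelines:
      metrics/annotated:
        receivers: [prometheus]
        exporters: [otlp]
      metrics/node:
        receivers: [kubeletstats, hostmetrics]
        exporters: [otlp]
```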
```bash - helm upgrade metrics-agent open-telemetry/opentelemetry-collector --version 0.62.2 --install --namespace $KYMA_NS \ + helm upgrade metrics-agent open-telemetry/opentelemetry-collector --version 0.62.2 --install --namespace $K8S_NAMESPACE \ -f https://raw.githubusercontent.com/kyma-project/examples/main/metrics-otlp/metrics-agent-values.yaml \ - --set config.exporters.otlp.endpoint=metrics-gateway.$KYMA_NS:4317 + --set config.exporters.otlp.endpoint=metrics-gateway.$K8S_NAMESPACE:4317 ``` 1. Verify the deployment. Check that the related Pod has been created in the Namespace and is in the `Running` state: ```bash - kubectl -n $KYMA_NS rollout status daemonset metrics-agent + kubectl -n $K8S_NAMESPACE rollout status daemonset metrics-agent ``` ## Result @@ -118,9 +118,9 @@ prometheus.io/path: /myMetrics # optional, configure the path under which the me To try it out, you can install the demo app taken from the tutorial [Expose Custom Metrics in Kyma](https://github.com/kyma-project/examples/tree/main/prometheus/monitoring-custom-metrics) and annotate the workload with the annotations mentioned above. ```bash -kubectl apply -n $KYMA_NS -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/monitoring-custom-metrics/deployment/deployment.yaml -kubectl -n $KYMA_NS annotate service sample-metrics prometheus.io/scrape=true -kubectl -n $KYMA_NS annotate service sample-metrics prometheus.io/port=8080 +kubectl apply -n $K8S_NAMESPACE -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/monitoring-custom-metrics/deployment/deployment.yaml +kubectl -n $K8S_NAMESPACE annotate service sample-metrics prometheus.io/scrape=true +kubectl -n $K8S_NAMESPACE annotate service sample-metrics prometheus.io/port=8080 ``` The workload exposes the metric `cpu_temperature_celsius` at port `8080`, which is then automatically ingested to your configured backend. @@ -140,7 +140,7 @@ spec: containers: - env: - name: OTEL_EXPORTER_OTLP_METRICS_ENDPOINT - value: http://metrics-gateway.$KYMA_NS:4317 + value: http://metrics-gateway.$K8S_NAMESPACE:4317 - name: OTEL_SERVICE_NAME valueFrom: fieldRef: @@ -153,6 +153,6 @@ spec: When you're done, you can remove the example and all its resources from the cluster by calling Helm: ```bash -helm delete -n $KYMA_NS metrics-gateway -helm delete -n $KYMA_NS metrics-agent +helm delete -n $K8S_NAMESPACE metrics-gateway +helm delete -n $K8S_NAMESPACE metrics-agent ``` diff --git a/prometheus/README.md b/prometheus/README.md index 8bdb6c2b..4ec980b8 100644 --- a/prometheus/README.md +++ b/prometheus/README.md @@ -27,11 +27,11 @@ As an alternative, you can install the upstream chart with all customization opt 1. Export your Namespace as a variable. Replace the `{namespace}` placeholder in the following command and run it: ```bash - export KYMA_NS="{namespace}" + export K8S_NAMESPACE="{namespace}" ``` 1. If you haven't created the Namespace yet, now is the time to do so: ```bash - kubectl create namespace $KYMA_NS + kubectl create namespace $K8S_NAMESPACE ``` >**Note**: This Namespace must have **no** Istio sidecar injection enabled; that is, there must be no `istio-injection` label present on the Namespace. The Helm chart deploys jobs that will not succeed when Isto sidecar injection is enabled. @@ -51,7 +51,7 @@ As an alternative, you can install the upstream chart with all customization opt 1. Run the Helm upgrade command, which installs the chart if it's not present yet. 
At the end of the command, change the Grafana admin password to some value of your choice. ```bash - helm upgrade --install -n ${KYMA_NS} ${HELM_PROM_RELEASE} prometheus-community/kube-prometheus-stack -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/values.yaml --set grafana.adminPassword=myPwd + helm upgrade --install -n ${K8S_NAMESPACE} ${HELM_PROM_RELEASE} prometheus-community/kube-prometheus-stack -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/values.yaml --set grafana.adminPassword=myPwd ``` 2. You can use the [values.yaml](./values.yaml) provided with this tutorial, which contains customized settings deviating from the default settings, or create your own one. @@ -67,13 +67,13 @@ The provided `values.yaml` covers the following adjustments: 1. To configure Prometheus for scraping of the Istio-specific metrics from any istio-proxy running in the cluster, deploy a PodMonitor, which scrapes any Pod that has a port with name `.*-envoy-prom` exposed. ```bash - kubectl -n ${KYMA_NS} apply -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/istio/podmonitor-istio-proxy.yaml + kubectl -n ${K8S_NAMESPACE} apply -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/istio/podmonitor-istio-proxy.yaml ``` 2. Deploy a ServiceMonitor definition for the central metrics of the `istiod` deployment: ```bash - kubectl -n ${KYMA_NS} apply -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/istio/servicemonitor-istiod.yaml + kubectl -n ${K8S_NAMESPACE} apply -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/istio/servicemonitor-istiod.yaml ``` 3. Get the latest versions of the Istio-specific dashboards. @@ -81,8 +81,8 @@ The provided `values.yaml` covers the following adjustments: Either follow the [Istio quick start instructions](https://istio.io/latest/docs/ops/integrations/grafana/#option-1-quick-start), or take the prepared ones with the following command: ```bash - kubectl -n ${KYMA_NS} apply -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/istio/configmap-istio-grafana-dashboards.yaml - kubectl -n ${KYMA_NS} apply -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/istio/configmap-istio-services-grafana-dashboards.yaml + kubectl -n ${K8S_NAMESPACE} apply -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/istio/configmap-istio-grafana-dashboards.yaml + kubectl -n ${K8S_NAMESPACE} apply -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/istio/configmap-istio-services-grafana-dashboards.yaml ``` > **NOTE:** This setup collects all Istio metrics on a Pod level, which can lead to cardinality issues. Because metrics are only needed on service level, for setups having a bigger amount of workloads deployed, it is recommended to use a setup based on federation as described in the [Istio documentation](https://istio.io/latest/docs/ops/best-practices/observability/#using-prometheus-for-production-scale-monitoring). @@ -92,11 +92,11 @@ The provided `values.yaml` covers the following adjustments: 1. You should see several Pods coming up in the Namespace, especially Prometheus and Alertmanager. Assure that all Pods have the "Running" state. 2. Browse the Prometheus dashboard and verify that all "Status->Targets" are healthy. 
The following command exposes the dashboard on `http://localhost:9090`: ```bash - kubectl -n ${KYMA_NS} port-forward $(kubectl -n ${KYMA_NS} get service -l app=kube-prometheus-stack-prometheus -oname) 9090 + kubectl -n ${K8S_NAMESPACE} port-forward $(kubectl -n ${K8S_NAMESPACE} get service -l app=kube-prometheus-stack-prometheus -oname) 9090 ``` 3. Browse the Grafana dashboard and verify that the dashboards are showing data. The user `admin` is preconfigured in the Helm chart; the password was provided in your `helm install` command. The following command exposes the dashboard on `http://localhost:3000`: ```bash - kubectl -n ${KYMA_NS} port-forward svc/${HELM_PROM_RELEASE}-grafana 3000:80 + kubectl -n ${K8S_NAMESPACE} port-forward svc/${HELM_PROM_RELEASE}-grafana 3000:80 ``` ### Deploy a custom workload and scrape it @@ -122,7 +122,7 @@ You can try it out by removing the ServiceMonitor from the previous example and The [alertmanager-values.yaml](./alertmanager-values.yaml) example provides a configuration that sends notifications for alerts with high severity to a Slack channel. To deploy it, download the file, adapt ``, `` and `` to your environment, and run the Helm upgrade command to deploy the configuration: ```bash - helm upgrade --install -n ${KYMA_NS} ${HELM_PROM_RELEASE} prometheus-community/kube-prometheus-stack -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/values.yaml -f ./alertmanager-values.yaml --set grafana.adminPassword=myPwd + helm upgrade --install -n ${K8S_NAMESPACE} ${HELM_PROM_RELEASE} prometheus-community/kube-prometheus-stack -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/values.yaml -f ./alertmanager-values.yaml --set grafana.adminPassword=myPwd ``` 2. Follow the tutorial [monitoring-alert-rules](./monitoring-alert-rules/) to set up an alerting rule on Prometheus. @@ -136,5 +136,5 @@ Follow the tutorial [monitoring-grafana-dashboard](./monitoring-grafana-dashboar To remove the installation from the cluster, call Helm: ```bash -helm delete -n ${KYMA_NS} ${HELM_PROM_RELEASE} +helm delete -n ${K8S_NAMESPACE} ${HELM_PROM_RELEASE} ``` diff --git a/prometheus/monitoring-alert-rules/README.md b/prometheus/monitoring-alert-rules/README.md index 11b25abe..437d7345 100644 --- a/prometheus/monitoring-alert-rules/README.md +++ b/prometheus/monitoring-alert-rules/README.md @@ -22,7 +22,7 @@ This example shows how to deploy and view alerting rules in Kyma. 2. Run the `port-forward` command on the `monitoring-prometheus` service to access the Prometheus dashboard. ```bash - kubectl -n ${KYMA_NS} port-forward $(kubectl -n ${KYMA_NS} get service -l app=kube-prometheus-stack-prometheus -oname) 9090 + kubectl -n ${K8S_NAMESPACE} port-forward $(kubectl -n ${K8S_NAMESPACE} get service -l app=kube-prometheus-stack-prometheus -oname) 9090 ``` 3. Go to `http://localhost:9090/rules` and find the **pod-not-running** rule. @@ -34,13 +34,13 @@ This example shows how to deploy and view alerting rules in Kyma. 1. Export your Namespace as a variable: ```bash - export KYMA_APPLICATION_NS="{namespace}" + export K8S_NAMESPACE="{namespace}" ``` 2. 
To stop the alert from getting fired, create the Deployment: ```bash - kubectl apply -f https://raw.githubusercontent.com/kyma-project/examples/main/http-db-service/deployment/deployment.yaml -n $KYMA_APPLICATION_NS + kubectl apply -f https://raw.githubusercontent.com/kyma-project/examples/main/http-db-service/deployment/deployment.yaml -n $K8S_NAMESPACE ``` ### Cleanup @@ -56,5 +56,5 @@ Run the following commands to completely remove the example and all its resource 2. Remove the **http-db-service** example and all its resources from the cluster: ```bash - kubectl delete all -l example=http-db-service -n $KYMA_APPLICATION_NS + kubectl delete all -l example=http-db-service -n $K8S_NAMESPACE ``` diff --git a/prometheus/monitoring-custom-metrics/README.md b/prometheus/monitoring-custom-metrics/README.md index 6c4d3f64..05dbdde1 100644 --- a/prometheus/monitoring-custom-metrics/README.md +++ b/prometheus/monitoring-custom-metrics/README.md @@ -19,19 +19,19 @@ This example shows how to expose custom metrics to Prometheus with a Golang serv 1. Export your Namespace as a variable. Replace the `{namespace}` placeholder in the following command and run it: ```bash - export KYMA_APPLICATION_NS="{namespace}" + export K8S_NAMESPACE="{namespace}" ``` 2. Ensure that your Namespace has Istio sidecar injection enabled. This example assumes that the metrics are exposed in a strict mTLS mode: ```bash - kubectl label namespace ${KYMA_APPLICATION_NS} istio-injection=enabled + kubectl label namespace ${K8S_NAMESPACE} istio-injection=enabled ``` 3. Deploy the service: ```bash - kubectl apply -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/monitoring-custom-metrics/deployment/deployment.yaml -n $KYMA_APPLICATION_NS + kubectl apply -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/monitoring-custom-metrics/deployment/deployment.yaml -n $K8S_NAMESPACE ``` 4. Deploy the ServiceMonitor: @@ -45,7 +45,7 @@ This example shows how to expose custom metrics to Prometheus with a Golang serv 1. Run the `port-forward` command on the `monitoring-prometheus` service: ```bash - kubectl -n ${KYMA_NS} port-forward $(kubectl -n ${KYMA_NS} get service -l app=kube-prometheus-stack-prometheus -oname) 9090 + kubectl -n ${K8S_NAMESPACE} port-forward $(kubectl -n ${K8S_NAMESPACE} get service -l app=kube-prometheus-stack-prometheus -oname) 9090 ``` All the **sample-metrics** endpoints appear as `Targets` under `http://localhost:9090/targets#job-sample-metrics` list. @@ -66,5 +66,5 @@ Run the following commands to completely remove the example and all its resource 2. Run the following command to completely remove the example service and all its resources from the cluster: ```bash - kubectl delete all -l example=monitoring-custom-metrics -n $KYMA_APPLICATION_NS + kubectl delete all -l example=monitoring-custom-metrics -n $K8S_NAMESPACE ``` diff --git a/prometheus/monitoring-grafana-dashboard/README.md b/prometheus/monitoring-grafana-dashboard/README.md index e861f6a0..1eb9d582 100644 --- a/prometheus/monitoring-grafana-dashboard/README.md +++ b/prometheus/monitoring-grafana-dashboard/README.md @@ -65,5 +65,5 @@ When you create a dashboard to monitor one of your applications (Function, micro 4. 
Restart the Grafana deployment with the following command: ```bash - kubectl -n ${KYMA_NS} rollout restart deployment ${HELM_RELEASE}-grafana + kubectl -n ${K8S_NAMESPACE} rollout restart deployment ${HELM_RELEASE}-grafana ``` diff --git a/service-binding/lambda/README.md b/service-binding/lambda/README.md index 81d6f55c..39bdb542 100644 --- a/service-binding/lambda/README.md +++ b/service-binding/lambda/README.md @@ -25,33 +25,33 @@ Apply a battery of `yaml` files to run the example. 1. Export your Namespace as a variable by replacing the `{namespace}` placeholder in the following command and running it: ```bash - export KYMA_EXAMPLE_NS="{namespace}" + export K8S_NAMESPACE="{namespace}" export KYMA_EXAMPLE_DOMAIN="{kyma domain}" ``` 2. Create a Redis instance: ```bash - kubectl apply -f deployment/redis-instance.yaml -n $KYMA_EXAMPLE_NS + kubectl apply -f deployment/redis-instance.yaml -n $K8S_NAMESPACE ``` 3. Ensure that the Redis instance is provisioned: ```bash - kubectl get serviceinstance/redis-instance -o jsonpath='{ .status.conditions[0].reason }' -n $KYMA_EXAMPLE_NS + kubectl get serviceinstance/redis-instance -o jsonpath='{ .status.conditions[0].reason }' -n $K8S_NAMESPACE ``` 4. Create a Redis client through a lambda function, along with the ServiceBinding, and the ServiceBindingUsage custom resource: ```bash - kubectl apply -f deployment/lambda-function.yaml -n $KYMA_EXAMPLE_NS + kubectl apply -f deployment/lambda-function.yaml -n $K8S_NAMESPACE ``` 5. Ensure that the Redis ServiceBinding works: ```bash - kubectl get servicebinding/redis-instance-binding -o jsonpath='{ .status.conditions[0].reason }' -n $KYMA_EXAMPLE_NS + kubectl get servicebinding/redis-instance-binding -o jsonpath='{ .status.conditions[0].reason }' -n $K8S_NAMESPACE ``` 6. Verify that the lambda function is ready: ```bash - kubeless function ls redis-client -n $KYMA_EXAMPLE_NS + kubeless function ls redis-client -n $K8S_NAMESPACE ``` 7. Trigger the function. @@ -65,7 +65,7 @@ Apply a battery of `yaml` files to run the example. Use this command to remove the example and all its resources from your Kyma cluster: ```bash -kubectl delete all,api,function,servicebinding,serviceinstance,servicebindingusage -l example=service-binding -n $KYMA_EXAMPLE_NS +kubectl delete all,api,function,servicebinding,serviceinstance,servicebindingusage -l example=service-binding -n $K8S_NAMESPACE ``` ## Troubleshooting @@ -73,7 +73,7 @@ kubectl delete all,api,function,servicebinding,serviceinstance,servicebindingusa Make sure the password is injected correctly into the Pod. The password should match the one in the [redis-instance.yaml](./deployment/redis-instance.yaml) ```bash -kubectl exec -n $KYMA_EXAMPLE_NS -it $(kubectl get po -n $KYMA_EXAMPLE_NS -l example=service-binding-lambda --no-headers | awk '{print $1}') bash +kubectl exec -n $K8S_NAMESPACE -it $(kubectl get po -n $K8S_NAMESPACE -l example=service-binding-lambda --no-headers | awk '{print $1}') bash env | grep -i redis_password ``` diff --git a/service-binding/service/README.md b/service-binding/service/README.md index 01f775f4..34cc09aa 100644 --- a/service-binding/service/README.md +++ b/service-binding/service/README.md @@ -23,37 +23,37 @@ Run the following commands to provision a managed MSSQL database, create a bindi 1. Export your Namespace as variable by replacing the `` placeholder in the following command and running it: ```bash - export KYMA_EXAMPLE_NS="" + export K8S_NAMESPACE="" ``` 2. Create a MSSQL instance. 
```bash - kubectl apply -f deployment/mssql-instance.yaml -n $KYMA_EXAMPLE_NS + kubectl apply -f deployment/mssql-instance.yaml -n $K8S_NAMESPACE ``` 3. Ensure that the MSSQL instance is provisioned and running. ```bash - kubectl get serviceinstance/mssql-instance -o jsonpath='{ .status.conditions[0].reason }' -n $KYMA_EXAMPLE_NS + kubectl get serviceinstance/mssql-instance -o jsonpath='{ .status.conditions[0].reason }' -n $K8S_NAMESPACE ``` > NOTE: Service instances usually take some minutes to be provisioned. 4. Deploy the service binding, the service binding usage and the http-db-service. ```bash - kubectl apply -f deployment/mssql-binding-usage.yaml -n $KYMA_EXAMPLE_NS + kubectl apply -f deployment/mssql-binding-usage.yaml -n $K8S_NAMESPACE ``` 5. Ensure that the MSSQL service binding is provisioned. ```bash - kubectl get ServiceBinding/mssql-instance-binding -o jsonpath='{ .status.conditions[0].reason }' -n $KYMA_EXAMPLE_NS + kubectl get ServiceBinding/mssql-instance-binding -o jsonpath='{ .status.conditions[0].reason }' -n $K8S_NAMESPACE ``` 6. Verify that the http-db-service is ready. ```bash - kubectl get pods -l example=service-binding -n $KYMA_EXAMPLE_NS + kubectl get pods -l example=service-binding -n $K8S_NAMESPACE ``` 7. Forward the service port to be able to reach the service. ```bash - kubectl port-forward -n $KYMA_EXAMPLE_NS $(kubectl get pod -n $KYMA_EXAMPLE_NS -l example=service-binding | grep http-db-service | awk '{print $1}') 8017 + kubectl port-forward -n $K8S_NAMESPACE $(kubectl get pod -n $K8S_NAMESPACE -l example=service-binding | grep http-db-service | awk '{print $1}') 8017 ``` 8. Check that the service works as expected. @@ -71,7 +71,7 @@ Run the following commands to provision a managed MSSQL database, create a bindi Run the following command to completely remove the example and all its resources from the cluster: ```bash -kubectl delete all,sbu,servicebinding,serviceinstance -l example=service-binding-service -n $KYMA_EXAMPLE_NS +kubectl delete all,sbu,servicebinding,serviceinstance -l example=service-binding-service -n $K8S_NAMESPACE ``` ## Troubleshooting diff --git a/trace-demo/README.md b/trace-demo/README.md index c1aa1703..16c48a2e 100644 --- a/trace-demo/README.md +++ b/trace-demo/README.md @@ -16,16 +16,16 @@ The following instructions install the OpenTelemetry [demo application](https:// 1. Export your Namespace as a variable. Replace the `{namespace}` placeholder in the following command and run it: ```bash - export KYMA_NS="{namespace}" + export K8S_NAMESPACE="{namespace}" ``` 1. If you don't have created a Namespace yet, do it now: ```bash - kubectl create namespace $KYMA_NS + kubectl create namespace $K8S_NAMESPACE ``` 1. To enable Istio injection in your Namespace, set the following label: ```bash - kubectl label namespace $KYMA_NS istio-injection=enabled + kubectl label namespace $K8S_NAMESPACE istio-injection=enabled ``` 1. Export the Helm release name that you want to use. The release name must be unique for the chosen Namespace. Be aware that all resources in the cluster will be prefixed with that name. Run the following command: @@ -65,7 +65,7 @@ To [enable Istio](https://kyma-project.io/#/telemetry-manager/user/03-traces?id= Run the Helm upgrade command, which installs the chart if not present yet. 
```bash -helm upgrade --version 0.24.0 --install --create-namespace -n $KYMA_NS $HELM_OTEL_RELEASE open-telemetry/opentelemetry-demo -f https://raw.githubusercontent.com/kyma-project/examples/main/trace-demo/values.yaml +helm upgrade --version 0.24.0 --install --create-namespace -n $K8S_NAMESPACE $HELM_OTEL_RELEASE open-telemetry/opentelemetry-demo -f https://raw.githubusercontent.com/kyma-project/examples/main/trace-demo/values.yaml ``` You can either use the [`values.yaml`](./values.yaml) provided in this `trace-demo` folder, which contains customized settings deviating from the default settings, or create your own `values.yaml` file. The customizations cover the following areas: @@ -80,7 +80,7 @@ To verify that the application is running properly, set up port forwarding and c 1. Verify the frontend: ```bash - kubectl -n $KYMA_NS port-forward svc/$HELM_OTEL_RELEASE-frontend 8080 + kubectl -n $K8S_NAMESPACE port-forward svc/$HELM_OTEL_RELEASE-frontend 8080 ``` ```bash open http://localhost:8080 @@ -88,7 +88,7 @@ To verify that the application is running properly, set up port forwarding and c 2. Verify that traces arrive in the Jaeger backend. If you deployed [Jaeger in your cluster](./../jaeger/) using the same Namespace as used for the demo app, run: ```bash - kubectl -n $KYMA_NS port-forward svc/tracing-jaeger-query 16686 + kubectl -n $K8S_NAMESPACE port-forward svc/tracing-jaeger-query 16686 ``` ```bash open http://localhost:16686 @@ -96,7 +96,7 @@ To verify that the application is running properly, set up port forwarding and c 3. Enable failures with the feature flag service: ```bash - kubectl -n $KYMA_NS port-forward svc/$HELM_OTEL_RELEASE-featureflagservice 8081 + kubectl -n $K8S_NAMESPACE port-forward svc/$HELM_OTEL_RELEASE-featureflagservice 8081 ``` ```bash open http://localhost:8081 @@ -104,7 +104,7 @@ To verify that the application is running properly, set up port forwarding and c 4. Generate load with the load generator: ```bash - kubectl -n $KYMA_NS port-forward svc/$HELM_OTEL_RELEASE-loadgenerator 8089 + kubectl -n $K8S_NAMESPACE port-forward svc/$HELM_OTEL_RELEASE-loadgenerator 8089 ``` ```bash open http://localhost:8089 @@ -123,7 +123,7 @@ To expose the frontend web app and the relevant trace endpoint, define an Istio ```bash export CLUSTER_DOMAIN={my-domain} -cat <