First version on how to bring your custom kube-prometheus-stack (#206)
* added small tutorial on how to bring your custom kube-prometheus-stack

* fixes

* fix

* update

* fix

* fix

* Apply suggestions from code review

Co-authored-by: Nina Hingerl <[email protected]>

* added istio support

* fix

* fix

* fix

* fix

* fix

* fix

* fix

* fix

* Apply suggestions from code review

Co-authored-by: Nina Hingerl <[email protected]>

* fix intendation

* fix

* Update README.md

* Update README.md

* Update CODEOWNERS

* fix

* fix

* Update prometheus/README.md

Co-authored-by: Rakesh Garimella <[email protected]>

* Update README.md

* Apply suggestions from code review

Co-authored-by: Nina Hingerl <[email protected]>

* update

* update

* Update prometheus/README.md

Co-authored-by: Nina Hingerl <[email protected]>
Co-authored-by: Rakesh Garimella <[email protected]>
3 people authored Nov 2, 2022
1 parent 55484e4 commit 55e214d
Showing 9 changed files with 1,255 additions and 8 deletions.
14 changes: 11 additions & 3 deletions CODEOWNERS
@@ -8,7 +8,7 @@
* @a-thaler @lilitgh @rakesh-garimella @hisarbalik @skhalash @chrkl @lindnerby

# Service binding example
/service-binding/ @Abd4llA @joek @rakesh-garimella @venturasr @lilitgh @k15r @nachtmaar @anishj0shi @montaro @abbi-gaurav @marcobebway @radufa @sayanh
/service-binding/ @rakesh-garimella @lilitgh @k15r @nachtmaar @marcobebway

# Orders service example
/orders-service/ @m00g3n @pPrecel
@@ -19,11 +19,19 @@
# Custom function runtime image example
/custom-serverless-runtime-image/ @m00g3n @pPrecel @dbadura @kwiatekus @moelsayed @cortey

# Observability
/prometheus/ @a-thaler @rakesh-garimella @skhalash @chrkl @lindnerby @dennis-ge @lilitgh
/kiali/ @a-thaler @rakesh-garimella @skhalash @chrkl @lindnerby @dennis-ge @lilitgh
/loki/ @a-thaler @rakesh-garimella @skhalash @chrkl @lindnerby @dennis-ge @lilitgh
/trace-demo/ @a-thaler @rakesh-garimella @skhalash @chrkl @lindnerby @dennis-ge @lilitgh
/monitoring-custom-metrics/ @a-thaler @rakesh-garimella @skhalash @chrkl @lindnerby @dennis-ge @lilitgh
/monitoring-custom-alerts/ @a-thaler @rakesh-garimella @skhalash @chrkl @lindnerby @dennis-ge @lilitgh

# Scaling functions to zero with KEDA
/scale-to-zero-with-kedam @m00g3n @pPrecel @dbadura @kwiatekus @moelsayed @cortey

# All .md files
*.md @klaudiagrz @mmitoraj @majakurcius @alexandra-simeonova @NHingerl @grego952
*.md @mmitoraj @majakurcius @NHingerl @grego952

# Config file for MILV - milv.config.yaml
milv.config.yaml @m00g3n @aerfio @pPrecel @magicmatatjahu
milv.config.yaml @m00g3n @pPrecel
12 changes: 9 additions & 3 deletions monitoring-custom-metrics/README.md
@@ -21,13 +21,19 @@ This example shows how to expose custom metrics to Prometheus with a Golang serv
export KYMA_EXAMPLE_NS="{namespace}"
```

2. Deploy the service:
2. Ensure that your Namespace has Istio sidecar injection enabled. This example assumes that the metrics are exposed in a strict mTLS mode:

```bash
kubectl label namespace ${KYMA_EXAMPLE_NS} istio-injection=enabled
```

3. Deploy the service:

```bash
kubectl apply -f deployment/deployment.yaml -n $KYMA_EXAMPLE_NS
```

3. Deploy the ServiceMonitor:
4. Deploy the ServiceMonitor:

```bash
kubectl apply -f deployment/service-monitor.yaml
@@ -44,7 +50,7 @@ This example shows how to expose custom metrics to Prometheus with a Golang serv
All the **sample-metrics** endpoints appear in the [`Targets`](http://localhost:9090/targets#job-sample-metrics) list.

2. Use either `cpu_temperature_celsius` or `hd_errors_total` in the **expression** field [here](http://localhost:9090/graph).
3. Click the **Execute** button to check the values scrapped by Prometheus.
3. Click the **Execute** button to check the values scraped by Prometheus.

### Cleanup

5 changes: 3 additions & 2 deletions monitoring-custom-metrics/deployment/deployment.yaml
@@ -14,8 +14,6 @@ spec:
example: monitoring-custom-metrics
template:
metadata:
annotations:
traffic.sidecar.istio.io/includeInboundPorts: "8080"
labels:
app: sample-metrics
example: monitoring-custom-metrics
@@ -29,6 +27,9 @@
memory: 100Mi
requests:
memory: 32Mi
ports:
- name: http
containerPort: 8080
---
kind: Service
apiVersion: v1
123 changes: 123 additions & 0 deletions prometheus/README.md
@@ -0,0 +1,123 @@
# Install a custom kube-prometheus-stack in Kyma

## Overview

The Kyma monitoring stack offers limited configuration options in contrast to the upstream [`kube-prometheus-stack`](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack) chart. Any modifications you make might be reset at the next upgrade cycle.

As an alternative, you can install the upstream chart with all its customization options in parallel. This tutorial outlines how to set up such an installation so that it coexists with the Kyma monitoring stack.

> **CAUTION:**
> - This tutorial describes a basic setup that you should not use in production. Typically, a production setup needs further configuration, like optimizing the amount of data to scrape and the required resource footprint of the installation. To achieve qualities like [high availability](https://prometheus.io/docs/introduction/faq/#can-prometheus-be-made-highly-available), [scalability](https://prometheus.io/docs/introduction/faq/#i-was-told-prometheus-doesnt-scale), or [durable long-term storage](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage), you need a more advanced setup.
> - This example uses the latest Grafana version, which is under AGPL-3.0 and might not be free of charge for commercial usage.

## Prerequisites

- Kyma as the target deployment environment.
- Kubectl > 1.22.x
- Helm 3.x

## Installation

### Preparation
1. If your cluster was installed manually using the Kyma CLI, ensure that the Kyma monitoring stack running in your cluster detects Kubernetes resources only in the `kyma-system` Namespace. To rule out side effects with the additional custom stack, run:
```bash
kyma deploy --component monitoring --value monitoring.prometheusOperator.namespaces.releaseNamespace=true
```

1. Export your Namespace as a variable. Replace the `{namespace}` placeholder in the following command and run it:

```bash
export KYMA_NS="{namespace}"
```
1. If you haven't created the Namespace yet, now is the time to do so:
```bash
kubectl create namespace $KYMA_NS
```
> **NOTE:** This Namespace must **not** have Istio sidecar injection enabled; that is, there must be no `istio-injection` label on the Namespace. The Helm chart deploys jobs that will not succeed when Istio sidecar injection is enabled.
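To double-check, list the Namespace labels and verify that `istio-injection` is not among them:
```bash
# The output must not contain an istio-injection label
kubectl get namespace ${KYMA_NS} --show-labels
```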
1. Export the Helm release name that you want to use. It can be any name, but be aware that all resources in the cluster will be prefixed with that name. Replace the `{release-name}` placeholder in the following command and run it:
```bash
export HELM_RELEASE="{release-name}"
```
1. Update your Helm installation with the required Helm repository:
```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
```
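Optionally, verify that the chart is now available from the added repository:
```bash
# Shows the chart name and its latest available version
helm search repo prometheus-community/kube-prometheus-stack
```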
### Install the kube-prometheus-stack
1. Run the Helm upgrade command, which installs the chart if it's not present yet. At the end of the command, change the Grafana admin password to some value of your choice.
```bash
helm upgrade --install -n ${KYMA_NS} ${HELM_RELEASE} prometheus-community/kube-prometheus-stack -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/values.yaml --set grafana.adminPassword=myPwd
```

2. You can use the [values.yaml](./values.yaml) provided with this tutorial, which contains customized settings deviating from the default settings, or create your own file (a minimal sketch follows this list).
The provided `values.yaml` covers the following adjustments:
- Parallel operation to a Kyma monitoring stack
- Client certificate injection to support scraping of workload secured with Istio strict mTLS
- Active scraping of workloads annotated with `prometheus.io/scrape`
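If you create your own file, a minimal sketch could look like the following. The value paths follow the upstream `kube-prometheus-stack` chart; the concrete numbers are placeholders only, not sizing recommendations:

```yaml
# my-values.yaml - minimal sketch of a custom values file (placeholder values)
grafana:
  adminPassword: myPwd          # set here instead of via --set
prometheus:
  prometheusSpec:
    scrapeInterval: 30s         # how often targets are scraped
    retention: 7d               # how long samples are kept
    resources:
      requests:
        memory: 512Mi
```

Pass your file with an additional `-f my-values.yaml` flag after the tutorial's `values.yaml` so that your settings take precedence.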

### Activate scraping of Istio metrics & Grafana dashboards

1. To configure Prometheus for scraping of the Istio-specific metrics from any istio-proxy running in the cluster, deploy a PodMonitor, which scrapes any Pod that exposes a port whose name matches `.*-envoy-prom`.

```bash
kubectl -n ${KYMA_NS} apply -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/istio/podmonitor-istio-proxy.yaml
```
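The PodMonitor deployed by this command is conceptually similar to the following sketch, which keeps only container ports whose name matches `.*-envoy-prom`. Names and selectors here are illustrative; the linked file remains the authoritative definition:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: istio-proxies                    # illustrative name
spec:
  selector:
    matchLabels:
      security.istio.io/tlsMode: istio   # matches Pods with an injected sidecar
  namespaceSelector:
    any: true                            # watch Pods in all Namespaces
  podMetricsEndpoints:
  - path: /stats/prometheus              # Envoy's merged Prometheus endpoint
    interval: 30s
    relabelings:
    - action: keep                       # keep only the Envoy metrics port
      sourceLabels: [__meta_kubernetes_pod_container_port_name]
      regex: '.*-envoy-prom'
```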

2. Deploy a ServiceMonitor definition for the central metrics of the `istiod` deployment:

```bash
kubectl -n ${KYMA_NS} apply -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/istio/servicemonitor-istiod.yaml
```
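For reference, a ServiceMonitor for the control plane typically selects the `istiod` Service by its `istio: pilot` label and scrapes the named `http-monitoring` port. Treat the following as an assumed sketch; the linked file is authoritative:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: istiod-metrics                   # illustrative name
spec:
  jobLabel: istio
  selector:
    matchLabels:
      istio: pilot                       # label of the istiod Service
  namespaceSelector:
    matchNames:
    - istio-system
  endpoints:
  - port: http-monitoring                # named port exposing istiod metrics
    interval: 30s
```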

3. Get the latest versions of the Istio-specific dashboards.
Grafana is configured to load dashboards dynamically from ConfigMaps in the cluster, so Istio-specific dashboards can be applied as well.
Either follow the [Istio quick start instructions](https://istio.io/latest/docs/ops/integrations/grafana/#option-1-quick-start), or apply the prepared ones with the following command:

```bash
kubectl -n ${KYMA_NS} apply -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/istio/configmap-istio-grafana-dashboards.yaml
kubectl -n ${KYMA_NS} apply -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/istio/configmap-istio-services-grafana-dashboards.yaml
```
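Grafana's sidecar picks up such ConfigMaps by label; with the chart's default configuration, that label is `grafana_dashboard`. A dashboard ConfigMap of your own would follow the same pattern, roughly:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-dashboard                     # illustrative name
  labels:
    grafana_dashboard: "1"               # default discovery label of the Grafana sidecar
data:
  # The value is an exported Grafana dashboard in JSON format
  my-dashboard.json: |
    { "title": "My dashboard", "panels": [], "schemaVersion": 16 }
```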

> **NOTE:** This setup collects all Istio metrics on a Pod level, which can lead to cardinality issues. Because the metrics are only needed on a service level, setups with a larger number of workloads should use a federation-based approach, as described in the [Istio documentation](https://istio.io/latest/docs/ops/best-practices/observability/#using-prometheus-for-production-scale-monitoring).

### Verify the installation

1. You should see several Pods coming up in the Namespace, especially Prometheus and Alertmanager. Ensure that all Pods are in the `Running` state.
2. Browse the Prometheus dashboard and verify that all targets under **Status > Targets** are healthy. The following command exposes the dashboard on `http://localhost:9090`:
```bash
kubectl -n ${KYMA_NS} port-forward $(kubectl -n ${KYMA_NS} get service -l app=kube-prometheus-stack-prometheus -oname) 9090
```
3. Browse the Grafana dashboard and verify that the dashboards are showing data. The user `admin` is preconfigured in the Helm chart; the password was provided in your `helm upgrade --install` command. The following command exposes the dashboard on `http://localhost:3000`:
```bash
kubectl -n ${KYMA_NS} port-forward svc/${HELM_RELEASE}-grafana 3000:80
```

### Deploy a custom workload and scrape it

Follow the tutorial [monitoring-custom-metrics](./../monitoring-custom-metrics/), but use the steps above to verify that the metrics are collected.

### Scrape workload via annotations

Instead of defining a ServiceMonitor per workload to set up custom metric scraping, you can use a simplified approach based on annotations. The provided [values.yaml](./values.yaml) defines an `additionalScrapeConfig`, which scrapes all Pods and Services that have the following annotations:

```yaml
prometheus.io/scrape: "true" # mandatory to enable automatic scraping
prometheus.io/scheme: https # optional, default is "http" if no Istio sidecar is used. When using a sidecar (Pod has label `security.istio.io/tlsMode=istio`), the default is "https". Use "https" to scrape workloads using Istio client certificates. Will only work for annotated services (not Pods)
prometheus.io/port: "1234" # optional, configure the port under which the metrics are exposed
prometheus.io/path: /myMetrics # optional, configure the path under which the metrics are exposed
```

You can try it out by removing the ServiceMonitor from the previous example and instead adding the annotations to the Service manifest.
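A Service manifest with these annotations could look roughly as follows; the name, port, and path are assumptions based on the monitoring-custom-metrics example and must match your actual workload:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-metrics                   # illustrative name
  annotations:
    prometheus.io/scrape: "true"         # enable annotation-based scraping
    prometheus.io/port: "8080"           # port on which the metrics are exposed
    prometheus.io/path: /metrics         # path under which the metrics are exposed
spec:
  selector:
    app: sample-metrics
  ports:
  - name: http
    port: 8080
```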

### Cleanup

To remove the installation from the cluster, call Helm:

```bash
helm delete -n ${KYMA_NS} ${HELM_RELEASE}
```
12 changes: 12 additions & 0 deletions prometheus/istio/configmap-istio-grafana-dashboards.yaml
