First version on how to bring your custom kube-prometheus-stack #206
Merged
# Install a custom kube-prometheus-stack in Kyma

## Overview

The Kyma monitoring stack offers only limited configuration options compared to the upstream [`kube-prometheus-stack`](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack) chart, and any modifications might be reset at the next upgrade cycle.

As an alternative, you can install the upstream chart, with all its customization options, in parallel. This tutorial outlines how to set up such an installation to coexist with the Kyma monitoring stack.

> **CAUTION:**
> - This tutorial describes a basic setup that you should not use in production. Typically, a production setup needs further configuration, like optimizing the amount of data to scrape and the required resource footprint of the installation. To achieve qualities like [high availability](https://prometheus.io/docs/introduction/faq/#can-prometheus-be-made-highly-available), [scalability](https://prometheus.io/docs/introduction/faq/#i-was-told-prometheus-doesnt-scale), or [durable long-term storage](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage), you need a more advanced setup.
> - This example uses the latest Grafana version, which is licensed under AGPL-3.0 and might not be free of charge for commercial usage.
## Prerequisites

- Kyma as the target deployment environment
- kubectl > 1.22.x
- Helm 3.x
## Installation

### Preparation

1. Ensure that the Kyma monitoring stack running in your cluster is limited to detecting Kubernetes resources in the `kyma-system` Namespace only, so that there can be no side effects with the additional custom stack:

   > **NOTE:** This step is only needed for clusters installed manually with the Kyma CLI.

   ```bash
   kyma deploy --component monitoring --value monitoring.prometheusOperator.namespaces.releaseNamespace=true
   ```
1. Export your Namespace as a variable. Replace the `{namespace}` placeholder in the following command and run it:

   ```bash
   export KYMA_NS="{namespace}"
   ```

1. If you haven't created the Namespace yet, now is the time to do so:

   ```bash
   kubectl create namespace $KYMA_NS
   ```
   > **NOTE:** This Namespace must have **no** Istio sidecar injection enabled, which means no `istio-injection` label must be present on the Namespace. The Helm chart deploys jobs that will not succeed when sidecar injection is enabled.
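   You can check for the label, and remove it if necessary, with the following commands (a minimal sketch, assuming the label was set directly on the Namespace):

   ```bash
   # Show all labels on the Namespace; `istio-injection` must not appear
   kubectl get namespace $KYMA_NS --show-labels

   # If the label is present, remove it (the trailing dash removes a label)
   kubectl label namespace $KYMA_NS istio-injection-
   ```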
1. Export the Helm release name that you want to use. It can be any name, but be aware that all resources in the cluster will be prefixed with that name. Replace the `{release-name}` placeholder in the following command and run it:

   ```bash
   export HELM_RELEASE="{release-name}"
   ```

1. Update your Helm installation with the required Helm repository:

   ```bash
   helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
   helm repo update
   ```
### Install the kube-prometheus-stack

1. Run the Helm upgrade command, which installs the chart if it's not present yet. At the end of the command, change the Grafana admin password to a value of your choice:

   ```bash
   helm upgrade --install -n ${KYMA_NS} ${HELM_RELEASE} prometheus-community/kube-prometheus-stack -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/values.yaml --set grafana.adminPassword=myPwd
   ```

You can use the [values.yaml](./values.yaml) provided with this tutorial, which contains customized settings deviating from the defaults, or create your own file.
The provided `values.yaml` covers the following adjustments (see the sketch after this list):
- Parallel operation to the Kyma monitoring stack
- Client certificate injection to support scraping of workloads secured with Istio strict mTLS
- Active scraping of workloads annotated with `prometheus.io/scrape`
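For orientation, the following is a minimal sketch of the kind of overrides such a values file typically sets. The keys shown are standard kube-prometheus-stack chart values, but they are illustrative; the literal content of the provided [values.yaml](./values.yaml) may differ:

```yaml
prometheus:
  prometheusSpec:
    # Also select ServiceMonitors/PodMonitors that are not labeled with the
    # Helm release, so that custom definitions are picked up
    serviceMonitorSelectorNilUsesHelmValues: false
    podMonitorSelectorNilUsesHelmValues: false
grafana:
  sidecar:
    dashboards:
      # Load dashboards dynamically from ConfigMaps in the cluster
      enabled: true
```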
### Activate scraping of Istio metrics & Grafana dashboards

1. To configure Prometheus for scraping the Istio-specific metrics from any istio-proxy running in the cluster, deploy a PodMonitor, which scrapes any Pod that exposes a port with a name matching `.*-envoy-prom`:

   ```bash
   kubectl -n ${KYMA_NS} apply -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/istio/podmonitor-istio-proxy.yaml
   ```
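   Conceptually, such a PodMonitor looks roughly like the following sketch; the field values are illustrative, based on Istio's conventions (Envoy's `/stats/prometheus` endpoint, the `.*-envoy-prom` port name pattern), and not the literal content of the applied file:

   ```yaml
   apiVersion: monitoring.coreos.com/v1
   kind: PodMonitor
   metadata:
     name: istio-proxies  # hypothetical name
   spec:
     namespaceSelector:
       any: true  # watch Pods in all Namespaces
     selector: {}
     podMetricsEndpoints:
     - path: /stats/prometheus  # Envoy's Prometheus endpoint
       relabelings:
       # Keep only ports whose name matches the Envoy metrics port pattern
       - action: keep
         sourceLabels: [__meta_kubernetes_pod_container_port_name]
         regex: '.*-envoy-prom'
   ```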
2. Deploy a ServiceMonitor definition for the central metrics of the `istiod` deployment:

   ```bash
   kubectl -n ${KYMA_NS} apply -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/istio/servicemonitor-istiod.yaml
   ```
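   A rough sketch of such a ServiceMonitor, assuming Istio's standard labels and port names (`app: istiod`, port `http-monitoring`), could look like this; the applied file contains the actual definition:

   ```yaml
   apiVersion: monitoring.coreos.com/v1
   kind: ServiceMonitor
   metadata:
     name: istiod  # hypothetical name
   spec:
     namespaceSelector:
       matchNames:
       - istio-system  # istiod runs in the istio-system Namespace
     selector:
       matchLabels:
         app: istiod
     endpoints:
     - port: http-monitoring  # istiod's metrics port
   ```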
3. Get the latest versions of the Istio-specific dashboards.
   Grafana is configured to load dashboards dynamically from ConfigMaps in the cluster, so Istio-specific dashboards can be applied as well.
   Either follow the [Istio quick start instructions](https://istio.io/latest/docs/ops/integrations/grafana/#option-1-quick-start), or apply the prepared ones with the following commands:

   ```bash
   kubectl -n ${KYMA_NS} apply -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/istio/configmap-istio-grafana-dashboards.yaml
   kubectl -n ${KYMA_NS} apply -f https://raw.githubusercontent.com/kyma-project/examples/main/prometheus/istio/configmap-istio-services-grafana-dashboards.yaml
   ```
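   Grafana's sidecar discovers such dashboards through a marker label on the ConfigMap (`grafana_dashboard` is the chart's default). A ConfigMap carrying a dashboard therefore has roughly the following shape (names and content are illustrative):

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: my-istio-dashboard  # hypothetical name
     labels:
       grafana_dashboard: "1"  # marker label the Grafana sidecar watches for
   data:
     my-dashboard.json: |
       { "title": "My dashboard", "panels": [] }
   ```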
> **NOTE:** This setup collects all Istio metrics on Pod level, which can lead to cardinality issues. Because metrics are only needed on service level, for setups with a bigger amount of workloads deployed, it is recommended to use a setup based on federation, as described in the [Istio documentation](https://istio.io/latest/docs/ops/best-practices/observability/#using-prometheus-for-production-scale-monitoring).
### Verify the installation

1. You should see several Pods coming up in the Namespace, especially Prometheus and Alertmanager. Ensure that all Pods are in the "Running" state.
2. Browse the Prometheus dashboard and verify that all targets under **Status > Targets** are healthy. The following command exposes the dashboard on `http://localhost:9090`:

   ```bash
   kubectl -n ${KYMA_NS} port-forward $(kubectl -n ${KYMA_NS} get service -l app=kube-prometheus-stack-prometheus -oname) 9090
   ```

3. Browse the Grafana dashboard and verify that the dashboards are showing data. The user `admin` is pre-configured in the Helm chart; the password was provided in your `helm install` command. The following command exposes the dashboard on `http://localhost:3000`:

   ```bash
   kubectl -n ${KYMA_NS} port-forward svc/${HELM_RELEASE}-grafana 3000:80
   ```
### Deploy a custom workload and scrape it

1. Follow the tutorial [monitoring-custom-metrics](./../monitoring-custom-metrics/), but use the steps above to verify that the metrics are collected.
### Scrape workload via annotations

Instead of defining a ServiceMonitor per workload to set up custom metric scraping, you can use a simplified way based on annotations. The provided [values.yaml](./values.yaml) defines an `additionalScrapeConfig`, which scrapes all Pods and Services that have the following annotations:

```yaml
prometheus.io/scrape: "true"   # mandatory to enable automatic scraping
prometheus.io/scheme: https    # optional; the default is "http" if no Istio sidecar is used. When a sidecar is used (the Pod has the label `security.istio.io/tlsMode=istio`), the default is "https". Use "https" to scrape workloads with Istio client certificates; this only works for annotated Services (not Pods)
prometheus.io/port: "1234"     # optional; configures the port under which the metrics are exposed
prometheus.io/path: /myMetrics # optional; configures the path under which the metrics are exposed
```
You can try it out by removing the ServiceMonitor from the previous example and instead adding the annotations to the Service manifest, as sketched below.
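A minimal sketch of such an annotated Service (name, port, and selector are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-metrics  # hypothetical Service name
  annotations:
    prometheus.io/scrape: "true" # enable automatic scraping
    prometheus.io/port: "8080"   # port under which the metrics are exposed
    prometheus.io/path: /metrics # path under which the metrics are exposed
spec:
  selector:
    app: sample-metrics
  ports:
  - name: http
    port: 8080
```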
### Cleanup

1. To remove the installation from the cluster, call Helm:

   ```bash
   helm delete -n ${KYMA_NS} ${HELM_RELEASE}
   ```
I'm surprised that this is rendered correctly; shouldn't the code block end with a couple of backticks in line 28?