This demo was built using Istio's Bookinfo sample application.
- 1. Cluster Install
- 2. Observability Stack
- 3. Setup Istio (with Jaeger and Kiali)
- 4. Bookapp Install
- 5. Accessing
- 6. Comments and Improvements
Inside the ./cluster directory there is Terraform code to provision a GKE cluster and all related dependencies.
Some resources are named according to the Terraform workspace name.
- To create the cluster:

```shell
terraform init
terraform workspace new dev
terraform apply
```
- After the cluster is created, use the suggested command from the `gcloud_credentials_command` output to update your kubeconfig and be able to authenticate against your cluster.
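As an illustration of the workspace-based naming mentioned above, a resource might interpolate `terraform.workspace` into its name. The resource and variable names below are hypothetical, not necessarily the ones used in ./cluster:

```hcl
# Hypothetical sketch: terraform.workspace is interpolated into the name,
# so the "dev" workspace would yield a cluster named "demo-dev".
resource "google_container_cluster" "main" {
  name     = "demo-${terraform.workspace}"
  location = var.region
}
```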
The ./monitoring directory contains the Terraform code responsible for configuring the following tools: Grafana, Loki, the Prometheus Operator, and some integrations through local Helm charts.
- The `kube-prometheus-stack` chart was configured directly through its `values.yaml` file;
- The `loki-stack` chart was configured through a Terraform template file, in order to show that all configuration parameters can be managed from external variables;
- There is also a `config_map` resource that automates Grafana dashboards, an interesting way of controlling Grafana views as code. In this case, it provisions some Istio dashboards.
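A minimal sketch of what such a dashboard `config_map` resource could look like (the names and file paths here are assumptions, not the exact ones in ./monitoring). The `grafana_dashboard` label is the convention the kube-prometheus-stack Grafana sidecar watches for:

```hcl
# Hypothetical sketch: a Grafana dashboard managed as code.
resource "kubernetes_config_map" "istio_dashboards" {
  metadata {
    name      = "istio-dashboards"
    namespace = "monitoring"
    labels = {
      grafana_dashboard = "1" # label watched by the Grafana sidecar
    }
  }

  data = {
    "istio-mesh.json" = file("${path.module}/dashboards/istio-mesh.json")
  }
}
```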
- To set up the observability stack:

```shell
terraform init
terraform apply
```
It's necessary to install the Istio 1.9 command line (`istioctl`) in order to set up the service mesh.
- Namespaces managed by Istio should have the following label:

```shell
kubectl label namespace default istio-injection=enabled
```
- Install through `istioctl` (this is just a demo, so we can use the `demo` profile):

```shell
istioctl install --set profile=demo -y
```
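The same installation can also be expressed declaratively as an `IstioOperator` manifest and passed to `istioctl install -f`:

```yaml
# Equivalent declarative form of "istioctl install --set profile=demo"
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
```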
- Configure the Istio monitoring stack by applying the manifests in ./istio-monitoring with `kubectl`:

```shell
kubectl apply -f istio-monitoring/
```
The app manifests are available in the ./bookapp directory. The images were built locally and published to GCR.
```shell
# use the -n default parameter in order to deploy into the correct namespace
kubectl apply -f bookapp/ -n default
```
At this point, the app should be available through the external IP of the `istio-ingressgateway` service.
It's possible to access the observability tools using port-forward, or by exposing them through a LoadBalancer:

```shell
kubectl port-forward svc/po-grafana 3000:80 -n monitoring
kubectl port-forward svc/po-prometheus 9090:9090 -n monitoring
kubectl port-forward svc/kiali 20001:20001 -n istio-system
```
From Grafana, it is possible to access dashboards with Kubernetes and Istio metrics, and to query all application and infrastructure logs using the Loki datasource.
From Kiali, it is possible to inspect all application traces.
- The Prometheus Operator is configured to obtain Istio metrics through federation, using some specific scrape rules. In the future, it may be better to scrape metrics directly from the Istio pods instead of using federation and running one more Prometheus server.
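A federation scrape job of this kind typically looks like the following values fragment for kube-prometheus-stack. The job name, match expression, and target address below are assumptions for illustration, not the exact rules used in this repo:

```yaml
# Sketch: federate Istio metrics from the Prometheus running in istio-system
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: istio-federation
        honor_labels: true
        metrics_path: /federate
        params:
          "match[]":
            - '{job=~"istio.*"}'
        static_configs:
          - targets:
              - prometheus.istio-system.svc:9090
```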
- No data persistence was used in this demo. To gain more stability and scalability, it would be interesting to set up a good external persistence method. For example, Bigtable and storage buckets could be used to persist data from Prometheus and Loki, beyond the persistent volumes already used by the GKE storage layer.