From 799b7ae002d733f91a4d0680ac51ffa166155140 Mon Sep 17 00:00:00 2001
From: aghilish
Date: Tue, 14 May 2024 14:35:21 +0000
Subject: [PATCH] deploy: 27759c136825ef9e53b4fa2274a802cfdafecd37

---
 print.html                   | 4 +++-
 searchindex.js               | 2 +-
 searchindex.json             | 2 +-
 tutorials/extending-k8s.html | 4 +++-
 4 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/print.html b/print.html
index 840b09d..6251a84 100644
--- a/print.html
+++ b/print.html
@@ -3450,15 +3450,17 @@

Practical Example

In the case of Prometheus, I would have to deploy several separate components to get it up and running, which is quite complex, and everything has to be deployed in the right order.
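For illustration only (not part of the original notes), this is roughly how an operator-based install collapses those separate components into a single step. It uses the community kube-prometheus-stack Helm chart; the release name and namespace below are assumptions.

# kube-prometheus-stack bundles Prometheus, Alertmanager, Grafana and the
# Prometheus Operator, so the ordering of the components is handled for you.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace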

-Extending k8s api like a pro.
+A Crash Course on Extending Kubernetes - Kubebuilder 2024 - 3.14.0.

Links

Learning Resources

  1. Extending k8s
  2. Custom Resources
  3. +
  4. Kubebuilder Docs

Notes

diff --git a/searchindex.js b/searchindex.js index 19366a1..dca7d77 100644 --- a/searchindex.js +++ b/searchindex.js @@ -1 +1 @@ -Object.assign(window.search, {"doc_urls":["overview.html#welcome-to-100-days-of-kubernetes","overview.html#what-can-you-find-in-this-book","overview.html#just-a-note-of-caution","overview.html#100-days-of-kubernetes","overview.html#where-to-get-started","overview.html#contribute","overview.html#structure-of-the-book","overview.html#list-of-example-notes-by-the-community","contributors.html#about-the-contributors","start/intro-to-k8s.html#what-is-kubernetes-and-why-do-we-want-it","start/intro-to-k8s.html#100days-resources","start/intro-to-k8s.html#learning-resources","start/intro-to-k8s.html#example-notes","start/architecture-overview.html#kubernetes-architecture","start/architecture-overview.html#100days-resources","start/architecture-overview.html#learning-resources","start/architecture-overview.html#example-notes","start/architecture-overview.html#main-node","start/pods.html#kubernetes-pods","start/pods.html#100days-resources","start/pods.html#learning-resources","start/pods.html#example-notes","start/set-up-cluster.html#setup-your-first-kubernetes-cluster","start/set-up-cluster.html#100days-resources","start/set-up-cluster.html#learning-resources","start/set-up-cluster.html#example-notes","start/run-pod.html#running-pods","start/run-pod.html#100days-resources","start/run-pod.html#learning-resources","start/run-pod.html#example-notes","start/run-pod.html#practical-example","start/run-pod.html#run-multiple-containers-with-in-a-pod","start/replicas.html#kubernetes-replicaset","start/replicas.html#100days-resources","start/replicas.html#learning-resources","start/replicas.html#example-notes","start/replicas.html#some-practice","start/replicas.html#difference-between-replicaset-and-replication-controller","start/replicas.html#operating-replicasets","start/deployment.html#kubernetes-deployments","start/deployment.html#100days-resources","start/deployment.html#learning-resources","start/deployment.html#example-notes","start/deployment.html#how-is-the-service-and-the-deployment-linked","start/namespaces.html#namespaces","start/namespaces.html#100days-resources","start/namespaces.html#learning-resources","start/namespaces.html#example-notes","start/namespaces.html#practical","start/namespaces.html#deleting-resources","start/configmaps.html#configmaps","start/configmaps.html#100days-resources","start/configmaps.html#learning-resources","start/configmaps.html#example-notes","networking/service.html#kubernetes-service","networking/service.html#100days-resources","networking/service.html#learning-resources","networking/service.html#example-notes","networking/service.html#follow-along","networking/service.html#creating-services-in-a-declarative-format","networking/ingress.html#ingress","networking/ingress.html#100days-resources","networking/ingress.html#learning-resources","networking/ingress.html#example-notes","networking/servicemesh.html#service-mesh","networking/servicemesh.html#100days-resources","networking/servicemesh.html#learning-resources","networking/servicemesh.html#example-notes","networking/servicemesh.html#service-mesh-interface","networking/servicemesh.html#other-service-mesh-examples","storage/volumes.html#kubernetes-volumes","storage/volumes.html#100days-resources","storage/volumes.html#learning-resources","storage/volumes.html#example-notes","kubernetes-clusters/kind.html#kind","kubernetes-clusters/kind.html#100days-resources","kubernetes-clusters/kind.ht
ml#learning-resources","kubernetes-clusters/kind.html#example-notes","kubernetes-clusters/kind.html#setting-up-kind-on-windows-and-cluster-comparison","kubernetes-clusters/kind.html#running-a-local-cluster","kubernetes-clusters/kind.html#kubernetes-in-docker","kubernetes-clusters/kind.html#minikube","kubernetes-clusters/kind.html#what-is-kind","kubernetes-clusters/kind.html#microk8s","kubernetes-clusters/kind.html#direct-comparison","kubernetes-clusters/k3s.html#k3s-and-k3sup","kubernetes-clusters/k3s.html#100days-resources","kubernetes-clusters/k3s.html#learning-resources","kubernetes-clusters/k3s.html#example-notes","kubernetes-clusters/k3s.html#what-is-k3s","kubernetes-clusters/k3s.html#install-k3s","kubernetes-clusters/k3s.html#once-installed-access-the-cluster","kubernetes-clusters/k3s.html#installing-with-k3sup","templating/kustomize.html#kustomize","templating/kustomize.html#100days-resources","templating/kustomize.html#learning-resources","templating/kustomize.html#example-notes","templating/kustomize.html#lets-get-started-using-kustomize","templating/kustomize.html#resources","templating/terraform.html#terraform","templating/terraform.html#100days-resources","templating/terraform.html#learning-resources","templating/terraform.html#example-notes","templating/terraform.html#lets-try-out-terraform","templating/crossplane.html#crossplane","templating/crossplane.html#links","templating/crossplane.html#install-crossplane-using-helm","templating/crossplane.html#setting-up-the-hyperscaler-gcp","templating/crossplane.html#setting-up-the-gcp-storage-provider","templating/crossplane.html#creating-a-managed-storage-bucket","templating/crossplane.html#drift-detection","templating/crossplane.html#cleanup","templating/crossplane-compositions.html#crossplane-compositions","templating/crossplane-compositions.html#links","templating/crossplane-compositions.html#start-minikube","templating/crossplane-compositions.html#install-crossplane","templating/crossplane-compositions.html#create-gcp-credentials-secret-for-crossplane","templating/crossplane-compositions.html#clone-playbook","templating/crossplane-compositions.html#install-dependencies","templating/crossplane-compositions.html#wait-for-all-the-packages-to-become-healthy","templating/crossplane-compositions.html#configure-provider","templating/crossplane-compositions.html#apply-xrd","templating/crossplane-compositions.html#apply-composition","templating/crossplane-compositions.html#create-infra-namespace","templating/crossplane-compositions.html#apply-claim","templating/crossplane-compositions.html#verify-resources","templating/crossplane-compositions.html#access-the-gke-cluster","templating/crossplane-compositions.html#destroy-infrastructure","templating/crossplane-composition-functions.html#crossplane-composition-functions","templating/crossplane-composition-functions.html#links","templating/crossplane-composition-functions.html#why-functions","templating/crossplane-composition-functions.html#limitations-of-compositions","templating/crossplane-composition-functions.html#loops","templating/crossplane-composition-functions.html#conditionals","templating/crossplane-composition-functions.html#flexibility","templating/crossplane-composition-functions.html#functions-goals","templating/crossplane-composition-functions.html#how-it-works","templating/crossplane-composition-functions.html#functions-benefits","templating/crossplane-composition-functions.html#functions-internals","templating/crossplane-composition-functions.html#playbook","templating/crossp
lane-composition-functions.html#clone-the-repo","templating/crossplane-composition-functions.html#build-and-push-the-oci-container-image","templating/crossplane-composition-functions.html#running-locally","templating/crossplane-composition-functions.html#build-runtime-image","templating/crossplane-composition-functions.html#production-deployment","templating/crossplane-composition-functions.html#destroy-infrastructure","helm/partone.html#helm","helm/partone.html#100days-resources","helm/partone.html#learning-resources","helm/partone.html#example-notes","helm/partone.html#we-are-using-helm","helm/partone.html#second-helm-exercise","helm/partone.html#helm-commands","helm/partwo.html#helm-part-2","helm/partthree.html#setting-up-and-modifying-helm-charts","helm/partthree.html#100days-resources","helm/partthree.html#learning-resources","helm/partthree.html#example-notes","tools/k9s.html#k9s","tools/k9s.html#100days-resources","tools/k9s.html#learning-resources","tools/k9s.html#example-notes","tools/k9s.html#what-is-k9s","tools/k9s.html#getting-up-and-running","tools/k9s.html#k9s-interaction-with-kubernetes","tools/k9s.html#k9s-for-further-debugging-and-benchmarking","tools/k9s.html#change-look","tools/knative.html#knative","tools/knative.html#100days-resources","tools/knative.html#learning-resources","tools/knative.html#example-notes","tools/knative.html#installation","tools/knative.html#resources","tools/argocd.html#gitops-and-argo-cd","tools/argocd.html#links","tools/argocd.html#why-argo-cd","tools/argocd.html#pull-vs-push-model-in-gitops","tools/argocd.html#benefits","tools/linkerd.html#linkerd","tools/linkerd.html#100days-resources","tools/linkerd.html#learning-resources","tools/linkerd.html#example-notes","tools/linkerd.html#installation","observability/prometheus.html#prometheus","observability/prometheus.html#100days-resources","observability/prometheus.html#learning-resources","observability/prometheus.html#example-notes","observability/prometheus.html#how-does-it-work","observability/prometheus.html#install","observability/prometheus-exporter.html#prometheus-exporter","observability/prometheus-exporter.html#100days-resources","observability/prometheus-exporter.html#learning-resources","observability/prometheus-exporter.html#example-notes","observability/prometheus-exporter.html#install-prometheus-helm-chart-with-operators-etc","observability/prometheus-exporter.html#have-a-look-at-the-prometheus-resources-to-understand-those-better","observability/prometheus-exporter.html#set-up-mongodb","advanced/operators.html#kubernetes-operators","advanced/operators.html#100days-resources","advanced/operators.html#learning-resources","advanced/operators.html#example-notes","advanced/operators.html#managing-stateful-applications-without-an-operator-vs-with-an-operator","advanced/operators.html#what-is-an-operator","advanced/operators.html#who-is-creating-operators","advanced/operators.html#practical-example","tutorials/extending-k8s.html#extending-k8s-api-like-a-pro","tutorials/extending-k8s.html#links","tutorials/extending-k8s.html#learning-resources","tutorials/extending-k8s.html#notes","tutorials/extending-k8s.html#1-a-look-into-custom-resource-definition-crd-api","tutorials/extending-k8s.html#2-install-kubebuilder-and-create-a-new-project","tutorials/extending-k8s.html#3-create-our-first-api","tutorials/extending-k8s.html#4-a-look-into-kubebuilder-setup","tutorials/extending-k8s.html#5-add-some-logging-to-the-reconcile-function","tutorials/extending-k8s.html#6-implement-the-desired-state-of-the-g
host-operator","tutorials/extending-k8s.html#7-access-the-custom-resource-inside-the-reconcile-function","tutorials/extending-k8s.html#8-implement-the-ghost-operator-logic-part-1---pvc","tutorials/extending-k8s.html#9-implement-the-ghost-operator-logic-part-2---rbac","tutorials/extending-k8s.html#10-implement-the-ghost-operator-logic-part-3---deployment","tutorials/extending-k8s.html#11-implement-the-ghost-operator-logic-part-4---service","tutorials/extending-k8s.html#12-implement-the-final-logic-of-the-reconcile-function","tutorials/extending-k8s.html#13-update-the-ghost-resource","tutorials/extending-k8s.html#14-deleting-the-ghost-resource","tutorials/extending-k8s.html#15-deploy-ghost-operator-to-the-cluster","tutorials/extending-k8s.html#16-bonus-setup-vscode-debugger","tutorials/serverless.html#serverless","tutorials/serverless.html#100days-resources","tutorials/serverless.html#learning-resources","tutorials/serverless.html#example-notes","tutorials/serverless.html#serverless","tutorials/serverless.html#resources","tutorials/ingress-from-scratch.html#ingress-from-scratch","tutorials/ingress-from-scratch.html#100days-resources","tutorials/ingress-from-scratch.html#learning-resources","tutorials/ingress-from-scratch.html#example-notes","tutorials/ingress-from-scratch.html#resources","tutorials/ingress-from-scratch.html#install","tutorials/ingress-from-scratch.html#lets-set-everything-up","tutorials/istio-from-scratch.html#setup-istio-from-scratch","tutorials/istio-from-scratch.html#100days-resources","tutorials/istio-from-scratch.html#learning-resources","tutorials/istio-from-scratch.html#example-notes","tutorials/istio-from-scratch.html#service-mesh-vs-ingress","tutorials/istio-from-scratch.html#installation","tutorials/deploy-to-civo.html#deploy-an-app-to-civo-k3s-cluster-from-scratch","tutorials/deploy-to-civo.html#100days-resources","tutorials/deploy-to-civo.html#learning-resources","tutorials/deploy-to-civo.html#civo-notes","tutorials/deploy-to-civo.html#rancher-shared-service-using-kubernauts--popeye-setup","tutorials/deploy-to-civo.html#resources","troubleshooting/crashloopbackoff.html#crashloopbackoff","troubleshooting/crashloopbackoff.html#100days-resources","troubleshooting/crashloopbackoff.html#learning-resources","troubleshooting/crashloopbackoff.html#example-notes","glossary/terminologyprimer.html#terminology-primer"],"index":{"documentStore":{"docInfo":{"0":{"body":24,"breadcrumbs":5,"title":4},"1":{"body":23,"breadcrumbs":3,"title":2},"10":{"body":11,"breadcrumbs":4,"title":2},"100":{"body":11,"breadcrumbs":3,"title":2},"101":{"body":8,"breadcrumbs":3,"title":2},"102":{"body":199,"breadcrumbs":3,"title":2},"103":{"body":297,"breadcrumbs":5,"title":4},"104":{"body":0,"breadcrumbs":2,"title":1},"105":{"body":123,"breadcrumbs":2,"title":1},"106":{"body":81,"breadcrumbs":5,"title":4},"107":{"body":79,"breadcrumbs":5,"title":4},"108":{"body":80,"breadcrumbs":6,"title":5},"109":{"body":34,"breadcrumbs":5,"title":4},"11":{"body":7,"breadcrumbs":4,"title":2},"110":{"body":37,"breadcrumbs":3,"title":2},"111":{"body":8,"breadcrumbs":2,"title":1},"112":{"body":0,"breadcrumbs":4,"title":2},"113":{"body":41,"breadcrumbs":3,"title":1},"114":{"body":2,"breadcrumbs":4,"title":2},"115":{"body":21,"breadcrumbs":4,"title":2},"116":{"body":52,"breadcrumbs":7,"title":5},"117":{"body":13,"breadcrumbs":4,"title":2},"118":{"body":4,"breadcrumbs":4,"title":2},"119":{"body":3,"breadcrumbs":6,"title":4},"12":{"body":539,"breadcrumbs":4,"title":2},"120":{"body":30,"breadcrumbs":4,"title":2},"121":{"b
ody":4,"breadcrumbs":4,"title":2},"122":{"body":4,"breadcrumbs":4,"title":2},"123":{"body":4,"breadcrumbs":5,"title":3},"124":{"body":6,"breadcrumbs":4,"title":2},"125":{"body":9,"breadcrumbs":4,"title":2},"126":{"body":19,"breadcrumbs":5,"title":3},"127":{"body":8,"breadcrumbs":4,"title":2},"128":{"body":0,"breadcrumbs":6,"title":3},"129":{"body":26,"breadcrumbs":4,"title":1},"13":{"body":0,"breadcrumbs":4,"title":2},"130":{"body":72,"breadcrumbs":4,"title":1},"131":{"body":4,"breadcrumbs":5,"title":2},"132":{"body":10,"breadcrumbs":4,"title":1},"133":{"body":4,"breadcrumbs":4,"title":1},"134":{"body":11,"breadcrumbs":4,"title":1},"135":{"body":30,"breadcrumbs":5,"title":2},"136":{"body":49,"breadcrumbs":4,"title":1},"137":{"body":46,"breadcrumbs":5,"title":2},"138":{"body":62,"breadcrumbs":5,"title":2},"139":{"body":0,"breadcrumbs":4,"title":1},"14":{"body":11,"breadcrumbs":4,"title":2},"140":{"body":13,"breadcrumbs":5,"title":2},"141":{"body":41,"breadcrumbs":8,"title":5},"142":{"body":31,"breadcrumbs":5,"title":2},"143":{"body":22,"breadcrumbs":6,"title":3},"144":{"body":91,"breadcrumbs":5,"title":2},"145":{"body":8,"breadcrumbs":5,"title":2},"146":{"body":0,"breadcrumbs":4,"title":1},"147":{"body":11,"breadcrumbs":5,"title":2},"148":{"body":15,"breadcrumbs":5,"title":2},"149":{"body":153,"breadcrumbs":5,"title":2},"15":{"body":3,"breadcrumbs":4,"title":2},"150":{"body":162,"breadcrumbs":5,"title":2},"151":{"body":0,"breadcrumbs":6,"title":3},"152":{"body":106,"breadcrumbs":5,"title":2},"153":{"body":0,"breadcrumbs":6,"title":3},"154":{"body":0,"breadcrumbs":8,"title":5},"155":{"body":11,"breadcrumbs":5,"title":2},"156":{"body":1,"breadcrumbs":5,"title":2},"157":{"body":548,"breadcrumbs":5,"title":2},"158":{"body":0,"breadcrumbs":2,"title":1},"159":{"body":11,"breadcrumbs":3,"title":2},"16":{"body":18,"breadcrumbs":4,"title":2},"160":{"body":5,"breadcrumbs":3,"title":2},"161":{"body":46,"breadcrumbs":3,"title":2},"162":{"body":80,"breadcrumbs":2,"title":1},"163":{"body":105,"breadcrumbs":4,"title":3},"164":{"body":18,"breadcrumbs":4,"title":3},"165":{"body":92,"breadcrumbs":5,"title":4},"166":{"body":33,"breadcrumbs":3,"title":2},"167":{"body":0,"breadcrumbs":2,"title":1},"168":{"body":11,"breadcrumbs":3,"title":2},"169":{"body":1,"breadcrumbs":3,"title":2},"17":{"body":691,"breadcrumbs":4,"title":2},"170":{"body":670,"breadcrumbs":3,"title":2},"171":{"body":164,"breadcrumbs":2,"title":1},"172":{"body":26,"breadcrumbs":2,"title":1},"173":{"body":0,"breadcrumbs":5,"title":3},"174":{"body":17,"breadcrumbs":3,"title":1},"175":{"body":22,"breadcrumbs":4,"title":2},"176":{"body":73,"breadcrumbs":7,"title":5},"177":{"body":70,"breadcrumbs":3,"title":1},"178":{"body":0,"breadcrumbs":2,"title":1},"179":{"body":11,"breadcrumbs":3,"title":2},"18":{"body":0,"breadcrumbs":4,"title":2},"180":{"body":11,"breadcrumbs":3,"title":2},"181":{"body":87,"breadcrumbs":3,"title":2},"182":{"body":287,"breadcrumbs":2,"title":1},"183":{"body":0,"breadcrumbs":2,"title":1},"184":{"body":11,"breadcrumbs":3,"title":2},"185":{"body":56,"breadcrumbs":3,"title":2},"186":{"body":124,"breadcrumbs":3,"title":2},"187":{"body":241,"breadcrumbs":2,"title":1},"188":{"body":199,"breadcrumbs":2,"title":1},"189":{"body":0,"breadcrumbs":4,"title":2},"19":{"body":11,"breadcrumbs":4,"title":2},"190":{"body":11,"breadcrumbs":4,"title":2},"191":{"body":10,"breadcrumbs":4,"title":2},"192":{"body":0,"breadcrumbs":4,"title":2},"193":{"body":19,"breadcrumbs":8,"title":6},"194":{"body":70,"breadcrumbs":8,"title":6},"195":{"body":317,"brea
dcrumbs":5,"title":3},"196":{"body":0,"breadcrumbs":4,"title":2},"197":{"body":11,"breadcrumbs":4,"title":2},"198":{"body":6,"breadcrumbs":4,"title":2},"199":{"body":46,"breadcrumbs":4,"title":2},"2":{"body":22,"breadcrumbs":3,"title":2},"20":{"body":5,"breadcrumbs":4,"title":2},"200":{"body":79,"breadcrumbs":9,"title":7},"201":{"body":115,"breadcrumbs":3,"title":1},"202":{"body":101,"breadcrumbs":4,"title":2},"203":{"body":15,"breadcrumbs":4,"title":2},"204":{"body":0,"breadcrumbs":6,"title":4},"205":{"body":3,"breadcrumbs":3,"title":1},"206":{"body":4,"breadcrumbs":4,"title":2},"207":{"body":16,"breadcrumbs":3,"title":1},"208":{"body":551,"breadcrumbs":9,"title":7},"209":{"body":113,"breadcrumbs":8,"title":6},"21":{"body":453,"breadcrumbs":4,"title":2},"210":{"body":224,"breadcrumbs":6,"title":4},"211":{"body":107,"breadcrumbs":6,"title":4},"212":{"body":108,"breadcrumbs":7,"title":5},"213":{"body":141,"breadcrumbs":8,"title":6},"214":{"body":226,"breadcrumbs":9,"title":7},"215":{"body":280,"breadcrumbs":10,"title":8},"216":{"body":88,"breadcrumbs":10,"title":8},"217":{"body":245,"breadcrumbs":10,"title":8},"218":{"body":150,"breadcrumbs":10,"title":8},"219":{"body":349,"breadcrumbs":8,"title":6},"22":{"body":0,"breadcrumbs":7,"title":4},"220":{"body":44,"breadcrumbs":6,"title":4},"221":{"body":34,"breadcrumbs":6,"title":4},"222":{"body":56,"breadcrumbs":7,"title":5},"223":{"body":65,"breadcrumbs":7,"title":5},"224":{"body":0,"breadcrumbs":2,"title":1},"225":{"body":11,"breadcrumbs":3,"title":2},"226":{"body":1,"breadcrumbs":3,"title":2},"227":{"body":34,"breadcrumbs":3,"title":2},"228":{"body":199,"breadcrumbs":2,"title":1},"229":{"body":18,"breadcrumbs":2,"title":1},"23":{"body":10,"breadcrumbs":5,"title":2},"230":{"body":0,"breadcrumbs":4,"title":2},"231":{"body":11,"breadcrumbs":4,"title":2},"232":{"body":1,"breadcrumbs":4,"title":2},"233":{"body":27,"breadcrumbs":4,"title":2},"234":{"body":47,"breadcrumbs":3,"title":1},"235":{"body":36,"breadcrumbs":3,"title":1},"236":{"body":137,"breadcrumbs":6,"title":4},"237":{"body":0,"breadcrumbs":5,"title":3},"238":{"body":11,"breadcrumbs":4,"title":2},"239":{"body":49,"breadcrumbs":4,"title":2},"24":{"body":6,"breadcrumbs":5,"title":2},"240":{"body":0,"breadcrumbs":4,"title":2},"241":{"body":85,"breadcrumbs":6,"title":4},"242":{"body":334,"breadcrumbs":3,"title":1},"243":{"body":0,"breadcrumbs":8,"title":6},"244":{"body":6,"breadcrumbs":4,"title":2},"245":{"body":17,"breadcrumbs":4,"title":2},"246":{"body":353,"breadcrumbs":4,"title":2},"247":{"body":56,"breadcrumbs":9,"title":7},"248":{"body":4,"breadcrumbs":3,"title":1},"249":{"body":0,"breadcrumbs":2,"title":1},"25":{"body":239,"breadcrumbs":5,"title":2},"250":{"body":11,"breadcrumbs":3,"title":2},"251":{"body":6,"breadcrumbs":3,"title":2},"252":{"body":280,"breadcrumbs":3,"title":2},"253":{"body":369,"breadcrumbs":4,"title":2},"26":{"body":0,"breadcrumbs":4,"title":2},"27":{"body":11,"breadcrumbs":4,"title":2},"28":{"body":13,"breadcrumbs":4,"title":2},"29":{"body":0,"breadcrumbs":4,"title":2},"3":{"body":51,"breadcrumbs":4,"title":3},"30":{"body":243,"breadcrumbs":4,"title":2},"31":{"body":101,"breadcrumbs":6,"title":4},"32":{"body":0,"breadcrumbs":4,"title":2},"33":{"body":11,"breadcrumbs":4,"title":2},"34":{"body":4,"breadcrumbs":4,"title":2},"35":{"body":138,"breadcrumbs":4,"title":2},"36":{"body":95,"breadcrumbs":3,"title":1},"37":{"body":8,"breadcrumbs":7,"title":5},"38":{"body":36,"breadcrumbs":4,"title":2},"39":{"body":0,"breadcrumbs":4,"title":2},"4":{"body":35,"breadcrumbs":2,"ti
tle":1},"40":{"body":11,"breadcrumbs":4,"title":2},"41":{"body":4,"breadcrumbs":4,"title":2},"42":{"body":215,"breadcrumbs":4,"title":2},"43":{"body":99,"breadcrumbs":5,"title":3},"44":{"body":0,"breadcrumbs":2,"title":1},"45":{"body":11,"breadcrumbs":3,"title":2},"46":{"body":4,"breadcrumbs":3,"title":2},"47":{"body":57,"breadcrumbs":3,"title":2},"48":{"body":263,"breadcrumbs":2,"title":1},"49":{"body":27,"breadcrumbs":3,"title":2},"5":{"body":7,"breadcrumbs":2,"title":1},"50":{"body":0,"breadcrumbs":2,"title":1},"51":{"body":11,"breadcrumbs":3,"title":2},"52":{"body":6,"breadcrumbs":3,"title":2},"53":{"body":371,"breadcrumbs":3,"title":2},"54":{"body":0,"breadcrumbs":4,"title":2},"55":{"body":13,"breadcrumbs":4,"title":2},"56":{"body":18,"breadcrumbs":4,"title":2},"57":{"body":61,"breadcrumbs":4,"title":2},"58":{"body":256,"breadcrumbs":4,"title":2},"59":{"body":90,"breadcrumbs":6,"title":4},"6":{"body":76,"breadcrumbs":3,"title":2},"60":{"body":0,"breadcrumbs":3,"title":1},"61":{"body":11,"breadcrumbs":4,"title":2},"62":{"body":4,"breadcrumbs":4,"title":2},"63":{"body":290,"breadcrumbs":4,"title":2},"64":{"body":0,"breadcrumbs":4,"title":2},"65":{"body":11,"breadcrumbs":4,"title":2},"66":{"body":8,"breadcrumbs":4,"title":2},"67":{"body":435,"breadcrumbs":4,"title":2},"68":{"body":84,"breadcrumbs":5,"title":3},"69":{"body":43,"breadcrumbs":5,"title":3},"7":{"body":8,"breadcrumbs":5,"title":4},"70":{"body":0,"breadcrumbs":4,"title":2},"71":{"body":11,"breadcrumbs":4,"title":2},"72":{"body":12,"breadcrumbs":4,"title":2},"73":{"body":310,"breadcrumbs":4,"title":2},"74":{"body":0,"breadcrumbs":3,"title":1},"75":{"body":11,"breadcrumbs":4,"title":2},"76":{"body":2,"breadcrumbs":4,"title":2},"77":{"body":0,"breadcrumbs":4,"title":2},"78":{"body":51,"breadcrumbs":8,"title":6},"79":{"body":71,"breadcrumbs":5,"title":3},"8":{"body":12,"breadcrumbs":2,"title":1},"80":{"body":48,"breadcrumbs":4,"title":2},"81":{"body":116,"breadcrumbs":3,"title":1},"82":{"body":71,"breadcrumbs":3,"title":1},"83":{"body":49,"breadcrumbs":3,"title":1},"84":{"body":17,"breadcrumbs":4,"title":2},"85":{"body":0,"breadcrumbs":4,"title":2},"86":{"body":11,"breadcrumbs":4,"title":2},"87":{"body":4,"breadcrumbs":4,"title":2},"88":{"body":65,"breadcrumbs":4,"title":2},"89":{"body":238,"breadcrumbs":3,"title":1},"9":{"body":0,"breadcrumbs":4,"title":2},"90":{"body":93,"breadcrumbs":4,"title":2},"91":{"body":16,"breadcrumbs":6,"title":4},"92":{"body":77,"breadcrumbs":4,"title":2},"93":{"body":0,"breadcrumbs":2,"title":1},"94":{"body":11,"breadcrumbs":3,"title":2},"95":{"body":4,"breadcrumbs":3,"title":2},"96":{"body":90,"breadcrumbs":3,"title":2},"97":{"body":289,"breadcrumbs":5,"title":4},"98":{"body":16,"breadcrumbs":2,"title":1},"99":{"body":0,"breadcrumbs":2,"title":1}},"docs":{"0":{"body":"100 Days of Kubernetes is the challenge in which we aim to learn something new related to Kubernetes each day across 100 Days!!! You Can Learn Anything A lot of times it is just about finding the right resources and the right learning path.","breadcrumbs":"Introduction » Welcome to 100 Days Of Kubernetes!","id":"0","title":"Welcome to 100 Days Of Kubernetes!"},"1":{"body":"This book provides a list of resources from across the cloud native space to learn about and master Kubernetes. 
Whether you are just getting started with Kubernetes or you are already using Kubernetes, I am sure that you will find a way to use the resources or contribute :)","breadcrumbs":"Introduction » What can you find in this book","id":"1","title":"What can you find in this book"},"10":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"Introduction to Kubernetes » 100Days Resources","id":"10","title":"100Days Resources"},"100":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"Terraform » 100Days Resources","id":"100","title":"100Days Resources"},"101":{"body":"https://youtu.be/UleogrJkZn0 https://youtu.be/HmxkYNv1ksg https://youtu.be/l5k1ai_GBDE https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider?in=terraform/use-case https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/guides/getting-started","breadcrumbs":"Terraform » Learning Resources","id":"101","title":"Learning Resources"},"102":{"body":"With Terraform, you have a DevOps first approach. It is an open-source tools, original developed by HashiCorp It is an infrastructure provisioning tool that allows you to store your infrastructure set-up as code. Normally, you would have: Provisioning the Infrastructure Usually has top be done within a specific order Deploying the application It has a strong open community and is pluggable by design — meaning that it is used by vast number of organisations Allows you to manage your infrastructure as code in a declarative format. What would be the alternative? Pressing buttons in a UI — the problem with this is that it is not repeatable across machines Especially important in the microservices world! It would otherwise be more time-consuming and error-prone to set-up all the resources needed for every microservice. What is a declarative approach? Current state vs. desired state Define what the end-result should be. When you are first defining your resources, this is less needed. However, once you are adding or changing resources, this is a lot more important. It does not require you to know how many resources and what resources are currently available but it will just compare what is the current state and what has to happen to create the desired state. Different stages of using Terraform: Terraform file that defines our resources e.g. VM, Kubernetes cluster, VPC Plan Phase — Terraform command: Compares the desired state to what currently exists on our cluster if everything looks good: Apply Phase: Terraform is using your API token to spin up your resources When you defined your Terraform resources, you define a provider: Connects you to a cloud provider/infrastructure provider. A provider can also be used to spin up platforms and manage SAAS offerings So, beyond IAAS, you can also manage other platforms and resources through Terraform. 
You can configure your provider in either of the following ways: Use cloud-specific auth plugins (for example, eks get-token, az get-token, gcloud config) Use oauth2 token Use TLS certificate credentials Use kubeconfig file by setting both [config_path](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#config_path) and [config_context](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#config_context) Use username and password (HTTP Basic Authorization)","breadcrumbs":"Terraform » Example Notes","id":"102","title":"Example Notes"},"103":{"body":"Install Terraform Initialise cloud provider Run \"terraform init\" Set-up a security group Make sure that you set up the file Try whether or not it works with \"terraform plan\" . Here we would expect something to add but not necessarily something to change or to destroy. Don't display your credentials openly, you would probably want to use something like Ansible vault or similar to store your credentials, do not store in plain text Once we have finished setting up our resources, we want to destroy them \"terraform destroy\" tf is the file extension that Terraform uses Note that you can find all available information on a data source. Common question: What is the difference between Ansible and Terraform They are both used for provisioning the infrastructure However, Terraform also has the power to provision the infrastructure Ansible is better for configuring the infrastructure and deploying the application Both could be used in combination I am doing this example on my local kind cluster — have a look at one of the previous videos to see how to set-up kind We are using a mixture of the following tutorials https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/guides/getting-started https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider?in=terraform/use-case Create your cluster e.g. with kind kind-config.yaml kind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\nnodes:\n- role: control-plane extraPortMappings: - containerPort: 30201 hostPort: 30201 listenAddress: \"0.0.0.0\" Then run kind create cluster --name terraform-learn --config kind-config.yaml For our terraform.tfvar file, we need the following information [host](https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider?in=terraform/use-case#host) corresponds with clusters.cluster.server. [client_certificate](https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider?in=terraform/use-case#client_certificate) corresponds with users.user.client-certificate. [client_key](https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider?in=terraform/use-case#client_key) corresponds with users.user.client-key. [cluster_ca_certificate](https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider?in=terraform/use-case#cluster_ca_certificate) corresponds with clusters.cluster.certificate-authority-data. 
To get those, run: \"kubectl config view --minify --flatten --context=kind-terraform-learn\" # terraform.tfvars host = \"https://127.0.0.1:32768\"\nclient_certificate = \"LS0tLS1CRUdJTiB...\"\nclient_key = \"LS0tLS1CRUdJTiB...\"\ncluster_ca_certificate = \"LS0tLS1CRUdJTiB...\" This is our terraform .tf file terraform { required_providers { kubernetes = { source = \"hashicorp/kubernetes\" } }\n} variable \"host\" { type = string\n} variable \"client_certificate\" { type = string\n} variable \"client_key\" { type = string\n} variable \"cluster_ca_certificate\" { type = string\n} provider \"kubernetes\" { host = var.host client_certificate = base64decode(var.client_certificate) client_key = base64decode(var.client_key) cluster_ca_certificate = base64decode(var.cluster_ca_certificate)\n} resource \"kubernetes_namespace\" \"test\" { metadata { name = \"nginx\" }\n}\nresource \"kubernetes_deployment\" \"test\" { metadata { name = \"nginx\" namespace = kubernetes_namespace.test.metadata.0.name } spec { replicas = 2 selector { match_labels = { app = \"MyTestApp\" } } template { metadata { labels = { app = \"MyTestApp\" } } spec { container { image = \"nginx\" name = \"nginx-container\" port { container_port = 80 } } } } }\n}\nresource \"kubernetes_service\" \"test\" { metadata { name = \"nginx\" namespace = kubernetes_namespace.test.metadata.0.name } spec { selector = { app = kubernetes_deployment.test.spec.0.template.0.metadata.0.labels.app } type = \"NodePort\" port { node_port = 30201 port = 80 target_port = 80 } }\n} Now we can run terraform init terraform plan terraform apply Make sure that all your resources are within the right path and it should work.","breadcrumbs":"Terraform » Let's try out Terraform","id":"103","title":"Let's try out Terraform"},"104":{"body":"","breadcrumbs":"Crossplane » Crossplane","id":"104","title":"Crossplane"},"105":{"body":"Video By Shahrooz Aghili Crossplane Docs Highlights and Intro: Crossplane is an advanced tool for managing infrastructure in the cloud-native ecosystem. It distinguishes itself from traditional Configuration Management and Infrastructure as Code (IaC) tools. Platforms like Chef, Puppet, and Ansible have become less prevalent, while Terraform and Pulumi, despite their popularity, lack efficient drift detection. This means if a resource is modified outside Terraform, it won't detect the change. Crossplane, on the other hand, excels in detecting drifts and integrates seamlessly with GitOps workflows, allowing infrastructure management directly through kubectl. Benefits : Manages infrastructure in a cloud-native way using kubectl. Supports major cloud providers and is continually updated with new ones. Allows building custom infrastructure abstractions. Based on Custom Resource Definitions (CRD), making it extensible and runnable anywhere. Acts as a single source of truth for infrastructure management. Enables management of policies through custom APIs, simplifying infrastructure complexity. Offers Declarative Infrastructure Configuration: self-healing and accessible via kubectl and YAML. Integrates with leading GitOps tools like Flux & ArgoCD.","breadcrumbs":"Crossplane » Links","id":"105","title":"Links"},"106":{"body":"Crossplane is an extension to the k8s api, therefore we need a cluster (any cluster would do) to install it. This cluster is some time refered to as operations cluster. In this tutorial we are going to use minikube. minikube start Once minikube is up and running, we can install cross-plane by running a helm command. 
helm repo add \\\ncrossplane-stable https://charts.crossplane.io/stable helm repo update helm install crossplane \\\ncrossplane-stable/crossplane \\\n--namespace crossplane-system \\\n--create-namespace Now, Let's see if crossplane is running without any issues. kubectl get pods -n crossplane-system Crossplane is extending the kubernetes api. let's checkout the new custom resource definitions and api. kubectl api-resources | grep crossplane\nkubectl get crds | grep crossplane","breadcrumbs":"Crossplane » Install Crossplane using Helm","id":"106","title":"Install Crossplane using Helm"},"107":{"body":"Next, we need to enable crossplane access to our Hyperscaler of choice. All we need is a secret in the crossplane-system namespace. The secret should contain a service account key.json. Please note that the service account should be granted proper permissions for resoruce adminstration. export PROJECT_ID=\"YOUR-PROJECT-ID\"\nexport SA_NAME=\"YOUR-SA-NAME\" export SA=\"${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com\" gcloud iam service-accounts \\ create $SA_NAME \\ --project $PROJECT_ID export ROLE=roles/admin gcloud projects add-iam-policy-binding \\ --role $ROLE $PROJECT_ID \\ --member serviceAccount:$SA gcloud iam service-accounts keys \\ create creds.json \\ --project $PROJECT_ID \\ --iam-account $SA kubectl create secret \\\ngeneric gcp-secret \\\n-n crossplane-system \\\n--from-file=creds=./creds.json","breadcrumbs":"Crossplane » Setting up the Hyperscaler (GCP)","id":"107","title":"Setting up the Hyperscaler (GCP)"},"108":{"body":"Next, we install and configure a crossplane provider that specializes in managing storage related services in GCP. cat < \ne.g. helm repo add bitnami https://charts.bitnami.com/bitnami From https://artifacthub.io/packages/helm/bitnami/mysql List repositories all repositories that you have installed helm repo list Search within a repository helm search repo Instead of using the commands, you can also search the chart repository online. To upgrade the charts in your repositories helm repo update Install a specific chart repository helm install stable/mysql --generate-name Note that you can either ask Helm to generate a name with the —generate-name flag, or you can provide the name that you want to give the chart by defining it after install helm install say-my-name stable/mysql Check the entities that got deployed within a specific cluster: List all the charts that you have deployed with the following command helm ls To remove a chart use helm uninstall In the next section, we are going to look at ways that you can customize your chart.","breadcrumbs":"Helm Part 1 » Helm Commands","id":"152","title":"Helm Commands"},"153":{"body":"","breadcrumbs":"Helm Part 2 » Helm Part 2","id":"153","title":"Helm Part 2"},"154":{"body":"","breadcrumbs":"Helm Part 3 » Setting up and modifying Helm Charts","id":"154","title":"Setting up and modifying Helm Charts"},"155":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"Helm Part 3 » 100Days Resources","id":"155","title":"100Days Resources"},"156":{"body":"https://helm.sh/docs/chart_template_guide/getting_started/","breadcrumbs":"Helm Part 3 » Learning Resources","id":"156","title":"Learning Resources"},"157":{"body":"Charts are the packages that Helm works with — these are then turned into Kubernetes manifests and installed on your cluster. You can easily go ahead and create a chart template helm create Chart names must be lower case letters and numbers. 
Words may be separated with dashes (-) YAML files should be indented using two spaces (and never tabs). This chart is based on the nginx image. This chart template is especially useful if you are developing a stateless chart — we will cover other starting points later on. With the following command, you can go ahead and install the chart helm install as we have seen in the previous days. Note that your Kubernetes cluster needs to have access to the Docker Hub to pull the image from there and use it. This chart runs Nginx out of the box — the templates within the charts are used to set-up the Kubernetes manifests. Helm Template Syntax and creating Templates Helm uses the Go text template engine provided as part of the Go standard library. The syntax is used in kubectl (the command-line application for Kubernetes) templates, Hugo (the static site generator), and numerous other applications built in Go. Note that you do not have to know Go to create templates. To pass custom information from the Chart file to the template file, you have to use annotation. You can control the flow of the template generation using: if/else for creating conditional blocks. {{- if .Values.ingress.enabled -}}\n...\n{{- else -}}\n# Ingress not enabled\n{{- end }} The syntax is providing an \"if/then\" logic — which makes it possible to have default values if no custom values are provided within the values.yaml file. with to set a scope to a particular object containers: - name: {{ .Release.name }} {{- with .Values.image }} image: repository: .repo pullPolicy: .pullPolicy tag: .tag {{- end }} while the values.yaml file contains image: repo: my-repo pullPolicy: always tag: 1.2.3 In this example the scope is changed from the current scope which is . to another object which is .Values.image. Hence, my-repo, always and 1.2.3 were referenced without specifying .Values.images.. WARNING: Other objects can not be accessed using . from within a restricted scope. A solution to this scenario will be using variables $. range, which provides a \"for each\"-style loop cars: |- {{- range .Values.cars }} - {{ . }} {{- end }} with the values.yaml file contains cars: - BMW - Ford - Hyundai Note that range too changes the scope. But to what? In each loop the scope becomes a member of the list. In this case, .Values.cars is a list of strings, so each iteration the scope becomes a string. In the first iteration . is set to BMW. The second iteration, it is set to Ford and in the third it is set to Hyundai. Therefore, each item is referenced using . containers: - name: {{ .Release.name }} env: {{- range .Values.env }} - name: {{ .name }} value: {{ .value | quote }} {{- end }} while the values.yaml file contains env: - name: envVar1 value: true - name: envVar2 value: 5 - name: envVar3 value: helmForever In this case, .Values.env is a list of dictonary, so each iteration the scope becomes a dictionary. For example, in the first iteration, the . is the first dictionary {name: envVar1, value: true}. In the second iteration the scope is the dictionary {name: envVar2, value: 5} and so on. In addition, you can perform a pipeline on the value of .name or .value as shown. Special Functions With the include function, objects from one template can be used within another template; the first argument is the name of the template, the \".\" that is passed in refers to the second argument in the root object. metadata: name: {{ include \"helm-example.fullname\" . }} With the required function, you can set a parameter as required. 
In case it's not passed a custom error message will be prompted. image: repository: {{ required \"An image repository is required\" .Values.image.repository }} You can also add your own variables to tempaltes {{ $var := .Values.character }} The \"toYaml\" function turns data into YAML syntax Helm has the ability to build a chart archive. Each chart archive is a gzipped TAR file with the extension .tgz. Any tool that can create, extract, and otherwise work on gzipped TAR files will work with Helm’s chart archives. Source. Learning Helm Book Pipelines In Helm, a pipeline is a chain of variables, commands and functions which is used to alter a value before placing it in the values.yaml file. The value of a variable or the output of a function is used as the input to the next function in a pipeline. The output of the final element of a pipeline is the output of the pipeline. The following illustrates a simple pipeline: character: {{ .Values.character | default \"Sylvester\" | quote }} Writing maintainable templates, here are the suggestions by the maintainers You may go long periods without making structural changes to the templates in a chart and then come back to it. Being able to quickly rediscover the layout will make the processes faster. Other people will look at the templates in charts. This may be team members who create the chart or those that consume it. Consumers can, and sometimes do, open up a chart to inspect it prior to installing it or as part of a process to fork it. When you debug a chart, which is covered in the next section, it is easier to do so with some structure in the templates. You can package your Helm chart with the following command. helm package It is important to think of a Helm chart as a package. This package can be published to a public or private repository. Helm repositories can also be hosted in GitHub, GitLab pages, object storage and using Chartmuseum. You can also find charts hosted in many distributed repositories hosted by numerous people and organizations through Helm Hub (aka Artifact Hub ).","breadcrumbs":"Helm Part 3 » Example Notes","id":"157","title":"Example Notes"},"158":{"body":"","breadcrumbs":"k9s » K9s","id":"158","title":"K9s"},"159":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"k9s » 100Days Resources","id":"159","title":"100Days Resources"},"16":{"body":"We can divide the responsibilities within a Kubernetes cluster between a main node and worker nodes. Note that in small clusters we may have one node that takes the responsibilities of both.","breadcrumbs":"Kubernetes Architecture » Example Notes","id":"16","title":"Example Notes"},"160":{"body":"The GitHub repository Overview Medium Post","breadcrumbs":"k9s » Learning Resources","id":"160","title":"Learning Resources"},"161":{"body":"Managing your kubernetes cluster through kubectl commands is difficult. You have to understand how the different components interact, specifically, what resources is linked to which other resource Using a UI usually abstracts all the underlying resources, not allowing for the right learning experience to take place + forcing your to click buttons You do not have full insights on the resources within your cluster. These are some of the aspects that K9s can help with. 
This is a short intro and tutorial with examples on how you can use K9s to navigate your cluster","breadcrumbs":"k9s » Example Notes","id":"161","title":"Example Notes"},"162":{"body":"K9s is a terminal-based tool that visualises the resources within your cluster and the connection between those. It helps you to access, observe, and manage your resources. \"K9s continually watches Kubernetes for changes and offers subsequent commands to interact with your observed resources.\" — Taken from their GitHub repository Installation options: Homebrew MacOS and Linux OpenSUSE Arch Linux Chocolatey for Windows Install via Go Install from source Run from Docker container As you can see there are multiple different options and I am sure you will find the right one for you and your OS. Some additional features: Customise the color settings Key Bindings Node Shell Command Aliases HotKeySpport Resource Custom Columns Plugins Benchmark your Application Have a look at their website for more comprehensive information","breadcrumbs":"k9s » What is k9s?","id":"162","title":"What is k9s?"},"163":{"body":"The configuration for K9s is kept in your home directory under .k9s $HOME/.k9s/config.yml. You can find a detailed explanation on what the file is here: https://k9scli.io/topics/config/ Note that the definitions may change over time with new releases. To enter the K9s, just type k9s into your terminal. Show everything that you can do with K9s, just type ? Or press: ctrl-a Which will show a more comprehensive list. Search for specific resources type : The name could for instance refer to \"pods\", \"nodes\", \"rs\" (for ReplicaSet) and other Kubernetes resources that you have already been using. Once you have selected a resource, for instance, a namespace, you can search for specific namespaces using \"/\" Have a look at deployments :dp To switch between your Kubernetes context type: :ctx You can also add the context name after the command if you want to view your Kubernetes context and then switch. To delete a resource type press: ctrl-d To kill a resource, use the same command but with k: ctrl-k Change how resourc4es are displayed: :xray RESOURCE To exist K9s either type :q Or press: ctrl-c","breadcrumbs":"k9s » Getting up and running","id":"163","title":"Getting up and running"},"164":{"body":"If you are changing the context or the namespace with kubectl, k9s will automatically know about it and show you the resources within the namespace. Alternatively, you can also specify the namespace through K9s like detailed above.","breadcrumbs":"k9s » k9s interaction with Kubernetes","id":"164","title":"k9s interaction with Kubernetes"},"165":{"body":"K9s integrates with Hey, which is a CLI tool used to benchmark HTTP endpoints. It currently supports benchmarking port-forwards and services ( Source ) To port forward, you will need to selects a pod and container that exposes a specific port within the PodView. With SHIFT-F a dialog will pop up and allows you to select the port to forward to. Once you have selected that, you can use :pf to navigate to the PortForward view and list out all active port-forward. Selecting port-forward + using CTRL-B will run a benchmark on that http endpoint. You can then view the results of the benchmark through :be Keep in mind that once you exit the K9s session, the port-forward will be removed, forwards only last for the duration of the session. Each cluster has its own bench-config that can be found at $HOME/.k9s/bench-.yml You can find further information here . 
You can debug processes using k9s -l debug","breadcrumbs":"k9s » k9s for further debugging and benchmarking","id":"165","title":"k9s for further debugging and benchmarking"},"166":{"body":"You can change the look of your K9s by changing the according YAML in your .k9s folder. Here is where the default skin lives: skin.yml You can find example skin files in the skins directory: https://github.com/derailed/k9s/tree/master/skins View all the color definitions here: https://k9scli.io/topics/skins/ For further information on how to optimise K9s , check-out their video tutorials.","breadcrumbs":"k9s » Change Look","id":"166","title":"Change Look"},"167":{"body":"","breadcrumbs":"Knative » Knative","id":"167","title":"Knative"},"168":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"Knative » 100Days Resources","id":"168","title":"100Days Resources"},"169":{"body":"Their documentation","breadcrumbs":"Knative » Learning Resources","id":"169","title":"Learning Resources"},"17":{"body":"Where does the orchestration from Kubernetes come in? These are some characteristics that make up Kubernetes as a container orchestration system: Managed by several operators and controllers — will look at operators and controllers later on. Operators make use of custom resources to manage an application and their components. \"Each controller interrogates the kube-apiserver for a particular object state, modifying the object until the declared state matches the current state.\" In short, controllers are used to ensure a process is happening in the desired way. \"The ReplicaSet is a controller which deploys and restarts containers, Docker by default, until the requested number of containers is running.\" In short, its purpose is to ensure a specific number of pods are running. Note that those concepts are details in further sections of the book. There are several other API objects which can be used to deploy pods. A DaemonSet will ensure that a single pod is deployed on every node. These are often used for logging and metrics. A StatefulSet can be used to deploy pods in a particular order, such that following pods are only deployed if previous pods report a ready status. API objects can be used to know What containerized applications are running (and on which nodes) The resources available to those applications The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance kube-apiserver Provides the front-end to the cluster's shared state through which all components interact Is central to the operation of the Kubernetes cluster. Handles internal and external traffic The only agent that connects to the etcd database Acts as the master process for the entire cluster Provides the out-wards facing state for the cluster's state Each API call goes through three steps: authentication, authorization, and several admission controllers. kube-scheduler The Scheduler sees a request for running a container and will run the container in the best suited node When a new pod has to be deployed, the kube-scheduler determines through an algorithm to which node the pod should be deployed If the pod fails to be deployed, the kube-scheduler will try again based on the resources available across nodes A user could also determine which node the pod should be deployed to —this can be done through a custom scheduler Nodes that meet scheduling requirements are called feasible nodes. You can find more details about the scheduler on GitHub. 
etcd Database The state of the cluster, networking, and other persistent information is kept in an etcd database etcd is a consistent and highly-available key value store used as Kubernetes' backing store for all cluster data Note that this database does not change; previous entries are not modified and new values are appended at the end. Once an entry can be deleted, it will be labelled for future removal by a compaction process. It works with curl and other HTTP libraries and provides reliable watch queries. Requests to update the database are all sent through the kube api-server; each request has its own version number which allows the etcd to distinguish between requests. If two requests are sent simultaneously, the second request would then be flagged as invalid with a 409 error and the etcd will only update as per instructed by the first request. Note that it has to be specifically configured Other Agents The kube-controller-manager is a core control loop daemon which interacts with the kube-apiserver to determine the state of the cluster. If the state does not match, the manager will contact the necessary controller to match the desired state. It is also responsible to interact with third-party cluster management and reporting. The cluster has several controllers in use, such as endpoints, namespace, and replication. The full list has expanded as Kubernetes has matured. Remaining in beta as of v1.16, the cloud-controller-manager interacts with agents outside of the cloud. It handles tasks once handled by kube-controller-manager. This allows faster changes without altering the core Kubernetes control process. Each kubelet must use the --cloud-provider-external settings passed to the binary. There are several add-ons which have become essential to a typical production cluster, such as DNS services. Others are third-party solutions where Kubernetes has not yet developed a local component, such as cluster-level logging and resource monitoring. \"Each node in the cluster runs two processes: a kubelet and a kube-proxy.\" Kubelet: handles requests to the containers, manages resources and looks after the local nodes The Kube-proxy creates and manages networking rules — to expose container on the network kubelet Each node has a container runtime e.g. the Docker engine installed. The kubelet is used to interact with the Docker Engine and to ensure that the containers that need to run are actually running. Additionally, it does a lot of the work for the worker nodes; such as accepting API calls for the Pods specifications that are either provided in JSON or YAML. Once the specifications are received, it will take care of configuring the nodes until the specifications have been met Should a Pod require access to storage , Secrets or ConfigMaps , the kubelet will ensure access or creation. It also sends back status to the kube-apiserver for eventual persistence. kube-proxy The kube-proxy is responsible for managing the network connectivity to containers. To do that is has iptables. \" iptables is a user-space utility program that allows a system administrator to configure the IP packet filter rules of the Linux kernel firewall, implemented as different Netfilter modules.\" Additional options are the use of namespaces to monitor services and endpoints, or ipvs to replace the use of iptables To easily manage thousands of Pods across hundreds of nodes can be a difficult task to manage. To make management easier, we can use labels, arbitrary strings which become part of the object metadata. 
These can then be used when checking or changing the state of objects without having to know individual names or UIDs. Nodes can have taints to discourage Pod assignments, unless the Pod has a toleration in its metadata. Multi-tenancy When multiple-users are able to access the same cluster Additional security measures can be taken through either of the following: Namespaces : Namespaces can be used to \"divide the cluster\"; additional permissions can be set on each namespace; note that two objects cannot have the same name in the same namespace Context : A combination of user, cluster name and namespace; this allows you to restrict the cluster between permissions and restrictions. This information is referenced by ~/.kube/config Can be checked with kubectl config view Resource Limits: Provide a way to limit the resources that are provided for a specific pod Pod Security Policies : \"A policy to limit the ability of pods to elevate permissions or modify the node upon which they are scheduled. This wide-ranging limitation may prevent a pod from operating properly. The use of PSPs may be replaced by Open Policy Agent in the future.\" Network Policies: The ability to have an inside-the-cluster firewall. Ingress and Egress traffic can be limited according to namespaces and labels as well as typical network traffic characteristics.","breadcrumbs":"Kubernetes Architecture » Main Node","id":"17","title":"Main Node"},"170":{"body":"Kubernetes-based platform to deploy and manage modern serverless workloads. Knative's serving component incorporates Istio, which is an open source tool developed by IBM, Google, and ridesharing company Lyft to help manage tiny, container-based software services known as microservices. Introduce event-driven and serverless capabilities to Kubernetes clusters. Knative combines two interesting concepts Serverless and Kubernetes Container Orchestration. With Kubernetes, developers have to set-up a variety of different tools to make everything work together, this is time consuming and difficult for many. Knative wants to bring the focus back on writing code instead of managing infrastructure. Knative allows us to make it super easy to deploy long-running stateless application on top of Kubernetes. What is Knative? A Kubernetes extension consistent of custom controllers and custom resource definitions that enable new use cases on top of Kubernetes. A platform installed on top of Kubernetes that brings serverless capabilities to Kubernetes — with its additional features, it makes it super easy to go serverless on top of Kubernetes. The Goal: Making microservice deployments on Kubernetes really easy.+ Serverless style user experience that lives on top of Kubernetes. It consists of three major components: Note that this part has been deprecated but you will still find it in a lot of tutorials. Build: Every developer has code — then turn it into a container — either one step or consistent of multiple step. Next, push the image to a cloud registry. These are a lot of steps — Knative can do all of this within the cluster itself, making iterative development possible. Serve: Knative comes with Istio components, traffic management, automatic scaling. It consists of the Route and the Config — every revision of our service is managed by the config Event: You need triggers that are responded to by the platform itself — it allows you to set-up triggers. Also, you can integrate the eventing with your ci/cd flow to kick off your build and serve stages. 
Note that you can use other Kubernetes management tools together with Knative Features: Focused API with higher level abstractions for common app use-cases. Stand up a scalable, secure, stateless service in seconds. Loosely coupled features let you use the pieces you need. Pluggable components let you bring your own logging and monitoring, networking, and service mesh. Knative is portable: run it anywhere Kubernetes runs, never worry about vendor lock-in. Idiomatic developer experience, supporting common patterns such as GitOps, DockerOps, ManualOps. Knative can be used with common tools and\tframeworks such as Django, Ruby on Rails, Spring, and many more. Knative offers several benefits for Kubernetes users wanting to take their use of containers to the next level: Faster iterative development: Knative cuts valuable time out of the container building process, which enables you to develop and roll out new container versions more quickly. This makes it easier to develop containers in small, iterative steps, which is a key tenet of the agile development process. Focus on code: DevOps may empower developers to administer their own environments, but at the end of the day, coders want to code. You want to focus on building bug-free software and solving development problems, not on configuring message bus queues for event triggering or managing container scalability. Knative enables you to do that. Quick entry to serverless computing: Serverless environments can be daunting to set up and manage manually. Knative allows you to quickly set up serverless workflows. As far as the developers are concerned, they’re just building a container—it’s Knative that runs it as a service or a serverless function behind the scenes. There are two core Knative components that can be installed and used together or independently to provide different functions: Knative Serving : Provides request-driven compute that can scale to 0. The Serving component is responsible for running/hosting your application. Easily manage stateless services on Kubernetes by reducing the developer effort required for autoscaling, networking, and rollouts. Knative Eventing : Management and delivery of events — manage the event infrastructure of your application. Easily route events between on-cluster and off-cluster components by exposing event routing as configuration rather than embedded in code. These components are delivered as Kubernetes custom resource definitions (CRDs), which can be configured by a cluster administrator to provide default settings for developer-created applications and event workflow components. Additionally, knative keeps track of your revisions. Revisions Revisions of your application are used to scale up the resources once you receive more requests If you are deploying a change/update, revisions can also be used to gradually move traffic from revision 1 to revision 2 You can also have revisions that are not part of the networking scheme — in which case, they have a dedicate name and endpoint. Prerequisites Kubernetes cluster with v1.17 or newer, note that most have 1.18 already by default but you might want to check Kubectl that is connected to your cluster The resources that are going to be deployed through Serving Knative Serving defines a set of objects as Kubernetes Custom Resource Definitions (CRDs). These objects are used to define and control how your serverless workload behaves on the cluster: Service : The service.serving.knative.dev resource automatically manages the whole lifecycle of your workload. 
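To make this less abstract, here is a minimal sketch of such a Service (the name, image, and env value are only illustrative; the image is the hello-world sample commonly used in the Knative docs). Applying this single object is enough for Knative to create the matching Route, Configuration, and first Revision:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                      # illustrative name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go   # sample image, assumed available
          env:
            - name: TARGET
              value: "Knative"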
It controls the creation of other objects to ensure that your app has a route, a configuration, and a new revision for each update of the service. Service can be defined to always route traffic to the latest revision or to a pinned revision. Route : The route.serving.knative.dev resource maps a network endpoint to one or more revisions. You can manage the traffic in several ways, including fractional traffic and named routes. Configuration : The configuration.serving.knative.dev resource maintains the desired state for your deployment. It provides a clean separation between code and configuration and follows the Twelve-Factor App methodology. Modifying a configuration creates a new revision. Revision : The revision.serving.knative.dev resource is a point-in-time snapshot of the code and configuration for each modification made to the workload. Revisions are immutable objects and can be retained for as long as useful. Knative Serving Revisions can be automatically scaled up and down according to incoming traffic. See Configuring the Autoscaler for more information. https://github.com/knative/serving/raw/master/docs/spec/images/object_model.png With the Service resource, a deployed service will automatically have a matching route and configuration created. Each time the Service is updated, a new revision is created. Configuration creates and tracks snapshots of how to run the user's software called Revisions. Since it keeps track of revisions, it allows you to roll back to prevision revisions should you encounter an issue within new deployments. Data-path/Life of a Request","breadcrumbs":"Knative » Example Notes","id":"170","title":"Example Notes"},"171":{"body":"Installing the knative serving component kubectl apply --filename https://github.com/knative/serving/releases/download/v0.20.0/serving-crds.yaml Installing the core serving component kubectl apply --filename https://github.com/knative/serving/releases/download/v0.20.0/serving-core.yaml Installing Istio for Knative https://knative.dev/docs/install/installing-istio/ Install the Knative Istio controller: kubectl apply --filename https://github.com/knative/net-istio/releases/download/v0.20.0/release.yaml Fetch the External IP or CNAME: kubectl --namespace istio-system get service istio-ingressgateway Issue to delete webhooks https://github.com/knative/serving/issues/8323 Configure DNS Head over to the docs https://knative.dev/docs/install/ Monitor the Knative components until all of the components show a STATUS of Running or Completed: kubectl get pods --namespace knative-serving You can also use knative with their cli-tool. Note: Make sure that you have enough resources/capacity of your cluster. If you do receive an error message, increase the capacity of your cluster and rerun commands. So the process is now The client opens the application Which will then forward the request to the Loadbalancer that has been created when we installed Istio (this will only be created on a 'proper cluster') The LoadBalancer will then forward our request to the Istio Gateway — which is responsible for fulfilling the request connected to our application. For a stateless application, there should be as a minimum, the following resources: Deployment ReplicaSet Pod Pod Scalar to ensure the adequate number of pods are running We need a Service so that other Pods/Services can access the application If the application should be used outside of the cluster, we need an Ingress or similar So what makes our application serverless? 
When Knative realises that our application is not being used for a while, it will remove the pods needed to run the application ⇒ Scaling the app to 0 Replicas Knative is a solution for Serverless workloads, it not only scales our application but also queues our requests if there are no pods to handle our requests.","breadcrumbs":"Knative » Installation","id":"171","title":"Installation"},"172":{"body":"Website https://knative.dev/ Documentation https://knative.dev/docs/ YouTube video https://youtu.be/28CqZZFdwBY If you set a limit of how many requests one application can serve, you can easier see the scaling functionality of Knative in your cluster. kn service update --concurrency-limit=1","breadcrumbs":"Knative » Resources","id":"172","title":"Resources"},"173":{"body":"","breadcrumbs":"GitOps and Argo » GitOps and Argo CD","id":"173","title":"GitOps and Argo CD"},"174":{"body":"Video by Shahrooz Aghili on Argo CD Argo CD Docs The DevOps Toolkit has several great explanations and tutorials around GitOps and ArgoCD","breadcrumbs":"GitOps and Argo » Links","id":"174","title":"Links"},"175":{"body":"Argo CD represents a significant leap in the domain of DevOps, particularly in the adoption of GitOps principles. Argo CD is a part of the Argo project and a CNCF Graduated project . Pull vs. Push","breadcrumbs":"GitOps and Argo » Why Argo CD","id":"175","title":"Why Argo CD"},"176":{"body":"Enhanced Security and Autonomy : Pull model enhances security by reducing attack vectors. Allows for cluster autonomy by applying changes internally. Self-Management and Consistency : Clusters are self-managed, updating without external dependencies. Ensures a consistent state with the repository. Resilience and Reduced Credential Exposure : Pull systems are resilient, not relying on external services. Minimizes credential exposure outside the cluster. Scalability and Dynamic Updates : Better scalability by offloading work to individual clusters. Supports event-driven, dynamic updates. Operational Efficiency : Reduces overhead on CI systems, with the cluster managing deployment work. Leads to lower operational complexity.","breadcrumbs":"GitOps and Argo » Pull vs. Push Model in GitOps","id":"176","title":"Pull vs. Push Model in GitOps"},"177":{"body":"Robust Security and Automated Synchronization : Utilizes Git's security features for safe deployments. Automatically syncs states as defined in Git. Ease of Operations : Facilitates easy rollbacks. Encourages declarative infrastructure setup. Simplifies complex deployments. Cost-Effective and Community-Driven : Open-source nature allows free usage and community contributions. Real-Time Oversight and Multi-Cluster Management : Offers real-time application status monitoring. Manages deployments across multiple clusters. Self-Healing and Community Support : Automatically corrects state drifts. Benefits from the support and innovation of the GitOps community. Hands on Exercise","breadcrumbs":"GitOps and Argo » Benefits","id":"177","title":"Benefits"},"178":{"body":"","breadcrumbs":"Linkerd » Linkerd","id":"178","title":"Linkerd"},"179":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. 
related to the topic here!","breadcrumbs":"Linkerd » 100Days Resources","id":"179","title":"100Days Resources"},"18":{"body":"","breadcrumbs":"Kubernetes Pods » Kubernetes Pods","id":"18","title":"Kubernetes Pods"},"180":{"body":"The Linkerd getting started guide Video by Saiyam Pathak Have a look at the intro to Service Mesh","breadcrumbs":"Linkerd » Learning Resources","id":"180","title":"Learning Resources"},"181":{"body":"If you want to get started with Linkerd, for the most part, just follow their Documentation. :) In some of the previous videos, we looked as Service Mesh, and specifically at Istio. Since many people struggle to set-up Istio I thought it would be great to take another look at a different Service Mesh. In this case, at Linkerd. Note that Istio is still one of the most popular. Thus, it is always good to still know about that but as an alternative, let's also take a look at Linkerd. Benefits of Linkerd It works as it is — Saiyam First service mesh project that introduced the term In production for about 4 years Open governance model and hosted by the CNCF Super extensible and easy to install add-ons Easy to install It is a community project Kept small, you should spend the least amount of resources More information Control plane written in Go and data plane written in Rust.","breadcrumbs":"Linkerd » Example Notes","id":"181","title":"Example Notes"},"182":{"body":"LinkerD has a really neat getting-started documentation that you can follow step-by-step. First, we are going to follow the onboarding documentation. Once we have everything set-up and working, we want to see metrics of our application through Prometheus and Grafana. Usually I would copy paste commands to make sure everyone knows the order but since their documentation is pretty straight-forward, I am not going to do that this time. Let's follow the steps detailed in the getting-started documentation https://linkerd.io/2.10/getting-started/ In our case, we are going to try it out on our Docker for Desktop Kubernetes cluster but you could also use another cluster such as KinD. Small side-note, I absolutely love that they have a checker command to see whether LinkerD can actually be installed on your cluster linkerd check --pre How amazing is that? And then, they have a separate command to see whether all the resources are ready linkerd check I am already in love with them! 
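Before moving on: the guide below pipes manifests through linkerd inject, but the same result can usually be achieved declaratively by annotating a namespace (or a workload's pod template) so Linkerd's proxy injector meshes new pods automatically. A hedged sketch, with an invented namespace name:

apiVersion: v1
kind: Namespace
metadata:
  name: demo-apps                 # illustrative namespace
  annotations:
    linkerd.io/inject: enabled    # ask the Linkerd proxy injector to add sidecars to pods created here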
We are going to keep following the guide, because we want to have a fancy dashboard with metrics in the end linkerd viz install | kubectl apply -f - linkerd jaeger install | kubectl apply -f - Then we are going to check that everything is ready linkerd check Access the dashboard **linkerd viz dashboard &** this will give you the localhost port to a beautiful Dashboard We are going to use the following GitHub repository to install our application https://github.com/AnaisUrlichs/react-article-display After cloning, cd into react-article-display Now you want to have Helm installed since this application relies on a Helm Chart helm install react-article ./charts/example-chart Once installed, we can port-forward to access our application kubectl port-forward service/react-article-example-chart 5000:80 Now we just have to connect our Service Mesh with the application kubectl get deploy -o yaml \\ | linkerd inject - \\ | kubectl apply -f - Then we can go ahead and run checks again linkerd check --proxy We can see the live stats of our application through linkerd viz stat deploy And see the stream of requests to our services through linkerd viz tap deploy/web Now let's go ahead and see how we can do traffic splitting for Canary deployments with Linkerd. For this, we are going to follow the documentation https://linkerd.io/2.10/tasks/canary-release/ Clone the following repository Apply the first application kubectl apply -f app-one.yaml You can access it through kubectl port forwarding kubectl port-forward service/app-one 3000:80 Install Flagger kubectl apply -k github.com/fluxcd/flagger/kustomize/linkerd Deploy the deployment resource kubectl create ns test && \\ kubectl apply -f https://run.linkerd.io/flagger.yml Ensure they are ready kubectl -n test rollout status deploy podinfo Access the service kubectl -n test port-forward svc/podinfo 9898 kubectl -n test get ev --watch","breadcrumbs":"Linkerd » Installation","id":"182","title":"Installation"},"183":{"body":"","breadcrumbs":"Prometheus » Prometheus","id":"183","title":"Prometheus"},"184":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"Prometheus » 100Days Resources","id":"184","title":"100Days Resources"},"185":{"body":"Quick overview video https://youtu.be/nk3sk1LO7Bo Long overview video https://youtu.be/sYMTY-SciUQ The Prometheus documentation The Grafana documentation Have a look at the different yaml files that encompass Prometheus https://phoenixnap.com/kb/prometheus-kubernetes-monitoring Challenges using Prometheus at scale https://sysdig.com/blog/challenges-scale-prometheus/ Another quick tutorial https://dev.to/rayandasoriya/kubernetes-monitoring-with-prometheus-2l7k This is a really useful series by Sysdig https://sysdig.com/blog/kubernetes-monitoring-prometheus/ https://sysdig.com/blog/kubernetes-monitoring-with-prometheus-alertmanager-grafana-pushgateway-part-2/ https://sysdig.com/blog/kubernetes-monitoring-prometheus-operator-part3/","breadcrumbs":"Prometheus » Learning Resources","id":"185","title":"Learning Resources"},"186":{"body":"Prometheus is a time-series database that came out of SoundCloud to monitor processes that might span various different systems. It is basically a tool used to monitor the performance of your infrastructure. DevOps needs more automation. Want to be notified about storage issues. Part of the CNCF landscape If you have hundreds of microservices, you have to have complex monitoring in place.
You want to know when your application is not running as expected. Note that what we want to measure and how we want to measure it highly depends on the type of our application. Why do we want to use Prometheus for our monitoring on Kubernetes? Key-value pair based — similar to how Kubernetes metadata organises labels \"Prometheus is a good fit for microservices because you just need to expose a metrics port, and don’t need to add too much complexity or run additional services. Often, the service itself is already presenting a HTTP interface, and the developer just needs to add an additional path like /metrics.\" Source At its core component Prometheus server Storage: Time series database Data Retrieval Worker Http server: accepts queries Prometheus has an Altertmanager : \"The AlertManager receives metrics from the Prometheus server and then is responsible for grouping and making sense of the metrics and then forwarding an alert to your chosen notification system\"","breadcrumbs":"Prometheus » Example Notes","id":"186","title":"Example Notes"},"187":{"body":"Prometheus operates under the pull-based model. By sending http requests to the endpoint it is supposed to monitor, the target, it scrapes information from that target. The response to this scrape request is stored and parsed in storage along with the metrics for the scrape itself. Metrics have type and help attributes help: what the metrics is type: metrics type Prometheus pulls data from end-points — can easily detect whether a service is up and running e.g. if the end-point does not respond. Vs. push system — applications and servers are responsible for pushing data — this creates a high traffic load. In this case, you might be overloading your infrastructure. If Pormetheus is unable to access the labels of your application, an exporter can be used. There are a lot of different exporters. The exporter you use will depend on the type of application that you want to expose data of. Here is one overview from 2018 — the overview provided on the official documentation looks a bit more complex than this one. Note the direction of the arrows. Source: https://youtu.be/sYMTY-SciUQ Additional notes You define what targets it is supposed to scape and the interval it should scape at in the yaml file. Expects metrics to be available at \"/metrics\". Usually, the yaml file has a default job that can be overwritten with custom jobs. Prometheus has something called prompt ql — which is the prometheus query language. Grafana is used to pull data out of prometheus. By default Prometheus comes with a UI that can be opened on port 9090 Some labels are provided from the application itself and others come directly from the Prometheus server since it knows what it scrapes. Things can go wrong for a variety of reasons, such as Disk Full : No new data can be store Software Bug : Request Errors High Temperature : Hardware Failure Network Outage : Services cannot communicate Low Memory Utilization : Money wasted Note that you can integrate long-term storage with Prometheus but usually if a node fails, the data received until that point will be gone too. You can expose to Prometheus the following node endpoint service pod ingress Some of the downsides using Prometheus at scale \"It doesn’t allow global visibility. Several production clusters would make this a new requirement.\" \"Operations can be cumbersome and time consuming.\" \"This model can complicate governance and regulatory compliance\" \"Prometheus is not designed to be scaled horizontally. 
Once you hit the limit of vertical scaling, you’re done.\" Source","breadcrumbs":"Prometheus » How does it work?","id":"187","title":"How does it work?"},"188":{"body":"To learn more about Prometheus, this repository is really great https://github.com/yolossn/Prometheus-Basics Using Prometheus without Kubernetes on a node.js application https://github.com/csantanapr/prometheus-nodejs-tutorial Following this tutorial to set-up both Prometheus and Grafana https://grafana.com/docs/grafana-cloud/quickstart/prometheus_operator/ Prometheus is based on a Kubernetes Operator. Let's take a closer look at Kubernetes Operators. In short, they are used to develop tools and resources on top of Kubernetes. You can use Kubernetes Operators to create new custom resources. Installation The Prometheus operator is provided. A deployment, a required ClusterRole with associated ClusterRoleBinding and a ServiceAccount are defined. We can create all of those through the following tutorial. Blog post on using Prometheus and Grafana https://medium.com/devops-dudes/install-prometheus-on-ubuntu-18-04-a51602c6256b Install the Prometheus operator — I am going to use Helm because it is probably the most straightforward to use vs. any of the other options. helm repo add prometheus-community https://prometheus-community.github.io/helm-charts helm repo update helm install prom prometheus-community/kube-prometheus-stack To access the Prometheus UI, use the following kubectl port-forward prometheus-prometheus-prometheus-oper-prometheus-0 9090 Note that the prometheus might not be in namespace — if you have installed it in a namespace, make sure to specify the namespace. Now do the same with the Grafana Dashboard kubectl port-forward prometheus-grafana-5c5885d488-b9mlj 3000 Here again, the resource might or might not be in a namespace. To login, you have to get your password etc. kubectl get secret --namespace prometheus prometheus-grafana -o yaml username: admin password: prom-operator Note again that you might have to replace the values. Now you can access the metrics. Have a look at the following video for a detailed overview: https://youtu.be/QoDqxm7ybLc We want to monitor an application, which we are going to do by following this Readme: https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md To access the deployment in yaml format kubectl describe deployment prometheus-prometheus-oper-operator > oper.yaml","breadcrumbs":"Prometheus » Install","id":"188","title":"Install"},"189":{"body":"","breadcrumbs":"Prometheus Exporter » Prometheus Exporter","id":"189","title":"Prometheus Exporter"},"19":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"Kubernetes Pods » 100Days Resources","id":"19","title":"100Days Resources"},"190":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"Prometheus Exporter » 100Days Resources","id":"190","title":"100Days Resources"},"191":{"body":"Setup Prometheus Monitoring on Kubernetes using Helm and Prometheus Operator | Part 1","breadcrumbs":"Prometheus Exporter » Learning Resources","id":"191","title":"Learning Resources"},"192":{"body":"","breadcrumbs":"Prometheus Exporter » Example Notes","id":"192","title":"Example Notes"},"193":{"body":"First off, we are going to follow the commands provided in the previous day — Day 28 — on Prometheus to have our Helm Chart with all the Prometheus related resources installed. 
Day 28: What is Prometheus","breadcrumbs":"Prometheus Exporter » Install Prometheus Helm Chart with Operators etc.","id":"193","title":"Install Prometheus Helm Chart with Operators etc."},"194":{"body":"Prometheus uses ServiceMonitors to discover endpoints. You can get a list of all the ServiceMonitors through: kubectl get servicemonitor Now have a look at one of those ServiceMonitors: kubectl get servicemonitor prometheus-kube-prometheus-grafana -o yaml This will display the ServiceMonitor definition in pure YAML inside our terminal. Look through the YAML file and you will find a label called: \"release: prometheus\" This label defines the ServiceMonitors that Prometheus is supposed to scrape. Like mentioned in the Previous video, operators, such as the Prometheus Operator rely on CRD. Have a look at the CRDs that Prometheus uses through: kubectl get crd You can take a look at a specific one as follows. kubectl get prometheus.monitoring.coreos.com -o yaml","breadcrumbs":"Prometheus Exporter » Have a look at the Prometheus Resources to understand those better","id":"194","title":"Have a look at the Prometheus Resources to understand those better"},"195":{"body":"Now, we want to install a MongoDB image on our cluster and tell Prometheus to monitor it's endpoint. However, MongoDB is one of those images that relies on an exporter for its service to be visible to Prometheus. Think about it this way, Prometheus needs the help of and Exporter to know where MongoDB is in our cluster — like a pointer. You can learn more about those concepts in my previous videos Prometheus on Kubernetes: Day 28 of #100DaysOfKubernetes Kubernetes Operators: Day 29 of #100DaysOfKubernetes First off, we are going to install the MongoDB deployment and the MongoDB service; here is the YAML needed for this: deployment.yaml apiVersion: apps/v1\nkind: Deployment\nmetadata: name: mongodb-deployment labels: app: mongodb\nspec: replicas: 2 selector: matchLabels: app: mongodb template: metadata: labels: app: mongodb spec: containers: - name: mongodb image: mongo ports: - containerPort: 27017 service.yaml apiVersion: v1\nkind: Service\nmetadata: name: mongodb-service\nspec: selector: app: mongodb ports: - protocol: TCP port: 27017 targetPort: 27017 Use the follow to apply both to your cluster kubectl apply -f deployment.yaml\nkubectl apply -f service.yaml You can check that both are up and running through kubectl get all # or kubectl get service\nkubectl get pods Now we want to tell Prometheus to monitor that endpoint — for this, we are going to use the following Helm Chart https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-mongodb-exporter You can find a list of Prometheus exporters and integrations here: https://prometheus.io/docs/instrumenting/exporters/ Next, we are going to add the Helm Mongo DB exporter helm repo add prometheus-community https://prometheus-community.github.io/helm-charts helm show values prometheus-community/prometheus-mongodb-exporter > values.yaml We need to mondify the values provided in the values.yaml file as follows: mongodb: uri: \"mongodb://mongodb-service:27017\" serviceMonitor: enabled: true additionalLabels: release: prometheus Basically, replace the values.yaml file created in the helm show command above with this file. In this case, we are going to tell the helm chart the mongodb endpoint and then the additional label for the ServiceMonitor. 
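For intuition, the ServiceMonitor that this chart generates from those values looks roughly like the sketch below (a hedged approximation, not the chart's exact output; the selector label is assumed):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: mongodb-exporter                  # illustrative name
  labels:
    release: prometheus                   # the label our Prometheus instance selects ServiceMonitors by
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: prometheus-mongodb-exporter   # assumed label on the exporter's Service
  endpoints:
    - port: metrics                       # scrape the exporter's metrics port
      interval: 30s

The important part is the release: prometheus label, which is exactly what the additionalLabels value above sets.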
Next, let's install the Helm Chart and pass in the values.yaml helm install mongodb-exporter prometheus-community/prometheus-mongodb-exporter -f values.yaml You can see whether the chart got installed correctly through helm ls Now have a look at the pods and the services again to see whether everything is running correctly: kubectl get all # or kubectl get service\nkubectl get pods Lastly make sure that we have the new ServiceMonitor in our list: kubectl get servicemonitor Have a look at the prometheus label within the ServiceMonitor kubectl get servicemonitor mongodb-exporter-prometheus-mongodb-exporter -o yaml Now access the service of the mongodb-exporter to see whether it scrapes the metrics of our MongoDB properly: kubectl port-forward service/mongodb-exporter-prometheus-mongodb-exporter 9216 and open the Prometheus service: kubectl port-forward service/prometheus-kube-prometheus-prometheus 9090 On localhost:9090 , go to Status - Targets and you can see all of our endpoints that Prometheus currently knows about.","breadcrumbs":"Prometheus Exporter » Set-up MongoDB","id":"195","title":"Set-up MongoDB"},"196":{"body":"","breadcrumbs":"Kubernetes Operators » Kubernetes Operators","id":"196","title":"Kubernetes Operators"},"197":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"Kubernetes Operators » 100Days Resources","id":"197","title":"100Days Resources"},"198":{"body":"https://youtu.be/ha3LjlD6g7g https://www.redhat.com/en/topics/containers/what-is-a-kubernetes-operator https://kubernetes.io/docs/concepts/extend-kubernetes/operator/","breadcrumbs":"Kubernetes Operators » Learning Resources","id":"198","title":"Learning Resources"},"199":{"body":"There is so much great content around Kubernetes Operators, and I have mentioned it several times across the previous videos. However, we have not looked at Kubernetes Operators yet. Today, we are going to explore What are Kubernetes Operators How do Operators work Why are they important Operators are mainly used for Stateful Applications Operators are software extensions to Kubernetes that make use of custom resources to manage applications and their components. Operators follow Kubernetes principles, notably the control loop — Source","breadcrumbs":"Kubernetes Operators » Example Notes","id":"199","title":"Example Notes"},"2":{"body":"The notes in this book depend on the community to improve and become accurate. Everything detailed so far is someone's personal understanding at each point in time learning about the respective topics. Help us improve the notes.","breadcrumbs":"Introduction » Just a note of caution","id":"2","title":"Just a note of caution"},"20":{"body":"https://www.vmware.com/topics/glossary/content/kubernetes-pods https://cloud.google.com/kubernetes-engine/docs/concepts/pod https://kubernetes.io/docs/concepts/workloads/pods/","breadcrumbs":"Kubernetes Pods » Learning Resources","id":"20","title":"Learning Resources"},"200":{"body":"If you are deploying an application, you usually use Service Deployment ConfigMap Kubernetes knows what the desired state of our cluster is because we told it through our configuration files. It aims to match the actual state of our cluster to our desired state. Now for a stateful application the process is a bit more difficult — it does not allow Kubernetes to automatically scale etc. E.g. SQL databases are not identical replicas. There has to be a constant communication for the data to be consistent + other factors. 
Each database has its own workaround —> this makes it difficult to use Kubernetes to automate any workaround. A lot of times stateful applications will require manual intervention i.e. human operators. However, having to manually update resources in Kubernetes goes against its So there is a need for an alternative to manage stateful applications. This alternative is a Kubernetes Operator.","breadcrumbs":"Kubernetes Operators » Managing Stateful Applications without an operator vs. with an operator.","id":"200","title":"Managing Stateful Applications without an operator vs. with an operator."},"201":{"body":"Replace all the manual tasks that the human operator would do. It takes care of Deploying the app Creating replicas Ensuring recovery in case of failure With this, an Operator is basically an application-specific controller that extends the functionality of the Kubernetes API to take care of the management of complex applications. This is making tasks automated and reusable. Operators rely on a control loop mechanism. If one replica of our database dies, it will create a new one. If the image version of the database get updated, it will deploy the new version. Additionally, Operators rely on Kubernetes Custom Resource Definitions (CRDs). CRDs are custom resources created on top of Kubernetes. CRDs allow Operators to have specific knowledge of the application that they are supposed to manage. You can find a list of Kubernetes Operators in the Kubernetes Operator Hub https://operatorhub.io/ AND you can find several awesome operators in the wild https://github.com/operator-framework/awesome-operators 😄 Once you have created an operator, it will take high-level inputs and translate them into low level actions and tasks that are needed to manage the application. Once the application is deployed, the Operator will continue to monitor the application to ensure that it is running smoothly.","breadcrumbs":"Kubernetes Operators » What is an Operator?","id":"201","title":"What is an Operator?"},"202":{"body":"Those if the insights and know-how of the application that the operator is supposed to run. There is an Operator Framework that basically provides the building blocks that can be used to The Operator Framework includes: Operator SDK: Enables developers to build operators based on their expertise without requiring knowledge of Kubernetes API complexities. Operator Lifecycle Management: Oversees installation, updates, and management of the lifecycle of all of the operators running across a Kubernetes cluster. Operator Metering: Enables usage reporting for operators that provide specialized services. 
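To make this concrete, the high-level input an operator consumes is just a custom resource. As a hedged example (field values invented), the Prometheus Operator accepts a Prometheus object like the one below and translates it into the low-level workloads, Services, and configuration it manages:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: demo                       # illustrative name
spec:
  replicas: 2                      # the operator reconciles the underlying workload to 2 replicas
  serviceMonitorSelector:
    matchLabels:
      release: prometheus          # scrape targets are discovered via matching ServiceMonitors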
Some of the things that you can use an operator to automate include: deploying an application on demand taking and restoring backups of that application's state handling upgrades of the application code alongside related changes such as database schemas or extra configuration settings publishing a Service to applications that don't support Kubernetes APIs to discover them simulating failure in all or part of your cluster to test its resilience choosing a leader for a distributed application without an internal member election process","breadcrumbs":"Kubernetes Operators » Who is creating Operators?","id":"202","title":"Who is creating Operators?"},"203":{"body":"In the case of Prometheus, I would have to deploy several separate components to get Prometheus up and running, which is quite complex + deploying everything in the right order.","breadcrumbs":"Kubernetes Operators » Practical Example","id":"203","title":"Practical Example"},"204":{"body":"","breadcrumbs":"Extending Kubernetes » Extending k8s api like a pro.","id":"204","title":"Extending k8s api like a pro."},"205":{"body":"Video by Shahrooz Aghili","breadcrumbs":"Extending Kubernetes » Links","id":"205","title":"Links"},"206":{"body":"Extending k8s Custom Resources","breadcrumbs":"Extending Kubernetes » Learning Resources","id":"206","title":"Learning Resources"},"207":{"body":"Kubernetes is highly configurable and extensible. As a result, there is rarely a need to fork or submit patches to the Kubernetes project code. Extension Points Details","breadcrumbs":"Extending Kubernetes » Notes","id":"207","title":"Notes"},"208":{"body":"Declarative APIs Imperative APIs Your API consists of a relatively small number of relatively small objects (resources). The client says \"do this\", and then gets a synchronous response back when it is done. The objects define configuration of applications or infrastructure. The client says \"do this\", and then gets an operation ID back, and has to check a separate Operation object to determine completion of the request. The objects are updated relatively infrequently. You talk about Remote Procedure Calls (RPCs). Humans often need to read and write the objects. Directly storing large amounts of data; for example, > a few kB per object, or > 1000s of objects. The main operations on the objects are CRUD-y (creating, reading, updating and deleting). High bandwidth access (10s of requests per second sustained) needed. Transactions across objects are not required: the API represents a desired state, not an exact state. Store end-user data (such as images, PII, etc.) or other large-scale data processed by applications. The natural operations on the objects are not CRUD-y. The API is not easily modeled as objects. You chose to represent pending operations with an operation ID or an operation object. Kubernetes is designed to be automated by writing client programs. Any program that reads and/or writes to the Kubernetes API can provide useful automation. Automation can run on the cluster or off it. There is a specific pattern for writing client programs that work well with Kubernetes called the controller pattern. Controllers typically read an object's .spec, possibly do things, and then update the object's .status. kind create cluster let's start by using the custom resource definition api of kubernetes. 
let's create a custom resource called CronTab kubectl apply -f - << EOF\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata: # name must match the spec fields below, and be in the form: . name: crontabs.stable.example.com\nspec: # group name to use for REST API: /apis// group: stable.example.com # list of versions supported by this CustomResourceDefinition versions: - name: v1 # Each version can be enabled/disabled by Served flag. served: true # One and only one version must be marked as the storage version. storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: cronSpec: type: string image: type: string replicas: type: integer additionalPrinterColumns: - name: Spec type: string description: The cron spec defining the interval a CronJob is run jsonPath: .spec.cronSpec - name: Replicas type: integer description: The number of jobs launched by the CronJob jsonPath: .spec.replicas - name: Age type: date jsonPath: .metadata.creationTimestamp # either Namespaced or Cluster scope: Namespaced names: # plural name to be used in the URL: /apis/// plural: crontabs # singular name to be used as an alias on the CLI and for display singular: crontab # kind is normally the CamelCased singular type. Your resource manifests use this. kind: CronTab # shortNames allow shorter string to match your resource on the CLI shortNames: - ct\nEOF now our CronTab resource type is created. kubectl get crd A new namespaced RESTful API endpoint is created at: /apis/stable.example.com/v1/namespaces/*/crontabs/... Let's verify the k8s api extension by looking at the api server logs: kubectl -n kube-system logs -f kube-apiserver-kind-control-plane | grep example.com Now we can create custom objects of our new custom resource defintion. In the following example, the cronSpec and image custom fields are set in a custom object of kind CronTab. The kind CronTab comes from the spec of the CustomResourceDefinition object you created above. kubectl apply -f - << EOF\napiVersion: \"stable.example.com/v1\"\nkind: CronTab\nmetadata: name: my-new-cron-object\nspec: cronSpec: \"* * * * */5\" image: my-awesome-cron-image\nEOF kubectl get crontab Let's see how our object is being persisted at the etcd database of k8s. 
kubectl exec etcd-kind-control-plane -n kube-system -- sh -c \"ETCDCTL_API=3 etcdctl --cacert /etc/kubernetes/pki/etcd/ca.crt --key /etc/kubernetes/pki/etcd/server.key --cert /etc/kubernetes/pki/etcd/server.crt get / --prefix --keys-only\" | grep example.com kubectl exec etcd-kind-control-plane -n kube-system -- sh -c \"ETCDCTL_API=3 etcdctl --cacert /etc/kubernetes/pki/etcd/ca.crt --key /etc/kubernetes/pki/etcd/server.key --cert /etc/kubernetes/pki/etcd/server.crt get /registry/apiextensions.k8s.io/customresourcedefinitions/crontabs.stable.example.com --prefix -w json\" | jq \".kvs[0].value\" | cut -d '\"' -f2 | base64 --decode | yq > crd.yml kubectl exec etcd-kind-control-plane -n kube-system -- sh -c \"ETCDCTL_API=3 etcdctl --cacert /etc/kubernetes/pki/etcd/ca.crt --key /etc/kubernetes/pki/etcd/server.key --cert /etc/kubernetes/pki/etcd/server.crt get /registry/apiregistration.k8s.io/apiservices/v1.stable.example.com --prefix -w json\" | jq \".kvs[0].value\" | cut -d '\"' -f2 | base64 --decode | yq > api-registration.yml kubectl exec etcd-kind-control-plane -n kube-system -- sh -c \"ETCDCTL_API=3 etcdctl --cacert /etc/kubernetes/pki/etcd/ca.crt --key /etc/kubernetes/pki/etcd/server.key --cert /etc/kubernetes/pki/etcd/server.crt get /registry/stable.example.com/crontabs/default/my-new-cron-object --prefix -w json\" | jq \".kvs[0].value\" | cut -d '\"' -f2 | base64 --decode | yq > mycron.yml Delete custom resource kubectl delete CronTab my-new-cron-object //TODO: add some text why we need kubebuilder and what is an operator application and the relationship between a controller and an operator etc.","breadcrumbs":"Extending Kubernetes » 1. A Look into Custom Resource Definition (CRD) API","id":"208","title":"1. A Look into Custom Resource Definition (CRD) API"},"209":{"body":"let's start by installing kubebuilder # download kubebuilder and install locally.\ncurl -L -o kubebuilder \"https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)\"\nchmod +x kubebuilder && mv kubebuilder /usr/local/bin/ let's scaffold a kubebuilder application mkdir operator-tutorial\ncd operator-tutorial\nkubebuilder init --repo example.com let's have a closer look at the make file first. make targets are the commands that are used for different development lifecycle steps make help to run your kubebuilder application locally make run now let's have a look at the run target and all the prerequisite comamnds that need to run it looks something like this .PHONY: run\nrun: manifests generate fmt vet ## Run a controller from your host. go run ./cmd/main.go so the targets that need to run before we can run our applications are manifests and generate which both have controller-gen as prerequisite and generate some golang code and yaml manifests the code is formatted by fmt validated by vet run will run the go application by refering to the application entrypoint at ./cmd/main.go","breadcrumbs":"Extending Kubernetes » 2. Install Kubebuilder and Create a New Project","id":"209","title":"2. Install Kubebuilder and Create a New Project"},"21":{"body":"Overview Pods are the smallest unit in a Kubernetes cluster; which encompass one or more application ⇒ it represents processes running on a cluster. Pods are used to manage your application instance. Here is a quick summary of what a Pod is and its responsibilities: In our nodes, and within our Kubernetes cluster, the smallest unit that we can work with are pods. Containers are part of a larger object, which is the pod. 
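For reference (the name and image below are only illustrative), the smallest useful Pod manifest looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod            # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25      # any container image
      ports:
        - containerPort: 80
# Create and inspect it with:
#   kubectl apply -f pod.yaml
#   kubectl get pod hello-pod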
We can have one or multiple containers within a pod. Each container within a pod share an IP address, storage and namespace — each container usually has a distinct role inside the pod. Note that pods usually operate on a higher level than containers; they are more of an abstraction of the processes within a container than the container itself. A pod can also run multiple containers; all containers are started in parallel ⇒ this makes it difficult to know which process started before another. Usually, one pod is used per container process; reasons to run two containers within a pod might be logging purposes. initContainers can be used to ensure some containers are ready before others in a pod. To support a single process running in a container, you may need logging, a proxy, or special adapter. These tasks are often handled by other containers in the same Pod. Usually each pod has one IP address. You may find the term sidecar for a container dedicated to performing a helper task, like handling logs and responding to requests, as the primary application container may have this ability. Running multiple containers in one pod An example for running multiple containers within a pod would be an app server pod that contains three separate containers: the app server itself, a monitoring adapter, and a logging adapter. Resulting, all containers combines will provide one service. In this case, the logging and monitoring container should be shared across all projects within the organisation. Replica sets Each pod is supposed to run a single instance of an application. If you want to scale your application horizontally, you can create multiple instance of that pod. It is usually not recommended to create pods manually but instead use multiple instances of the same application; these are then identical pods, called replicas. Such a set of replicated Pods are created and managed by a controller, such as a Deployment. Connection All the pods in a cluster are connected. Pods can communicate through their unique IP address. If there are more containers within one pod, they can communicate over localhost. Pods are not forever Pods are not \"forever\"; instead, they easily die in case of machine failure or have to be terminated for machine maintenance. When a pod fails, Kubernetes automatically (unless specified otherwise) spins it up again. Additionally, a controller can be used to ensure that the pod is \"automatically\" healing. In this case, the controlled will monitor the stat of the pod; in case the desired state does not fit the actual state; it will ensure that the actual state is moved back towards the desired state. It is considered good practice to have one process per pod; this allows for easier analysis and debugging. Each pod has: a unique IP address (which allows them to communicate with each other) persistent storage volumes (as required) (more on this later on another day) configuration information that determine how a container should run Pod lifecycle (copied from Google) Each Pod has a PodStatus API object, which is represented by a Pod's status field. Pods publish their phase to the status: phase field. The phase of a Pod is a high-level summary of the Pod in its current state. When you run kubectl get pod Link to inspect a Pod running on your cluster, a Pod can be in one of the following possible phases: Pending: Pod has been created and accepted by the cluster, but one or more of its containers are not yet running. 
This phase includes time spent being scheduled on a node and downloading images. Running: Pod has been bound to a node, and all of the containers have been created. At least one container is running, is in the process of starting, or is restarting. Succeeded: All containers in the Pod have terminated successfully. Terminated Pods do not restart. Failed: All containers in the Pod have terminated, and at least one container has terminated in failure. A container \"fails\" if it exits with a non-zero status. Unknown: The state of the Pod cannot be determined. Limits Pods by themselves do not have a memory or CPU limit. However, you can set limits to control the amount of CPU or memory your Pod can use on a node. A limit is the maximum amount of CPU or memory that Kubernetes guarantees to a Pod. Termination Once the process of the pod is completed, it will terminate. Alternatively, you can also delete a pod.","breadcrumbs":"Kubernetes Pods » Example Notes","id":"21","title":"Example Notes"},"210":{"body":"Let's imagine we are a working at company where our colleagues are heavy users of the ghost blogging application. Our job is to provide them with ghost instances whenever and whereever they want it. We are infra gurus and through years of experience have learned that building an automation for such a task can save us a lot of toil and manual labor. Our operator will take care of the following: create a new instance of the ghost application as a website in our cluster if our cluster doesn't have it already update our ghost application when our ghost application custom resource is updated. delete the ghost application upon request Kubebuilder provides a command that allows us to create a custom resource and a process that keeps maintaing (reconciling) that resouce. If we choose to create a new resouces (let's call it Ghost) kubebuilder will create a blog controller for it automatically. If we want to attach our own controllers to the exisiting k8s resources say Pods that's posssible too! :D kubebuilder create api \\ --kind Ghost \\ --group blog \\ --version v1 \\ --resource true \\ --controller true At this stage, Kubebuilder has wired up two key components for your operator: A Resource in the form of a Custom Resource Definition (CRD) with the kind Ghost. A Controller that runs each time a Ghost CRD is create, changed, or deleted. The command we ran added a Golang representation of the Ghost Custom Resource Definition (CRD) to our operator scaffolding code. To view this code, navigate to your Code editor tab under api > v1 > ghost_types.go. Let's have a look at the type GhostSpec struct. This is the code definition of the Kubernetes object spec. This spec contains a field named foo which is defined in api/v1/ghost_types.go:32. There is even a helpful comment above the field describing the use of foo. now let's see how kubebuilder can generate a yaml file for our Custom Resource Definition make manifests you will find the generated crd at config/crd/bases/blog.example.com_ghosts.yaml see how kubebuilder did all the heavylifting we had to previously do for the crontab example! lovely! let's notice the difference by looking at our kubernetes crds kubectl get crds now let's install the crd we generated onto the cluster make install and run the get the crds again kubectl get crds","breadcrumbs":"Extending Kubernetes » 3. Create Our First API","id":"210","title":"3. 
Create Our First API"},"211":{"body":"When you selected to create a operator along with the Ghost Resource, Kubebuilder took care of some key setup: Starts the operator process during application boot Implements a custom Reconcile function to run on each Ghost resource event Configures the operator to know which resource events to listen to To see the start process, navigate to cmd/main.go:125. You will see a section that starts the ghost operator: if err = (&controllers.WebsiteReconciler{ Client: mgr.GetClient(), Scheme: mgr.GetScheme(),\n}).SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"Website\") os.Exit(1)\n} This is a call to the function SetupWithManager(mgr) defined in the file internal/controller/ghost_controller.go. Navigate to internal/controller/ghost_controller.go:58 to view this function. It is already configured to know about the CRD api/v1/ghost_types.go or the generated yaml represenation at crd/bases/blog.example.com_ghosts. The most important function inside the controller is the Reconcile function internal/controller/ghost_controller.go:49. Reconcile is part of the main kubernetes reconciliation loop which aims to move the current state of the cluster closer to the desired state. It is triggered anytime we change the cluster state related to our custom resource internal/controller/ghost_controller.go:49.","breadcrumbs":"Extending Kubernetes » 4. A look into kubebuilder setup","id":"211","title":"4. A look into kubebuilder setup"},"212":{"body":"let's add some logs to the reconcile function and run the operator application and change the state of the cluster. let's paste this code into the Reconcile function. log := log.FromContext(ctx)\nlog.Info(\"Reconciling Ghost\")\nlog.Info(\"Reconciliation complete\")\nreturn ctrl.Result{}, nil and run the application make run next we need to modify the generated custom resource yaml file navigate to config/samples/blog_v1_ghost.yaml and add a foo: bar under spec. The custom resource should look like apiVersion: blog.example.com/v1\nkind: Ghost\nmetadata: name: ghost-sample\nspec: foo: bar don't forget to save the file. Now in other terminal window, let's apply it on the cluster. kubectl apply -f config/samples/blog_v1_ghost.yaml Tada! checkout the logs showing up! INFO Reconciling Ghost\nINFO Reconciliation complete now let's try deleting the resource. kubectl delete -f config/samples/blog_v1_ghost.yaml Same logs showed up again. So basically anytime you interact with your Ghost resource a new event is triggered and your controller will print the logs.","breadcrumbs":"Extending Kubernetes » 5. Add Some Logging to the Reconcile Function","id":"212","title":"5. Add Some Logging to the Reconcile Function"},"213":{"body":"Now let us replace the default GhostSpec with a meaningful declartion of our desired state. Meaning we want our custom resource reflect the desired state for our Ghost application. replace GhostSpec api/v1/ghost_types.go:27 with the following snippet type GhostSpec struct { //+kubebuilder:validation:Pattern=`^[-a-z0-9]*$` ImageTag string `json:\"imageTag\"`\n} This code has three key parts: //+kubebuilder is a comment prefix that will trigger kubebuilder generation changes. In this case, it will set a validation of the ImageTag value to only allow dashes, lowercase letters, or digits. The ImageTag is the Golang variable used throughout the codebase. Golang uses capitalized public variable names by convention. 
json:\"imageTag\" defines a \"tag\" that Kubebuilder uses to generate the YAML field. Yaml parameters starts with lower case variable names by convention. If omitempty is used in a json tag, that field will be marked as optional, otherwise as mandatory. Before we generete the new crd and install them on the cluster let's do the following, let's have a look at the existing crd kubectl get crd ghosts.blog.example.com --output jsonpath=\"{.spec.versions[0].schema['openAPIV3Schema'].properties.spec.properties}{\\\"\\n\\\"}\" | jq the output should be like { \"foo\": { \"description\": \"Foo is an example field of Ghost. Edit ghost_types.go to remove/update\", \"type\": \"string\" }\n} now, let us install the new crd make install and see the changes kubectl get crd ghosts.blog.example.com --output jsonpath=\"{.spec.versions[0].schema['openAPIV3Schema'].properties.spec.properties}{\\\"\\n\\\"}\" | jq the output should be { \"imageTag\": { \"pattern\": \"^[-a-z0-9]*$\", \"type\": \"string\" }\n}","breadcrumbs":"Extending Kubernetes » 6. Implement the Desired State of the Ghost Operator","id":"213","title":"6. Implement the Desired State of the Ghost Operator"},"214":{"body":"now let's try to access our custom resource in the reconcile function. first off, let us reflect our new fields in our cutom resource. let us replace config/samples/blog_v1_ghost.yaml with the following apiVersion: blog.example.com/v1\nkind: Ghost\nmetadata: name: ghost-sample namespace: marketing\nspec: imageTag: latest kubectl create namespace marketing\nkubectl apply -f config/samples/blog_v1_ghost.yaml next, let us replace the reconcile code with the following snippet: log := log.FromContext(ctx)\nghost := &blogv1.Ghost{}\nif err := r.Get(ctx, req.NamespacedName, ghost); err != nil { log.Error(err, \"Failed to get Ghost\") return ctrl.Result{}, client.IgnoreNotFound(err)\n} log.Info(\"Reconciling Ghost\", \"imageTag\", ghost.Spec.ImageTag, \"team\", ghost.ObjectMeta.Namespace)\nlog.Info(\"Reconciliation complete\")\nreturn ctrl.Result{}, nil let us anlyze the above snippet line by line. line 1 assings a logger instance to the variable log variable. line 2 creates an instance of our Ghost data structure. line 3 tries to read a ghost instance from the reconciler client. Please note that the r which is a reference to the GhostReconciler has a k8s client interface and that interface which implements the Get method which is an equivalent golang implementation of the kubectl get. on succesful Get the resouce will be written to our ghost variable. in case of error, client logs the error. if the error is of type (not found) the controller won't return an error. 
error not found will happen if we run kubectl delete -f config/samples/blog_v1_ghost.yaml now we can start our application again: make run so far our reconcile function is not run yet but if we apply our custom resource in another terminal window: kubectl apply -f config/crd/samples/blog_v1_ghost.yaml we start to see the logs of our reconcile function INFO Reconciling Ghost {\"controller\": \"ghost\", \"controllerGroup\": \"blog.example.com\", \"controllerKind\": \"Ghost\", \"Ghost\": {\"name\":\"ghost-sample\",\"namespace\":\"marketing\"}, \"namespace\": \"marketing\", \"name\": \"ghost-sample\", \"reconcileID\": \"9faf1c4f-6dcf-42d5-9f16-fbebb453b4ed\", \"imageTag\": \"latest\", \"team\": \"marketing\"}\n2024-04-29T15:54:05+02:00 INFO Reconciliation complete {\"controller\": \"ghost\", \"controllerGroup\": \"blog.example.com\", \"controllerKind\": \"Ghost\", \"Ghost\": {\"name\":\"ghost-sample\",\"namespace\":\"marketing\"}, \"namespace\": \"marketing\", \"name\": \"ghost-sample\", \"reconcileID\": \"9faf1c4f-6dcf-42d5-9f16-fbebb453b4ed\"} cool! next stop, we will implement the actual controller logic for our ghost operator.","breadcrumbs":"Extending Kubernetes » 7. Access the Custom Resource Inside the Reconcile Function","id":"214","title":"7. Access the Custom Resource Inside the Reconcile Function"},"215":{"body":"Before we start coding the ghost operator, we need to know what resources we need in order to deploy ghost to our cluster. let's consult the docker hub page for ghost. https://hub.docker.com/_/ghost As we would like to persist ghost data to a persistent volume, we can try to convert this docker command to a k8s deployment. docker run -d \\ --name some-ghost \\ -e NODE_ENV=development \\ -e database__connection__filename='/var/lib/ghost/content/data/ghost.db' \\ -p 3001:2368 \\ -v some-ghost-data:/var/lib/ghost/content \\ ghost:alpine The deployment would look something like apiVersion: apps/v1\nkind: Deployment\nmetadata: name: ghost-deployment\nspec: replicas: 1 # You can adjust the number of replicas as needed selector: matchLabels: app: ghost template: metadata: labels: app: ghost spec: containers: - name: ghost image: ghost:alpine env: - name: NODE_ENV value: development - name: database__connection__filename value: /var/lib/ghost/content/data/ghost.db ports: - containerPort: 2368 volumeMounts: - name: ghost-data mountPath: /var/lib/ghost/content volumes: - name: ghost-data persistentVolumeClaim: claimName: ghost-data-pvc # Define your PVC or use an existing one As you can see this deployment expects an existing persistent volume claim called ghost-data-pvc We can define it with this yaml: apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata: name: ghost-data-pvc\nspec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi In our operator, each team's ghost instance will be deployed to the team's corresponding namespace. Let us try to code the pvc provisiong into our controller. For that we need to copy the following snippet to our controller. internal/controller/ghost_controller.go func (r *GhostReconciler) addPvcIfNotExists(ctx context.Context, ghost *blogv1.Ghost) error { log := log.FromContext(ctx) pvc := &corev1.PersistentVolumeClaim{} team := ghost.ObjectMeta.Namespace pvcName := pvcNamePrefix + team err := r.Get(ctx, client.ObjectKey{Namespace: ghost.ObjectMeta.Namespace, Name: pvcName}, pvc) if err == nil { // PVC exists, we are done here! 
return nil } // PVC does not exist, create it desiredPVC := generateDesiredPVC(ghost, pvcName) if err := controllerutil.SetControllerReference(ghost, desiredPVC, r.Scheme); err != nil { return err } if err := r.Create(ctx, desiredPVC); err != nil { return err } r.recoder.Event(ghost, corev1.EventTypeNormal, \"PVCReady\", \"PVC created successfully\") log.Info(\"PVC created\", \"pvc\", pvcName) return nil\n} func generateDesiredPVC(ghost *blogv1.Ghost, pvcName string) *corev1.PersistentVolumeClaim { return &corev1.PersistentVolumeClaim{ ObjectMeta: metav1.ObjectMeta{ Name: pvcName, Namespace: ghost.ObjectMeta.Namespace, }, Spec: corev1.PersistentVolumeClaimSpec{ AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce}, Resources: corev1.VolumeResourceRequirements{ Requests: corev1.ResourceList{ corev1.ResourceStorage: resource.MustParse(\"1Gi\"), }, }, }, }\n} Let's also add const pvcNamePrefix = \"ghost-data-pvc-\"\nconst deploymentNamePrefix = \"ghost-deployment-\"\nconst svcNamePrefix = \"ghost-service-\" right after our GhostReconciler struct (around line 40). The addPvcIfNotExists function checks whether the PVC already exists and, if not, creates it in the right namespace.","breadcrumbs":"Extending Kubernetes » 8. Implement the Ghost Operator Logic, Part 1 - PVC","id":"215","title":"8. Implement the Ghost Operator Logic, Part 1 - PVC"},"216":{"body":"Next, we need to specify the Kubebuilder markers for RBAC. After we created our API, three markers are generated by default. //+kubebuilder:rbac:groups=blog.example.com,resources=ghosts,verbs=get;list;watch;create;update;patch;delete\n//+kubebuilder:rbac:groups=blog.example.com,resources=ghosts/status,verbs=get;update;patch\n//+kubebuilder:rbac:groups=blog.example.com,resources=ghosts/finalizers,verbs=update These markers with the //+kubebuilder prefix are picked up by make manifests, which generates a ClusterRole manifest and assigns it to the operator manager application. When we CRUD other APIs such as Deployments, Services, and PersistentVolumeClaims, we need to add the related markers, otherwise our operator will be unauthorized to perform those operations. In the case of our operator, we need to add the following markers right below the default ones at internal/controller/ghost_controller.go. //+kubebuilder:rbac:groups=blog.example.com,resources=ghosts/events,verbs=get;list;watch;create;update;patch\n//+kubebuilder:rbac:groups=\"\",resources=persistentvolumeclaims,verbs=get;list;watch;create;update;patch;delete\n//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete\n//+kubebuilder:rbac:groups=\"\",resources=services,verbs=get;list;watch;create;update;patch;delete Please note that the first one is needed when we later introduce a function to persist operator events in the ghost resource. To generate the RBAC manifests, we can run make manifests The manifest for the manager ClusterRole will be generated at config/rbac/role.yaml","breadcrumbs":"Extending Kubernetes » 9. Implement the Ghost Operator Logic, Part 2 - RBAC","id":"216","title":"9. Implement the Ghost Operator Logic, Part 2 - RBAC"},"217":{"body":"Next, we add the deployment create and update logic to our controller. For that, we copy the following snippet into our controller. The logic is very similar to the previous snippet. 
However there is one key difference and that is that addOrUpdateDeployment can also update a deployment in case the deployed imageTag for the ghost image is different from the one coming from the ghost.Spec aka. desired state. func (r *GhostReconciler) addOrUpdateDeployment(ctx context.Context, ghost *blogv1.Ghost) error { log := log.FromContext(ctx) deploymentList := &appsv1.DeploymentList{} labelSelector := labels.Set{\"app\": \"ghost-\" + ghost.ObjectMeta.Namespace} err := r.List(ctx, deploymentList, &client.ListOptions{ Namespace: ghost.ObjectMeta.Namespace, LabelSelector: labelSelector.AsSelector(), }) if err != nil { return err } if len(deploymentList.Items) > 0 { // Deployment exists, update it existingDeployment := &deploymentList.Items[0] // Assuming only one deployment exists desiredDeployment := generateDesiredDeployment(ghost) // Compare relevant fields to determine if an update is needed if existingDeployment.Spec.Template.Spec.Containers[0].Image != desiredDeployment.Spec.Template.Spec.Containers[0].Image { // Fields have changed, update the deployment existingDeployment.Spec = desiredDeployment.Spec if err := r.Update(ctx, existingDeployment); err != nil { return err } log.Info(\"Deployment updated\", \"deployment\", existingDeployment.Name) r.recoder.Event(ghost, corev1.EventTypeNormal, \"DeploymentUpdated\", \"Deployment updated successfully\") } else { log.Info(\"Deployment is up to date, no action required\", \"deployment\", existingDeployment.Name) } return nil } // Deployment does not exist, create it desiredDeployment := generateDesiredDeployment(ghost) if err := controllerutil.SetControllerReference(ghost, desiredDeployment, r.Scheme); err != nil { return err } if err := r.Create(ctx, desiredDeployment); err != nil { return err } r.recoder.Event(ghost, corev1.EventTypeNormal, \"DeploymentCreated\", \"Deployment created successfully\") log.Info(\"Deployment created\", \"team\", ghost.ObjectMeta.Namespace) return nil\n} func generateDesiredDeployment(ghost *blogv1.Ghost) *appsv1.Deployment { replicas := int32(1) // Adjust replica count as needed return &appsv1.Deployment{ ObjectMeta: metav1.ObjectMeta{ GenerateName: deploymentNamePrefix, Namespace: ghost.ObjectMeta.Namespace, Labels: map[string]string{ \"app\": \"ghost-\" + ghost.ObjectMeta.Namespace, }, }, Spec: appsv1.DeploymentSpec{ Replicas: &replicas, Selector: &metav1.LabelSelector{ MatchLabels: map[string]string{ \"app\": \"ghost-\" + ghost.ObjectMeta.Namespace, }, }, Template: corev1.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: map[string]string{ \"app\": \"ghost-\" + ghost.ObjectMeta.Namespace, }, }, Spec: corev1.PodSpec{ Containers: []corev1.Container{ { Name: \"ghost\", Image: \"ghost:\" + ghost.Spec.ImageTag, Env: []corev1.EnvVar{ { Name: \"NODE_ENV\", Value: \"development\", }, { Name: \"database__connection__filename\", Value: \"/var/lib/ghost/content/data/ghost.db\", }, }, Ports: []corev1.ContainerPort{ { ContainerPort: 2368, }, }, VolumeMounts: []corev1.VolumeMount{ { Name: \"ghost-data\", MountPath: \"/var/lib/ghost/content\", }, }, }, }, Volumes: []corev1.Volume{ { Name: \"ghost-data\", VolumeSource: corev1.VolumeSource{ PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ ClaimName: \"ghost-data-pvc-\" + ghost.ObjectMeta.Namespace, }, }, }, }, }, }, }, }\n} Let's make sure apps/v1 import statement is added to the import section. appsv1 \"k8s.io/api/apps/v1\"","breadcrumbs":"Extending Kubernetes » 10. 
Implement the Ghost Operator Logic, Part 3 - Deployment","id":"217","title":"10. Implement the Ghost Operator Logic, Part 3 - Deployment"},"218":{"body":"And Lastly we need to add a service for our deployment. For now let's choose a service of type NodePort apiVersion: v1\nkind: Service\nmetadata: name: ghost-service\nspec: type: NodePort ports: - port: 80 # Exposed port on the service targetPort: 2368 # Port your application is listening on inside the pod nodePort: 30001 # NodePort to access the service externally selector: app: ghost Next, we need to implement a go funtion that creates such service for us. func (r *GhostReconciler) addServiceIfNotExists(ctx context.Context, ghost *blogv1.Ghost) error { log := log.FromContext(ctx) service := &corev1.Service{} err := r.Get(ctx, client.ObjectKey{Namespace: ghost.ObjectMeta.Namespace, Name: svcNamePrefix + ghost.ObjectMeta.Namespace}, service) if err != nil && client.IgnoreNotFound(err) != nil { return err } if err == nil { // Service exists return nil } // Service does not exist, create it desiredService := generateDesiredService(ghost) if err := controllerutil.SetControllerReference(ghost, desiredService, r.Scheme); err != nil { return err } // Service does not exist, create it if err := r.Create(ctx, desiredService); err != nil { return err } r.recoder.Event(ghost, corev1.EventTypeNormal, \"ServiceCreated\", \"Service created successfully\") log.Info(\"Service created\", \"service\", desiredService.Name) return nil\n} func generateDesiredService(ghost *blogv1.Ghost) *corev1.Service { return &corev1.Service{ ObjectMeta: metav1.ObjectMeta{ Name: \"ghost-service-\" + ghost.ObjectMeta.Namespace, Namespace: ghost.ObjectMeta.Namespace, }, Spec: corev1.ServiceSpec{ Type: corev1.ServiceTypeNodePort, Ports: []corev1.ServicePort{ { Port: 80, TargetPort: intstr.FromInt(2368), NodePort: 30001, }, }, Selector: map[string]string{ \"app\": \"ghost-\" + ghost.ObjectMeta.Namespace, }, }, }\n}","breadcrumbs":"Extending Kubernetes » 11. Implement the Ghost Operator Logic, Part 4 - Service","id":"218","title":"11. Implement the Ghost Operator Logic, Part 4 - Service"},"219":{"body":"Next we need to call our function in our reconcile function. We start by calling the functions we added one by one. In case there is an error we update the status of our ghost deployment. For that, we need to make a couple of adjustments first. First we replace GhostStatus in api/v1/ghost_types.go with the following type GhostStatus struct { Conditions []metav1.Condition `json:\"conditions,omitempty\"`\n} and we add two helper functions to our controller. 
internal/controller/ghost_controller.go // Function to add a condition to the GhostStatus\nfunc addCondition(status *blogv1.GhostStatus, condType string, statusType metav1.ConditionStatus, reason, message string) { for i, existingCondition := range status.Conditions { if existingCondition.Type == condType { // Condition already exists, update it status.Conditions[i].Status = statusType status.Conditions[i].Reason = reason status.Conditions[i].Message = message status.Conditions[i].LastTransitionTime = metav1.Now() return } } // Condition does not exist, add it condition := metav1.Condition{ Type: condType, Status: statusType, Reason: reason, Message: message, LastTransitionTime: metav1.Now(), } status.Conditions = append(status.Conditions, condition)\n} // Function to update the status of the Ghost object\nfunc (r *GhostReconciler) updateStatus(ctx context.Context, ghost *blogv1.Ghost) error { // Update the status of the Ghost object if err := r.Status().Update(ctx, ghost); err != nil { return err } return nil\n} And finally our reconcile function should be replaced with the following snippet. func (r *GhostReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { log := log.FromContext(ctx) ghost := &blogv1.Ghost{} if err := r.Get(ctx, req.NamespacedName, ghost); err != nil { log.Error(err, \"Failed to get Ghost\") return ctrl.Result{}, client.IgnoreNotFound(err) } // Initialize completion status flags // Add or update the namespace first pvcReady := false deploymentReady := false serviceReady := false log.Info(\"Reconciling Ghost\", \"imageTag\", ghost.Spec.ImageTag, \"team\", ghost.ObjectMeta.Namespace) // Add or update PVC if err := r.addPvcIfNotExists(ctx, ghost); err != nil { log.Error(err, \"Failed to add PVC for Ghost\") addCondition(&ghost.Status, \"PVCNotReady\", metav1.ConditionFalse, \"PVCNotReady\", \"Failed to add PVC for Ghost\") return ctrl.Result{}, err } else { pvcReady = true } // Add or update Deployment if err := r.addOrUpdateDeployment(ctx, ghost); err != nil { log.Error(err, \"Failed to add or update Deployment for Ghost\") addCondition(&ghost.Status, \"DeploymentNotReady\", metav1.ConditionFalse, \"DeploymentNotReady\", \"Failed to add or update Deployment for Ghost\") return ctrl.Result{}, err } else { deploymentReady = true } // Add or update Service if err := r.addServiceIfNotExists(ctx, ghost); err != nil { log.Error(err, \"Failed to add Service for Ghost\") addCondition(&ghost.Status, \"ServiceNotReady\", metav1.ConditionFalse, \"ServiceNotReady\", \"Failed to add Service for Ghost\") return ctrl.Result{}, err } else { serviceReady = true } // Check if all subresources are ready if pvcReady && deploymentReady && serviceReady { // Add your desired condition when all subresources are ready addCondition(&ghost.Status, \"GhostReady\", metav1.ConditionTrue, \"AllSubresourcesReady\", \"All subresources are ready\") } log.Info(\"Reconciliation complete\") if err := r.updateStatus(ctx, ghost); err != nil { log.Error(err, \"Failed to update Ghost status\") return ctrl.Result{}, err } return ctrl.Result{}, nil\n} now, let us run our operator application. before we do that let's make sure we are starting from scratch. kubectl delete namespace marketing make run we can see the logs and see that our operator application is up and running, in another termainl we create a ghost resource. 
kubectl create namespace marketing\nkubectl apply -f config/samples/blog_v1_ghost.yaml We start to see our reconciliation logs showing up and our subresources being created. We can inspect them by running k9s. We can perform a port-forward on the service to see our Ghost application in a browser. Let's have a look at our ghost resource as well. kubectl describe -n marketing ghosts.blog.example.com ghost-sample","breadcrumbs":"Extending Kubernetes » 12. Implement the Final Logic of the Reconcile Function","id":"219","title":"12. Implement the Final Logic of the Reconcile Function"},"22":{"body":"","breadcrumbs":"Setup Kubernetes Cluster » Setup your first Kubernetes Cluster","id":"22","title":"Setup your first Kubernetes Cluster"},"220":{"body":"Let us perform an update on our resource and use the alpine image tag instead of latest. So, let us replace config/samples/blog_v1_ghost.yaml with the following and apply it. apiVersion: blog.example.com/v1\nkind: Ghost\nmetadata: name: ghost-sample namespace: marketing\nspec: imageTag: alpine kubectl apply -f config/samples/blog_v1_ghost.yaml We can see that our deployment subresource is being updated and the update logs are showing up in the console. We can confirm this by inspecting the deployment in k9s.","breadcrumbs":"Extending Kubernetes » 13. Update the Ghost Resource","id":"220","title":"13. Update the Ghost Resource"},"221":{"body":"If we perform a delete operation on our resource, all the subresources will be deleted too, as we set their owner to be the ghost resource. Please notice the controllerutil.SetControllerReference usage before creating the subresources. Let us perform the delete and see the effect. kubectl delete ghosts.blog.example.com -n marketing ghost-sample We can see that all the subresources are deleted. kubectl get all -n marketing","breadcrumbs":"Extending Kubernetes » 14. Deleting the ghost resource","id":"221","title":"14. Deleting the ghost resource"},"222":{"body":"Your operator is an application, so it needs to be packaged as an OCI-compliant container image just like any other container you want to deploy. We need to run the right make commands to build our OCI image and then deploy it. Build # please use your own tag here! :D export IMG=c8n.io/aghilish/ghost-operator:latest\nmake docker-build Push make docker-push Deploy make deploy Undeploy make undeploy And we can look around and inspect the logs of our manager while we perform CRUD operations with our ghost API. kubectl get all -n ghost-operator-system","breadcrumbs":"Extending Kubernetes » 15. Deploy Ghost Operator to the Cluster","id":"222","title":"15. Deploy Ghost Operator to the Cluster"},"223":{"body":"Being able to run our operator application in debug mode is definitely a nice thing. Fortunately, we can easily do this in VS Code. Let's click on \"create a launch.json file\" in the Run and Debug view. Next, we select Go and then Go: Launch Package. In the generated JSON file, we need to adjust the program argument and set it to the main.go file of our application, which is at cmd/main.go. { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 \"version\": \"0.2.0\", \"configurations\": [ { \"name\": \"Launch Package\", \"type\": \"go\", \"request\": \"launch\", \"mode\": \"auto\", \"program\": \"${fileDirname}/cmd/main.go\" } ]\n}","breadcrumbs":"Extending Kubernetes » 16. [Bonus] Setup VSCode Debugger","id":"223","title":"16. 
[Bonus] Setup VSCode Debugger"},"224":{"body":"","breadcrumbs":"Serverless » Serverless","id":"224","title":"Serverless"},"225":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"Serverless » 100Days Resources","id":"225","title":"100Days Resources"},"226":{"body":"TODO","breadcrumbs":"Serverless » Learning Resources","id":"226","title":"Learning Resources"},"227":{"body":"There is no such thing as serverless, there is always a server How do we think about Serverless? In the cloud, functions written in JavaScript, responding to events/triggers ⇒ a highly narrow view Why would you want Serverless? You want to try out cool things You want to stop over- or under-provisioning of your infrastructure","breadcrumbs":"Serverless » Example Notes","id":"227","title":"Example Notes"},"228":{"body":"Serverless: Popular operating model — infrastructure requirements are provisioned just before the Serverless workload is executed — the infrastructure resources are only needed during the execution, so there is no need to keep the infrastructure up during low to no usage. The serverless platform will likely run on containers. No server management is necessary You only pay for the execution time They can scale to 0 = no costs to keep the infrastructure up Serverless functions are stateless, which promotes scalability Auto-scalability ⇒ usually taken care of by the cloud provider Reduced operational costs It does not mean that there are no servers involved. Instead, it is the process of ensuring that the infrastructure is provisioned automatically. In this case, the developer is less involved in managing infrastructure. FaaS is a compute platform for your service — it essentially processes your code. These functions are called by events; an event in this case is any trigger. Within Serverless, cloud providers provision an event-driven architecture. When you build your app, you look for those events and you add them to your code. Issues with serverless Functions/processes of your code might not finish before the infrastructure is scaled down Latency issues, e.g. because of cold starts Moving from IaaS to PaaS to FaaS Two distinctions Not using servers FaaS (Function as a Service): essentially small pieces of code that are run in the cloud on stateless containers One is about hiding operations from us Serverless options Google: Google Cloud Functions AWS: Lambda For Kubernetes: Knative What is a cold start? Serverless functions can be really slow the first time that they start. This can be a problem. These articles offer some comparisons https://mikhail.io/serverless/coldstarts/aws/ https://mikhail.io/serverless/coldstarts/big3/ Discussion https://youtu.be/AuMeockiuLs AWS has something called Provisioned Concurrency that allows you to keep x functions always on and ready. 
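To make the FaaS idea concrete, here is a minimal sketch of a function handler in Go (my own illustration, using the aws-lambda-go library; the Event type and its name field are made up for the example):
package main

import (
    \"context\"
    \"fmt\"

    \"github.com/aws/aws-lambda-go/lambda\"
)

// Event is the JSON payload the trigger delivers to the function.
type Event struct {
    Name string `json:\"name\"`
}

// HandleRequest runs once per invocation; between invocations the platform
// may scale the function down to zero, which is where cold starts come from.
func HandleRequest(ctx context.Context, event Event) (string, error) {
    return fmt.Sprintf(\"Hello, %s!\", event.Name), nil
}

func main() {
    lambda.Start(HandleRequest)
}
The same shape applies to Google Cloud Functions or Knative: a small, stateless handler that the platform wires to an event source and scales on demand. 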
When would you not want to use Serverless: When you have a consistent traffic Be careful, Different serverless resources have different pricing models, meaning it could easily happen that you accidentally leave your serverless function running and it will eat up your pocket If you depend on a Serverless feature, it is easy to get vendorlocked.","breadcrumbs":"Serverless » Serverless","id":"228","title":"Serverless"},"229":{"body":"How to think about Serverless https://youtu.be/_1-5YFfJCqM I am going to be looking in the next days at Knative: Kubernetes based open source building blocks for serverless Faasd: https://github.com/openfaas/faasd","breadcrumbs":"Serverless » Resources","id":"229","title":"Resources"},"23":{"body":"Video by Anais Urlichs Kubernetes cluster in your local machine using Docker Desktop","breadcrumbs":"Setup Kubernetes Cluster » 100Days Resources","id":"23","title":"100Days Resources"},"230":{"body":"","breadcrumbs":"Ingress from scratch » Ingress from scratch","id":"230","title":"Ingress from scratch"},"231":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"Ingress from scratch » 100Days Resources","id":"231","title":"100Days Resources"},"232":{"body":"TODO","breadcrumbs":"Ingress from scratch » Learning Resources","id":"232","title":"Learning Resources"},"233":{"body":"If you are new to 100 Days Of Kubernetes, have a look at the previous days to fill in the gaps and fundamentals for today: Day 4: Looking at Services Day 11: More exercises on Services Day 13: Ingress Day 31: Service Mesh","breadcrumbs":"Ingress from scratch » Example Notes","id":"233","title":"Example Notes"},"234":{"body":"Do I need a service mesh or is Ingress enough? https://www.nginx.com/blog/do-i-need-a-service-mesh/ Check the specific installation guide https://kubernetes.github.io/ingress-nginx/deploy/ Tutorial setup https://medium.com/analytics-vidhya/configuration-of-kubernetes-k8-services-with-nginx-ingress-controller-5e2c5e896582 Comprehensive Tutorial by Red Hat https://redhat-scholars.github.io/kubernetes-tutorial/kubernetes-tutorial/ingress.html Using Ingress on Kind cluster https://kind.sigs.k8s.io/docs/user/ingress/#ingress-nginx More on installing ingress on kind https://dustinspecker.com/posts/test-ingress-in-kind/","breadcrumbs":"Ingress from scratch » Resources","id":"234","title":"Resources"},"235":{"body":"The installation will slightly depend on the type of cluster that you are using. For this tutorial, we are going to be using the docker-desktop cluster We need two main components Ingress Controller Ingress resources Note that depending on the Ingress controller that you are using for your Ingress, the Ingress resource that has to be applied to your cluster will be different. In this tutorial, I am using the Ingress Nginx controller.","breadcrumbs":"Ingress from scratch » Install","id":"235","title":"Install"},"236":{"body":"We are going to be using the NGINX Ingress Controller. With the NGINX Ingress Controller for Kubernetes, you get basic load balancing, SSL/TLS termination, support for URI rewrites, and upstream SSL/TLS encryption First off, let's clone this repository: https://github.com/AnaisUrlichs/ingress-example git clone cd ingress-example cd app-one now build your Docker image docker build -t anaisurlichs/flask-one:1.0 . 
You can test it out through docker run -p 8080:8080 anaisurlichs/flask-one:1.0 And then we are going to push the image to our Docker Hub docker push anaisurlichs/flask-one:1.0 We are going to do the same in our second example application cd ..\ncd app-two build the docker image docker build -t anaisurlichs/flask-two:1.0 . push it to your Docker Hub docker push anaisurlichs/flask-two:1.0 Now apply the deployment-one.yaml and the deployment-tow.yaml kubectl apply -f deployment-one.yaml\nkubectl apply -f deployment-two.yaml Make sure they are running correctly kubectl get all Installing Ingress Controller kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml Installing Ingress Resource kubectl apply -f ingress.yaml make sure that everything is running correctly kubectl get all -n ingress-nginx Making paths happen https://github.com/kubernetes/ingress-nginx/issues/3762","breadcrumbs":"Ingress from scratch » Let's set everything up.","id":"236","title":"Let's set everything up."},"237":{"body":"","breadcrumbs":"Istio from scratch » Setup Istio from scratch","id":"237","title":"Setup Istio from scratch"},"238":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"Istio from scratch » 100Days Resources","id":"238","title":"100Days Resources"},"239":{"body":"Istio cli \"istioctl\" https://istio.io/latest/docs/setup/install/istioctl/ To use kind and ingress together have a look at this blog post https://mjpitz.com/blog/2020/10/21/local-ingress-domains-kind/ Using the first part of this blog post https://www.arthurkoziel.com/running-knative-with-istio-in-kind/ (if you want to set-up knative on your kind cluster, be my guest, I also have a whole video on using kind) Day 27: Knative However, this is not the focus today. Have a look at the previous day on Service Mesh Day 31: Service Mesh","breadcrumbs":"Istio from scratch » Learning Resources","id":"239","title":"Learning Resources"},"24":{"body":"What is Kubernetes: https://youtu.be/VnvRFRk_51k Kubernetes architecture explained: https://youtu.be/umXEmn3cMWY","breadcrumbs":"Setup Kubernetes Cluster » Learning Resources","id":"24","title":"Learning Resources"},"240":{"body":"","breadcrumbs":"Istio from scratch » Example Notes","id":"240","title":"Example Notes"},"241":{"body":"We would not want to create a LoadBalancer for all of our Services to access each and every individual Service. This is where Ingress comes in. Ingress allows us to configure the traffic to all of our microservice applications. This way, we can easier manage the traffic. Now to manage the connection between services, we would want to configure a Service Mesh such as Istio. Istio will then take care of the communication between our microservice applications within our cluster. However, it will not take care of external traffic automatically. Instead, what Istio will do is to use an Istio Ingress Gateway that will allow users to access applications from outside the Kubernetes cluster. The difference here is using the Kubernetes Ingress Resource vs. the Istio Ingress Resource. 
Have a look at the following blog post that provides comprehensive detail on the differences: https://medium.com/@zhaohuabing/which-one-is-the-right-choice-for-the-ingress-gateway-of-your-service-mesh-21a280d4a29c","breadcrumbs":"Istio from scratch » Service Mesh vs Ingress","id":"241","title":"Service Mesh vs Ingress"},"242":{"body":"Prerequisites Docker desktop installed https://www.docker.com/products/docker-desktop Kind binary installed so that you can run kind commands https://kind.sigs.k8s.io/docs/user/quick-start/ Set-up the kind cluster for using Ingress In the previous tutorial on Ingress, we used Docker Desktop as our local cluster. You can also set-up Ingress with a kind cluster. You can configure your kind cluster through a yaml file. In our case, we want to be able to expose some ports through the cluster, so we have to specify this within the config. Before we create a kind cluster, save the following configurations in a kind-config.yaml file. kind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\nnodes:\n- role: control-plane kubeadmConfigPatches: - | kind: InitConfiguration nodeRegistration: kubeletExtraArgs: node-labels: \"ingress-ready=true\" extraPortMappings: - containerPort: 80 hostPort: 80 protocol: TCP - containerPort: 443 hostPort: 443 protocol: TCP This file is going to be used in the following command when we create out kind cluster. kind create cluster --config kind-config.yaml --name istio-test Now you should be on your cluster istio-test. In your kubectl config context the cluster will be called kind-istio-test. kubectl config get-contexts\nkind get clusters You can then follow the documentation on setting up different Ingress Controllers on top Install Istio Install Istio with the following commands if you are on Linux, check out other installation options. curl -L https://istio.io/downloadIstio | sh - Now you have to move the package into your usr/bin directory so you can access it through the command line. sudo mv /Downloads/istio-1.9.1/bin/istioctl /usr/local/bin/ When you run now the following, you should see the version of istioctl istioctl version Awesome! we are ready to install istio on our kind cluster. We are going to use the following Istio configurations and then going to use istioctl to install everything on our kind cluster. istioctl install --set profile=demo Note that you can use another istio profile, other than the default one. Let's check that everything is set-up correctly kubectl get pods -n istio-system You should have two pods running right now. You can check whether everything is working properly through the following command istioctl manifest generate --set profile=demo | istioctl verify-install -f - By default, Istio will set the Ingress type to Loadbalancer. However, on a local kind cluster, we cannot use Loadbalancer. Instead, we have to use NodePort and set the host configurations. Create a file called patch-ingressgateway-nodeport.yaml with the following content: spec: type: NodePort ports: - name: http2 nodePort: 32000 port: 80 protocol: TCP targetPort: 80 And then apply the patch with kubectl patch service istio-ingressgateway -n istio-system --patch \"$(cat patch-ingressgateway-nodeport.yaml)\" However, in our Docker Desktop cluster, we are connected to localhost automatically so in that case, we do not have to change anything to NodePort. 
Next, we have to allow Istio to communicated with our default namespace: kubectl label namespace default istio-injection=enabled Set-up Ingress Gateway From here onwards, we are just following the documentation Setting-up the example application: https://istio.io/latest/docs/examples/bookinfo/ Setting-up metrics: https://istio.io/latest/docs/tasks/observability/metrics/tcp-metrics/ If you feel ready for a challenge, try out canary updates with Istio https://istio.io/latest/docs/setup/upgrade/canary/","breadcrumbs":"Istio from scratch » Installation","id":"242","title":"Installation"},"243":{"body":"","breadcrumbs":"Deploy to CIVO » Deploy an app to CIVO k3s cluster from scratch","id":"243","title":"Deploy an app to CIVO k3s cluster from scratch"},"244":{"body":"Video by Rajesh Radhakrishnan Rajesh' Kitchen repo","breadcrumbs":"Deploy to CIVO » 100Days Resources","id":"244","title":"100Days Resources"},"245":{"body":"Video by Rajesh Radhakrishnan CIVO cli essentials CIVO marketplace Part 1 - Full Stack deployment Part 2 - MicroFrontend deployment","breadcrumbs":"Deploy to CIVO » Learning Resources","id":"245","title":"Learning Resources"},"246":{"body":"Ideas to production, CIVO helped me to make it happen... A rough sketch on the application I came up with an application that help us create a book cover and add chapters to it. In order to build this, I thought of having an angular frontend that communicates to three .net core backend services. Login and Create a K3s cluster in CIVO I am using my Windows Linux System to learn K8s, also in CIVO creating a k3s cluster is really fast. Once you have the cluster, using the CIVO cli, merge the kubconfig to interact with the cluster using 'kubectl' command. Also K9s tools is also very useful to navigate through the custer. A a few handy commands I will share it to set it up. Once you download the cli, refer to the Learning resource section with useful links: civo apikey add \ncivo apikey current \ncivo region current NYC1\ncivo kubernetes ls\ncivo kubernetes config \"PROJECT\"-infra --save --merge kubectl config get-contexts\nkubectl config set-context \"PROJECT\"-infra\nkubectl config use-context \"PROJECT\"-infra Dockerize the microservices 1. Create a docker file in each microservices.\n2. Build & push the images to docker Hub.\n3. Create a CI/CD pipeline to deploy to CIVO docker build . -f Dockerfile -t PROJECT-web:local\ndocker tag \"PROJECT\"-web:local / \"PROJECT\"-web:v.0.2\ndocker push / PROJECT-web:v.0.2 Deploy using openfaas 1. Install the Openfaas from the CIVO marketplace.\n2. Download the openfaas cli and connect to CIVO repo.\n3. Push the image to the openfaas repo. curl -sLSf https://cli.openfaas.com | sudo sh\nexport OPENFAAS_PREFIX=\"/\"\nexport DNS=\".k8s.civo.com\" # As per dashboard\nexport OPENFAAS_URL=http://$DNS:31112\nPASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath=\"{.data.basic-auth-password}\" | base64 --decode; echo)\necho -n $PASSWORD | faas-cli login --username admin --password-stdin faas-cli new --lang dockerfile api\nfaas-cli build\nfaas-cli push -f stack.yml # Contains all the image deployment\nfaas-cli deploy -f stack.idserver.yml # individual deployment\nfaas-cli deploy -f stack.web.yml Deploy using helm 1. Create a helm chart\n2. Mention the docker image for the installation\n3. 
Helm install to the CIVO cluster helm upgrade --install \"PROJECT\"-frontend /\"PROJECT\"-web/conf/charts/\"PROJECT\"-ui --namespace PROJECT --set app.image=/\"PROJECT\"-web:latest\nhelm uninstall \"PROJECT\"-frontend -n \"PROJECT\" Setup the SSL certificates using Let's Encrypt 1. Create a Let's Encrypt PROD issuer.\n2. Deploy the ingress.\n3. Troubleshoot and verify that the certificate is issued. kubectl apply -f ./cert-manager/civoissuer.stage.yaml\nissuer.cert-manager.io/letsencrypt-stage created\nkubectl apply -f ingress-cert-civo.yaml kubectl get issuer -n kitchen kubectl get ing -n kitchen\nkubectl get certificates -n kitchen\nkubectl get certificaterequest -n kitchen kubectl describe order -n kitchen\nkubectl describe challenges -n kitchen Show off I used to document the process and the steps I took during development & deployment. It helps us review and refine them as the development progresses.","breadcrumbs":"Deploy to CIVO » CIVO Notes","id":"246","title":"CIVO Notes"},"247":{"body":"Get free lifetime access to RaaS from https://kubernautic.com/ kubectl apply -f https://rancher.kubernautic.com/v3/import/.yaml\nclusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver created\nclusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master created\nnamespace/cattle-system created\nserviceaccount/cattle created\nclusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding created\nsecret/cattle-credentials-c9b86c5 created\nclusterrole.rbac.authorization.k8s.io/cattle-admin created\ndeployment.apps/cattle-cluster-agent created wget https://github.com/derailed/popeye/releases/download/v0.9.0/popeye_Linux_x86_64.tar.gz\ntar -xvf popeye_Linux_x86_64.tar.gz\nmv popeye /usr/local/bin/ POPEYE_REPORT_DIR=/mnt/e/Kubernetes/ popeye --save --out html --output-file report.html","breadcrumbs":"Deploy to CIVO » Rancher Shared Service using Kubernauts & Popeye setup","id":"247","title":"Rancher Shared Service using Kubernauts & Popeye setup"},"248":{"body":"Rancher Shared Service popeye","breadcrumbs":"Deploy to CIVO » Resources","id":"248","title":"Resources"},"249":{"body":"","breadcrumbs":"CrashLoopBackOff » CrashLoopBackOff","id":"249","title":"CrashLoopBackOff"},"25":{"body":"Today, I will get started with the book The DevOps 2.3 Toolkit and will work my way through it. Chapter 1 provides an introduction to Kubernetes; I will use it to optimise the notes from the previous day. Chapter 2 provides a walkthrough on how to set up a local Kubernetes cluster using minikube or microk8s. Alternatively, kind could also be used to create a local Kubernetes cluster, or if you have Docker Desktop you could directly use the single-node cluster it includes. Prerequisites Have Docker installed (if not, go ahead and do it): https://docs.docker.com/ Install kubectl Here is how to install kubectl If you have the Homebrew package manager installed, you can use that: brew install kubectl On Linux, the commands are going to be: curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl chmod +x ./kubectl sudo mv ./kubectl /usr/local/bin/kubectl To make sure you have kubectl installed, you can run kubectl version --output=yaml Install local cluster To install minikube, you require a virtualisation technology such as VirtualBox. 
If you are on Windows, you might want to use Hyper-V instead. Minikube provides a single node instance that you can use in combination with kubectl. It supports DNS, Dashboards, CNI, NodePorts, Config Maps, etc. It also supports multiple hypervisors, such as VirtualBox, KVM, etc. In my case, I am going to be using microk8s since I had several issues getting started with minikube. However, please don't let this put you off. Please look into each tool yourself and decide which one you like best. Microk8s provides a lightweight Kubernetes installation on your local machine. Overall, it is much easier to install on Linux using snap since it does not require any virtualization tools. sudo snap install microk8s --classic However, the Windows and Mac installations are also quite straightforward, so have a look at those on their website. Make sure that kubectl has direct access to your cluster. If you have multiple clusters configured, you can switch between them using kubectl commands. To show the different clusters available: kubectl config get-contexts to switch to a different cluster: kubectl config use-context Once we are connected to the right cluster, we can ask kubectl to show us our nodes kubectl get nodes Or you could see the current pods that are running on your cluster — if it is a new cluster, you likely don't have any pods running. kubectl get pods In the case of minikube and microk8s, we have only one node","breadcrumbs":"Setup Kubernetes Cluster » Example Notes","id":"25","title":"Example Notes"},"250":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"CrashLoopBackOff » 100Days Resources","id":"250","title":"100Days Resources"},"251":{"body":"What is Kubernetes: https://youtu.be/VnvRFRk_51k Kubernetes architecture explained: https://youtu.be/umXEmn3cMWY","breadcrumbs":"CrashLoopBackOff » Learning Resources","id":"251","title":"Learning Resources"},"252":{"body":"Today I actually spent my time first learning about ReplicaSets. However, when looking at my cluster, I noticed something strange. There was one pod still running that I thought I had deleted yesterday. The status of the pod indicated \"CrashLoopBackOff\", meaning the pod would fail every time it tried to start. It will start, fail, start, fail, start, f... This is quite common, and if the restart policy of the pod is set to Always, Kubernetes will try to restart the pod every time it has an error. There are several reasons why the pod could end up in this poor state: Something is wrong within our Kubernetes cluster The pod is configured incorrectly Something is wrong with the application In this case, it was easy enough to identify what had gone wrong and resulted in this misbehaving pod. As mentioned on previous days, there are multiple ways that one can create a pod. This can be categorized roughly into Imperative: we have to tell Kubernetes each step that it has to perform within our cluster Declarative: we provide Kubernetes with the resource definition that we want to set up, and it will figure out the steps that are needed to make it happen As part of yesterday's learning, I tried to set up a container image inside a pod and inside our cluster with the following command: kubectl create deployment --image= This will create a deployment resource, based on which a pod is created, based on which a container is run within the created pod. 
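As a side note, the same inspection can be done from code. Here is a minimal client-go sketch (my own illustration, not part of the original notes; the \"default\" namespace is an assumption) that lists pods and prints each container's waiting reason and restart count, which is one way to spot a CrashLoopBackOff programmatically:
package main

import (
    \"context\"
    \"fmt\"

    metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"
    \"k8s.io/client-go/kubernetes\"
    \"k8s.io/client-go/tools/clientcmd\"
)

func main() {
    // Build a client from the local kubeconfig, i.e. the same credentials kubectl uses.
    config, err := clientcmd.BuildConfigFromFlags(\"\", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    pods, err := clientset.CoreV1().Pods(\"default\").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, pod := range pods.Items {
        for _, cs := range pod.Status.ContainerStatuses {
            if cs.State.Waiting != nil {
                // A crash-looping container shows up here with the reason CrashLoopBackOff.
                fmt.Printf(\"%s/%s: %s (restarts: %d)\\n\", pod.Name, cs.Name, cs.State.Waiting.Reason, cs.RestartCount)
            }
        }
    }
}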
Going back to the reasons that might have resulted in a CrashLoopBackOff: I don't have reason to believe that something is wrong within our cluster, since the same image was created using a pod definition in a declarative format — that worked. Next, the pod could be configured incorrectly. This could have been the case — however, since we did not tell Kubernetes explicitly how it should create the pod, we don't have much control over this aspect. Lastly, the application inside of the container could be wrong. I have reason to believe that this is the case. When passing the container image into the Kubernetes cluster, we did not provide any arguments. However, the container image would have needed an argument to know which image it is actually supposed to run. Thus, I am settling for this explanation. Now how do we get rid of the pod or correct this? I first tried to delete the pod with kubectl delete pod However, this just meant that the current instance of the pod was deleted and a new one created each time — counting the restarts from 0. So it must be that there is another resource that tells Kubernetes \"create this pod and run the container inside\". Let's take a look at the original command: kubectl create deployment --image= I had literally told Kubernetes to create a deployment. So let's check for deployments: kubectl get deployment and ta-da, there it is. Since we have already tried to delete the pod, we will now delete the deployment itself. kubectl delete deployment When we now look at our pods kubectl get pods we should not see any more pods listed.","breadcrumbs":"CrashLoopBackOff » Example Notes","id":"252","title":"Example Notes"},"253":{"body":"This section is intended as a quick primer to get you up to speed with the various terms and acronyms that you are likely to encounter as you begin your Kubernetes journey. This is by no means an exhaustive list and it will certainly evolve. Containers: Containers are a standard package of software that allows you to bundle (or package) an application's code, dependencies, and configuration into a single object which can then be deployed in any environment. Containers isolate the software from its underlying environment. Container Image: According to Docker, a Container Image \"is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.\" Container Images become Containers at runtime. Container Registry: A container registry is a repository for storing all of your container images. Examples of container registries are: 'Azure Container Registry', 'Docker Hub', and 'Amazon Elastic Container Registry'. Docker: Docker is an open-source platform used for automating the deployment of containerized applications. Kubernetes: Also known as K8s, Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. Please see kubernetes.io for more details. It could also be thought of as a 'Container Orchestrator'. Control Plane: The control plane is the container orchestration layer. It exposes an API that allows you to manage your cluster and its resources. Namespaces: Namespaces are units of organization. They allow you to group related resources. Nodes: Nodes are worker machines in Kubernetes. Nodes can be virtual or physical. Kubernetes runs workloads by placing containers into Pods that run on Nodes. You will typically have multiple nodes in your Kubernetes cluster. 
Pods: Pods are the smallest deployable unit of computing that you can create and manage in Kubernetes. A pod is a group of one or more containers. Service: A Service in Kubernetes is a networking abstraction for Pod access. It handles the network traffic to a Pod or set of Pods. Cluster: A Kubernetes Cluster is a set of nodes. Replica Sets: A Replica Set works to ensure that the defined number of Pods are running in the Cluster at all times. Kubectl: Kubectl is a command-line tool for interacting with a Kubernetes API Server to manage your Cluster. etcd: Etcd is a key-value store that Kubernetes uses to store Cluster data. Ingress: An object that manages external access to the services running on the Cluster. K3s: K3s is a lightweight Kubernetes distribution designed for IoT or Edge computing scenarios. GitOps: According to Codefresh , \"GitOps is a set of best-practices where the entire code delivery process is controlled via Git, including infrastructure and application definition as code and automation to complete updates and rollbacks.\" Containerd: Containerd is a container runtime that manages the complete container lifecycle. Service Mesh: According to Istio , a Service Mesh describes the network of micro-services within an application and the communications between them. AKS: Azure Kubernetes Service (AKS) is a managed, hosted, Kubernetes service provided by Microsoft Azure. A lot of the management of your Kubernetes cluster is abstracted away and managed by Azure. Find out more here . EKS: Elastic Kubernetes Service (EKS), provided by Amazon AWS, is a managed, hosted, Kubernetes service. Similar to AKS, much of the management is abstracted away and managed by the cloud provider. Find out more here GKE: Google Kubernetes Engine (GKE), is another managed Kubernetes service. This time from Google. Find out more here . Helm: Helm can be thought of as a Package Manager for Kubernetes. Helm Chart: Helm Charts are YAML files that define, install and upgrade Kubernetes applications.","breadcrumbs":"Terminology Primer » Terminology Primer","id":"253","title":"Terminology Primer"},"26":{"body":"","breadcrumbs":"Run Pods » Running Pods","id":"26","title":"Running Pods"},"27":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"Run Pods » 100Days Resources","id":"27","title":"100Days Resources"},"28":{"body":"How Pods and the Pod Lifecycle work in Kubernetes Pods and Containers - Kubernetes Networking | Container Communication inside the Pod","breadcrumbs":"Run Pods » Learning Resources","id":"28","title":"Learning Resources"},"29":{"body":"","breadcrumbs":"Run Pods » Example Notes","id":"29","title":"Example Notes"},"3":{"body":"So what is this challenge all about? The idea (and credit) goes to two existing communities : 100DaysOfCode 100DaysOfCloud The idea is that you publicly commit to learning something new. This way, the community can support and motivate you. Every day that you are doing the challenge, just post a tweet highlighting what you learned with the #100DaysOfKubernetes hashtag. Additionally, creating your own content based on what you are learning can be highly valuable to the community. For example, once you wrote a blog post on Kubernetes ReplicaSets or similar, you could add it to this book. 
The goal is to make this a community-driven project.","breadcrumbs":"Introduction » 100 Days Of Kubernetes","id":"3","title":"100 Days Of Kubernetes"},"30":{"body":"Fork the following repository: https://github.com/vfarcic/k8s-specs git clone https://github.com/vfarcic/k8s-specs.git cd k8s-specs Create a mongo DB database kubectl run db --image mongo \\\n--generator \"run-pod/v1\" If you want to confirm that the pod was created do: kubectl get pods Note that if you do not see any output right away that is ok; the mongo image is really big so it might take a while to get the pod up and running. Confirm that the image is running in the cluster docker container ls -f ancestor=mongo To delete the pod run kubectl delete pod db Delete the pod above since it was not the best way to run the pod. Pods should be created in a declarative format. However, in this case, we created it in an imperative way — BAD! To look at the pod definition: cat pod/db.yml apiVersion: v1 // means the version 1 of the Kubernetes pod API; API version and kind has to be provided -- it is mandatory\nkind: Pod\nmetadata: // the metadata provides information on the pod, it does not specifiy how the pod behaves\nname: db\nlabels:\ntype: db\nvendor: MongoLabs // I assume, who has created the image\nspec:\ncontainers:\n- name: db\nimage: mongo:3.3 // image name and tag\ncommand: [\"mongod\"]\nargs: [\"--rest\", \"--httpinterface\"] // arguments, defined in an array In the case of controllers, the information provided in the metadata has a practical purpose. However, in this case, it merely provides descriptive information. All arguments that can be used in pods are defined in https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#pod-v1-core With the following command, we can create a pod that is defined in the pod.yml file kubectl create -f pod/db.yml to view the pods (in json format) kubectl get pods -o json We can see that the pod went through several stages (stages detailed in the video on pods) In the case of microk8s, both master and worker nodes run on the same machine. To verify that the database is running, we can go ahead an run kubectl exec -it db sh // this will start a terminal inside the running container\necho 'db.stats()'\nexit Once we do not need a pod anymore, we should delete it kubectl delete -f pod/db.yml Kubernetes will first try to stop a pod gracefully; it will have 30s to shut down. After the \"grace period\" a kill signal is sent Additional notes Pods cannot be split across nodes Storage within a pod (volumes) can be accessed by all the containers within a pod","breadcrumbs":"Run Pods » Practical Example","id":"30","title":"Practical Example"},"31":{"body":"Most pods should be made of a single container; multiple containers within one pod is not common nor necessarily desirable Look at cat pod/go-demo-2.yml of the closed repository (the one cloned at the beginning of these notes) The yml defines the use of two containers within one pod kubectl create -f pod/go-demo-2.yml kubectl get -f pod/go-demo-2.yml To only retrieve the names of the containers running in the pod kubectl get -f pod/go-demo-2.yml \\\n-o jsonpath=\"{.spec.containers[*].name}\" Specify the name of the container for which we want to have the logs kubectl logs go-demo-2 -c db livenessProbes are used to check whether a container should be running Have a look at cat pod/go-demo-2-health.yml within the cloned repository. 
Create the pod kubectl create \\\n-f pod/go-demo-2-health.yml wait a minute and look at the output kubectl describe \\\n-f pod/go-demo-2-health.yml","breadcrumbs":"Run Pods » Run multiple containers with in a pod","id":"31","title":"Run multiple containers with in a pod"},"32":{"body":"","breadcrumbs":"Replica Set » Kubernetes ReplicaSet","id":"32","title":"Kubernetes ReplicaSet"},"33":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"Replica Set » 100Days Resources","id":"33","title":"100Days Resources"},"34":{"body":"Kubernetes ReplicaSet official documentation","breadcrumbs":"Replica Set » Learning Resources","id":"34","title":"Learning Resources"},"35":{"body":"ReplicaSets It is usually not recommended to create pods manually but instead use multiple instances of the same application; these are then identical pods, called replicas. You can specify the desired number of replicas within the ReplicaSet. A ReplicaSet ensures that a certain number of pods are running at any point in time. If there are more pods running than the number specified by the ReplicaSet, the ReplicaSet will kill the pods. Similarly, if any pod dies and the total number of pods is fewer than the defined number of pods, the ReplicaSet will spin up more pods. Each pod is supposed to run a single instance of an application. If you want to scale your application horizontally, you can create multiple instances of that pod. The pod ReplicaSet is used for scaling pods in your Kubernetes cluster. Such a set of replicated Pods are created and managed by a controller, such as a Deployment. As long as the primary conditions are met: enough CPU and memory is available in the cluster, the ReplicaSet is self-healing; it provides fault tolerance and high availibility. It's only purpose is to ensure that the specified number of replicas of a service is running. All pods are managed through Controllers and Services. They know about the pods that they have to manage through the in-YAML defined Labels within the pods and the selectors within the Controllers/Services. 
Remember the metadata field from one of the previous days — in the case of ReplicaSets, these labels are used again.","breadcrumbs":"Replica Set » Example Notes","id":"35","title":"Example Notes"},"36":{"body":"Clone the following repository: https://github.com/vfarcic/k8s-specs and enter into the root folder cd k8s-specs Looking at the following example cat rs/go-demo-2.yml The selector is used to specify which pods should be included in the replicaset ReplicaSets and Pods are decoupled If the pods that match the replicaset, it does not have to do anything Similar to how the ReplicaSet would scale pods to match the definition provided in the yaml, it will also terminate pods if there are too many the spec.template.spec defines the pod Next, create the pods kubectl create -f rs/go-demo-2.yml We can see further details of your running pods through the kubectl describe command kubectl describe -f rs/go-demo-2.yml To list all the pods, and to compare the labels specified in the pods match the ReplicaSet kubectl get pods --show-labels You can call the number of replicasets by running kubectl get replicasets ReplicaSets are named using the same naming convention as used for pods.","breadcrumbs":"Replica Set » Some Practice","id":"36","title":"Some Practice"},"37":{"body":"They both serve the same purpose — the Replication Controller is being deprecated.","breadcrumbs":"Replica Set » Difference between ReplicaSet and Replication Controller","id":"37","title":"Difference between ReplicaSet and Replication Controller"},"38":{"body":"You can delete a ReplicaSet without deleting the pods that have been created by the replicaset kubectl delete -f rs/go-demo-2.yml \\\n--cascade=false And then the ReplicaSet can be created again kubectl create -f rs/go-demo-2.yml \\\n--save-config the —save-config flag ensures that our configurations are saved, which allows us to do more specific tasks later on.","breadcrumbs":"Replica Set » Operating ReplicaSets","id":"38","title":"Operating ReplicaSets"},"39":{"body":"","breadcrumbs":"Kubernetes Deployment » Kubernetes Deployments","id":"39","title":"Kubernetes Deployments"},"4":{"body":"Fork the example repository We suggest you to fork the journey repository. Every day that you work on the challenge, you can make changes to the repository to detail what you have been up to. The progress will then be tracked on your GitHub. Tweet about your progress Share your learnings and progress with the #100DaysOfKubernetes on Twitter. Join the community We have a channel in this Discord channel -- come say hi, ask questions, and contribute!","breadcrumbs":"Introduction » Where to get started","id":"4","title":"Where to get started"},"40":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. 
related to the topic here!","breadcrumbs":"Kubernetes Deployment » 100Days Resources","id":"40","title":"100Days Resources"},"41":{"body":"Kubernetes Deployments official documentation","breadcrumbs":"Kubernetes Deployment » Learning Resources","id":"41","title":"Learning Resources"},"42":{"body":"This little exercise will be based on the following application: https://github.com/anais-codefresh/react-article-display Then we will create a deployment apiVersion: apps/v1\nkind: Deployment\nmetadata: name: react-application\nspec: replicas: 2 selector: matchLabels: run: react-application template: metadata: labels: run: react-application spec: containers: - name: react-application image: anaisurlichs/react-article-display:master ports: - containerPort: 80 imagePullPolicy: Always More information on Kubernetes deployments A deployment is a Kubernetes object that makes it possible to manage multiple, identical pods Using deployments, it is possible to automate the process of creating, modifying and deleting pods — it basically manages the lifecycle of your application Whenever a new object is created, Kubernetes will ensure that this object exist If you try to set-up pods manually, it can lead to human error. On the other hand, using deployments is a better way to prevent human errors. The difference between a deployment and a service is that a deployment ensures that a set of pods keeps running by creating pods and replacing broken prods with the resource defined in the template. In comparison, a service is used to allow a network to access the running pods. Deployments allow you to Deploy a replica set or pod Update pods and replica sets Rollback to previous deployment versions Scale a deployment Pause or continue a deployment Create deployment kubectl create -f deployment.yaml Access more information on the deployment kubectl describe deployment Create the service yml apiVersion: v1\nkind: Service\nmetadata: name: react-application labels: run: react-application\nspec: type: NodePort ports: - port: 8080 targetPort: 80 protocol: TCP name: http selector: run: react-application Creating the service with kubectl expose kubectl expose deployment/my-nginx This will create a service that is highly similar to our in yaml defined service. However, if we want to create the service based on our yaml instead, we can run: kubectl create -f my-pod-service.yml","breadcrumbs":"Kubernetes Deployment » Example Notes","id":"42","title":"Example Notes"},"43":{"body":"The targetPort in the service yaml links to the container port in the deployment. Thus, both have to be, for example, 80. We can then create the deployment and service based on the yaml, when you look for \"kubectl get service\", you will see the created service including the Cluster-IP. Take that cluster IP and the port that you have defined in the service e.g. 10.152.183.79:8080 basically : and you should be able to access the application through NodePort. However, note that anyone will be able to access this connection. You should be deleting these resources afterwards. 
kubectl get service https://s3-us-west-2.amazonaws.com/secure.notion-static.com/3f2ba295-c81d-4836-8b46-eadbc70b2eb7/Screenshot_from_2021-01-06_21-54-40.png Alternatively, for more information of the service kubectl get svc -o yaml -o yaml: the data should be displayed in yaml format Delete the resources by kubectl delete service react-application\nkubectl delete deployment react-application // in this case, your pods are still running, so you would have to remove them individually Note: replace react-application with the name of your service.","breadcrumbs":"Kubernetes Deployment » How is the Service and the Deployment linked?","id":"43","title":"How is the Service and the Deployment linked?"},"44":{"body":"","breadcrumbs":"Namespaces » Namespaces","id":"44","title":"Namespaces"},"45":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"Namespaces » 100Days Resources","id":"45","title":"100Days Resources"},"46":{"body":"TechWorld with Nana explanation on Namespaces","breadcrumbs":"Namespaces » Learning Resources","id":"46","title":"Learning Resources"},"47":{"body":"In some cases, you want to divide your resources, provide different access rights to those resources and more. This is largely driven by the fear that something could happen to your precious production resources. However, with every new cluster, the management complexity will scale — the more clusters you have, the more you have to manage — basically, the resource overhead of many small clusters is higher than of one big one. Think about this in terms of houses, if you have one big house, you have to take care of a lot but having several small houses, you have x the number of everything + they will be affected by different conditions.","breadcrumbs":"Namespaces » Example Notes","id":"47","title":"Example Notes"},"48":{"body":"Let's get moving, you only learn by doing. Clone this repository https://github.com/vfarcic/k8s-specs cd k8s-specs and then, we will use this application cat ns/go-demo-2.yml Then we do a nasty work-around to specify the image tag used in the pod IMG=vfarcic/go-demo-2 TAG=1.0 cat ns/go-demo-2.yml \\\n| sed -e \\\n\"s@image: $IMG@image: $IMG:$TAG@g\" \\\n| kubectl create -f - When the -f argument is followed with a dash (-), kubectl uses standard input (stdin) instead of a file. To confirm that the deployment was successful kubectl rollout status \\\n2 deploy go-demo-2-api Which will get us the following output hello, release 1.0! Almost every service are Kubernetes Objects. kubectl get all Gives us a full list of all the resources that we currently have up and running. The system-level objects within our cluster are usually not visible, only the objects that we created. Within the same namespace, we cannot have twice the same object with exactly the same name. However, we can have the same object in two different namespaces. Additionally, you could specify within a cluster permissions, quotas, policies, and more — will look at those sometimes later within the challenge. We can list all of our namespaces through kubectl get ns Create a new namespaces kubectl create namespace testing Note that you could also use 'ns' for 'namespace' Kubernetes puts all the resources needed to execute Kubernetes commands into the kube-system namespace kubectl --namespace kube-system get all Now that we have a namespace testing, we can use it for new deployments — however, specifying the namespace with each command is annoying. 
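To illustrate how repetitive that gets, every single command would need the namespace flag, for example (reusing the go-demo-2 definition from this exercise):
kubectl --namespace testing create -f ns/go-demo-2.yml
kubectl --namespace testing get pods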
What we can do instead is kubectl config set-context testing \\ --namespace testing \\ --cluster docker-desktop \\ --user docker-desktop In this case, you will have to change the command according to your cluster. The created context uses the same cluster as before — just a different namespace. You can view the config with the following command kubectl config view Once we have a new context, we can switch to that one kubectl config use-context testing Once done, all of our commands will be automatically executed in the testing namespace. Now we can deploy the same resource as before but specify a different tag. TAG=2.0 DOM=go-demo-2.com cat ns/go-demo-2.yml \\\n| sed -e \\\n\"s@image: $IMG@image: $IMG:$TAG@g\" \\\n| sed -e \\\n\"s@host: $DOM@host: $TAG\\.$DOM@g\" \\\n| kubectl create -f - to confirm that the rollout has finished kubectl rollout status \\\ndeployment go-demo-2-api Now we can send requests to the different namespaces curl -H \"Host: 2.0.go-demo-2.com\" \\\n2 \"http://$(minikube ip)/demo/hello\"","breadcrumbs":"Namespaces » Practical","id":"48","title":"Practical"},"49":{"body":"It can be really annoying to have to delete all objects one by one. What we can do instead is to delete all resources within a namespace all at once kubectl delete ns testing The real magic of namespaces is when we combine those with authorization logic, which we are going to be looking at in later videos.","breadcrumbs":"Namespaces » Deleting resources","id":"49","title":"Deleting resources"},"5":{"body":"You can find more information on contributions in the README of this GitHub repository.","breadcrumbs":"Introduction » Contribute","id":"5","title":"Contribute"},"50":{"body":"","breadcrumbs":"ConfigMaps » ConfigMaps","id":"50","title":"ConfigMaps"},"51":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"ConfigMaps » 100Days Resources","id":"51","title":"100Days Resources"},"52":{"body":"https://matthewpalmer.net/kubernetes-app-developer/articles/ultimate-configmap-guide-kubernetes.html","breadcrumbs":"ConfigMaps » Learning Resources","id":"52","title":"Learning Resources"},"53":{"body":"ConfigMaps make it possible to keep configurations separately from our application images by injecting configurations into your container. The content/injection might be configuration files or variables. It is a type of Volume. ConfigMaps=Mount a source to a container It is a directory or file of configuration settings. Environment variables can be used to configure new applications. They are great unless our application is too complex. If the application configuration is based on a file, it is best to make the file part of our Docker image. Additionally, you want to use ConfigMap with caution. If you do not have any variations between configurations of your app, you do not need a ConfigMap. ConfigMaps let you easily fall into the trap of making specific configuration — which makes it harder to move the application and to automate its set-up. Resulting, if you do use ConfigMaps, you would likely have one for each environment. So what could you store within a ConfigMap A ConfigMap stores configuration settings for your code. Store connection strings, public credentials, hostnames, and URLs in your ConfigMap. Source So make sure to not store any sensitive information within ConfigMap. 
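Sensitive values belong in a Secret instead. A minimal sketch, where the secret name and literal below are made-up examples:
kubectl create secret generic db-credentials \
  --from-literal=password=not-a-real-password
Keep in mind that by default Secrets are only base64-encoded, not encrypted, so you would still combine them with RBAC and, ideally, encryption at rest.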
We first create a ConfigMap with kubectl create cm my-config \\\n--from-file=cm/prometheus-conf.yml Taking a look into that resource **kubectl describe cm my-config** ConfigMap is another volume that, like other volumes, need to mount cat cm/alpine.yml The volume mount section is the same, no matter the type of volume that we want to mount. We can create a pod and make sure it is running kubectl create -f cm/alpine.yml kubectl get pods And then have a look inside the pod kubectl exec -it alpine -- \\\nls /etc/config You will then see a single file that is correlated to the file that we stored in the ConfigMap To make sure the content of both files is indeed the same, you can use the following command kubectl exec -it alpine -- \\\ncat /etc/config/prometheus-conf.yml The —from-file argument in the command at the beginning can be used with files as well as directories. In case we want to create a ConfigMap with a directory kubectl create cm my-config \\\n--from-file=cm and have a look inside kubectl describe cm my-config The create a pod that mounts to the ConfigMap kubectl create -f cm/alpine.yml kubectl exec -it alpine -- \\\nls /etc/config Make sure to delete all the files within your cluster afterwards kubectl delete -f cm/alpine.yml kubectl delete cm my-config Furthermore, like every other Kubernetes resource, you can define ConfigMaps through Kubernetes YAML files. This actually (probably the easiest way) — write the ConfigMap in YAML and mount it as a Volume We can show one of our existing ConfigMaps in YAML kubectl get cm my-config -o yaml Additionally we can take a look at this file within our repository that has both a Deployment object and a ConfigMap cat cm/prometheus.yml kind: ConfigMap apiVersion: v1 metadata: name: example-configmap data: # Configuration values can be set as key-value properties database: mongodb database_uri: mongodb://localhost:27017 # Or set as complete file contents (even JSON!) keys: | image.public.key=771 rsa.public.key=42 And then create the ConfigMap like any other resource kubectl apply -f config-map.yaml kind: Pod apiVersion: v1 metadata: name: pod-using-configmap spec: # Add the ConfigMap as a volume to the Pod volumes: # `name` here must match the name # specified in the volume mount - name: example-configmap-volume # Populate the volume with config map data configMap: # `name` here must match the name # specified in the ConfigMap's YAML name: example-configmap containers: - name: container-configmap image: nginx:1.7.9 # Mount the volume that contains the configuration data # into your container filesystem volumeMounts: # `name` here must match the name # from the volumes section of this pod - name: example-configmap-volume mountPath: /etc/config","breadcrumbs":"ConfigMaps » Example Notes","id":"53","title":"Example Notes"},"54":{"body":"","breadcrumbs":"Kubernetes Service » Kubernetes Service","id":"54","title":"Kubernetes Service"},"55":{"body":"Video by Anais Urlichs One and Two Add your blog posts, videos etc. 
related to the topic here!","breadcrumbs":"Kubernetes Service » 100Days Resources","id":"55","title":"100Days Resources"},"56":{"body":"Official Documentation https://kubernetes.io/docs/reference/kubectl/cheatsheet/ https://katacoda.com/courses/kubernetes Guides and interactive tutorial within the Kubernetes docs https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/ Kubernetes by example https://kubernetesbyexample.com/ created by OpenShift","breadcrumbs":"Kubernetes Service » Learning Resources","id":"56","title":"Learning Resources"},"57":{"body":"Pods are formed, destroyed and never repaired. You would not repair an existing, running pod but rather deploy a new, healthy one. Controllers, along with the Scheduler inside your Kubernetes cluster are making sure that pods are behaving correctly, they are monitoring the pods. So far, only containers within the same pod can talk to each other through localhost. This prevents us from scaling our application. Thus we want to enable communication between pods. This is done with Kubernetes Services. Kubernetes Services provide addresses through which associated Pods can be accessed. A service is usually created on top of an existing deployment.","breadcrumbs":"Kubernetes Service » Example Notes","id":"57","title":"Example Notes"},"58":{"body":"Clone the following repository: https://github.com/vfarcic/k8s-specs And prepare our minikube, microk8s or whatever you are using as your local cluster 🙂 cd k8s-specs git pull minikube start --vm-driver=virtualbox kubectl config current-context For this exercise, we are going to create the following ReplicaSet, similar to what we have done in the previous video. The definition will look as follows: cat svc/go-demo-2-rs.yml Now create the ReplicaSet: kubectl create -f svc/go-demo-2-rs.yml // get the state of it\nkubectl get -f svc/go-demo-2-rs.yml Before continuing with the next exercises, make sure that both replicas are ready With the kubectl expose command, we can tell Kubernetes that we want to expose a resource as service in our cluster kubectl expose rs go-demo-2 \\ --name=go-demo-2-svc \\ --target-port=28017 \\ // this is the port that the MongoDB interface is listening to --type=NodePort We can have by default three different types of Services ClusterIP ClusterIP is used by default. It exposes the service only within the cluster. By default you want to be using ClusterIP since that prevents any external communication and makes your cluster more secure. NodePort Allows the outside world to access the node IP and LoadBalancer The LoadBalancer is only useful when it is combined with the LoadBalancer of your cloud provider. The process when creating a new Service is something like this: First, we tell our API server within the master node in our cluster it should create a new service — in our case, this is done through kubectl commands. Within our cluster, inside the master node, we have an endpoint controller. This controller will watch our API server to see whether we want to create a new Service. Once it knows that we want to create a new service, it will create an endpoint object. The kube-proxy watches the cluster for services and enpoints that it can use to configure the access to our cluster. It will then make a new entry in its iptable that takes note of the new information. The Kube-DNS realises that there is a new service and will add the db’s record to the dns server (skydns). 
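As a quick check of that DNS record, you can resolve the service name from a throwaway pod. This is just a sketch, assuming the service from this exercise is go-demo-2-svc in the default namespace and the default cluster.local cluster domain; the image tag is only an example:
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- \
  nslookup go-demo-2-svc.default.svc.cluster.local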
Taking a look at our newly created service: kubectl describe svc go-demo-2-svc All the pods in the cluster can access the targetPort The NodePort automatically creates the clusterIP Note that if you have multiple ports defined within a service, you have to name those ports Let’s see whether the Service indeed works. PORT=$(kubectl get svc go-demo-2-svc -o jsonpath=\"{.spec.ports[0].nodePort}\") IP=$(minikube ip) open \"http://$IP:$PORT\"","breadcrumbs":"Kubernetes Service » Follow along","id":"58","title":"Follow along"},"59":{"body":"cat svc/go-demo-2-svc.yml The service is of type NodePort - making it available within the cluster TCP is used as default protocol The selector is used by the service to know which pods should receive requests (this works the same way as the selector within the ReplicaSet) With the following command, we create the service and then get the sevrice kubectl create -f svc/go-demo-2-svc.yml kubectl get -f svc/go-demo-2-svc.yml We can look at our endpoint through kubectl get ep go-demo-2 -o yaml The subset responds to two pods, each pod has its own IP address Requests are distributed between these two nodes Make sure to delete the Service and ReplicaSet at the end kubectl delete -f svc/go-demo-2-svc.yml kubectl delete -f svc/go-demo-2-rs.yml","breadcrumbs":"Kubernetes Service » Creating Services in a Declarative format","id":"59","title":"Creating Services in a Declarative format"},"6":{"body":"The book is divided into several higher-level topics. Each topic has several sub-topics that are individual pages or chapters. Those chapters have a similar structure: Title The title of the page 100Days Resources This section highlights a list of community resources specific to the topics that is introduced. Additionally, this is where you can include your own content, videos and blog articles, from your 100DaysOfKubernetes challenge. Learning Resources A list of related learning resources. Different to '100Days Resources', these do not have to be specific to 100DaysOfKubernetes. Example Notes This section provides an introduction to the topics. The goal is to advance each topics over time. When you are first time learning about a topic, it is usually best to take your own notes but sometimes having a starting point and examples is helpful.","breadcrumbs":"Introduction » Structure of the book","id":"6","title":"Structure of the book"},"60":{"body":"","breadcrumbs":"Kubernetes Ingress » Ingress","id":"60","title":"Ingress"},"61":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"Kubernetes Ingress » 100Days Resources","id":"61","title":"100Days Resources"},"62":{"body":"Ingress Tutorial by Anais Urlichs","breadcrumbs":"Kubernetes Ingress » Learning Resources","id":"62","title":"Learning Resources"},"63":{"body":"Ingress is responsible for managing the external access to our cluster. Whereby it manages forwarding rules based on paths and domains SSl termination and several other features. The API provided by Ingress allows us to replace an external proxy with a loadbalancer. Things we want to resolve using Ingress Not having to use a fixed port — if we have to manage multiple clusters, we would have a hard time managing all those ports We need standard HTTPS(443) or HTTP (80) ports through a predefined path When we open an application, the request to the application is first received by the service and LoadBalancer, which is the responsible for forwarding the request to either of the pods it is responsible for. 
To make our application more secure, we need a place to store the application's HTTPS certificate and forwarding. Once this is implemented, we have a mechanism that accepts requests on specific ports and forwards them to our Kubernetes Service. The Ingress Controller can be used for this. Unlike other Kubernetes Controllers, it is not part of our cluster by default but we have to install it separately. If you are using minikube, you can check the available addons through minikube addons list And enable ingress (in case it is not enabled) minikube addons enable ingress If you are on microk8s, you can enable ingress through microk8s enable ingress You can check whether it is running through — if the pods are running for nginx-ingress, they will be listed kubectl get pods --all-namespaces | grep nginx-ingress If you receive an empty output, you might have to wait a little bit longer for Ingress to start. Here is the YAML definition of our Ingress resource apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata: name: react-application annotations: kubernetes.io/ingress.class: \"nginx\" ingress.kubernetes.io/ssl-redirect: \"false\" nginx.ingress.kubernetes.io/ssl-redirect: \"false\"\nspec: rules: - http: paths: - path: /demo pathType: ImplementationSpecific backend: service: name: react-application port: number: 8080 The annotation section is used to provide additional information to the Ingress controller. The path is the path after the You can find a list of annotations and the controllers that support them on this page: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md We have to set the ssl redirect to false since we do not have an ssl certificate. You can create the resource through kubectl create \\\n-f If your application's service is set to NodePort, you will want to change it back into ClusterIP since there is no need anymore for NodePort. What happens when we create a new Ingress resource? kubectl will send a request to the API Server of our cluster requesting the creation of a new Ingress resource The ingress controller is consistently checking the cluster to see if there is a new ingress resource Once it sees that there is a new ingress resource, it will configure its loadbalancer Ingress is a kind of service that runs on all nodes within your cluster. As long as requests match any of the rules defined within Ingress, Ingress will forward the request to the respective service. To view the ingress running inside your cluster, use kubectl get ing Note that it might not work properly on microk8s.","breadcrumbs":"Kubernetes Ingress » Example Notes","id":"63","title":"Example Notes"},"64":{"body":"","breadcrumbs":"Service Mesh » Service Mesh","id":"64","title":"Service Mesh"},"65":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"Service Mesh » 100Days Resources","id":"65","title":"100Days Resources"},"66":{"body":"Microsoft Introduction to Service Mesh Nginx explanation on Service Mesh","breadcrumbs":"Service Mesh » Learning Resources","id":"66","title":"Learning Resources"},"67":{"body":"The goal is: Higher Portability: Deploy it wherever Higher Agility: Update whenever Lower Operational Management: Invest low cognitive Lower Security Risk How do Services find each other? Answering this question allows us to break down the value of Service Mesh — different Services have to find each other. 
If one service fails, the traffic has to be routed to another service so that requests don't fail Service-discovery can become the biggest bottleneck. Open platform, independent service mesh. In its simplest form a service mesh is a network of your microservices, managing the traffic between services. This allows it to manage the different interactions between your microservices. A lot of the responsibilities that a service mesh has could be managed on an application basis. However, with the service mesh takes that logic out of the application specific services and manages those on an infrastructure basis. Why do you need Service Mesh? Istio is a popular solution for managing communication between microservices. When we move from monolithic to microservice application, we run into several issues that we did not have before. It will need the following setup Each microservice has its own business logic — all service endpoints must be configured Ensure Security standards with firewall rules set-up — every service inside the cluster can talk to every other service if we do not have any additional security inside — for more important applications this is not secure enough. This may result in a lot of complicated configuration. To better manage the application configuration, everything but the business logic could be packed into its own Sidecar Proxy, which would then be responsible to Handle the networking logic Act as a Proxy Take care of third-party applications Allow cluster operators to configure everything easily Enable developers to focus on the actual business logic A service mesh will have a control plane that will inject this business logic automatically into every service. Once done, the microservices can talk to each other through proxies. Core feature of service mesh: Traffic Splitting: When you spin up a new service in response to a high number of requests, you only want to forward about 10% of the traffic to the new Service to make sure that it really works before distributing the traffic between all services. This may also be referred to as Canary Deployment \"In a service mesh, requests are routed between microservices through proxies in their own infrastructure layer. For this reason, individual proxies that make up a service mesh are sometimes called “sidecars,” since they run alongside each service, rather than within them. Taken together, these “sidecar” proxies—decoupled from each service—form a mesh network.\" https://www.redhat.com/cms/managed-files/service-mesh-1680.png \"A sidecar proxy sits alongside a microservice and routes requests to other proxies. Together, these sidecars form a mesh network.\" Service Mesh is just a paradigm and Istio is one of the implementations Istio allows Service A and Service B to communicate to each other. Once your microservices scale, you have more services, the service mesh becomes more complicated — it becomes more complicated to manage the connection between different services. That's where Istio comes in. It runs on Kubernetes Nomad Console I will focus on Kubernetes. Features Load Balancing: Receive some assurance of load handling — enabled some level of abstraction that enables services to have their own IP addresses. 
Fine Grain Control: to make sure to have rules, fail-overs, fault connection Access Control: Ensure that the policies are correct and enforceable Visibility: Logging and graphing Security: It manages your TSL certificates Additionally, Service Mesh makes it easier to discover problems within your microservice architecture that would be impossible to discover without. Components — Connect to the Control Plane API within Kubernetes — note that this is the logic of Istio up to version 1.5. The latest versions only deal with Istiod. Pilot: Has A/B testing, has the intelligence how everything works, the driver of Istio Cit: Allows Service A and Service B to talk to each other How do we configure Istio? You do not have to modify any Kubernetes Deployment and Service YAML files Istio is configured separately from application configuration Since Istio is implemented through Kubernetes Custom Resource Definitions (CRD), it can be easily extended with other Kubernetes-based plug-ins It can be used like any other Kubernetes object The Istio-Ingress Gateway is an entry-point to our Kubernetes cluster. It runs as a pod in our cluster and acts as a LoadBalancer.","breadcrumbs":"Service Mesh » Example Notes","id":"67","title":"Example Notes"},"68":{"body":"With different projects and companies creating their own Service Mesh, the need for standards and specifications arise. One of those standards is provided by the Service Mesh Interface (SMI). In its most basic form, SMI provides a list of Service Mesh APIs. Separately SMI is currently a CNCF sandbox project. SMI provides a standard interface for Service Mesh on Kubernetes Provides a basic set of features for the most common use cases Flexible to support new use case over time Website with more information SMI covers the following Traffic policy – apply policies like identity and transport encryption across services Traffic telemetry – capture key metrics like error rate and latency between services Traffic management – shift traffic between different services","breadcrumbs":"Service Mesh » Service Mesh Interface","id":"68","title":"Service Mesh Interface"},"69":{"body":"Gloo Mesh: Enterprise version of Istio Service Mesh but also has a Gloo Mesh open source version. Linkerd : Its main advantage is that it is lighter than Istio itself. Note that Linkerd was origially developed by Buoyant. Linkerd specifically, is run through an open governance model. Nginx service mesh : Focused on the data plane and security policies; platform agnostic; traffic orchestration and management","breadcrumbs":"Service Mesh » Other Service Mesh Examples","id":"69","title":"Other Service Mesh Examples"},"7":{"body":"Anais Urlichs' public Notion Rishab Kumar's GitHub repository","breadcrumbs":"Introduction » List of Example Notes by the Community","id":"7","title":"List of Example Notes by the Community"},"70":{"body":"","breadcrumbs":"Kubernetes Volumes » Kubernetes Volumes","id":"70","title":"Kubernetes Volumes"},"71":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. 
related to the topic here!","breadcrumbs":"Kubernetes Volumes » 100Days Resources","id":"71","title":"100Days Resources"},"72":{"body":"https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/ https://codeburst.io/kubernetes-storage-by-example-part-1-27f44ae8fb8b https://youtu.be/0swOh5C3OVM","breadcrumbs":"Kubernetes Volumes » Learning Resources","id":"72","title":"Learning Resources"},"73":{"body":"We cannot store data within our containers — if our pod crashes and restarts another container based on that container image, all of our state will be lost. Kubernetes does not give you data-persistence out of the box. Volumes are references to files and directories made accessible to containers that form a pod. So, they basically keep track of the state of your application and if one pod dies the next pod will have access to the Volume and thus, the previously recorded state. There are over 25 different Volume types within Kubernetes — some of which are specific to hosting providers e.g. AWS The difference between volumes is the way that files and directories are created. Additionally, Volumes can also be used to access other Kubernetes resources such as to access the Docker socket. The problem is that the storage has to be available across all nodes. When a pod fails and is restarted, it might be started on a different node. Overall, Kubernetes Volumes have to be highly error-resistant — and even survive a crash of the entire cluster. Volumes and Persistent Volumes are created like other Kubernetes resources, through YAML files. Additionally, we can differentiate between remote and local volumes — each volume type has its own use case. Local volumes are tied to a specific node and do not survive cluster disasters. Thus, you want to use remote volumes whenever possible. apiVersion: v1\nkind: Pod\nmetadata: name: empty-dir\nspec: containers: - name: busybox-a command: ['tail', '-f', '/dev/null'] image: busybox volumeMounts: - name: cache mountPath: /cache - name: busybox-b command: ['tail', '-f', '/dev/null'] image: busybox volumeMounts: - name: cache mountPath: /cache volumes: - name: cache emptyDir: {} Create the resource: kubectl apply -f empty-dir Write to the file: kubectl exec empty-dir --container busybox-a -- sh -c \"echo \\\"Hello World\\\" > /cache/hello.txt\" Read what is within the file kubectl exec empty-dir --container busybox-b -- cat /cache/hello.txt However, to ensure that the data will be saved beyond the creation and deletion of pods, we need Persistent volumes. Ephemeral volume types only have the lifetime of a pod — thus, they are not of much use if the pod crashes. A persistent volume will have to take the same storage as the physical storage. Storage in Kubernetes is an external plug-in to our cluster. This way, you can also have multiple different storage resources. The storage resources is defined within the PersistentVolume YAML A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications. 
— Kubernetes — Volumes apiVersion: v1\nkind: Pod\nmetadata: name: host-path\nspec: containers: - name: busybox command: ['tail', '-f', '/dev/null'] image: busybox volumeMounts: - name: data mountPath: /data volumes: - name: data hostPath: path: /data Create the volume kubectl apply -f empty-dir Read from the volume kubectl exec host-path -- cat /data/hello.txt","breadcrumbs":"Kubernetes Volumes » Example Notes","id":"73","title":"Example Notes"},"74":{"body":"","breadcrumbs":"KinD Cluster » KinD","id":"74","title":"KinD"},"75":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"KinD Cluster » 100Days Resources","id":"75","title":"100Days Resources"},"76":{"body":"Official Documentation","breadcrumbs":"KinD Cluster » Learning Resources","id":"76","title":"Learning Resources"},"77":{"body":"","breadcrumbs":"KinD Cluster » Example Notes","id":"77","title":"Example Notes"},"78":{"body":"These are the docs that I used to set-up all of my resources on Windows: Setting up the Ubuntu in Windows https://docs.microsoft.com/en-us/windows/wsl/install-win10#step-4---download-the-linux-kernel-update-package https://github.com/microsoft/WSL/issues/4766 Use WSL in Code https://docs.docker.com/docker-for-windows/wsl/ https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-wsl Kubectl installation https://devkimchi.com/2018/06/05/running-kubernetes-on-wsl/ Create Kind cluster https://kubernetes.io/blog/2020/05/21/wsl-docker-kubernetes-on-the-windows-desktop/ If you want to run microk8s on WSL, you have to get a snap workaround described here https://github.com/microsoft/WSL/issues/5126","breadcrumbs":"KinD Cluster » Setting up Kind on Windows and Cluster Comparison","id":"78","title":"Setting up Kind on Windows and Cluster Comparison"},"79":{"body":"Especially when you are just getting started, you might want to spin up a local cluster on your machine. This will allow you to run tests, play around with the resources and more without having to worry much about messing something up :) — If you watched any of my previous videos, you will have already a good understand of how much I enjoy trying out different things — setting things up just to try something out — gain a better understanding — and then to delete everything in the next moment. Also, you might have seen that I already have two videos on microk8s —mainly since the minikube set-up on my Ubuntu did not properly work. Now that I am on Windows, I might actually have more options. So let's take a look at those and see how they compare.","breadcrumbs":"KinD Cluster » Running a local cluster","id":"79","title":"Running a local cluster"},"8":{"body":"If you have contributed to this book, please add yourself to the list :) Anais Urlichs GitHub YouTube Twitter DevOps","breadcrumbs":"Contributors » About the contributors","id":"8","title":"About the contributors"},"80":{"body":"When you install Docker for Desktop (or however you call it) you can enable Kubernetes: The set-up will take a few seconds but then you have access to a local cluster both from the normal Windows terminal (I am still new to Windows, so excuse any terminology that is off) or through the WSL. kubectl config get-contexts and you will see a display of the different clusters that you have set-up. 
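The output is a small table along these lines; the context names depend entirely on what you have installed, the ones below are just examples:
CURRENT   NAME             CLUSTER          AUTHINFO         NAMESPACE
*         docker-desktop   docker-desktop   docker-desktop
          kind-kind        kind-kind        kind-kind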
This will allow you to switch to a different cluster: kubectl config use-context Below are several other options highlighted.","breadcrumbs":"KinD Cluster » Kubernetes in Docker","id":"80","title":"Kubernetes in Docker"},"81":{"body":"minikube is probably the best known of the three; maybe because it is the oldest. When you are using minikube, it will spin up a VM that runs a single Kubernetes node. To do so it needs hypervisor. Now, if you have never interacted with much virtualisation technology, you might think of a hypervisor as something like this: I assure you, it is not. So what is a Hypervisor then? A Hypervisor is basically a form of software, firmware, or hardware that is used to set-up virtual environments on your machine. Running minikube, you can spin up multiple nodes as well, each will be running their own VM (Virtual Machine). For those of you, who are really into Dashboards, minikube provides a Dashboard, too! Personally, I am not too much of a fan but that is your choice. If you would like some sort of Dashboard, I highly recommend you k9s. Here is my introductory video if you are curious. This is what the minikube Dashboard looks like — just your usual UI 😊 Now how easy is the installation of minikube? Yes, I mentioned that I had some problems installing minikube on Ubuntu and thus, went with microk8s. At that time, I did not know about kind yet. Going back to the original question, if you are using straight up Windows it is quite easy to install, if you are using Linux in Windows however, it might be a bit different — tbh I am a really impatient person, so don't ask me. Documentation","breadcrumbs":"KinD Cluster » minikube","id":"81","title":"minikube"},"82":{"body":"Kind is quite different to minikube, instead of running the nodes in VMs, it will run nodes as Docker containers. Because of that, it is supposed to start-up faster — I am not sure how to test this, they are both spinning up the cluster in an instance and that is good enough for me. However, note that kind requires more space on your machine to run than etiher microk8s or minikube. In fact, microk8s is actually the smallest of the three. Like detailed in this article, you can With 'kind load docker-image my-app:latest' the image is available for use in your cluster Which is an additional feature. If you decide to use kind, you will get the most out of it if you are fairly comfortable to use YAML syntax since that will allow you to define different cluster types. Documentation The documentation that I used to install it","breadcrumbs":"KinD Cluster » What is kind?","id":"82","title":"What is kind?"},"83":{"body":"Microk8s is in particular useful if you want to run a cluster on small devices; it is better tested in ubuntu than the other tools. Resulting, you can install it with snap super quickly! In this case, it will basically run the cluster separate from the rest of the stuff on your computer. It also allows for multi-node clusters, however, I did not try that yet, so I don't know how well that actually works. Also note that if you are using microk8s on MacOS or Windows, you will need a hypevisor of sorts. Running it on Ubuntu, you do not. 
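A minimal sketch of that snap-based setup on Ubuntu; the addons enabled here are only examples:
sudo snap install microk8s --classic
microk8s status --wait-ready
microk8s enable dns ingress
microk8s kubectl get nodes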
Documentation My video https://youtu.be/uU-8Zcst5Qk","breadcrumbs":"KinD Cluster » microk8s","id":"83","title":"microk8s"},"84":{"body":"This article by Max Brenner offers a really nice comparison between the different tools with a comparison table https://brennerm.github.io/posts/minikube-vs-kind-vs-k3s.html","breadcrumbs":"KinD Cluster » Direct comparison","id":"84","title":"Direct comparison"},"85":{"body":"","breadcrumbs":"K3s and K3sup » K3s and K3sup","id":"85","title":"K3s and K3sup"},"86":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"K3s and K3sup » 100Days Resources","id":"86","title":"100Days Resources"},"87":{"body":"Overview README Website Documentation","breadcrumbs":"K3s and K3sup » Learning Resources","id":"87","title":"Learning Resources"},"88":{"body":"When you are choosing a Kubernetes distribution, the most obvious one is going to be K8s, which is used by major cloud vendors and many more. However, if you just want to play around on your local machine with Kubernetes, test some tools or learn about Kubernetes resources, you would likely go with something like minikube, microk8s or kind. Now we just highlighted two use cases for different types of Kubernetes clusters. What about use cases where you want to run Kubernetes on really small devices, such as raspberry pis? What about IoT devices? Generally, devices where you want to run containers effectively without consuming too much resources. In those cases, you could go with K3s.","breadcrumbs":"K3s and K3sup » Example Notes","id":"88","title":"Example Notes"},"89":{"body":"In short, k3s is half the size in terms of memory footprint than \"normal Kubernetes\". The origin of k3s is Rio, which was developed by Rancher. However, they then decided to branch it out of Rio into its own tool — which became K3s. It was more of a figure it out by doing it. The important aspect of k3s is that it was oriented around production right from the beginning. You want to be able to run Kubernetes in highly resource constraint environments — which is not always possible with pure Kubernetes. K3s is currently a CNCF sandbox project — the development is led by Rancher, which provides Kubernetes as a service (?) Instead of Docker, it runs Containerd — Note that Kubernetes itself is also moving to Containerd as its container runtime. This does not mean that you will not be able to run Docker containers. If I just scared you, please watch this video to clarify. Designed based on the following goals: Lightweight: Show work on small resource environment Compatibility: You should be able to use most of the tools you can use with \"normal k8s\" Ethos: Everything you need to use k3s is built right in Btw: K3s is described as the second most popular Kubernetes distribution (Read in a Medium post, please don't quote me on this) How does k3s differ from \"normal\" Kubernetes? https://youtu.be/FmLna7tHDRc Do you have questions? We have answers! Now I recorded last week an episode with Alex Ellis, who built k3sup, which can be used to deploy k3s. Here are some of the questions that I had before the live recording that are answered within the recording itself: Let's hear a bit about the background of k3sup — how did it come about? How are both pronounced? How would you recommend learning about k3s — let's assume you are complete new, where do you start? 
Walking through the k3s architecture https://k3s.io/ What is the difference between k8s and k3s When would I prefer to use k8s over k3s What can I NOT do with k3s? Or what would I NOT want to do? Do I need a VM to run k3s? It is mentioned in some blog posts — let's assume I do not have a raspberry pi — I have a VM, you can set them up quite easily; why would I run K3s on a VM? So we keep discussing that this is great for a Kubernetes homelab or IoT devices — is that not a bit of an overkills to use Kubernetes with it? Is the single node k3s similar to microk8s — having one instance that is both worker node https://youtu.be/_1kEF-Jd9pw Use cases for k3s: Single node clusters Edge IoT CI Development Environments and Test Environments Experiments, useful for academia ARM Embedding K8s Situations where a PhD in K8s clusterology is infeasible","breadcrumbs":"K3s and K3sup » What is K3s?","id":"89","title":"What is K3s?"},"9":{"body":"","breadcrumbs":"Introduction to Kubernetes » What is Kubernetes and why do we want it?","id":"9","title":"What is Kubernetes and why do we want it?"},"90":{"body":"There are three ways for installing k3s* The quick way shown below directly with k3s Supposedly easier way with k3s up The long way that is detailed over several docs pages https://rancher.com/docs/k3s/latest/en/installation/ *Actually there are several more ways to install k3s like highlighted in this video: https://youtu.be/O3s3YoPesKs This is the installation script: curl -sfL https://get.k3s.io | sh - Note that this might take up to 30 sec. Once done, you should be able to run k3s kubectl get node What it does The K3s service will be configured to automatically restart after node reboots or if the process crashes or is killed Additional utilities will be installed, including kubectl, crictl, ctr, k3s-killall.sh , and k3s-uninstall.sh A kubeconfig file will be written to /etc/rancher/k3s/k3s.yaml and the kubectl installed by K3s will automatically use it Once this is done, you have to set-up the worker nodes that are used by k3s curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -","breadcrumbs":"K3s and K3sup » Install k3s","id":"90","title":"Install k3s"},"91":{"body":"Leverage the KUBECONFIG environment variable: export KUBECONFIG=/etc/rancher/k3s/k3s.yaml\nkubectl get pods --all-namespaces\nhelm ls --all-namespaces If you do not set KUBECONFIG as an environment variable,","breadcrumbs":"K3s and K3sup » Once installed, access the cluster","id":"91","title":"Once installed, access the cluster"},"92":{"body":"The following is taken from the k3sup READM E file $ curl -sLS https://get.k3sup.dev | sh\n$ sudo install k3sup /usr/local/bin/\n$ k3sup --help There is a lot to unpack... WHAT DOES THIS SENTENCE MEAN? This tool uses ssh to install k3s to a remote Linux host. You can also use it to join existing Linux hosts into a k3s cluster as agents. If you want to get an A to Z overview, watch the following video https://youtu.be/2LNxGVS81mE If you are still wondering about what k3s, have a look at this video https://youtu.be/-HchRyqNtkU How is Kubernetes modified? 
This is not a Kubernetes fork Added rootless support Dropped all third-party storage drivers, completely CSI is supported and preferred Following this tutorial; note that there are many others that you could use — have a look at their documentation: https://rancher.com/blog/2020/k3s-high-availability","breadcrumbs":"K3s and K3sup » Installing with k3sup","id":"92","title":"Installing with k3sup"},"93":{"body":"","breadcrumbs":"Kustomize » Kustomize","id":"93","title":"Kustomize"},"94":{"body":"Video by Anais Urlichs Add your blog posts, videos etc. related to the topic here!","breadcrumbs":"Kustomize » 100Days Resources","id":"94","title":"100Days Resources"},"95":{"body":"Their webiste has lots of amazing videos","breadcrumbs":"Kustomize » Learning Resources","id":"95","title":"Learning Resources"},"96":{"body":"Configuration Management for Kubernetes — \"A template free way to customize application configuration that simplifies the use of off-the-shelf applications\" When I create a YAML, the manifests go into the API server of the main node; the cluster then aims to create the resources within the cluster to match the desired state defined in the YAML Different to what we have seen before in the videos, YAML can get super complex! Additionally, there are several aspects of the state of our deployment that we want to frequently change. Including: Namespaces, Labels, Container Registry, Tags and more Then, we have resources and processes that we want to change a bit less frequently, such as Management Parameters Environment-specific processes and resources Infrastructure mapping Kustomize allows you to specify different values of your Kubernetes resources for different situations. To make your YAML resources more dynamic and to apply variations between environments, you can use Kustomize.","breadcrumbs":"Kustomize » Example Notes","id":"96","title":"Example Notes"},"97":{"body":"Install Kustomize — just reading about it is not going to help us. Here is their official documentation However, their options did not work for the Linux installation, which I also need on WSL — this one worked: https://weaveworks-gitops.awsworkshop.io/20_weaveworks_prerequisites/15_install_kustomize.html Kustomize is part of kubectl so it should work without additional installation using 'kubectl -k' to specify that you want to use kustomize. Next, scrolling through their documentation, they provide some amazing resources with examples on how to use kubectl correctly — but I am looking for kustomize example Have a look at their guides if you are curious https://kubectl.docs.kubernetes.io/guides/ So with kustomize, we want to have our YAML tempalte and then customize the values provided to that resource manifest. However, each directory that is referenced within kustomized must have its own kustomization.yaml file. First, let's set-up a deployment and a service, like we did in one of the previous days. 
The Deployment: apiVersion: apps/v1\nkind: Deployment\nmetadata: name: react-application\nspec: replicas: 2 selector: matchLabels: run: react-application template: metadata: labels: run: react-application spec: containers: - name: react-application image: anaisurlichs/react-article-display:master ports: - containerPort: 80 imagePullPolicy: Always env: - name: CUSTOM_ENV_VARIABLE value: Value defined by Kustomize ❤️ The service apiVersion: v1\nkind: Service\nmetadata: name: react-application labels: run: react-application\nspec: type: NodePort ports: - port: 8080 targetPort: 80 protocol: TCP name: http selector: run: react-application Now we want to customize that deployment with specific values. Set-up a file called 'kustomization.yaml' **resources: - deployment.yaml - service.yaml** Within this file, we will specify specific values that we will use within our Deployment resource. From a Kubernetes perspective, this is just another Kuberentes resource. One thing worth mentioning here is that Kustomize allows us to combine manifests from different repositories. So once you want to apply the kustomize resources, you can have a look at the changed resources: kustomize build . this will give you the changed YAML files with whatever you had defined in your resources. Now you can forward those resources into another file: kustomize build . > mydeployment.yaml Now if you have kubectl running, you could specify the resource that you want to use through : kubectl create -k . -k refers here to - -kustomize, you could use that flag instead if you wanted to. This will create our Deployment and service — basically all of the resources that have been defined in your kustomization.yaml file kubectl get pods -l