Spring Boot to Kubernetes with a ReplicaSet
Kubernetes Master Components: Etcd, API Server, Controller Manager, and Scheduler
etcd is a distributed, reliable key-value store for the most critical data of a distributed system, with a focus on being:
- Simple: well-defined, user-facing API (gRPC)
- Secure: automatic TLS with optional client cert authentication
- Fast: benchmarked 10,000 writes/sec
- Reliable: properly distributed using Raft. etcd is written in Go and uses the Raft consensus algorithm to manage a highly available replicated log. Raft is a consensus algorithm designed as an alternative to Paxos. The consensus problem involves multiple servers agreeing on values, a common problem that arises in the context of replicated state machines. Raft defines three roles (Leader, Follower, and Candidate) and achieves consensus via an elected leader. For further information, please read the [Raft paper](https://raft.github.io/raft.pdf).
## [etcdctl](https://github.com/etcd-io/etcd/tree/main/etcdctl)

etcdctl is the command-line tool, written in Go, for manipulating an etcd cluster. It can be used to perform a variety of actions, such as:
- Set, update and remove keys.
- Verify the cluster health.
- Add or remove etcd nodes.
- Generate database snapshots.

You can play online with a 5-node etcd cluster at http://play.etcd.io.
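As a sketch of those actions (this assumes a reachable etcd cluster with the v3 API; the key name and snapshot file are illustrative):

```shell
# Set, read, and remove a key
etcdctl put /config/greeting "hello"
etcdctl get /config/greeting
etcdctl del /config/greeting

# Verify cluster health and list members
etcdctl endpoint health
etcdctl member list

# Generate a database snapshot
etcdctl snapshot save backup.db
```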
The Scheduler watches for unscheduled pods and binds them to nodes via the /binding pod subresource API, according to the availability of the requested resources. Once the pod has a node assigned, the regular behavior of the Kubelet is triggered and the pod and its containers are created.
The Kubernetes Controller Manager is a daemon that embeds the core control loops (also known as “controllers”) shipped with Kubernetes. Basically, a controller watches the state of the cluster through the API Server watch feature and, when it gets notified, it makes the necessary changes attempting to move the current state towards the desired state.
When you interact with your Kubernetes cluster using the kubectl command-line interface, you are actually communicating with the master API Server component. A typical pod-creation request flows like this:
- kubectl writes to the API Server.
- API Server validates the request and persists it to etcd.
- etcd notifies back the API Server.
- API Server invokes the Scheduler.
- Scheduler decides which node to run the pod on and returns that decision to the API Server.
- API Server persists it to etcd.
- etcd notifies back the API Server.
- API Server invokes the Kubelet in the corresponding node.
- Kubelet talks to the Docker daemon using the API over the Docker socket to create the container.
- Kubelet updates the pod status to the API Server.
- API Server persists the new state in etcd.
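One way to observe this flow on a live cluster (the pod name and image here are illustrative, not from the project above):

```shell
# In one terminal, watch pod phase transitions as the control loop runs
kubectl get pods --watch

# In another terminal, create a pod and inspect its lifecycle
kubectl run demo-pod --image=nginx

# The Events section shows Scheduled, Pulling, Created, and Started steps
kubectl describe pod demo-pod

# The node chosen by the Scheduler ends up in the pod spec
kubectl get pod demo-pod -o jsonpath='{.spec.nodeName}'
```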
1- Docker CLI build (preferred)
docker build -t ahmedalsalih/spring-k8s:v2.2 .
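The docker build command assumes a Dockerfile in the current directory; a minimal sketch for a Spring Boot executable jar (the base image and jar path are assumptions based on the default Maven layout, not taken from the project):

```dockerfile
# Minimal image for a Spring Boot executable jar
FROM eclipse-temurin:17-jre
COPY target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]
```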
or
2- Maven build
./mvnw spring-boot:build-image
docker run -p 9090:8080 ahmedalsalih/spring-k8s:v2.2
Run Git Bash as Administrator, then type:
minikube start
kubectl create deployment springk8sapp --image=ahmedalsalih/spring-k8s:v2.2
A Deployment, a ReplicaSet, and a pod are created. Verify:
kubectl get pods
kubectl get replicaset
kubectl describe pod springk8sapp-64c6ff74cd-86hpv
minikube dashboard
kubectl scale -n default deployment springk8sapp --replicas=3
kubectl apply -f <spec.yaml>
---------------------------------------------- Delete ----------------------------------------------
kubectl delete --all deployments
Check: all three commands below should return "No resources found in default namespace."
kubectl get pods
kubectl get deployment
kubectl get replicaset
kubectl apply -f deployment.yaml
kubectl get deployment -o wide
kubectl get pods --selector app=springk8s-app
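The deployment.yaml applied above might look like the following sketch. The names, label, and image come from the commands in this section; the replica count and container port are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springk8s-app-deployment
spec:
  replicas: 3                      # assumption, matching the scale example above
  selector:
    matchLabels:
      app: springk8s-app
  template:
    metadata:
      labels:
        app: springk8s-app
    spec:
      containers:
        - name: springk8s-app
          image: ahmedalsalih/spring-k8s:v2.2
          ports:
            - containerPort: 8080  # the port the Spring Boot app listens on
```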
###############################################################################################################
Check the pod log using kubectl logs <Pod name>:
kubectl logs springk8s-app-deployment-df7f649b7-88zg9
kubectl create deployment springk8sapp --image=ahmedalsalih/spring-k8s:v2.2

A deployment can be exposed as a service with --type=NodePort or with --type=LoadBalancer:
kubectl expose deployment springk8sapp --type=NodePort --port=8080
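With a NodePort service on minikube, one way to get a reachable URL (assuming the service is named springk8sapp, matching the expose command above):

```shell
# Print the URL of the NodePort service, then hit it with curl
minikube service springk8sapp --url
curl "$(minikube service springk8sapp --url)"
```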
minikube tunnel (run in a separate terminal; on minikube it is needed for LoadBalancer services to receive an external IP)
kubectl expose deployment springk8s-app-deployment --type=LoadBalancer --port=8080
or:
kubectl expose deployment springk8s-app-deployment --name=springk8s-service --port=80 --target-port=8080 --type=LoadBalancer
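The second expose command above has a declarative equivalent; a sketch, with field values taken from that command (kubectl expose copies the deployment's selector, assumed here to be app: springk8s-app as used earlier):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: springk8s-service
spec:
  type: LoadBalancer
  selector:
    app: springk8s-app     # assumed deployment label
  ports:
    - port: 80             # port the service exposes
      targetPort: 8080     # port the container listens on
```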
kubectl get service
Display the service with wide output:
kubectl get service -o wide
kubectl delete svc <service name>, e.g.:
kubectl delete svc springk8s-app-deployment
minikube delete --all