Q1. Task: Given an existing Kubernetes cluster running version 1.18.8, upgrade all of the Kubernetes control plane and node components on the master node only to version 1.19.0. You are also expected to upgrade kubelet and kubectl on the master node.
kubectl config use-context mk8s
Note: Be sure to drain the master node before upgrading it and uncordon it after the upgrade. Do not upgrade the worker nodes, etcd, the container manager, the CNI plugin, the DNS service or any other addons.
Answer
kubectl drain <node-to-drain> --ignore-daemonsets
apt-get update && \
apt-get install -y --allow-change-held-packages kubeadm=1.19.0-00
kubeadm version
kubeadm upgrade plan
kubeadm upgrade apply v1.19.0
apt-get update && \
apt-get install -y --allow-change-held-packages kubelet=1.19.0-00 kubectl=1.19.0-00
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl uncordon <node-to-drain>
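To verify the upgrade took effect (the node name controlplane is assumed from the prompt in this environment):
```bash
kubectl get nodes   # the master node should now report VERSION v1.19.0
kubeadm version     # confirms the upgraded kubeadm
kubelet --version   # confirms the upgraded kubelet
```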
Q2. Task: From the pod label name=cpu-user, find pods running high CPU workloads and write the name of the pod consuming the most CPU to the file /opt/KUTR00401/KUTR00401.txt (which already exists).
kubectl config use-context k8s
Answer
kubectl top pod -l name=cpu-user -A
echo '<pod-name>' >> /opt/KUTR00401/KUTR00401.txt
or
kubectl top pod -l name=cpu-user -A --sort-by=cpu --no-headers | head -1
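Putting it together, a hedged one-liner that writes the top consumer's name directly (assuming the NAMESPACE/NAME/CPU/MEMORY column order that kubectl top pod -A prints):
```bash
kubectl top pod -l name=cpu-user -A --sort-by=cpu --no-headers | head -1 | awk '{print $2}' >> /opt/KUTR00401/KUTR00401.txt
```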
Q3. Task: Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt
kubectl config use-context k8s
Answer
kubectl get nodes
kubectl get nodes | grep -w Ready | wc -l # "Ready" as a whole word, so NotReady is not counted
kubectl describe nodes | grep -i taints | grep -i noschedule | wc -l # nodes tainted NoSchedule
echo 3 > /opt/KUSC00402/kusc00402.txt # write the difference of the two counts (3 is only an example)
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{"\n"}{end}' \
&& kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True" | wc -l # alternative Ready count; still subtract the tainted nodes
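A hedged sketch that computes the number in one go (assumes every NoSchedule-tainted node is also Ready and lists its taint on the Taints: line):
```bash
ready=$(kubectl get nodes --no-headers | grep -cw Ready)
tainted=$(kubectl describe nodes | grep -i taints | grep -ic noschedule)
echo $((ready - tainted)) > /opt/KUSC00402/kusc00402.txt
```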
Q4. Task: Monitor the logs of pod foobar and:
- Extract log lines corresponding to error unable-to-access-website
- Write them to /opt/KUTR00101/foobar
kubectl config use-context k8s
Answer
kubectl logs foobar | grep 'unable-to-access-website' > /opt/KUTR00101/foobar
Q5. Task: Schedule a pod as follows:
- Name: nginx-kusc00401
- Image: nginx
- Node selector: disk=ssd
kubectl config use-context k8s
Answer
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: ssd
kubectl create -f pod.yaml
kubectl get pods
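To check the result (a hedged verification; node names vary by cluster):
```bash
kubectl get pod nginx-kusc00401 -o wide   # NODE column shows where the pod was scheduled
kubectl get nodes -l disk=ssd             # nodes that match the selector
```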
Q6. Task: List all persistent volumes sorted by capacity, saving the full kubectl output to /opt/KUCC00102/volume_list. Use kubectl's own functionality for sorting the output, and do not manipulate it any further.
Answer
# List PersistentVolumes sorted by capacity
kubectl get pv --sort-by=.spec.capacity.storage
kubectl get pv --sort-by=.spec.capacity.storage > /opt/KUCC00102/volume_list
Q7. Task: Create a file /opt/KUCC00302/kucc00302.txt that lists all pods that implement service baz in namespace development. The format of the file should be one pod name per line.
Note: selector: name=foo
Answer
kubectl describe service baz -n development # confirm the selector (name=foo)
kubectl get pods -l name=foo -n development -o name | cut -d/ -f2 > /opt/KUCC00302/kucc00302.txt # -o name prints pod/<name>; cut strips the prefix
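An alternative that emits bare names one per line using kubectl's custom-columns output, so no post-processing is needed:
```bash
kubectl get pods -l name=foo -n development -o custom-columns=:metadata.name --no-headers > /opt/KUCC00302/kucc00302.txt
```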
Q8. Task: Create a pod as follows:
- Name: non-persistent-redis
- Container image: redis
- Volume name: cache-control
- Mount path: /data/redis
The pod should launch in the staging namespace and the volume must not be persistent.
Answer
vi volume.yaml
apiVersion: v1
kind: Pod
metadata:
  name: non-persistent-redis
  namespace: staging
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: cache-control
      mountPath: /data/redis
  volumes:
  - name: cache-control
    emptyDir: {}
kubectl create -f volume.yaml
kubectl get pods -n staging
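A quick check that the emptyDir is mounted where the task requires (hedged; assumes the container image ships ls):
```bash
kubectl exec -n staging non-persistent-redis -- ls -ld /data/redis
```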
Q9. Task: Delete the pods named nginx-dev and nginx-prod.
Answer
kubectl get pods -o wide
kubectl delete pods nginx-dev nginx-prod
Q10. Task: A Kubernetes worker node named wk8s-node-0 is in state NotReady. Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.
kubectl config use-context wk8s
You can ssh to the failed node using:
ssh wk8s-node-0
You can assume elevated privileges on the node with the following command:
sudo -i
Answer
kubectl get nodes
kubectl describe node wk8s-node-0 # view faulty node info
ssh wk8s-node-0 # enter the node
ps aux | grep kubelet
sudo -i
systemctl status kubelet
systemctl start kubelet
systemctl enable kubelet
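If the kubelet does not start cleanly, its logs usually reveal the root cause; a typical triage sketch:
```bash
journalctl -u kubelet --no-pager | tail -n 20   # recent kubelet log lines
systemctl enable --now kubelet                  # start it now and keep it enabled across reboots
kubectl get nodes                               # back on the control plane: the node should turn Ready
```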
Q11. Task: Given a partially-functioning Kubernetes cluster, identify symptoms of failure on the cluster. Determine the node, the failing service, and take actions to bring up the failed service and restore the health of the cluster. Ensure that any changes are made permanent.
You can ssh to the relevant nodes:
[student@node-1]$ ssh <nodename>
You can assume elevated privileges on the node with the following command:
[student@nodename]$ sudo -i
Answer
cat /var/lib/kubelet/config.yaml # check the kubelet config; the static pod path should read:
staticPodPath: /etc/kubernetes/manifests
systemctl restart kubelet
systemctl enable kubelet
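After restarting the kubelet, a quick health sweep confirms the cluster recovered (a hedged checklist):
```bash
systemctl status kubelet          # active (running) and enabled
kubectl get nodes                 # all nodes Ready
kubectl get pods -n kube-system   # control plane pods Running
```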
Q12. Task: Set the node named ek8s-node-1 as unavailable and reschedule all of the pods running on it.
[student@node-1]$ ssh ek8s
kubectl config use-context ek8s
Answer
kubectl cordon ek8s-node-1
kubectl drain ek8s-node-1 --delete-local-data --ignore-daemonsets --force # newer kubectl renames --delete-local-data to --delete-emptydir-data
Q13. Task: Configure the kubelet systemd-managed service, on the node labelled with name=wk8s-node-1, to launch a pod containing a single container of image httpd named webtool automatically. Any spec files required should be placed in the /etc/kubernetes/manifests directory on the node.
You can ssh to the appropriate node using:
[student@node-1]$ ssh wk8s-node-1
You can assume elevated privileges on the node with the following command:
[student@wk8s-node-1]$ sudo -i
Answer
ssh wk8s-node-1
sudo -i
grep staticPodPath /var/lib/kubelet/config.yaml # should point at /etc/kubernetes/manifests
vi /etc/kubernetes/manifests/webtool.yaml # create the static pod spec (see the sketch below)
systemctl restart kubelet
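A minimal static pod manifest sketch for /etc/kubernetes/manifests/webtool.yaml (the file name is an assumption; the kubelet picks up any manifest placed in that directory):
apiVersion: v1
kind: Pod
metadata:
  name: webtool
spec:
  containers:
  - name: webtool
    image: httpd
Once the kubelet restarts, kubectl get pods should show a mirror pod named webtool-wk8s-node-1.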
Q14. Task: Create a pod that echoes hello world and then exits. Have the pod deleted automatically when it's completed.
Answer
kubectl run busybox --image=busybox --restart=Never --rm -it -- /bin/sh -c 'echo hello world' # --rm deletes the pod after it exits; busybox is just a convenient small image
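An alternative sketch using a Job that cleans itself up, assuming the cluster's TTL-after-finished controller is enabled (beta before v1.23):
apiVersion: batch/v1
kind: Job
metadata:
  name: hello
spec:
  ttlSecondsAfterFinished: 0
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ['sh', '-c', 'echo hello world']
      restartPolicy: Never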
Q15. Task: Save a list of all the pods in all namespaces to the file /opt/pods-list.yaml.
Answer
kubectl get pods --all-namespaces > /opt/pods-list.yaml
Q16. Task: Create a pod named kucc4 with a single app container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + redis + memcached + consul
kubectl config use-context k8s
Answer
Multi-container pod question:
kubectl run kucc4 --image=nginx --dry-run=client -o yaml > kucc4.yaml # generate a skeleton
vi kucc4.yaml # add the remaining containers
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kucc4
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached
    name: memcached
  - image: consul
    name: consul
kubectl create -f kucc4.yaml
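To verify, the READY column should show all four containers running:
```bash
kubectl get pod kucc4   # expect READY 4/4
```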
Q17. Task: You have been asked to create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.
- Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following resource types: Deployment, StatefulSet, DaemonSet.
- Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
- Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the namespace app-team1.
kubectl config use-context ek8s
Answer
kubectl create serviceaccount cicd-token -n app-team1 # the namespace app-team1 already exists
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
kubectl create rolebinding cicd-clusterrole --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token -n app-team1
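kubectl auth can-i gives a quick check that the binding behaves as intended (expected answers in comments):
```bash
kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1   # yes
kubectl auth can-i create pods --as=system:serviceaccount:app-team1:cicd-token -n app-team1          # no
```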