6. Enable Pod Security Policies

Prerequisite setup

Because, out of the box, the communication between the kube-apiserver and the controller-manager is insecure, we have to modify some of the controller-manager's parameters.

The Controller Manager must be run against the secured API port, and must not have superuser permissions. Otherwise requests would bypass authentication and authorization modules, all PodSecurityPolicy objects would be allowed, and users would be able to create privileged containers. For more details on configuring Controller Manager authorization, see Controller Roles.

Source: Troubleshooting Pod Security Policies.

First of all, we have to create a new service account that lets the controller-manager talk securely to the API server, and bind it to the system:kube-controller-manager cluster role available in the default cluster configuration.

$ kubectl apply -f rbac/controller-manager.yaml
serviceaccount/custom-controller-manager created
clusterrolebinding.rbac.authorization.k8s.io/custom-controller-manager created
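
For reference, the rbac/controller-manager.yaml manifest applied above corresponds roughly to the following (a sketch reconstructed from the command output and the cluster role mentioned above; the file in the repository may differ in details):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: custom-controller-manager
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: custom-controller-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-controller-manager
subjects:
- kind: ServiceAccount
  name: custom-controller-manager
  namespace: kube-system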

Then we have to generate the corresponding kube-config file:

$ helper/create_sa_kubeconfig.sh custom-controller-manager kube-system
Creating target directory to hold files in /home/angel.barrera/personal/lab-microk8s-pod-security-policies...done
Creating a service account in kube-system namespace: custom-controller-manager
serviceaccount/custom-controller-manager configured

Getting secret of service account custom-controller-manager on kube-system
Secret name: custom-controller-manager-token-7xmvk

Extracting ca.crt from secret...done
Getting user token from secret...done
Setting current context to: microk8s
Cluster name: microk8s-cluster
Endpoint: http://:8080

Preparing k8s-custom-controller-manager-kube-system-conf
Setting a cluster entry in kubeconfig...Cluster "microk8s-cluster" set.
Setting token credentials entry in kubeconfig...User "custom-controller-manager-kube-system-microk8s-cluster" set.
Setting a context entry in kubeconfig...Context "custom-controller-manager-kube-system-microk8s-cluster" created.
Setting the current-context in the kubeconfig file...Switched to context "custom-controller-manager-kube-system-microk8s-cluster".

All done! Test with:
KUBECONFIG=/home/angel.barrera/personal/lab-microk8s-pod-security-policies/.kube-custom-controller-manager-kube-system-conf kubectl get pods
you should not have any permissions by default - you have just created the authentication part
You will need to create RBAC permissions
NAME                        READY   STATUS    RESTARTS   AGE
kube-dns-6ccd496668-cnlxb   3/3     Running   0          83s
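
For the curious, the helper script essentially reads the service account token and CA certificate out of the token secret and assembles the kubeconfig with standard kubectl config commands, roughly like this (a simplified sketch, not the actual script; APISERVER is a placeholder for your cluster endpoint):

# Simplified sketch of helper/create_sa_kubeconfig.sh (illustrative only)
APISERVER=https://127.0.0.1:16443   # assumption: replace with your API server endpoint
KUBECONFIG_FILE=.kube-custom-controller-manager-kube-system-conf

# Find the token secret backing the service account and extract its credentials
SECRET=$(kubectl get sa custom-controller-manager -n kube-system -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "$SECRET" -n kube-system -o jsonpath='{.data.token}' | base64 -d)
kubectl get secret "$SECRET" -n kube-system -o jsonpath="{.data['ca\.crt']}" | base64 -d > ca.crt

# Build the kubeconfig: cluster entry, token credentials, context
kubectl config set-cluster microk8s-cluster --server="$APISERVER" \
  --certificate-authority=ca.crt --embed-certs=true --kubeconfig="$KUBECONFIG_FILE"
kubectl config set-credentials custom-controller-manager-kube-system-microk8s-cluster \
  --token="$TOKEN" --kubeconfig="$KUBECONFIG_FILE"
kubectl config set-context custom-controller-manager-kube-system-microk8s-cluster \
  --cluster=microk8s-cluster \
  --user=custom-controller-manager-kube-system-microk8s-cluster \
  --kubeconfig="$KUBECONFIG_FILE"
kubectl config use-context custom-controller-manager-kube-system-microk8s-cluster \
  --kubeconfig="$KUBECONFIG_FILE"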

Then copy the generated config file:

$ sudo cp .kube-custom-controller-manager-kube-system-conf /var/snap/microk8s/current/args/.kube-config

Let's rock

Until now we have only been preparing the environment; we can finally enable the Pod Security Policy admission controller. Let's go (a scripted sketch of these edits follows the list):

  • Append PodSecurityPolicy at the end of the parameter --enable-admission-plugins in the /var/snap/microk8s/current/args/kube-apiserver file.
  • Add --use-service-account-credentials=true at the end of /var/snap/microk8s/current/args/kube-controller-manager file.
  • Ensure the /var/snap/microk8s/current/args/kube-controller-manager file contains only the following parameters:
    • --kubeconfig=${SNAP_DATA}/args/.kube-config
    • --service-account-private-key-file=${SNAP_DATA}/certs/serviceaccount.key
    • --use-service-account-credentials=true
  • Restart the microk8s cluster: microk8s.stop && microk8s.start.
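
If you prefer to script these edits, they boil down to something like the following (a sketch that assumes the plugin list is quoted exactly as in the grep output below; double-check the files before restarting):

$ sudo sed -i 's/^--enable-admission-plugins="\(.*\)"$/--enable-admission-plugins="\1,PodSecurityPolicy"/' /var/snap/microk8s/current/args/kube-apiserver
$ printf '%s\n' \
    '--kubeconfig=${SNAP_DATA}/args/.kube-config' \
    '--service-account-private-key-file=${SNAP_DATA}/certs/serviceaccount.key' \
    '--use-service-account-credentials=true' \
    | sudo tee /var/snap/microk8s/current/args/kube-controller-manager
$ microk8s.stop && microk8s.start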

Now you should see the following output:

$ grep PodSecurityPolicy /var/snap/microk8s/current/args/kube-apiserver
--enable-admission-plugins="NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,PodSecurityPolicy"
$ cat /var/snap/microk8s/current/args/kube-controller-manager
--kubeconfig=${SNAP_DATA}/args/.kube-config
--service-account-private-key-file=${SNAP_DATA}/certs/serviceaccount.key
--use-service-account-credentials=true
$ kubectl get sa -n kube-system
NAME                                 SECRETS   AGE
attachdetach-controller              1         67s
certificate-controller               1         65s
clusterrole-aggregation-controller   1         66s
cronjob-controller                   1         64s
custom-controller-manager            1         92m
daemon-set-controller                1         65s
default                              1         26h
deployment-controller                1         67s
disruption-controller                1         64s
endpoint-controller                  1         62s
expand-controller                    1         65s
generic-garbage-collector            1         67s
horizontal-pod-autoscaler            1         67s
job-controller                       1         62s
kube-dns                             1         77m
namespace-controller                 1         67s
node-controller                      1         66s
persistent-volume-binder             1         62s
pod-garbage-collector                1         67s
pv-protection-controller             1         66s
pvc-protection-controller            1         67s
replicaset-controller                1         64s
replication-controller               1         66s
resourcequota-controller             1         63s
service-account-controller           1         63s
service-controller                   1         63s
statefulset-controller               1         63s
ttl-controller                       1         64s

Test it

Once this admission controller is enabled, if you try to deploy an nginx, it will not start.

$ kubectl apply -f examples/nginx.yaml
$ kubectl get rs,deploy,pod
NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.extensions/nginx-deployment-7c74c7d654   1         0         0       2m59s

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/nginx-deployment   0/1     0            0           2m59s
$ kubectl describe rs nginx-deployment-7c74c7d654
Name:           nginx-deployment-7c74c7d654
Namespace:      default
Selector:       app=nginx,pod-template-hash=7c74c7d654
Labels:         app=nginx
                pod-template-hash=7c74c7d654
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/nginx-deployment
Replicas:       0 current / 1 desired
Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx
           pod-template-hash=7c74c7d654
  Containers:
   nginx:
    Image:        nginx:1.15.4
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type             Status  Reason
  ----             ------  ------
  ReplicaFailure   True    FailedCreate
Events:
  Type     Reason        Age                   From                   Message
  ----     ------        ----                  ----                   -------
  Warning  FailedCreate  51s (x16 over 3m34s)  replicaset-controller  Error creating: pods "nginx-deployment-7c74c7d654-" is forbidden: no providers available to validate pod request

The error indicates that the pod could not be validated against any Pod Security Policy: no policy exists yet, so there is no provider available to admit the pod.

$ kubectl delete -f examples/nginx.yaml
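
For illustration only, a minimal policy plus the RBAC that lets pods use it would act as such a provider. The names below are made up for this sketch and are not part of the repository:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example-permissive
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-psp-user
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  resourceNames: ['example-permissive']
  verbs: ['use']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-psp-user
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: example-psp-user
subjects:
- kind: ServiceAccount
  name: default
  namespace: default

With --use-service-account-credentials=true, a controller-created pod is admitted if either the controller's own service account or the pod's service account is allowed to use some policy; the RoleBinding above takes the second route by granting use to the default service account of the default namespace.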