Step by Step Setup
This is a comprehensive guide to install, configure, test and debug OISP on a kubernetes cluster from scratch. Try not to skip any steps, as some perform setup that is also required by later steps.
- A running kubernetes cluster with helm and a volume provisioner
- Very basic understanding of kubernetes and docker
- A local Linux machine with standard development tools like git, make etc.
On your local machine, you are going to need `kubectl` and `helm` installed and configured.
kubectl is a client program to manage objects on a kubernetes cluster.
Installation instructions for kubectl can be found here.
After installing kubectl you will need to configure it to connect to your cluster. If you are testing with minikube locally, you can skip this step, as kubectl tries to connect to a local cluster by default.
This page describes the config files used by kubectl. Your cloud provider might also provide a kubeconfig file, which you can use to skip writing the configuration yourself.
By default, kubectl will use `~/.kube/config`; you can override this behavior with the `--kubeconfig` flag. However, it is recommended to copy the config for your cluster to this location, as many scripts in the project assume the default config is the correct one.
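For reference, a minimal kubeconfig roughly follows the sketch below; the cluster, context and user names are placeholders, and your provider's file may authenticate with a token instead of client certificates.

```yaml
# Illustrative ~/.kube/config layout; names and credentials are placeholders.
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://[IP-ADDRESS]:6443
    certificate-authority-data: <base64-encoded CA certificate>
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
current-context: my-context
users:
- name: my-user
  user:
    client-certificate-data: <base64-encoded client certificate>
    client-key-data: <base64-encoded client key>
```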
You can verify that your setup is working right by running:
```
$ kubectl cluster-info
Kubernetes master is running at https://[IP-ADDRESS]:6443
Heapster is running at https://[IP-ADDRESS]:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://[IP-ADDRESS]:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
Helm is a package manager for kubernetes. OISP is published as a helm chart (~package). Installation instructions for the helm client can be found here. After installing, run `helm init`, which will initialize helm locally and install `tiller`, the server-side part of helm, on the cluster. You might need to run `helm init --upgrade` if there is a version mismatch between helm and tiller.
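As a quick sketch, the typical sequence with helm 2 (the tiller-based version assumed in this guide) looks like this:

```
$ helm init                 # set up the local client and install tiller into the cluster
$ helm version              # client and server (tiller) versions should match
$ helm init --upgrade       # only needed if tiller is older than the local client
```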
Kubernetes dashboard is a web UI for monitoring and managing your cluster. For troubleshooting, detailed information on setup and installation can be found in the official documentation.
By default, the dashboard is not installed, but your cloud provider might offer it as an extension. Otherwise, follow the installation instructions in the link above.
The dashboard is not exposed to the internet, but it might not always be practical to browse it on a kubernetes node. In order to access it on the local machine, run the following command:
```
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
```
Alternatively, you can run `make proxy`, which also exposes the OISP components that are running at the time the command is given. This is useful for debugging, but keep in mind that you might need to restart the proxy when pods are recreated.
The dashboard should be accessible on http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
The dashboard offers two authentication methods, first using a kubeconfig file, and second using an access token. Depending on how you have written your kubeconfig file the first method might not work. The recommended way is creating a service account with the right permissions. More information on service accounts and roles can be found here. For now, we are just going to create an account with all permissions.
First, create the account:
```
$ kubectl create serviceaccount cluster-admin-sa
$ kubectl create clusterrolebinding cluster-admin-sa \
    --clusterrole=cluster-admin \
    --serviceaccount=default:cluster-admin-sa
```
Then, we will need to get the corresponding token. First, find the secret containing the token:
```
$ kubectl get secret | grep cluster-admin-sa
cluster-admin-sa-token-l6qtq   kubernetes.io/service-account-token   3   5m
```
And finally get the token by running the following:
```
$ kubectl describe secret cluster-admin-sa-token-l6qtq
Name:         cluster-admin-sa-token-l6qtq
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: cluster-admin-sa
              kubernetes.io/service-account.uid: fe79dd8c-dcf4-11e8-86ee-005056333c84

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsI...
```
You can copy-paste the token (without surrounding white-space) to the input field. Afterward, you should be able to login to the dashboard.
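Alternatively, you can extract and decode the token in one step; the secret name below is taken from the output above and will differ on your cluster (the token is stored base64-encoded in the secret data):

```
$ kubectl get secret cluster-admin-sa-token-l6qtq -o jsonpath='{.data.token}' | base64 --decode
```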
First, clone the oisp-k8s repository, which is a Helm chart for the platform.
Then, you will need to edit the `values.yaml` file to update the docker details. By default, it will try to access the OISP dockerhub repository; if you do not have access to this repository, edit the file to use your own registry and repository with the same images. You will also need to update the credentials; however, you might prefer to provide those as command line arguments in order to avoid accidentally checking them into version control.
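The exact keys depend on the chart version, but the relevant part of `values.yaml` looks roughly like the sketch below; only `imageCredentials.username` and `imageCredentials.password` are referenced later in this guide, the other keys are illustrative.

```yaml
# Illustrative values.yaml excerpt; key names other than imageCredentials.username
# and imageCredentials.password may differ in your chart version.
imageCredentials:
  registry: docker.io   # your Docker registry (illustrative key)
  username: ""          # prefer passing this via --set on the command line
  password: ""          # prefer passing this via --set on the command line
```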
If you have a domain and want to expose the platform to the internet, also edit `templates/frontend.yaml` and `templates/websocket-server.yaml` to update the domain names for the ingress.
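In a standard kubernetes Ingress resource the domain is set under `spec.rules[].host`; the snippet below only illustrates the field to look for in those templates, and the service name is a placeholder.

```yaml
# Illustrative Ingress excerpt; field names in the OISP templates may differ slightly.
spec:
  rules:
  - host: dashboard.example.com      # replace with your own domain
    http:
      paths:
      - backend:
          serviceName: frontend      # service name as defined by the chart (illustrative)
          servicePort: 80
```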
Export the `DOCKERUSER` and `DOCKERPASS` environment variables to access the OISP repository or your own Docker repo. Afterwards, run `make deploy-oisp` or `make deploy-oisp-test`.
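For example:

```
$ export DOCKERUSER=<your docker username>
$ export DOCKERPASS=<your docker password>
$ make deploy-oisp
```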
You can remove your deployment by running `make undeploy-oisp`.
To deploy manually with helm, cd to the repo root directory and run the following:
```
$ helm install . --name oisp --set imageCredentials.username=$DOCKERUSERNAME --set imageCredentials.password=$DOCKERPASSWORD --namespace oisp
```
You can monitor the progress on the dashboard or via kubectl.
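For example, you can follow the pod status from the command line while the deployment comes up:

```
$ kubectl get pods --namespace oisp --watch
```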
Notes:
- The `--name` and `--namespace` parameters are optional but highly recommended, as they make managing the deployment much easier.
- Even though it is possible to upgrade a helm chart after applying changes using `helm upgrade`, not every change can be deployed using this command. You can clear the installation by running the following commands:
```
$ kubectl delete namespace oisp
$ helm del oisp --purge
```
`kubectl delete` will by default not wait until termination is complete; if you want to achieve this, use the `--wait` flag.
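For example:

```
$ kubectl delete namespace oisp --wait
```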
There are many ways to test whether your deployment is running as intended. First, make sure all pods are up and running using the dashboard. This might take a while (~1 minute), and until startup is complete, some pods will report crashes, and restart automatically when the resources they need are available.
If you have an ingress configured, you can visit the URL to see whether the OISP dashboard is up.
Otherwise you can run `make open-shell` to get a terminal to an Ubuntu container, which contains some tools that are helpful for debugging. From there you should be able to access other components. This command also takes a `DEPLOYMENT` parameter, which you can use to open a shell to a random pod in another deployment.
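For example (the deployment name below is illustrative; use the name of the deployment you want to inspect):

```
$ make open-shell                       # shell into the default debugging container
$ make open-shell DEPLOYMENT=frontend   # shell into a pod of another deployment (name is illustrative)
```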
You can run `make test` to run the E2E tests in the debugger container; you can also provide some extra parameters, described in `make help`. For all tests to pass, you need to deploy OISP by running `make deploy-oisp-test`, which deploys the platform and creates ethereal e-mail accounts for testing.
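Putting it together:

```
$ make deploy-oisp-test   # deploy the platform with test e-mail accounts
$ make test               # run the E2E tests in the debugger container
```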
The repository also contains a `locust` folder, which is another Helm chart to deploy the locust testing framework on the same cluster. The readme provides instructions on getting started.