Refine README and values.template.yaml
Tirokk committed May 14, 2021
1 parent c0a1028 commit 050d0c3
Showing 3 changed files with 115 additions and 50 deletions.
10 changes: 5 additions & 5 deletions .github/workflows/publish.yml
@@ -1,10 +1,10 @@
name: ocelot.social publish branded CI

# Wolle on:
# push:
# branches:
# - master
on: [push]
on:
push:
branches:
- master
# on: [push] # for testing while developing

jobs:
##############################################################################
130 changes: 97 additions & 33 deletions deployment/kubernetes/README.md
@@ -1,40 +1,54 @@
# Helm installation of Ocelot.social
# Helm Installation Of Ocelot.Social

Deploying *ocelot.social* with Helm is very straightforward. All you have to do is change certain parameters, like domain names and API keys, and then install our provided Helm chart to your cluster.

## Configuration

You can customize the network with your configuration by changing the `values.yaml`, all variables will be available as environment variables in your deployed kubernetes pods. For more details refer to the `values.yaml.dist` file.
You can customize the network with your configuration by duplicating the `values.template.yaml` to a new `values.yaml` file and changing it to your needs. All included variables will be available as environment variables in your deployed kubernetes pods.

Besides the `values.yaml.dist` file we provide a `nginx.values.yaml.dist` and `dns.values.yaml.dist`. The `nginx.values.yaml` is the configuration for the ingress-nginx helm chart, while the `dns.values.yaml` file is for automatically updating the dns values on digital ocean and therefore optional.

As hinted above you should copy the given files and rename them accordingly. Then go ahead and modify the values in the newly created files accordingly.
Besides the `values.template.yaml` file we provide a `nginx.values.template.yaml` and a `dns.values.template.yaml` for a similar procedure. The new `nginx.values.yaml` is the configuration for the ingress-nginx Helm chart, while the `dns.values.yaml` file is for automatically updating the DNS records on Digital Ocean and is therefore optional.
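
A minimal sketch of that copy step, assuming you are working in `deployment/kubernetes/` and keep the file names used above:

```bash
# copy the provided templates and edit the copies to your needs
$ cp values.template.yaml values.yaml
$ cp nginx.values.template.yaml nginx.values.yaml
# optional, only needed for the DNS chart described below
$ cp dns.values.template.yaml dns.values.yaml
```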

## Installation

Due to the many limitations of Helm you still have to do several manual steps. Those occur before you run the actual ocelot helm chart. Obviously it is expected of you to have `helm` and `kubectl` installed. For Digital Ocean you might require `doctl` aswell.
Due to the many limitations of Helm you still have to do several manual steps before you run the actual *ocelot.social* Helm chart. You are expected to have `helm` and `kubectl` installed; for Digital Ocean you might require `doctl` as well.

### Cert Manager (https)

Please refer to [cert-manager.io docs](https://cert-manager.io/docs/installation/kubernetes/) for more details.

***ATTENTION:*** *Run all of the following commands from a terminal in the folder of this README within your repository.*

1. Create Namespace

```bash
kubectl --kubeconfig=/../kubeconfig.yaml create namespace cert-manager
# kubeconfig.yaml set globally
$ kubectl create namespace cert-manager
# or, if kubeconfig.yaml is in your repo, adjust the path accordingly
$ kubectl --kubeconfig=/../kubeconfig.yaml create namespace cert-manager
```

2. Add Helm Repo & update
2. Add Helm repository and update

```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
```

3. Install Cert-Manager Helm chart

```bash
# this can not be included sine the CRDs cant be installed properly via helm...
helm --kubeconfig=/../kubeconfig.yaml \
# option 1
# this can't be applied via kubectl to our cluster since the CRDs can't be installed properly this way ...
# $ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.crds.yaml

# option 2
# kubeconfig.yaml set globally
$ helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--version v1.1.0 \
--set installCRDs=true
# or, if kubeconfig.yaml is in your repo, adjust the path accordingly
$ helm --kubeconfig=/../kubeconfig.yaml \
install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--version v1.1.0 \
@@ -43,15 +57,20 @@ helm --kubeconfig=/../kubeconfig.yaml \

### Ingress-Nginx

1. Add Helm Repo & update
1. Add Helm repository and update

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
```

2. Install ingress-nginx

```bash
helm --kubeconfig=/../kubeconfig.yaml install ingress-nginx ingress-nginx/ingress-nginx -f nginx.values.yaml
# kubeconfig.yaml set globally
$ helm install ingress-nginx ingress-nginx/ingress-nginx -f nginx.values.yaml
# or, if kubeconfig.yaml is in your repo, adjust the path accordingly
$ helm --kubeconfig=/../kubeconfig.yaml install ingress-nginx ingress-nginx/ingress-nginx -f nginx.values.yaml
```
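
For orientation, here is a purely illustrative `nginx.values.yaml` sketch using standard keys of the ingress-nginx chart; it is an assumption, not the repository's actual file, so adjust it to your cluster setup:

```yaml
# illustrative only, compare with nginx.values.template.yaml
controller:
  kind: DaemonSet                      # run one controller on every node
  hostNetwork: true                    # expose ports 80/443 directly on the nodes (no load balancer)
  dnsPolicy: ClusterFirstWithHostNet   # recommended when hostNetwork is enabled
  service:
    type: ClusterIP                    # do not request a cloud load balancer
```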

### Digital Ocean Firewall
@@ -61,56 +80,101 @@ This is only necessary if you run Digital Ocean without load balancer ([see here
1. Authenticate towards DO with your local `doctl`

You will need a DO token for that.

```bash
doctl auth init
# without doctl context
$ doctl auth init
# with doctl new context to be filled in
$ doctl auth init --context <new-context-name>
```

You will need an API token, which you can generate in the control panel at <https://cloud.digitalocean.com/account/api/tokens>.

2. Generate DO firewall

Fill in the `CLUSTER_UUID` and `your-domain` (get the `CLUSTER_UUID` value from the dashboard or from the ID column of `doctl kubernetes cluster list`):

```bash
doctl compute firewall create \
# without doctl context
$ doctl compute firewall create \
--inbound-rules="protocol:tcp,ports:80,address:0.0.0.0/0,address:::/0 protocol:tcp,ports:443,address:0.0.0.0/0,address:::/0" \
--tag-names=k8s:1ebf0cdc-86c9-4384-936b-40010b71d049 \
--name=my-domain-http-https
--tag-names=k8s:<CLUSTER_UUID> \
--name=<your-domain>-http-https
# with doctl context to be filled in
$ doctl compute firewall create \
--inbound-rules="protocol:tcp,ports:80,address:0.0.0.0/0,address:::/0 protocol:tcp,ports:443,address:0.0.0.0/0,address:::/0" \
--tag-names=k8s:<CLUSTER_UUID> \
--name=<your-domain>-http-https --context <context-name>
```

To verify that the firewall was created successfully, use this command (fill in the `ID` you got at creation):

```bash
# without doctl context
$ doctl compute firewall get <ID>
# with doctl context to be filled in
$ doctl compute firewall get <ID> --context <context-name>
```

### DNS

This chart is only necessary (recommended is more precise) if you run Digital Ocean without load balancer.
You need to generate a token for the `dns.values.yaml`.
This chart is only necessary (more precisely: recommended) if you run Digital Ocean without a load balancer.
You need to generate an access token with read and write scope at <https://cloud.digitalocean.com/account/api/tokens> and fill it into the `dns.values.yaml`.
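
As a rough sketch of where that token goes, these are common values of the Bitnami external-dns chart, assumed here rather than taken from the repository's template, so compare with your `dns.values.template.yaml`:

```yaml
# illustrative only, compare with dns.values.template.yaml
provider: digitalocean
digitalocean:
  apiToken: "<your-digitalocean-api-token>"   # the read + write token generated above
domainFilters:
  - "your-domain.tld"                         # restrict external-dns to your own domain
```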

1. Add Helm repository and update

1. Add Helm Repo & update
```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
```

2. Install DNS

```bash
helm --kubeconfig=/../kubeconfig.yaml install dns bitnami/external-dns -f dns.values.yaml
# kubeconfig.yaml set globally
$ helm install dns bitnami/external-dns -f dns.values.yaml
# or, if kubeconfig.yaml is in your repo, adjust the path accordingly
$ helm --kubeconfig=/../kubeconfig.yaml install dns bitnami/external-dns -f dns.values.yaml
```

### Ocelot.social
### Ocelot.Social

All commands for ocelot need to be executed in the kubernetes folder, so `cd deployment/kubernetes/` is expected to be run before every command. Furthermore, the given commands will install ocelot into the default namespace. This can be modified by appending `--namespace not.default` (see the example after the install command below).

#### Install

Run this only once, for the initial installation:

```bash
helm --kubeconfig=/../kubeconfig.yaml install ocelot ./
# kubeconfig.yaml set globally
$ helm install ocelot ./
# or, if kubeconfig.yaml is in your repo, adjust the path accordingly
$ helm --kubeconfig=/../kubeconfig.yaml install ocelot ./
```
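
As referenced above, a hypothetical example of installing into a dedicated namespace instead of `default` (the namespace name is a placeholder):

```bash
# install into a custom namespace, creating it if it does not exist yet
$ helm install ocelot ./ --namespace ocelot-social --create-namespace
```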

#### Upgrade
#### Upgrade & Update

Run for all upgrades and updates:

```bash
helm --kubeconfig=/../kubeconfig.yaml upgrade ocelot ./
# kubeconfig.yaml set globally
$ helm upgrade ocelot ./
# or, if kubeconfig.yaml is in your repo, adjust the path accordingly
$ helm --kubeconfig=/../kubeconfig.yaml upgrade ocelot ./
```

#### Uninstall

Be aware that if you uninstall ocelot, the formerly bound volumes become unbound. Those volumes contain all data from uploads and the database. You have to manually free their references in order to bind them again when reinstalling. Once unbound from their former claim references, they should automatically be rebound (provided the sizes did not change).

```bash
helm --kubeconfig=/../kubeconfig.yaml uninstall ocelot
# kubeconfig.yaml set globally
$ helm uninstall ocelot
# or, if kubeconfig.yaml is in your repo, adjust the path accordingly
$ helm --kubeconfig=/../kubeconfig.yaml uninstall ocelot
```
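
A hedged sketch of the manual unbinding mentioned above; the volume name is a placeholder, so inspect your own cluster first:

```bash
# list persistent volumes; formerly bound ones show up as "Released"
$ kubectl get pv
# clear the stale claim reference so the volume can be bound again on reinstall
$ kubectl patch pv <volume-name> -p '{"spec":{"claimRef": null}}'
```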

## Error reporting
## Error Reporting

We use [Sentry](https://github.com/getsentry/sentry) for error reporting in both
our backend and web frontend. You can either use a hosted or a self-hosted
@@ -125,4 +189,4 @@ If you are lucky enough to have a kubernetes cluster with the required hardware
support, try this [helm chart](https://github.com/helm/charts/tree/master/stable/sentry).

On our kubernetes cluster we get "multi-attach" errors for persistent volumes.
Apparently Digital Ocean's kubernetes clusters do not fulfill the requirements.
25 changes: 13 additions & 12 deletions deployment/kubernetes/values.template.yaml
@@ -5,11 +5,12 @@ PUBLIC_REGISTRATION: false
INVITE_REGISTRATION: false

BACKEND:
# Change all the below if needed
# change all the below if needed
# DOCKER_IMAGE_REPO - change that to your branded docker image
# Label is appended based on .Chart.appVersion
# label is appended based on .Chart.appVersion
DOCKER_IMAGE_REPO: "ocelotsocialnetwork/backend-branded"
CLIENT_URI: "https://staging.ocelot.social"
# create a new one for your network
JWT_SECRET: "b/&&7b78BF&fv/Vd"
MAPBOX_TOKEN: "pk.eyJ1IjoiYnVzZmFrdG9yIiwiYSI6ImNraDNiM3JxcDBhaWQydG1uczhpZWtpOW4ifQ.7TNRTO-o9aK1Y6MyW_Nd4g"
PRIVATE_KEY_PASSPHRASE: "a7dsf78sadg87ad87sfagsadg78"
@@ -21,7 +22,7 @@ BACKEND:
SMTP_IGNORE_TLS: 'true'
SMTP_SECURE: 'false'

# Most likely you don't need to change this
# most likely you don't need to change this
MIN_READY_SECONDS: "15"
PROGRESS_DEADLINE_SECONDS: "60"
REVISIONS_HISTORY_LIMIT: "25"
@@ -31,9 +32,9 @@ BACKEND:
STORAGE_UPLOADS: "25Gi"

WEBAPP:
# Change all the below if needed
# change all the below if needed
# DOCKER_IMAGE_REPO - change that to your branded docker image
# Label is appended based on .Chart.appVersion
# label is appended based on .Chart.appVersion
DOCKER_IMAGE_REPO: "ocelotsocialnetwork/webapp-branded"
WEBSOCKETS_URI: "wss://staging.ocelot.social/api/graphql"

@@ -47,7 +48,7 @@ WEBAPP:
DOCKER_IMAGE_PULL_POLICY: "Always"

NEO4J:
# Most likely you don't need to change this
# most likely you don't need to change this
REVISIONS_HISTORY_LIMIT: "25"
DOCKER_IMAGE_REPO: "ocelotsocialnetwork/neo4j-community-branded"
DOCKER_IMAGE_PULL_POLICY: "Always"
@@ -75,9 +76,9 @@ NEO4J:
DBMS_SECURITY_PROCEDURES_UNRESTRICTED: "algo.*,apoc.*"

MAINTENANCE:
# Change all the below if needed
# change all the below if needed
# DOCKER_IMAGE_REPO - change that to your branded docker image
# Label is appended based on .Chart.appVersion
# label is appended based on .Chart.appVersion
DOCKER_IMAGE_REPO: "ocelotsocialnetwork/maintenance-branded"

# Most likely you don't need to change this
@@ -87,7 +88,7 @@ MAINTENANCE:
DOCKER_IMAGE_PULL_POLICY: "Always"

LETSENCRYPT:
# Change all the below if needed
# change all the below if needed
# ISSUER is used by cert-manager to set up certificates with the given provider.
# change it to "letsencrypt-production" once you are ready to have valid certificates.
# Be aware that there is an issuing limit with letsencrypt, so a dry run with staging might be wise
@@ -98,14 +99,14 @@ LETSENCRYPT:
- "www.staging.ocelot.social"

NGINX:
# Most likely you don't need to change this
# most likely you don't need to change this
PROXY_BODY_SIZE: "10m"

STORAGE:
# Change all the below if needed
# change all the below if needed
PROVISIONER: "dobs.csi.digitalocean.com"

# Most likely you don't need to change this
# most likely you don't need to change this
RECLAIM_POLICY: "Retain"
VOLUME_BINDING_MODE: "Immediate"
ALLOW_VOLUME_EXPANSION: true
