feat: calendar working and deployable with observability in kubernetes
mati007thm committed Dec 22, 2023
1 parent c991816 commit 091dfba
Showing 17 changed files with 498 additions and 244 deletions.
89 changes: 52 additions & 37 deletions README.md
@@ -3,13 +3,63 @@
## Original repos

```shell
# dapr-distributed-calendar
https://github.com/dapr/samples

# distributed-calculator
https://github.com/dapr/quickstarts
```

## dapr-distributed-calendar

The dapr-distributed-calendar is now working correctly. It supports POST, DELETE, and GET operations on events.

### Possible operations with Postman

POST: <http://localhost:3000/newevent>
Body:

```json
{
"data": {
"name": "Uninstall Event",
"date": "TBD",
"id": "1"
}
}
```

DELETE: <http://localhost:3000/event/1>

GET: <http://localhost:3000/event/1>

### Setup with docker-compose

```shell
cd 12-factor-app/dapr-distributed-calendar
docker-compose up
```

### Setup with Kubernetes

```shell
cd 12-factor-app/dapr-distributed-calendar
./kubernetes-deploy.sh
```

Or pick specific parts of the deployment.
Every part of the deployment process that is not required has an `OPTIONAL` comment!

**Traces** are gathered and sent to the OpenTelemetry Collector via the built-in Dapr integration, and then forwarded to Jaeger.

**Metrics** are scraped by the OpenTelemetry Collector and then sent to Prometheus. In addition, the Node.js application is instrumented manually to send additional non-Dapr metrics to the collector; currently these metrics are only written to console output, which will be fixed in the next sprint. The other microservices are yet to be instrumented.

**Logs** are currently sent directly via Fluentd to Elasticsearch. Because most OpenTelemetry logging implementations are still under development, experimental, or not yet implemented (Go: not implemented, Python: experimental, Node.js: in development), this is the most practical solution for now.

### About Auto-Instrumentation

It is theoretically possible to use auto-instrumentation in Kubernetes, but it is still very buggy, especially for Go and Python, and even the instrumentations that do work provide only limited coverage of metrics, traces, and logs. They have therefore been commented out in the code, but can still be found in the folder `12-factor-app/dapr-distributed-calendar/otel`.

## distributed-calculator

The distributed-calculator works, and I added DELETE and PUT to the already existing POST and GET requests:
@@ -45,38 +95,3 @@ curl -s -X PUT http://localhost:8000/state -H 'Content-Type: application/json' -
```bash
curl -s -X DELETE http://localhost:8000/state
```

## dapr-distributed-calendar

The dapr-distributed-calendar is not working perfectly. It is possible to do PUT and DELETE but not GET. In addition, no PUT directly to the state store is possible as described in the documentation, and there are some Redis retryable errors.

### Possible operations with Postman

PUT: <http://localhost:3001/newevent>
Body:

```json
{
"data": {
"name": "Uninstall Event",
"date": "TBD",
"id": "1"
}
}
```

DELETE: <http://localhost:3001/event/1>

### What is not working

PUT: <http://localhost:3001/v1.0/states/events/1>
Body:

```json
{
"key": "1",
"value": "Test Postman"
}
```

GET: <http://localhost:3001/v1.0/states/events/1>
105 changes: 105 additions & 0 deletions dapr-distributed-calendar/fluent/fluentd-config-map.yaml
@@ -0,0 +1,105 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: kube-system
data:
  fluent.conf: |
    <match fluent.**>
      @type null
    </match>
    <match kubernetes.var.log.containers.**fluentd**.log>
      @type null
    </match>
    <match kubernetes.var.log.containers.**kube-system**.log>
      @type null
    </match>
    <match kubernetes.var.log.containers.**kibana**.log>
      @type null
    </match>
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file fluentd-docker.pos
      time_format %Y-%m-%dT%H:%M:%S
      tag kubernetes.*
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_type string
          time_format "%Y-%m-%dT%H:%M:%S.%NZ"
          keep_time_key false
        </pattern>
        <pattern>
          format regexp
          expression /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
          time_format '%Y-%m-%dT%H:%M:%S.%N%:z'
          keep_time_key false
        </pattern>
      </parse>
    </source>
    <filter kubernetes.**>
      @type kubernetes_metadata
      @id filter_kube_metadata
    </filter>
    <filter kubernetes.var.log.containers.**>
      @type parser
      <parse>
        @type json
        format json
        time_key time
        time_type string
        time_format "%Y-%m-%dT%H:%M:%S.%NZ"
        keep_time_key false
      </parse>
      key_name log
      replace_invalid_sequence true
      emit_invalid_record_to_error true
      reserve_data true
    </filter>
    <match **>
      @type elasticsearch
      @id out_es
      @log_level info
      include_tag_key true
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
      scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
      ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
      ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1_2'}"
      user "#{ENV['FLUENT_ELASTICSEARCH_USER'] || use_default}"
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD'] || use_default}"
      reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'false'}"
      reconnect_on_error "#{ENV['FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR'] || 'true'}"
      reload_on_failure "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE'] || 'true'}"
      log_es_400_reason "#{ENV['FLUENT_ELASTICSEARCH_LOG_ES_400_REASON'] || 'false'}"
      logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'dapr'}"
      logstash_dateformat "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_DATEFORMAT'] || '%Y.%m.%d'}"
      logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
      index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'dapr'}"
      type_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME'] || 'fluentd'}"
      include_timestamp "#{ENV['FLUENT_ELASTICSEARCH_INCLUDE_TIMESTAMP'] || 'false'}"
      template_name "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_NAME'] || use_nil}"
      template_file "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_FILE'] || use_nil}"
      template_overwrite "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_OVERWRITE'] || use_default}"
      sniffer_class_name "#{ENV['FLUENT_SNIFFER_CLASS_NAME'] || 'Fluent::Plugin::ElasticsearchSimpleSniffer'}"
      request_timeout "#{ENV['FLUENT_ELASTICSEARCH_REQUEST_TIMEOUT'] || '5s'}"
      <buffer>
        flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
        flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
        chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
        queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
        retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
        retry_forever true
      </buffer>
    </match>
100 changes: 100 additions & 0 deletions dapr-distributed-calendar/fluent/fluentd-dapr-with-rbac.yaml
@@ -0,0 +1,100 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
  namespace: default
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.9.2-debian-elasticsearch7-1.0
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch-master.observability"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        - name: FLUENT_UID
          value: "0"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: fluentd-config
          mountPath: /fluentd/etc
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluentd-config
        configMap:
          name: fluentd-config
27 changes: 26 additions & 1 deletion dapr-distributed-calendar/go/Dockerfile
@@ -1,7 +1,32 @@
# #first stage - builder
# FROM golang:1.15-buster as builder
# WORKDIR /dir
# COPY go_events.go .
# RUN go get -d -v
# RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
# #second stage
# FROM debian:buster-slim
# WORKDIR /root/
# COPY --from=builder /dir/app .
# EXPOSE 6000
# CMD ["./app"]

# FROM golang:1.21-alpine
# WORKDIR /root
# COPY go_events.go .
# RUN go mod init go_events
# RUN go mod tidy
# RUN go get -d -v
# RUN go build go_events .
# EXPOSE 6000
# CMD ["./go_events"]

#first stage - builder
FROM golang:1.15-buster as builder
FROM golang:1.20.12-alpine as builder
WORKDIR /dir
COPY go_events.go .
RUN go mod init go_events
RUN go mod tidy
RUN go get -d -v
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
#second stage
6 changes: 3 additions & 3 deletions dapr-distributed-calendar/go/go_events.go
@@ -4,7 +4,7 @@ import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"io"
"log"
"net/http"
"os"
@@ -77,7 +77,7 @@ func deleteEvent(w http.ResponseWriter, r *http.Request) {
log.Printf("Response after delete call: %s", resp.Status)

defer resp.Body.Close()
bodyBytes, _ := ioutil.ReadAll(resp.Body)
bodyBytes, _ := io.ReadAll(resp.Body)

log.Printf(string(bodyBytes))
}
@@ -108,7 +108,7 @@ func getEvent(w http.ResponseWriter, r *http.Request) {
log.Printf("Response after get call: %s", resp.Status)

defer resp.Body.Close()
bodyBytes, err := ioutil.ReadAll(resp.Body)
bodyBytes, err := io.ReadAll(resp.Body)
if err != nil {
log.Printf("Error reading response body: %v", err)
return
