New version of zdm-proxy helm chart using StatefulSets #89
base: main
Changes from 22 commits
The new template file (+134 lines) begins by computing Helm template helpers outside the range loop:

```yaml
# pull in these templates outside of range loop
{{ $zdm_fullname := include "zdm.fullname" . -}}
{{- $zdm_labels := include "zdm.labels" . -}}
{{- $zdm_selectorLabels := include "zdm.selectorLabels" . -}}

# calculate a variable that contains all proxy service addresses
{{ $service_addresses := "" -}}
{{- range $index := until (.Values.proxy.count | int) -}}
{{- $service_addresses = printf "%s,$(%s_%s_SERVICE_HOST)" $service_addresses ($zdm_fullname | upper | replace "-" "_") ($index | toString) -}}
{{- end -}}
{{- $service_addresses = $service_addresses | trimPrefix "," -}}
```
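For a hypothetical release fullname of `zdm-proxy` with `proxy.count` set to 3 (both values are assumptions, not taken from this chart), the loop renders `$service_addresses` as a comma-separated list of Kubernetes `$(VAR)` references to the per-service `*_SERVICE_HOST` environment variables. The string building can be simulated in plain POSIX sh:

```shell
# Simulate the Helm printf/trimPrefix loop; "zdm-proxy" and count=3
# are illustrative assumptions.
fullname="zdm-proxy"
count=3
prefix=$(printf '%s' "$fullname" | tr '[:lower:]' '[:upper:]' | tr '-' '_')
addrs=""
i=0
while [ "$i" -lt "$count" ]; do
  # each entry is a literal $(NAME_i_SERVICE_HOST) reference, expanded
  # later by Kubernetes against the container's environment
  addrs="${addrs},\$(${prefix}_${i}_SERVICE_HOST)"
  i=$((i + 1))
done
addrs="${addrs#,}"   # equivalent of | trimPrefix ","
echo "$addrs"
# → $(ZDM_PROXY_0_SERVICE_HOST),$(ZDM_PROXY_1_SERVICE_HOST),$(ZDM_PROXY_2_SERVICE_HOST)
```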
The StatefulSet itself:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    {{- $zdm_labels | nindent 4 }}
    app: {{ $zdm_fullname }}
  name: {{ $zdm_fullname }}
  namespace: {{ $.Values.namespace }}
spec:
  serviceName: {{ $zdm_fullname }}
  replicas: {{ $.Values.proxy.count | int }}
  selector:
    matchLabels:
      {{- $zdm_selectorLabels | nindent 6 }}
      app: {{ $zdm_fullname }}
```

Review comment on the labels:

> Consider these recommended labels from the k8s docs. https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
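The labels that comment refers to are the `app.kubernetes.io/*` common labels from the linked page. A sketch of how they might look in a Helm labels helper for this chart (the concrete values are illustrative assumptions, not taken from the chart):

```yaml
# Recommended common labels per the linked k8s docs; values are assumptions.
app.kubernetes.io/name: zdm-proxy
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/component: proxy
app.kubernetes.io/part-of: zdm
app.kubernetes.io/managed-by: {{ .Release.Service }}
```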
The pod template and container command:

```yaml
  template:
    metadata:
      labels:
        {{- $zdm_selectorLabels | nindent 8 }}
        app: {{ $zdm_fullname }}
    spec:
      containers:
        - image: "{{ $.Values.proxy.image.repository }}:{{ $.Values.proxy.image.tag | default $.Chart.AppVersion }}"
          name: {{ $zdm_fullname }}
          command:
            - sh
            - "-c"
            - "ZDM_PROXY_TOPOLOGY_INDEX=`echo ${HOSTNAME##*-}` /main"
```

Review thread on the `command` block (lines +38 to +40):

> It's surprising that you're specifying a command here vs just passing in arguments to the command specified in the Docker image (or relying on arguments there).

> You should be able to populate an environment variable with a …

> Hmm, it looks like you're extracting the ordinal index, which is not available to the pod directly. It's frustrating that this isn't available. (I see the link to kubernetes/kubernetes#40651 as well)

> @bradfordcp Yes exactly. That was the route I explored and ended up using the workaround as you pointed out. Should we resolve this comment?
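The `${HOSTNAME##*-}` expansion in the command is the workaround under discussion: StatefulSet pods are named `<statefulset-name>-<ordinal>`, so the ordinal can be recovered by stripping everything up to the last `-`. A standalone sketch (the pod name here is an assumed example):

```shell
# Hypothetical StatefulSet pod hostname; real pods get <name>-<ordinal>.
HOSTNAME="zdm-proxy-2"
# ##*- strips the longest prefix ending in "-", leaving just the ordinal.
ZDM_PROXY_TOPOLOGY_INDEX="${HOSTNAME##*-}"
echo "$ZDM_PROXY_TOPOLOGY_INDEX"
# → 2
```

This is needed only because, as the reviewers note, the ordinal index is not exposed to the pod directly (see kubernetes/kubernetes#40651).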
Probes and ports:

```yaml
          livenessProbe:
            httpGet:
              path: /health/liveness
              port: httpproxy
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health/readiness
              port: httpproxy
            initialDelaySeconds: 10
            periodSeconds: 10
          ports:
            - name: httpproxy
              containerPort: 14001
              protocol: TCP
            - name: cql
              containerPort: 9042
              protocol: TCP
```
Resources, environment, and volumes:

```yaml
          resources:
            requests:
              memory: {{ $.Values.proxy.resources.requests.memory | quote }}
              cpu: {{ $.Values.proxy.resources.requests.cpu | quote }}
            limits:
              memory: {{ $.Values.proxy.resources.limits.memory | quote }}
              cpu: {{ $.Values.proxy.resources.limits.cpu | quote }}
          env:
            - name: ZDM_PRIMARY_CLUSTER
              value: {{ $.Values.proxy.primaryCluster | upper | quote }}
            - name: ZDM_READ_MODE
              value: {{ $.Values.proxy.readMode | upper | quote }}
            - name: ZDM_LOG_LEVEL
              value: {{ $.Values.proxy.logLevel | upper | quote }}
            - name: ZDM_PROXY_MAX_CLIENT_CONNECTIONS
              value: {{ $.Values.proxy.maxClientConnections | quote }}
            - name: ZDM_METRICS_ENABLED
              value: {{ $.Values.proxy.metricsEnabled | quote }}
            - name: ZDM_PROXY_LISTEN_ADDRESS
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: ZDM_PROXY_LISTEN_PORT
              value: "9042"
            - name: ZDM_METRICS_ADDRESS
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: ZDM_METRICS_PORT
              value: "14001"
            - name: ZDM_PROXY_TOPOLOGY_ADDRESSES
              value: {{ $service_addresses }}
            - name: ZDM_TARGET_SECURE_CONNECT_BUNDLE_PATH
              value: /tmp/scb/target.zip
            - name: ZDM_ORIGIN_CONTACT_POINTS
              valueFrom:
                secretKeyRef:
                  name: zdmproxy
                  key: origin_contact_points
            - name: ZDM_ORIGIN_PORT
              valueFrom:
                secretKeyRef:
                  name: zdmproxy
                  key: origin_port
            - name: ZDM_ORIGIN_USERNAME
              valueFrom:
                secretKeyRef:
                  name: zdmproxy
                  key: origin_username
            - name: ZDM_ORIGIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: zdmproxy
                  key: origin_password
            - name: ZDM_TARGET_USERNAME
              valueFrom:
                secretKeyRef:
                  name: zdmproxy
                  key: target_username
            - name: ZDM_TARGET_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: zdmproxy
                  key: target_password
          volumeMounts:
            - name: scb
              mountPath: "/tmp/scb"
              readOnly: true
      volumes:
        - name: scb
          secret:
            secretName: zdmproxy-scb
            items:
              - key: secure-connect-target.zip
                path: target.zip
```
A final review thread on the per-pod Services:

> Why are you trying to match a per-pod label?

> @jsanda The reason we create individual Services and match them 1:1 at the pod level is that, unlike C*, zdm-proxy processes have no gossip protocol to detect or broadcast topology changes among themselves; whatever set of IP addresses and respective ordinal indexes (see here) is specified at proxy start-up is stuck for the rest of the process's lifecycle. To tolerate dynamic IP address changes to the pods caused by reschedules, cordons, or crashes, we decided that for N proxies we would create N Services and map them 1:1. Because pods managed by a k8s Deployment object don't have static pod names, we switched to StatefulSets, where each pod has a unique, stable name that its Service can be mapped to. This also allows an orderly rolling restart of the zdm-proxy pods.
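The 1:1 pod-to-Service mapping described above works because the StatefulSet controller stamps each pod with a `statefulset.kubernetes.io/pod-name` label. A minimal sketch of one such per-pod Service (the name and port values are assumptions; the chart's actual service template is not shown in this diff):

```yaml
# One Service per ordinal; a range loop over .Values.proxy.count would
# render one copy of this per pod. "zdm-proxy-0" is illustrative.
apiVersion: v1
kind: Service
metadata:
  name: zdm-proxy-0
spec:
  selector:
    # selects exactly the pod named zdm-proxy-0
    statefulset.kubernetes.io/pod-name: zdm-proxy-0
  ports:
    - name: cql
      port: 9042
      targetPort: 9042
```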