
Cluster Initialization is missing for custom conf.join #210

Open

shaardie opened this issue Nov 26, 2021 · 2 comments

Comments

@shaardie
I have a CockroachDB setup spanning multiple Kubernetes clusters. Since the automatic join list only contains the services from the local cluster, not the other clusters, I use a custom conf.join list with the services from both clusters. Unfortunately, the job.init that initializes the cluster then does not run on either cluster, because of

{{ $isClusterInitEnabled := and (eq (len .Values.conf.join) 0) (not (index .Values.conf `single-node`)) }}
{{ $isDatabaseProvisioningEnabled := .Values.init.provisioning.enabled }}
{{- if or $isClusterInitEnabled $isDatabaseProvisioningEnabled }}

So I end up with an uninitialized database cluster.
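For context, a values override along these lines reproduces the situation; the service names and namespaces are placeholders, not taken from the chart's defaults:

```yaml
# values.yaml for one of the two clusters (hypothetical names).
# Because conf.join is non-empty, the chart's $isClusterInitEnabled
# condition evaluates to false and the init Job is never created.
conf:
  join:
    - cockroachdb-0.cockroachdb.db.svc.cluster-a.local:26257
    - cockroachdb-0.cockroachdb.db.svc.cluster-b.local:26257
```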

What would be the best way to do this? Can we have something like

init:
  enabled: true

to explicitly enable the init on one of the clusters?

@samuel-form3

We've implemented that in our forked version of the Helm chart.

We use the join flag to join nodes from multiple clusters, which means that isClusterInitEnabled would never evaluate to true for us either.

We've changed it to this:

{{ $isClusterInitEnabled := and (not (index .Values.conf `single-node`)) .Values.init.enabled }}

And then we control the init.enabled flag per cluster manually.

Personally, I think it would make sense to introduce this here, and I was thinking of doing that to avoid maintaining a forked chart. @pseudomuto WDYT?
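A sketch of what per-cluster values could look like under that change; the init.enabled flag is assumed from the snippet above, and the service names are placeholders:

```yaml
# Cluster A values: runs the one-time cluster init.
init:
  enabled: true
conf:
  join:
    - cockroachdb-0.cockroachdb.db.svc.cluster-a.local:26257
    - cockroachdb-0.cockroachdb.db.svc.cluster-b.local:26257

---
# Cluster B values: same join list, but init disabled so only
# one cluster ever issues the init.
init:
  enabled: false
conf:
  join:
    - cockroachdb-0.cockroachdb.db.svc.cluster-a.local:26257
    - cockroachdb-0.cockroachdb.db.svc.cluster-b.local:26257
```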

@Otterpohl
Contributor

Otterpohl commented Nov 2, 2022

Just to note a quick workaround for this (until it is fixed), as I had the same problem.
You can specify the join list members directly in the statefulset args:

statefulset:
  args:
    - --join=cockroachdb-0.cockroachdb.database.svc.cluster.local:26257,cockroachdb-1.cockroachdb.database.svc.cluster.local:26257,cockroachdb-2.cockroachdb.database.svc.cluster.local:26257
