
Allow update of CassandraDatacenter to be processed to StatefulSets i… #711

Open
burmanm wants to merge 5 commits into master from check_rack_podtemplate_changes

Conversation

burmanm
Contributor

@burmanm burmanm commented Sep 25, 2024

…f the Datacenter is running in an unhealthy Kubernetes configuration. These could be scheduling-related issues or crashing pods.

What this PR does:

Which issue(s) this PR fixes:
Fixes #583
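
For illustration (not part of the PR text), this is the kind of stuck rollout the change targets: a rack whose first pod never becomes ready because of a bad setting, where a corrective edit to the CassandraDatacenter should still be pushed down to the StatefulSets. The manifest below is a hypothetical sketch; names and values are not taken from this PR:

# Hypothetical sketch: serverVersion was previously set to a non-existent
# release, so the rack's first pod crash-loops and the Datacenter never becomes
# healthy. With this change, correcting the field should still be propagated to
# the underlying StatefulSets instead of waiting for the Datacenter to recover.
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1
spec:
  serverType: cassandra
  serverVersion: 4.1.5   # corrected from a mistyped, non-existent version
  size: 3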

Checklist

  • Changes manually tested
  • Automated Tests added/updated
  • Documentation added/updated
  • CHANGELOG.md updated (not required for documentation PRs)
  • CLA Signed: DataStax CLA

@burmanm burmanm force-pushed the check_rack_podtemplate_changes branch from 287d162 to fd2b3cb on November 20, 2024 17:26
@burmanm burmanm force-pushed the check_rack_podtemplate_changes branch from fd2b3cb to a4ae065 on December 4, 2024 14:29
@burmanm burmanm marked this pull request as ready for review December 4, 2024 14:31
@burmanm burmanm requested a review from a team as a code owner December 4, 2024 14:31
Contributor

@adejanovski adejanovski left a comment


I was able to test this with a failed upgrade to a version of Cassandra that doesn't exist.
But I ran into a problem with the following scenario:

  • I created a cluster with 1G for both the initial and max heap.
  • Once it was started, I updated it with a 256M initial heap size and a 128M max heap size (sketched after the full manifest below).
  • The pod never gets more than one restart, so any fix wouldn't be applied.
  • I then tried to use forceUpgradeRacks, but it didn't apply the fix either:
  forceUpgradeRacks:
    - rack1
  managementApiAuth:
    insecure: {}
  racks:
    - name: rack1

Here's my full manifest:

apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: test-cluster
  namespace: cass-operator
spec:
  clusterName: development
  config:
    cassandra-yaml:
      authenticator: PasswordAuthenticator
      authorizer: CassandraAuthorizer
      num_tokens: 16
      role_manager: CassandraRoleManager
    jvm-server-options:
      initial_heap_size: 1G
      max_heap_size: 1G
  configBuilderResources: {}
  managementApiAuth:
    insecure: {}
  racks:
    - name: rack1
  resources:
    requests:
      cpu: 1500m
      memory: 2Gi
  serverType: cassandra
  serverVersion: 4.1.5
  size: 3
  storageConfig:
    cassandraDataVolumeClaimSpec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: standard
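
For reference, a minimal sketch of the update step listed above, reconstructed from the description in this comment rather than copied from a saved manifest. Only the jvm-server-options block changes relative to the manifest:

    jvm-server-options:
      initial_heap_size: 256M   # was 1G
      max_heap_size: 128M       # was 1G; with the initial heap above the max, the JVM refuses to start, so the pod keeps crash-looping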

…f the Datacenter is running in unhealthy Kubernetes configuration. These could be scheduling related or pods crashing
… ForceRackUpgrade is not set and fix some missing () for ifs
Development

Successfully merging this pull request may close these issues.

Allow modification of configs if the first pod of rack fails