
[BUG] Cluster is always in Deleting after volume expansion #6891

Closed
free6om opened this issue Mar 26, 2024 · 1 comment · Fixed by #6892

free6om (Contributor) commented Mar 26, 2024

No description provided.

free6om added the kind/bug label Mar 26, 2024
free6om added this to the Release 0.9.0 milestone Mar 26, 2024
free6om self-assigned this Mar 26, 2024
ahjing99 (Collaborator) commented:

When restoring Redis, the cluster is also stuck in Deleting.

`kbcli cluster describe-backup redis-aatpsn-20240327025500 --namespace default`

Name: redis-aatpsn-20240327025500	Cluster: redis-aatpsn	Namespace: default

Spec:
  Method:             volume-snapshot
  Policy Name:        redis-aatpsn-redis-backup-policy

Status:
  Phase:              Completed
  Total Size:         1Gi
  Duration:           30s
  Expiration Time:    Apr 03,2024 10:55 UTC+0800
  Start Time:         Mar 27,2024 10:55 UTC+0800
  Completion Time:    Mar 27,2024 10:55 UTC+0800
  Path:               /default/redis-aatpsn-050eaa14-2fc5-4444-bdc1-4c9be501e852/redis/redis-aatpsn-20240327025500
  Time Range Start:   Mar 27,2024 10:55 UTC+0800
  Time Range End:     Mar 27,2024 10:55 UTC+0800

Warning Events: <none>
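As an aside, when scripting around this, the backup phase can be pulled out of `describe-backup`-style text with a tiny helper. This is only a sketch: the `field` function and the heredoc below are illustrative, with the heredoc replaying the Status block captured above; in a live cluster you would pipe the real `kbcli cluster describe-backup ...` output in instead.

```shell
# Sketch: extract a single field from `describe-backup`-style output.
# `field NAME` prints the value after "NAME:" with whitespace stripped.
field() {
  awk -F: -v k="$1" '$1 ~ ("^[[:space:]]*" k "$") { sub(/^[[:space:]]+/, "", $2); print $2 }'
}

# Replay of the captured Status block (stand-in for live command output):
cat <<'EOF' | field Phase     # prints: Completed
Status:
  Phase:              Completed
  Total Size:         1Gi
EOF
```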

`kbcli cluster restore redis-aatpsn-backup --backup redis-aatpsn-20240327025500 --namespace default`

Cluster redis-aatpsn-backup created

`kubectl describe cluster redis-aatpsn-backup`
Name:         redis-aatpsn-backup
Namespace:    default
Labels:       clusterdefinition.kubeblocks.io/name=redis
              clusterversion.kubeblocks.io/name=redis-7.0.6
Annotations:  <none>
API Version:  apps.kubeblocks.io/v1alpha1
Kind:         Cluster
Metadata:
  Creation Timestamp:             2024-03-27T02:51:30Z
  Deletion Grace Period Seconds:  0
  Deletion Timestamp:             2024-03-27T02:53:34Z
  Finalizers:
    cluster.kubeblocks.io/finalizer
  Generation:  2
  Managed Fields:
    API Version:  apps.kubeblocks.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"cluster.kubeblocks.io/finalizer":
        f:labels:
          .:
          f:clusterdefinition.kubeblocks.io/name:
          f:clusterversion.kubeblocks.io/name:
      f:spec:
        .:
        f:affinity:
          .:
          f:podAntiAffinity:
          f:tenancy:
        f:clusterDefinitionRef:
        f:clusterVersionRef:
        f:componentSpecs:
        f:monitor:
        f:resources:
          .:
          f:cpu:
          f:memory:
        f:storage:
          .:
          f:size:
        f:terminationPolicy:
    Manager:      manager
    Operation:    Update
    Time:         2024-03-27T02:51:30Z
    API Version:  apps.kubeblocks.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:clusterDefGeneration:
        f:components:
          .:
          f:redis:
            .:
            f:phase:
            f:podsReady:
            f:podsReadyTime:
          f:redis-sentinel:
            .:
            f:phase:
            f:podsReady:
            f:podsReadyTime:
          f:redis-twemproxy:
            .:
            f:phase:
            f:podsReady:
            f:podsReadyTime:
        f:conditions:
        f:observedGeneration:
        f:phase:
    Manager:         manager
    Operation:       Update
    Subresource:     status
    Time:            2024-03-27T02:53:42Z
  Resource Version:  1494080
  UID:               c20f34f5-f675-4985-b26a-228f2ef4e156
Spec:
  Affinity:
    Pod Anti Affinity:     Preferred
    Tenancy:               SharedNode
  Cluster Definition Ref:  redis
  Cluster Version Ref:     redis-7.0.6
  Component Specs:
    Class Def Ref:
      Class:
    Component Def Ref:  redis
    Enabled Logs:
      running
    Monitor:   true
    Name:      redis
    Replicas:  3
    Resources:
      Limits:
        Cpu:     200m
        Memory:  644245094400m
      Requests:
        Cpu:               200m
        Memory:            644245094400m
    Service Account Name:  kb-redis-aatpsn
    Switch Policy:
      Type:  Noop
    Volume Claim Templates:
      Name:  data
      Spec:
        Access Modes:
          ReadWriteOnce
        Resources:
          Requests:
            Storage:    1Gi
    Component Def Ref:  redis-twemproxy
    Monitor:            true
    Name:               redis-twemproxy
    Replicas:           3
    Resources:
      Limits:
        Cpu:     100m
        Memory:  512Mi
      Requests:
        Cpu:               100m
        Memory:            512Mi
    Service Account Name:  kb-redis-aatpsn
    Volume Claim Templates:
      Name:  data
      Spec:
        Access Modes:
          ReadWriteOnce
        Resources:
          Requests:
            Storage:    1Gi
    Component Def Ref:  redis-sentinel
    Monitor:            true
    Name:               redis-sentinel
    Replicas:           3
    Resources:
      Limits:
        Cpu:     100m
        Memory:  512Mi
      Requests:
        Cpu:               100m
        Memory:            512Mi
    Service Account Name:  kb-redis-aatpsn
    Volume Claim Templates:
      Name:  data
      Spec:
        Access Modes:
          ReadWriteOnce
        Resources:
          Requests:
            Storage:  1Gi
  Monitor:
  Resources:
    Cpu:     0
    Memory:  0
  Storage:
    Size:              0
  Termination Policy:  WipeOut
Status:
  Cluster Def Generation:  2
  Components:
    Redis:
      Phase:            Running
      Pods Ready:       true
      Pods Ready Time:  2024-03-27T02:52:57Z
    Redis - Sentinel:
      Phase:            Running
      Pods Ready:       true
      Pods Ready Time:  2024-03-27T02:52:26Z
    Redis - Twemproxy:
      Phase:            Running
      Pods Ready:       true
      Pods Ready Time:  2024-03-27T02:52:28Z
  Conditions:
    Last Transition Time:  2024-03-27T02:51:30Z
    Message:               The operator has started the provisioning of Cluster: redis-aatpsn-backup
    Observed Generation:   1
    Reason:                PreCheckSucceed
    Status:                True
    Type:                  ProvisioningStarted
    Last Transition Time:  2024-03-27T02:53:36Z
    Message:               Operation cannot be fulfilled on services "redis-aatpsn-backup-redis-twemproxy": the object has been modified; please apply your changes to the latest version and try again
    Reason:                ApplyResourcesFailed
    Status:                False
    Type:                  ApplyResources
    Last Transition Time:  2024-03-27T02:52:57Z
    Message:               all pods of components are ready, waiting for the probe detection successful
    Reason:                AllReplicasReady
    Status:                True
    Type:                  ReplicasReady
    Last Transition Time:  2024-03-27T02:52:57Z
    Message:               Cluster: redis-aatpsn-backup is ready, current phase is Running
    Reason:                ClusterReady
    Status:                True
    Type:                  Ready
  Observed Generation:     1
  Phase:                   Deleting
Events:
  Type     Reason                    Age                      From                  Message
  ----     ------                    ----                     ----                  -------
  Normal   PreCheckSucceed           9m16s                    cluster-controller    The operator has started the provisioning of Cluster: redis-aatpsn-backup
  Normal   ApplyResourcesSucceed     9m16s                    cluster-controller    Successfully applied for resources
  Normal   NeedWaiting               8m53s (x9 over 9m15s)    component-controller  waiting for restore "redis-aatpsn-backup-redis-c20f34f5-preparedata" successfully
  Normal   ComponentPhaseTransition  8m43s (x3 over 9m12s)    cluster-controller    component is Creating
  Warning  ReplicasNotReady          8m20s                    cluster-controller    pods are not ready in Components: [redis redis-twemproxy], refer to related component message in Cluster.status.components
  Warning  ComponentsNotReady        8m20s                    cluster-controller    pods are unavailable in Components: [redis redis-twemproxy], refer to related component message in Cluster.status.components
  Warning  ComponentsNotReady        8m17s                    cluster-controller    pods are unavailable in Components: [redis], refer to related component message in Cluster.status.components
  Warning  ReplicasNotReady          8m17s                    cluster-controller    pods are not ready in Components: [redis], refer to related component message in Cluster.status.components
  Normal   ComponentPhaseTransition  7m49s (x3 over 8m20s)    cluster-controller    component is Running
  Normal   AllReplicasReady          7m49s                    cluster-controller    all pods of components are ready, waiting for the probe detection successful
  Normal   ClusterReady              7m49s                    cluster-controller    Cluster: redis-aatpsn-backup is ready, current phase is Running
  Normal   Running                   7m49s                    cluster-controller    Cluster: redis-aatpsn-backup is ready, current phase is Running
  Normal   NeedWaiting               7m49s                    component-controller  waiting for restore "redis-aatpsn-backup-redis-c20f34f5-postready" successfully
  Normal   DeletingCR                4m15s (x148 over 7m12s)  cluster-controller    Deleting : redis-aatpsn-backup
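The output above shows the usual stuck-deletion signature: `Deletion Timestamp` is set, `cluster.kubeblocks.io/finalizer` is still present, and the controller loops on `DeletingCR` events while the phase stays `Deleting`. A quick way to see which finalizers are blocking deletion is a sketch like the following; the live command is the standard Kubernetes check (not specific to this bug), and the `printf`/`awk` part just replays the metadata captured above so the filter can be run standalone.

```shell
# In a live cluster you would run:
#   kubectl get cluster redis-aatpsn-backup -n default \
#     -o jsonpath='{.metadata.finalizers}'; echo
# Illustration of the same check against the metadata captured above:
printf '%s\n' \
  '  Deletion Timestamp:             2024-03-27T02:53:34Z' \
  '  Finalizers:' \
  '    cluster.kubeblocks.io/finalizer' \
| awk '/Finalizers:/ { f = 1; next } f && /^[[:space:]]+[a-z]/ { sub(/^[[:space:]]+/, ""); print }'
# prints: cluster.kubeblocks.io/finalizer
```

If a cluster is left wedged by the controller bug (fixed by #6892), the generic last-resort workaround is to remove the finalizer by hand, e.g. `kubectl patch cluster redis-aatpsn-backup -n default --type=merge -p '{"metadata":{"finalizers":[]}}'`; this lets deletion complete but skips the operator's own cleanup, so it should only be used once the leaked resources are accounted for.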
