destroy-cluster needs more destruction and bigger explosions! #297

Open
jhoblitt opened this issue Jun 7, 2024 · 2 comments

jhoblitt commented Jun 7, 2024

~ $ k rook-ceph  destroy-cluster
Warning: Are you sure you want to destroy the cluster in namespace "rook-ceph"? If absolutely certain, enter: yes-really-destroy-cluster
yes-really-destroy-cluster
Info: proceeding
Info: getting resource kind cephclusters
Info: removing resource cephclusters: rook-ceph
Info: resource "rook-ceph" is not yet deleted, applying patch to remove finalizer...
Info: resource rook-ceph was deleted
Info: getting resource kind cephblockpoolradosnamespaces
Info: resource cephblockpoolradosnamespaces was not found on the cluster
Info: getting resource kind cephblockpools
Info: removing resource cephblockpools: replicapool
Info: resource "replicapool" is not yet deleted, applying patch to remove finalizer...
Info: resource replicapool was deleted
Info: getting resource kind cephbucketnotifications
Info: resource cephbucketnotifications was not found on the cluster
Info: getting resource kind cephbuckettopics
Info: resource cephbuckettopics was not found on the cluster
Info: getting resource kind cephclients
Info: resource cephclients was not found on the cluster
Info: getting resource kind cephcosidrivers
Info: resource cephcosidrivers was not found on the cluster
Info: getting resource kind cephfilesystemmirrors
Info: resource cephfilesystemmirrors was not found on the cluster
Info: getting resource kind cephfilesystems
Info: removing resource cephfilesystems: auxtel
Info: resource "auxtel" is not yet deleted, applying patch to remove finalizer...
Info: resource auxtel was deleted
Info: getting resource kind cephfilesystemsubvolumegroups
Info: resource cephfilesystemsubvolumegroups was not found on the cluster
Info: getting resource kind cephnfses
Info: removing resource cephnfses: auxtel
Info: resource "auxtel" is not yet deleted, applying patch to remove finalizer...
Info: resource auxtel was deleted
Info: getting resource kind cephobjectrealms
Info: resource cephobjectrealms was not found on the cluster
Info: getting resource kind cephobjectstores
Info: removing resource cephobjectstores: lfa
Info: resource "lfa" is not yet deleted, applying patch to remove finalizer...
Info: resource lfa was deleted
Info: removing resource cephobjectstores: o11y
Info: resource "o11y" is not yet deleted, applying patch to remove finalizer...
Info: resource o11y was deleted
Info: getting resource kind cephobjectstoreusers
Info: resource cephobjectstoreusers was not found on the cluster
Info: getting resource kind cephobjectzonegroups
Info: resource cephobjectzonegroups was not found on the cluster
Info: getting resource kind cephobjectzones
Info: resource cephobjectzones was not found on the cluster
Info: getting resource kind cephrbdmirrors
Info: resource cephrbdmirrors was not found on the cluster
Info: removing deployment rook-ceph-tools
Info: waiting to clean up resources
Info: 13 pods still alive, removing....
Info: pod rook-ceph-mds-auxtel-a-75d9bbbdb4-p5tqg still alive
Info: pod rook-ceph-mds-auxtel-b-56f9698cf4-qrcfw still alive
Info: pod rook-ceph-mds-auxtel-c-57547679cb-57686 still alive
Info: pod rook-ceph-mds-auxtel-d-7b5c6bdc94-kvjxl still alive
Info: pod rook-ceph-mds-auxtel-e-674f495c6-d92zg still alive
Info: pod rook-ceph-mds-auxtel-f-5c5b9f9bdb-9bplv still alive
Info: pod rook-ceph-nfs-auxtel-a-6b6db57977-8hjtz still alive
Info: pod rook-ceph-rgw-lfa-a-b7d5cf9c5-5kw2r still alive
Info: pod rook-ceph-rgw-lfa-a-b7d5cf9c5-95l84 still alive
Info: pod rook-ceph-rgw-lfa-a-b7d5cf9c5-99vmm still alive
Info: 7 pods still alive, removing....
Info: pod rook-ceph-mds-auxtel-a-75d9bbbdb4-p5tqg still alive
Info: pod rook-ceph-mds-auxtel-c-57547679cb-57686 still alive
Info: pod rook-ceph-mds-auxtel-f-5c5b9f9bdb-9bplv still alive
Info: pod rook-ceph-rgw-lfa-a-b7d5cf9c5-5kw2r still alive
Info: pod rook-ceph-rgw-lfa-a-b7d5cf9c5-95l84 still alive
Info: pod rook-ceph-rgw-o11y-a-7cfc5c9957-4dww2 still alive
Info: pod rook-ceph-rgw-o11y-a-7cfc5c9957-wvh76 still alive
Info: 7 pods still alive, removing....
Info: pod rook-ceph-mds-auxtel-a-75d9bbbdb4-p5tqg still alive
Info: pod rook-ceph-mds-auxtel-c-57547679cb-57686 still alive
Info: pod rook-ceph-mds-auxtel-f-5c5b9f9bdb-9bplv still alive
Info: pod rook-ceph-rgw-lfa-a-b7d5cf9c5-5kw2r still alive
Info: pod rook-ceph-rgw-lfa-a-b7d5cf9c5-95l84 still alive
Info: pod rook-ceph-rgw-o11y-a-7cfc5c9957-4dww2 still alive
Info: pod rook-ceph-rgw-o11y-a-7cfc5c9957-wvh76 still alive
Info: 6 pods still alive, removing....
Info: pod rook-ceph-mds-auxtel-a-75d9bbbdb4-p5tqg still alive
Info: pod rook-ceph-mds-auxtel-c-57547679cb-57686 still alive
Info: pod rook-ceph-mds-auxtel-f-5c5b9f9bdb-9bplv still alive
Info: pod rook-ceph-rgw-lfa-a-b7d5cf9c5-95l84 still alive
Info: pod rook-ceph-rgw-o11y-a-7cfc5c9957-4dww2 still alive
Info: pod rook-ceph-rgw-o11y-a-7cfc5c9957-wvh76 still alive
Info: 6 pods still alive, removing....
Info: pod rook-ceph-mds-auxtel-a-75d9bbbdb4-p5tqg still alive
Info: pod rook-ceph-mds-auxtel-c-57547679cb-57686 still alive
Info: pod rook-ceph-mds-auxtel-f-5c5b9f9bdb-9bplv still alive
Info: pod rook-ceph-rgw-lfa-a-b7d5cf9c5-95l84 still alive
Info: pod rook-ceph-rgw-o11y-a-7cfc5c9957-4dww2 still alive
Info: pod rook-ceph-rgw-o11y-a-7cfc5c9957-wvh76 still alive
Info: done
 ~ $ k get pod
NAME                                            READY   STATUS    RESTARTS       AGE
csi-cephfsplugin-c9grs                          3/3     Running   77 (82m ago)   30d
csi-cephfsplugin-ncx74                          3/3     Running   39 (82m ago)   30d
csi-cephfsplugin-provisioner-7f58cd5cf4-mdbmk   6/6     Running   76 (82m ago)   30d
csi-cephfsplugin-provisioner-7f58cd5cf4-vfd5g   6/6     Running   77 (82m ago)   30d
csi-cephfsplugin-v6kr6                          3/3     Running   72 (83m ago)   30d
csi-cephfsplugin-w5cwf                          3/3     Running   72 (83m ago)   30d
csi-rbdplugin-4brcx                             3/3     Running   69 (83m ago)   30d
csi-rbdplugin-h4jvf                             3/3     Running   66 (82m ago)   30d
csi-rbdplugin-kflz7                             3/3     Running   39 (82m ago)   30d
csi-rbdplugin-l8hmw                             3/3     Running   75 (83m ago)   30d
csi-rbdplugin-provisioner-6b495c944-p6xxv       6/6     Running   82 (83m ago)   30d
csi-rbdplugin-provisioner-6b495c944-xr57m       6/6     Running   76 (82m ago)   30d
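
The remaining pods above are the CSI plugin (DaemonSet) and provisioner (Deployment) pods. Assuming the default rook-ceph namespace, the controllers that destroy-cluster left behind can be listed with:

~ $ kubectl -n rook-ceph get deployments,daemonsets
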
travisn (Member) commented Jun 7, 2024

@jhoblitt Can you clarify which destruction and explosions you expected? :) The CSI driver to be deleted? I would expect those pods to all go away when the Rook operator deployment is deleted. Did you delete the operator deployment? The destroy-cluster action does not currently remove the operator; it just removes everything related to a CephCluster CR.
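
For reference, deleting the operator deployment is a one-liner (a sketch assuming the default deployment name rook-ceph-operator in the rook-ceph namespace); if the CSI Deployments/DaemonSets carry owner references to the operator deployment, as the comment above implies, Kubernetes garbage collection should then remove them as well:

~ $ kubectl -n rook-ceph delete deployment rook-ceph-operator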

BlaineEXE (Member) commented

It looks like the operator was deleted, but the CSI driver Deployments/DaemonSets weren't. That does seem like a bug to me. If the operator were also still running, then the CSI drivers still running wouldn't seem like the worst thing to me.
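
If the operator is already gone and nothing cleans these up, a manual cleanup along these lines should work (a sketch; the Deployment and DaemonSet names are assumed from the pod name prefixes in the listing above):

~ $ kubectl -n rook-ceph delete deployment csi-cephfsplugin-provisioner csi-rbdplugin-provisioner
~ $ kubectl -n rook-ceph delete daemonset csi-cephfsplugin csi-rbdplugin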
