Describe the bug:
There do not seem to be clear lines of control between Elasticsearch and Kibana. Consequently, Kibana's post-delete hook depends on a certificate secret that may no longer be present.
Steps to reproduce:
Remove Elasticsearch and Kibana after a failed trial install:
kubectl delete -n logging hr elasticsearch kibana
Expected behavior:
Both applications should delete cleanly.
Provide logs and/or server output (if relevant):
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned logging/post-delete-kibana-kibana-d6tbj to ip-192-168-14-220.eu-west-1.compute.internal
Warning FailedMount 5m23s (x2 over 9m56s) kubelet Unable to attach or mount volumes: unmounted volumes=[elasticsearch-certs], unattached volumes=[elasticsearch-certs kibana-helm-scripts kube-api-access-qzp7x]: timed out waiting for the condition
Warning FailedMount 3m8s (x2 over 7m38s) kubelet Unable to attach or mount volumes: unmounted volumes=[elasticsearch-certs], unattached volumes=[kibana-helm-scripts kube-api-access-qzp7x elasticsearch-certs]: timed out waiting for the condition
Warning FailedMount 102s (x13 over 11m) kubelet MountVolume.SetUp failed for volume "elasticsearch-certs" : secret "elasticsearch-master-certs" not found
Warning FailedMount 52s kubelet Unable to attach or mount volumes: unmounted volumes=[elasticsearch-certs], unattached volumes=[kube-api-access-qzp7x elasticsearch-certs kibana-helm-scripts]: timed out waiting for the condition
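For reference, the missing secret and the stuck hook can be confirmed directly. A minimal check, assuming the logging namespace and the resource names from the events above:
$ kubectl get secret -n logging elasticsearch-master-certs
$ kubectl describe job -n logging post-delete-kibana-kibana
$ kubectl describe pod -n logging post-delete-kibana-kibana-d6tbj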
Artifacts still remain, causing a subsequent re-install to fail (this is not a complete list).
$ kubectl get -n logging roles.rbac.authorization.k8s.io
NAME CREATED AT
post-delete-kibana-kibana 2023-02-24T10:05:09Z
$ kubectl get -n logging rolebindings.rbac.authorization.k8s.io
NAME ROLE AGE
post-delete-kibana-kibana Role/post-delete-kibana-kibana 17m
$ kubectl get -n logging jobs.batch
NAME COMPLETIONS DURATION AGE
post-delete-kibana-kibana 0/1 17m 17m
$ kubectl get -n logging serviceaccounts
NAME SECRETS AGE
...
post-delete-kibana-kibana 1 18m
$ kubectl get -n logging configmaps
NAME DATA AGE
...
kibana-kibana-helm-scripts 1 28m
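A rough manual cleanup sketch for the leftovers listed above (resource names taken from the output; verify against your own release before deleting anything):
$ kubectl delete -n logging job/post-delete-kibana-kibana
$ kubectl delete -n logging role/post-delete-kibana-kibana rolebinding/post-delete-kibana-kibana
$ kubectl delete -n logging serviceaccount/post-delete-kibana-kibana
$ kubectl delete -n logging configmap/kibana-kibana-helm-scripts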
This is also happening to me. Moreover, even if we delete those artifacts, the pre-install job will fail because it finds an existing secret <release name>-kibana-es-token from a previous installation.
Creating a new Elasticsearch token for Kibana
Cleaning previous token
statusCode: 200
Creating new token
statusCode: 200
Creating K8S secret
statusCode: 409
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"secrets \"......-kibana-es-token\" already exists","reason":"AlreadyExists","details":{"name":"......-kibana-es-token","kind":"secrets"},"code":409}
2023-02-24T13:09:26.978501436Z
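A possible workaround for the 409 above, assuming the leftover token secret from the previous installation is safe to discard (substitute the actual Helm release name for the placeholder):
$ kubectl delete secret -n logging <release name>-kibana-es-token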
Chart version:
8.5.1
Kubernetes version:
1.23
Kubernetes provider:
AWS/EKS
Helm Version:
3.6.3
Output of helm get release:
N/A