Ensure 100% persistence with Kubernetes deployment #4442
Unanswered
ChristopherDrum asked this question in Q&A
Replies: 1 comment
-
I think that option #2 above would be preferred, especially when I want to roll out an InfluxDB update (say, from 2.3.0 to 2.4.0) or such. Presumably the data could be brought across an upgrade without issue.
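Under the assumption that the data directory is backed by a PersistentVolumeClaim, that kind of rollout can be as small as an image-tag bump in the pod template (a sketch, not the sample file's exact layout):

```yaml
# Bumping only the image tag in the pod template triggers a rolling update;
# the data on the PersistentVolumeClaim is reattached to the new pod.
containers:
  - name: influxdb
    image: influxdb:2.4.0   # was influxdb:2.3.0
```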
-
I've deployed InfluxDB2 to my company's Kubernetes cluster and had no trouble until yesterday.
I have been using this base YAML as the foundation for my deployment definition:
https://github.com/influxdata/docs-v2/blob/master/static/downloads/influxdb-k8-minikube.yaml
I had about 3 or 4 weeks of data accumulated when I got an alert from our security team about the /metrics and /debug/pprof endpoint exposure being a potential security risk. So, I appended some YAML to the deployment files and redeployed.
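For reference, a sketch of the kind of YAML appended, assuming the influxd container honors its documented environment-variable configuration (INFLUXD_* variables map to influxd config options):

```yaml
# Fragment merged into the container spec of the sample deployment.
# Assumption: these INFLUXD_* variables disable the /metrics and
# /debug/pprof endpoints, per influxd's configuration documentation.
containers:
  - name: influxdb
    image: influxdb:2.4.0
    env:
      - name: INFLUXD_METRICS_DISABLED
        value: "true"
      - name: INFLUXD_PPROF_DISABLED
        value: "true"
```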
This resulted in what was, as far as I could tell, a blank-slate InfluxDB instance.
I was prompted to make a login/password, create a default bucket, and so on. My API tokens had to be regenerated. The bucket with my accumulated data seemed to be gone (though I don't really understand Kubernetes deeply; maybe the data was still somewhere?).
I had to start over from scratch, which is typically not my experience with a StatefulSet (which, as I understand, uses PersistentVolumeClaims). Using k9s I could verify that the data volumeMount (as defined in the sample configuration provided by Influx) is backed by a PersistentVolumeClaim and is provisioned properly. However, now I wonder two things: is mounting data enough to persist everything? And should I broaden the scope of that persistent claim, maybe adding other paths to it? I'm not sure where else I would add them, though.
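As a point of reference, a hedged sketch of pinning the data directory to an explicit claim (the claim name and storage size here are assumptions, not taken from the sample file). InfluxDB 2.x stores both its bolt metadata database (users, tokens, bucket definitions) and the TSM engine under /var/lib/influxdb2 by default, so a mount at that path covers them:

```yaml
# Sketch: an explicit PersistentVolumeClaim for the InfluxDB data directory.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influxdb-data        # assumed name, for illustration
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi          # assumed size
---
# Mounted in the pod template; /var/lib/influxdb2 is InfluxDB 2.x's
# default data path, which includes influxd.bolt and the engine/ files.
containers:
  - name: influxdb
    image: influxdb:2.4.0
    volumeMounts:
      - name: data
        mountPath: /var/lib/influxdb2
volumes:
  - name: data
    persistentVolumeClaim:
      claimName: influxdb-data
```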