Shouldn't there be an HTTP liveness probe on the data nodes that prevents older data nodes from being terminated while they are still replicating data to newly started data nodes during a rolling update?
Meaning: newly created data nodes would not be seen as fully running until the cluster status is green.
The only downside: when a node goes down or reboots, the cluster status becomes yellow, and when that node then tries to start it will never be considered fully up (it will stay at 0/1 Running), because the status is not green.
I'm still wondering whether we need it. With rolling updates, data could be lost: as soon as one replacement data node is Running, the next old node is taken down before its new data has been replicated to the replacement.
Or do the containers have something built in for that?
I'm looking at something like this:
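In Kubernetes a rolling update waits for new pods to become Ready, which a readinessProbe controls, so a minimal sketch of what I have in mind is below. It assumes the default Elasticsearch HTTP port (9200); the delay and threshold values are placeholders, not tested settings:

```yaml
# Probe on the data node container: the pod only becomes Ready once
# the cluster reports green. wait_for_status makes Elasticsearch hold
# the request until the status is reached or the timeout expires, in
# which case it responds with HTTP 408 and the probe fails.
readinessProbe:
  httpGet:
    path: /_cluster/health?wait_for_status=green&timeout=1s
    port: 9200
  initialDelaySeconds: 60
  periodSeconds: 10
  failureThreshold: 5
```

(As noted above, with this check a node restarting into a yellow cluster would never become Ready, so the green requirement probably needs to be relaxed, e.g. wait_for_status=yellow.)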