From time to time we have a need or desire to manually modify a resource in the k8s cluster (e.g., `kubectl edit`). We have occasionally also stumbled into issues when those manual changes remain in the cluster and someone then deploys this repo. The issue is that our deployments (perhaps rightly) use `kubectl apply`. `kubectl apply` does not overwrite the resource running in the cluster, but instead patches it. This means that the resulting resource will be a mix of the manual changes and what we have declaratively configured in this repo, which can cause unexpected behavior.

A good example is changing the `nodeSelector` of a DaemonSet. Let's say someone changes the `nodeSelector` of a DaemonSet to something like `lol/type=rofl`. This change rolls out across all pods in the DaemonSet. Then someone runs a build of this repo. The resulting DaemonSet will have a `nodeSelector` that includes both the manual change and what was in the repo, which would look something like this:
```yaml
nodeSelector:
  lol/type: rofl
  mlab/type: platform
```
The DaemonSet will fail to deploy anywhere, since few to no nodes will have both of those labels.
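To make the mechanics concrete, here is a rough reproduction of the merge. The DaemonSet name and manifest file name are hypothetical stand-ins, not from this repo:

```bash
# Hypothetical names throughout; "example-ds" / daemonset.yaml stand in
# for any resource this repo manages.

# 1. The repo's build applies the declarative config. This sets
#    nodeSelector to mlab/type=platform and records the config in the
#    kubectl.kubernetes.io/last-applied-configuration annotation.
kubectl apply -f daemonset.yaml

# 2. Someone manually replaces the nodeSelector with lol/type=rofl, e.g.
#    via `kubectl edit daemonset example-ds`. edit updates the whole live
#    object, but the last-applied annotation still only knows mlab/type.

# 3. A later build re-applies the same file. apply does a three-way merge:
#    mlab/type is in the file, so it is added back; lol/type is absent
#    from both the file and the last-applied annotation, so apply has no
#    way to know it should be removed and leaves it in place.
kubectl apply -f daemonset.yaml

# The live object now carries both selector entries:
kubectl get daemonset example-ds \
  -o jsonpath='{.spec.template.spec.nodeSelector}'
```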
The (maybe) obvious answer would be to use `kubectl replace` instead of `kubectl apply`. However, `kubectl replace` will not create a resource if it does not already exist, and we cannot guarantee that a build will not introduce new resource definitions. Therefore, we cannot rely completely on `kubectl replace`.
Another option could be to use a combination of `apply` and `replace`. I tried this very thing, and ran into this error: `Service "prometheus-tls" is invalid: spec.clusterIP: Invalid value: "": field is immutable`. Looking this up, the recommended resolution is to use `apply` instead of `replace`.
It's not yet clear what the right solution is.