
Builds/deployments do not overwrite manual changes to the cluster. #394

Open
nkinkade opened this issue Mar 13, 2020 · 0 comments

Comments


nkinkade commented Mar 13, 2020

From time to time we have a need or desire to manually modify a resource in the k8s cluster (e.g., with kubectl edit). We have occasionally stumbled into issues when those manual changes remain in the cluster and someone then deploys this repo. The issue is that our deployments (perhaps rightly) use kubectl apply. kubectl apply does not overwrite the resource running in the cluster, but instead patches it. This means that the resulting resource will be a mix of the manual changes and what we have declaratively configured in this repo, which can cause unexpected behavior.

A good example: say someone manually changes the nodeSelector of a DaemonSet to something like lol/type=rofl, and that change rolls out across all pods in the DaemonSet. Then someone runs a build of this repo. The resulting DaemonSet will have a nodeSelector that includes both the manual change and what was in the repo, which would look something like this:

nodeSelector:
  lol/type: rofl
  mlab/type: platform

The DaemonSet will fail to deploy anywhere, since few to no nodes will have both of those labels.
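To illustrate the mechanics: for plain map fields like nodeSelector, the strategic merge patch that kubectl apply computes behaves roughly like a key-wise dictionary merge, not a wholesale replacement. A minimal sketch of that behavior (this simulates the semantics for illustration; it is not kubectl's actual implementation):

```python
def strategic_merge(live: dict, desired: dict) -> dict:
    """Merge `desired` into `live` key by key, roughly how apply
    patches map-typed fields: keys absent from `desired` (and from the
    last-applied configuration) are left alone, not deleted."""
    merged = dict(live)
    merged.update(desired)
    return merged

# State in the cluster after a manual `kubectl edit`:
live_node_selector = {"lol/type": "rofl"}

# What the repo's manifest declares:
repo_node_selector = {"mlab/type": "platform"}

# After a deploy, both labels survive in the merged nodeSelector:
result = strategic_merge(live_node_selector, repo_node_selector)
print(result)  # {'lol/type': 'rofl', 'mlab/type': 'platform'}
```

The manual key survives because apply only removes fields that were present in its last-applied configuration; a field added out-of-band is invisible to the patch.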

The (maybe) obvious answer would be to use kubectl replace instead of kubectl apply. However, kubectl replace will not create a resource if it does not already exist, and we cannot guarantee that a build will not introduce new resource definitions. Therefore, we cannot rely on kubectl replace alone.
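One naive shape for such a combination would be to attempt a replace and fall back to create for resources that don't exist yet. A rough sketch (the manifest path is hypothetical, and this is not what the repo's deploy scripts currently do):

```shell
# Sketch: replace existing resources wholesale, creating any that
# do not exist yet. k8s/*.yaml is a placeholder for the repo's manifests.
for manifest in k8s/*.yaml; do
  kubectl replace -f "$manifest" 2>/dev/null \
    || kubectl create -f "$manifest"
done
```

Even this pattern runs into the immutable-field problem described next, since replace submits the manifest verbatim, including fields the API server populated on creation.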

Another option could be to use a combination of apply and replace. I tried this very thing, and ran into this error:

Service "prometheus-tls" is invalid: spec.clusterIP: Invalid value: "": field is immutable. Looking this up, the recommended resolution to this is to use apply instead of replace.

It's not yet clear what the right solution is.
