kapp has yaml Norway issues #967
I've also just noticed that kapp did the same thing in the annotations of a service which has AWS LBC annotations:

@@ update service/thing1 (v1) namespace: default @@
...
  6,  6   alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
  7     - alb.ingress.kubernetes.io/success-codes: "200"
  7     + alb.ingress.kubernetes.io/success-codes: 200
Could you share an example template we can reproduce this with?
However, my best bet is that running …
The output I provided for the example manifest in the OP is exactly what is being passed to kapp; helm does nothing other than render the template and pass that output to kapp. So no template is required.
Nope, …
I found it, the issue is upstream: kubernetes-sigs/yaml#108
For anyone else reading this, the missing detail in this ticket seems to be that there's a template variable being substituted in these manifests. kapp's yaml exporter sees that … Then … Both components are individually behaving correctly, but the net result is incorrect, because … The same behaviour can be observed when using …
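The round-trip loss described above can be sketched in a few lines. This is a toy illustration, not kapp's or sigs.k8s.io/yaml's actual code: a parser resolves the quoted scalar `"42"` to the string 42, a naive re-emitter writes the string back as a plain (unquoted) scalar, and the next parse reads an integer. Each step is locally reasonable, but type information is lost across the trip.

```python
# Toy sketch (hypothetical, not kapp's implementation) of how a
# parse -> re-emit -> parse round trip drops YAML quoting.

def parse_scalar(text):
    """Resolve a YAML-like scalar: quoted -> string, numeric plain -> int."""
    if len(text) >= 2 and text.startswith('"') and text.endswith('"'):
        return text[1:-1]          # quoted scalar is always a string
    try:
        return int(text)           # plain scalar that looks numeric -> int
    except ValueError:
        return text

def emit_scalar(value):
    """Naive emitter: writes every value as a plain (unquoted) scalar."""
    return str(value)

original = '"42"'                   # e.g. job_id: "42" in the Helm output
decoded = parse_scalar(original)    # the string '42' -- still correct
reemitted = emit_scalar(decoded)    # '42' -- the quotes are gone
redecoded = parse_scalar(reemitted) # now an int: type information lost

print(type(decoded).__name__, reemitted, type(redecoded).__name__)
```

A real emitter avoids this by quoting any string that a parser would otherwise resolve to a non-string type.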
Please read the reproducible example, I'm not using kapp's helm, I literally passed it raw manifests.
What steps did you take:
helm template | kapp deploy -f -
What happened:
Helm shows the labels in the manifests (I'm using an HPA as an example) as:
However kapp's diff shows:
@@ update horizontalpodautoscaler/thing1 (autoscaling/v2) namespace: default @@
...
  9,  9   app: thing1
 10     - job_id: "42"
 10     + job_id: 42
 11     + kapp.k14s.io/app: "1718329494750920866"
 12     + kapp.k14s.io/association: v1.28f428fa5fef94f7deadd11917f4bd7d
 13, 13   other_label: test
Which produces this error from kapp:
What did you expect:
The deployment to succeed and no errors from kapp.
Anything else you would like to add:
While deploying an existing helm chart by passing the manifests from helm to kapp, I found that when kapp went to make ownership changes on ANY resource in the chart, it rewrote the entire stanza of yaml (in this case, the metadata.labels section) and removed some very critical " characters. That turned a string into an integer, but the value lives in metadata.labels, where integers are against the kubernetes manifest specification. This could also cause issues in other parts of manifests that take a string but whose value can be parsed as an integer, leaving the yaml manifest with an integer instead of the expected (and explicitly requested, via the " quoting!) string value.

I'm passing kapp manifests from helm because the environment I'm working in already has a large existing base of helm charts deployed (and moving to something like ytt is not feasible due to time and risk), but helm as a deployment tool is painful for automated deployments from CICD pipelines, which is why I want to move from helm to kapp as the deployment tool.
Example HPA manifest:
Environment:
kapp version: 0.62.1
Kubernetes version: v1.29.4-eks-036c24b