Fix install and upgrade applying subchart CRDs when condition is false #1123
base: main
Conversation
Force-pushed from ab97ed5 to 5ece165
LGTM
Thanks @matheuscscp 🎖️
Force-pushed from ab9c2b0 to 5607fe9
Will add a test for upgrade tomorrow.
Signed-off-by: Matheus Pimenta <[email protected]>
Force-pushed from 5607fe9 to f708e0d
Force-pushed from f69d2ff to 667a0f8
```diff
@@ -55,7 +55,7 @@ func Upgrade(ctx context.Context, config *helmaction.Configuration, obj *v2.Helm
 	if err != nil {
 		return nil, err
 	}
-	if err := applyCRDs(config, policy, chrt, setOriginVisitor(v2.GroupVersion.Group, obj.Namespace, obj.Name)); err != nil {
+	if err := applyCRDs(config, policy, chrt, vals, setOriginVisitor(v2.GroupVersion.Group, obj.Namespace, obj.Name)); err != nil {
```
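The intent of the one-line change above can be sketched outside of Go (illustrative Python; `dig`, `enabled_dependencies`, and `crds_to_apply` are hypothetical names, not helm-controller's actual API): CRD application has to see the same merged values that dependency processing sees, otherwise a condition-disabled subchart still contributes its CRDs.

```python
# Hypothetical sketch of the fix: the CRD-applying step must receive the
# caller's merged values so condition-disabled subcharts contribute no CRDs.
# Names are illustrative, not helm-controller's real code (which is Go).

def dig(values, path):
    """Resolve a dotted path like 'mysubchart.enabled' in a values dict."""
    node = values
    for key in path.split("."):
        if not isinstance(node, dict) or key not in node:
            return None  # path absent: the condition has no effect
        node = node[key]
    return node

def enabled_dependencies(chart, vals):
    """Keep dependencies whose condition is absent or not explicitly false."""
    deps = []
    for dep in chart.get("dependencies", []):
        cond = dep.get("condition")
        if cond is not None and dig(vals, cond) is False:
            continue  # explicitly disabled: skip it, and its CRDs with it
        deps.append(dep)
    return deps

def crds_to_apply(chart, vals):
    """CRDs of the main chart plus CRDs of enabled subcharts only."""
    crds = list(chart.get("crds", []))
    for dep in enabled_dependencies(chart, vals):
        crds.extend(dep.get("crds", []))
    return crds
```

Before this change, the upgrade path called the CRD-applying code without `vals`, so conditions driven by user-supplied values were invisible to it.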
It seems I have encountered a related bug in our upgrade code, though it could be debatable considering how values are passed in Helm.
While reading the upstream upgrade code that calls `ProcessDependenciesWithMerge()`, I noticed that unlike the install code, it reuses the values from the previous release; see https://github.com/helm/helm/blob/v3.16.3/pkg/action/upgrade.go#L240-L246. This made me wonder about the scenario where the dependency condition initially evaluates to false and no subchart CRD gets installed: in the next upgrade, if the same values are not explicitly set, would the result be different? I tried this scenario with the Helm CLI and helm-controller and saw different results.
I created a chart whose default values.yaml doesn't have the value that determines the chart dependency condition. The dependencies in Chart.yaml are:

```yaml
dependencies:
  - name: mysubchart
    condition: mysubchart.enabled
```

But `mysubchart.enabled` is not set in the default values.yaml file.
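This is the crux: per Helm's documented condition semantics, the dotted path is looked up in the coalesced values, a boolean found there decides whether the dependency is enabled, and an absent path means the condition is simply ignored, leaving the dependency enabled. A minimal model of that lookup (illustrative Python, not Helm's actual Go implementation):

```python
def dependency_enabled(values, condition_path):
    """Model of Helm's condition lookup: a dotted path into the values.
    A boolean at the path decides; an absent path means the condition
    is ignored and the dependency stays enabled. Illustrative only."""
    node = values
    for key in condition_path.split("."):
        if not isinstance(node, dict) or key not in node:
            return True  # path missing: condition ignored, dep enabled
        node = node[key]
    return bool(node)
```

So with `mysubchart.enabled` absent from the default values.yaml, the subchart is enabled unless the user explicitly supplies `enabled: false`, which is exactly what the experiments below exercise.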
While installing, I passed the value explicitly to disable the subchart:

```shell
$ helm install full-coral ./mychart/ --set 'mysubchart.enabled=false'
```

After installing the chart, when I query the values of the release, I get:

```shell
$ helm get values full-coral
USER-SUPPLIED VALUES:
mysubchart:
  enabled: false
```
Then I run upgrade:

```shell
$ helm upgrade full-coral ./mychart
```

and see the values from the first release version still present:

```shell
$ helm get values full-coral
USER-SUPPLIED VALUES:
mysubchart:
  enabled: false
```

And the subchart CRD has not been installed.
Trying the same scenario with helm-controller, setting the values in the HelmRelease:

```yaml
values:
  mysubchart:
    enabled: false
```

After the first install, querying the values results in:

```shell
$ helm get values mychart-release
USER-SUPPLIED VALUES:
mysubchart:
  enabled: false
```

The subchart CRD is not installed.

Removing the `mysubchart` value from the HelmRelease and upgrading results in the subchart CRD getting installed. Querying the values:

```shell
$ helm get values mychart-release
USER-SUPPLIED VALUES:
null
```

Even if I pass `-a` to see all the values, there are other default values but no `mysubchart` value is present.
Given the way values are used in a HelmRelease, removing values may not be something users are likely to do, but this shows a difference in behavior: helm-controller doesn't reuse the values from the previous release if no values are explicitly passed during upgrade. This may be a separate issue as a whole, but applying the same theory of value reuse in upgrade to what this PR is trying to address, it still seems like a bug. The `ProcessDependenciesWithMerge()` call for applying CRDs should receive the reused values, like the upstream upgrade does, to prevent subchart CRDs from getting installed when the condition was previously false.
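The behavioral difference described here can be compressed into one decision (an assumption-laden model, not Helm's full merge logic in pkg/action/upgrade.go): when no values are supplied on upgrade, Helm falls back to the previous release's user-supplied values, while helm-controller treats the HelmRelease values as the complete desired set every time.

```python
def next_user_values(previous, supplied, reuse_like_helm):
    """User-supplied values recorded by the next release, per the
    scenario above. `previous` holds the last release's user-supplied
    values; `supplied` is what this upgrade passes (None if nothing).
    Simplified model, not Helm's actual merge code."""
    if reuse_like_helm and supplied is None:
        return previous  # Helm CLI: old user values carry forward
    return supplied      # helm-controller: HR values are the whole truth
```

In the experiment above, `previous = {"mysubchart": {"enabled": False}}` and `supplied = None`: the Helm CLI keeps the subchart disabled, while helm-controller ends up with `null` user values, the condition path disappears, and the subchart is re-enabled.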
Here are the results of my experiments (and they are crazy, and I repeated them many times, and they always ended up crazy):
Steps using `helm install`, `helm upgrade` and `helm upgrade --install`
1. Install with `helm install -f values.yaml`, with `values.yaml` containing `subchart: {enabled: false}`.

   After this step, only the main chart `ConfigMap` is in the cluster, as expected. No subchart `ConfigMap`, no subchart CRD.
2. Upgrade with `helm upgrade`, i.e. without passing `-f values.yaml`.

   After this step, the cluster has not changed. Only the main chart `ConfigMap` is present. No subchart `ConfigMap`, no subchart CRD. This is explained by the behavior mentioned in this thread: Helm reuses the user-supplied values from the previous release stored in-cluster, which can be retrieved with `helm get values <release>`. At the end of this step the values retrieved with this command look exactly like the values from the previous step.
3. Change the `values.yaml` content to `subchart: {enabled: true}` and upgrade with `helm upgrade -f values.yaml`.

   After this step the cluster now has both `ConfigMaps`, main chart and subchart. The subchart CRD is not in the cluster! A bit crazy, but this is apparently well documented here. Maybe `helm upgrade --install` will apply the subchart CRD.
4. Use the same `values.yaml` from the previous step, upgrade with `helm upgrade --install -f values.yaml`.

   After this step, the CRD is still not in the cluster! So CRDs are really only applied with `helm install`. This is apparently aligned with the Helm docs mentioned above.
5. Wipe everything, install with `subchart: {enabled: true}`.

   After this step, both `ConfigMaps` are in the cluster, and the subchart CRD as well.
6. Change the values to `subchart: {enabled: false}`, upgrade with `helm upgrade -f values.yaml`.

   After this step, the subchart `ConfigMap` is no longer in the cluster, but the CRD is. This is apparently also aligned with the docs above.
Steps using `helm-controller` with the fix from this pull request
1. Install the chart with a `HelmRelease` with `spec: {values: {subchart: {enabled: false}}}`.

   After this step, only the main chart `ConfigMap` is in the cluster, no subchart `ConfigMap`, no subchart CRD.
2. Remove `.spec.values` from the `HelmRelease`.

   After this step, both `ConfigMaps` are in the cluster, as well as the subchart CRD. As we can observe, this behavior does not match the Helm CLI, which would prevent the CRD from being installed in this upgrade for two separate reasons: 1) Helm reuses values from the previous release, Flux does not; and 2) Helm will, by design according to the docs and tests above, not install CRDs in an upgrade. From the subchart `ConfigMap` perspective, this behavior is also not aligned with Helm's behavior of reusing values from a previous release, which would also have prevented the subchart from being applied.
3. Re-add the exact same values from step 1 to the `HelmRelease`.

   After this step, the subchart `ConfigMap` is no longer in the cluster, but the subchart CRD remains. This is aligned with Helm's documented design of not touching CRDs in an upgrade.
Conclusion
In my opinion we should ignore all the stateful Helm crap. Helm was not designed with GitOps principles in mind. In the Flux world, you want reality to match the state you have specified as desired. If that were our design choice, then the result observed after step 2 in the `helm-controller` experiment above would have been fine: there's no previous state to care about. But despite being aligned with Helm's behavior, what happened after step 3 is, in my opinion, awful from a GitOps perspective. My values specify that I don't want the CRD in the cluster anymore, but it is still there.
My vote for this pull request
Let's merge it as is; it's fixing something entirely orthogonal to stateful versus stateless, and we can open a separate issue for that. Here we are simply making our CRD installation code aware of the dependency-processing logic. Stateful versus stateless is outside the scope of this PR.
Fixes #938
Supersedes #941