Fix extraEnv in gitjob controller template #2650

Merged
merged 2 commits on Jul 18, 2024
9 changes: 9 additions & 0 deletions .github/workflows/ci.yml
@@ -21,6 +21,15 @@ jobs:
uses: actions/checkout@v4
with:
fetch-depth: 0

-
name: Set up chart-testing
uses: helm/[email protected]
with:
version: v3.3.0
-
name: Run chart-testing (lint)
run: ct lint --all --validate-maintainers=false charts/
-
uses: actions/setup-go@v5
with:
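Reassembled for readability, the added lint steps read roughly as follows; the chart-testing action pin is an assumption for illustration, while the `ct` CLI version and the lint command are taken from the change above:

```yaml
- name: Set up chart-testing
  uses: helm/chart-testing-action@v2.6.1  # action pin assumed for illustration
  with:
    version: v3.3.0                       # version of the ct CLI to install
- name: Run chart-testing (lint)
  # Lints every chart under charts/, using each values file under a chart's ci/ directory
  run: ct lint --all --validate-maintainers=false charts/
```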
2 changes: 2 additions & 0 deletions charts/fleet-agent/.helmignore
@@ -0,0 +1,2 @@
.helmignore
ci/
2 changes: 2 additions & 0 deletions charts/fleet-agent/ci/default-values.yaml
@@ -0,0 +1,2 @@
apiServerURL: "https://localhost"
Contributor
Would this file, and other files under charts/fleet{-agent,}/ci, end up being packaged in our charts?
Should we add .helmignore files to those charts to prevent that?

Apart from that, I'm slightly concerned about the maintenance overhead of those values files when we add new features involving such values: it seems likely that we may forget to update at least some of them to reflect the latest available values.

Member Author
@manno Jul 18, 2024
Thanks, added those .helmignore files. The actual values in those files don't matter much; I think chart-testing just checks that the result is valid YAML. Not sure if we will ever be able to cover all combinations, but at least this bug would have been caught by the one file with "debug: false".

apiServerCA: "abc"
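Regarding the packaging question above, a hypothetical check (not part of this PR; step name and paths are illustrative) could confirm that `helm package` honours the new `.helmignore` entries and keeps `ci/` out of the chart archive:

```yaml
- name: Verify ci/ values are not packaged
  run: |
    helm package charts/fleet-agent --destination /tmp/chart-pkg
    # Fail if any ci/ path made it into the packaged chart
    if tar -tzf /tmp/chart-pkg/fleet-agent-*.tgz | grep -q '/ci/'; then
      echo "ci/ files leaked into the packaged chart"
      exit 1
    fi
```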
4 changes: 2 additions & 2 deletions charts/fleet-agent/values.yaml
@@ -19,8 +19,8 @@ agentTLSMode: "system-store"
token: ""

# Labels to add to the cluster upon registration only. They are not added after the fact.
#labels:
# foo: bar
# labels:
# foo: bar

# The client ID of the cluster to associate with
clientID: ""
2 changes: 2 additions & 0 deletions charts/fleet/.helmignore
@@ -0,0 +1,2 @@
.helmignore
ci/
64 changes: 64 additions & 0 deletions charts/fleet/ci/debug-values.yaml
@@ -0,0 +1,64 @@
bootstrap:
enabled: true

global:
cattle:
systemDefaultRegistry: "ghcr.io"

nodeSelector:
kubernetes.io/os: os2

tolerations:
- key: node.cloudprovider.kubernetes.io/uninitialized
operator: "Equal"
value: "true"
effect: NoSchedule

priorityClassName: "prio1"

gitops:
enabled: true

metrics:
enabled: true

debug: true
debugLevel: 4
propagateDebugSettingsToAgents: true

cpuPprof:
period: "60s"
volumeConfiguration:
hostPath:
path: /tmp/pprof
type: DirectoryOrCreate

migrations:
clusterRegistrationCleanup: true

leaderElection:
leaseDuration: 30s
retryPeriod: 10s
renewDeadline: 25s

controller:
reconciler:
workers:
gitrepo: "1"
bundle: "1"
bundledeployment: "1"

extraEnv:
- name: EXPERIMENTAL_OCI_STORAGE
value: "true"

shards:
- id: shard0
nodeSelector:
kubernetes.io/hostname: k3d-upstream-server-0
- id: shard1
nodeSelector:
kubernetes.io/hostname: k3d-upstream-server-1
- id: shard2
nodeSelector:
kubernetes.io/hostname: k3d-upstream-server-2
Empty file.
64 changes: 64 additions & 0 deletions charts/fleet/ci/nobootstrap-values.yaml
@@ -0,0 +1,64 @@
bootstrap:
enabled: false

global:
cattle:
systemDefaultRegistry: "ghcr.io"

nodeSelector:
kubernetes.io/os: mac

tolerations:
- key: node.cloudprovider.kubernetes.io/uninitialized
operator: "Equal"
value: "true"
effect: NoSchedule

priorityClassName: "prio1"

gitops:
enabled: true

metrics:
enabled: true

debug: false
debugLevel: 4
propagateDebugSettingsToAgents: true

cpuPprof:
period: "60s"
volumeConfiguration:
hostPath:
path: /tmp/pprof
type: DirectoryOrCreate

migrations:
clusterRegistrationCleanup: true

leaderElection:
leaseDuration: 30s
retryPeriod: 10s
renewDeadline: 25s

controller:
reconciler:
workers:
gitrepo: "1"
bundle: "1"
bundledeployment: "1"

extraEnv:
- name: EXPERIMENTAL_OCI_STORAGE
value: "true"

shards:
- id: shard0
nodeSelector:
kubernetes.io/hostname: k3d-upstream-server-0
- id: shard1
nodeSelector:
kubernetes.io/hostname: k3d-upstream-server-1
- id: shard2
nodeSelector:
kubernetes.io/hostname: k3d-upstream-server-2
64 changes: 64 additions & 0 deletions charts/fleet/ci/nodebug-values.yaml
@@ -0,0 +1,64 @@
bootstrap:
enabled: true

global:
cattle:
systemDefaultRegistry: "ghcr.io"

nodeSelector:
kubernetes.io/os: plan9

tolerations:
- key: node.cloudprovider.kubernetes.io/uninitialized
operator: "Equal"
value: "true"
effect: NoSchedule

priorityClassName: "prio1"

gitops:
enabled: true

metrics:
enabled: true

debug: false
debugLevel: 4
propagateDebugSettingsToAgents: true

cpuPprof:
period: "60s"
volumeConfiguration:
hostPath:
path: /tmp/pprof
type: DirectoryOrCreate

migrations:
clusterRegistrationCleanup: true

leaderElection:
leaseDuration: 30s
retryPeriod: 10s
renewDeadline: 25s

controller:
reconciler:
workers:
gitrepo: "1"
bundle: "1"
bundledeployment: "1"

extraEnv:
- name: EXPERIMENTAL_OCI_STORAGE
value: "true"

shards:
- id: shard0
nodeSelector:
kubernetes.io/hostname: k3d-upstream-server-0
- id: shard1
nodeSelector:
kubernetes.io/hostname: k3d-upstream-server-1
- id: shard2
nodeSelector:
kubernetes.io/hostname: k3d-upstream-server-2
64 changes: 64 additions & 0 deletions charts/fleet/ci/nogitops-values.yaml
@@ -0,0 +1,64 @@
bootstrap:
enabled: true

global:
cattle:
systemDefaultRegistry: "ghcr.io"

nodeSelector:
kubernetes.io/os: winxp

tolerations:
- key: node.cloudprovider.kubernetes.io/uninitialized
operator: "Equal"
value: "true"
effect: NoSchedule

priorityClassName: "prio1"

gitops:
enabled: false

metrics:
enabled: true

debug: false
debugLevel: 4
propagateDebugSettingsToAgents: true

cpuPprof:
period: "60s"
volumeConfiguration:
hostPath:
path: /tmp/pprof
type: DirectoryOrCreate

migrations:
clusterRegistrationCleanup: true

leaderElection:
leaseDuration: 30s
retryPeriod: 10s
renewDeadline: 25s

controller:
reconciler:
workers:
gitrepo: "1"
bundle: "1"
bundledeployment: "1"

extraEnv:
- name: EXPERIMENTAL_OCI_STORAGE
value: "true"

shards:
- id: shard0
nodeSelector:
kubernetes.io/hostname: k3d-upstream-server-0
- id: shard1
nodeSelector:
kubernetes.io/hostname: k3d-upstream-server-1
- id: shard2
nodeSelector:
kubernetes.io/hostname: k3d-upstream-server-2
6 changes: 3 additions & 3 deletions charts/fleet/templates/deployment_gitjob.yaml
@@ -83,6 +83,9 @@ spec:
- name: GITREPO_RECONCILER_WORKERS
value: {{ quote $.Values.controller.reconciler.workers.gitrepo }}
{{- end }}
{{- if $.Values.extraEnv }}
{{ toYaml $.Values.extraEnv | indent 12}}
{{- end }}
{{- if $.Values.debug }}
- name: CATTLE_DEV_MODE
value: "true"
@@ -95,9 +98,6 @@ spec:
drop:
- ALL
{{- end }}
{{- if $.Values.extraEnv }}
{{ toYaml $.Values.extraEnv | indent 12}}
{{- end }}
nodeSelector: {{ include "linux-node-selector" $shard.id | nindent 8 }}
{{- if $.Values.nodeSelector }}
{{ toYaml $.Values.nodeSelector | indent 8 }}
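The fix above moves the extraEnv block from after the securityContext section into the container's env list. With the ci/ values shown earlier (debug disabled, one extraEnv entry), the gitjob container would now render roughly like this sketch; the container name and field layout are abbreviated assumptions, not verbatim output:

```yaml
containers:
  - name: gitjob                 # container name assumed for illustration
    env:
      - name: GITREPO_RECONCILER_WORKERS
        value: "1"
      # extraEnv entries are now emitted here, inside env ...
      - name: EXPERIMENTAL_OCI_STORAGE
        value: "true"
    securityContext:             # ... instead of after this block, where they previously landed
      capabilities:
        drop:
          - ALL
```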
12 changes: 6 additions & 6 deletions charts/fleet/values.yaml
@@ -78,12 +78,12 @@ propagateDebugSettingsToAgents: true

## Optional CPU pprof configuration. Profiles are collected continuously and saved every period
## Any valid volume configuration can be provided, the example below uses hostPath
#cpuPprof:
# period: "60s"
# volumeConfiguration:
# hostPath:
# path: /tmp/pprof
# type: DirectoryOrCreate
# cpuPprof:
# period: "60s"
# volumeConfiguration:
# hostPath:
# path: /tmp/pprof
# type: DirectoryOrCreate

migrations:
clusterRegistrationCleanup: true