Feat/k8s job #132
base: main
Conversation
This looks like a great start! I found a few minor typos/residuals from Deployment, but overall I think this touches the core!
`charts/k8s-job/README.md` (outdated)

> back to [root README](/README.adoc#day-to-day-operations)
>
> ## How do I expose my application internally to the cluster?
This section does not apply to a Job, since a Job by design is for running background tasks and is not exposed with an endpoint.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actually, it looks like almost everything after this doesn't apply to a job, so probably can cut everything after here, and then add new sections that pertain to job. Off the top of my head:
How do I check the status of the Job?
How do I configure my Job to run periodically?
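If those sections get written, here is a hedged sketch of the commands they might document (the `my-job` release name is a placeholder, not something defined by this chart; the CronJob resource is the natural fit for periodic runs):

```shell
# Check the status of a Job created by the chart
kubectl get job my-job
kubectl describe job my-job

# Follow the logs of the Job's pod(s)
kubectl logs job/my-job

# For periodic execution, Kubernetes provides CronJob, e.g.:
kubectl create cronjob my-job \
  --image=busybox \
  --schedule="*/5 * * * *" \
  -- /bin/sh -c "echo hello"
```

These are against a live cluster, so they are only illustrative here.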
```
{{- range $key, $value := .Values.additionalDeploymentLabels }}
{{ $key }}: {{ $value }}
{{- end}}
{{- with .Values.deploymentAnnotations }}
```
Should be renamed to `jobAnnotations`.
Also, this is missing from the `values.yaml` file.
```
app.kubernetes.io/name: {{ include "k8s-job.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- range $key, $value := .Values.additionalDeploymentLabels }}
```
Should be renamed to `additionalJobLabels`.
Also, this is missing from the `values.yaml` file.
```
{{- if .isCanary }}
gruntwork.io/deployment-type: canary
{{- else }}
gruntwork.io/deployment-type: main
{{- end }}
```
Since we don't have canaries, can remove.
Looks like a good start! I want to actually pull and run some of the tests before leaving detailed feedback. Just wanted to let you know that I'll get to that soon!
Co-authored-by: Yoriyasu Yano <[email protected]>
…services into feat/k8s-job
```
containerArgs:
- "-c"
- "while true; do echo hello; sleep 10;done"
```
👍 I think this makes sense as a trial job to run.
Or... is this just container startup? Ah, I'm not very familiar with the k8s Job resource, but I was thinking the example could run a very simple job that the integration test (k8s_job_test.go) would verify was successful.
I believe this should just print "hello" every ten seconds before being closed out. Jobs are usually just one-off tasks like db-migrations, so I thought this kind of thing would work. But you're right, we still need to sort out how we would do integration testing for verifying a successful finish for a Job.
Yep, that makes sense.
```
test_structure.RunTestStage(t, "validate_job_deployment", func() {
	verifyPodsCreatedSuccessfully(t, kubectlOptions, "busybox", releaseName, NumPodsExpected)
})
```
Might make sense to also have a stage for "validate_job_successful".
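As a sketch, such a stage could mirror what `kubectl wait` does for Job completion (the `my-job` name is a placeholder, not something defined in this PR):

```shell
# Block until the Job reports the Complete condition, or fail after a timeout
kubectl wait --for=condition=complete --timeout=120s job/my-job

# Alternatively, inspect the succeeded count in the Job status directly
kubectl get job my-job -o jsonpath='{.status.succeeded}'
```

Either check could be shelled out to from the Go test, or reimplemented against the Kubernetes API in the test helper.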
Yeah, if there's enough time I'm going to turn to that for sure.
Co-authored-by: Rho <[email protected]>
Looks very close to ready for an initial release! I think the one thing that needs attention is a general cleanup of the chart README and adding one for the example, but overall I think we are very close!
Also, getting the tests to pass. I just attempted a run and ran into the following errors:
```
=== CONT  TestK8SJobOptionalValuesAreOptional/containerImage.pullPolicy
    template.go:20:
        Error Trace:    template.go:20
                        k8s_job_template_test.go:78
        Error:          Received unexpected error:
                        error while running command: exit status 1; WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /Users/yoriy/.kube/config
                        WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /Users/yoriy/.kube/config
                        Error: parse error at (k8s-job/templates/_job_spec.tpl:141): undefined variable "$hasInjectionTypes"
                        Use --debug flag to render out invalid YAML
        Test:           TestK8SJobOptionalValuesAreOptional/containerImage.pullPolicy
--- FAIL: TestK8SJobOptionalValuesAreOptional (0.00s)
    --- FAIL: TestK8SJobOptionalValuesAreOptional/containerImage.pullPolicy (0.05s)
    k8s_job_template_test.go:192:
        Error Trace:    k8s_job_template_test.go:192
        Error:          Not equal:
                        expected: ""
                        actual  : "second-custom-value"
        Diff:
        --- Expected
        +++ Actual
        @@ -1 +1 @@
        -
        +second-custom-value
        Test:           TestK8SJobAddingAdditionalLabels
--- FAIL: TestK8SJobAddingAdditionalLabels (0.16s)
    k8s_job_template_test.go:91:
        Error Trace:    k8s_job_template_test.go:91
        Error:          Not equal:
                        expected: 0
                        actual  : 1
        Test:           TestK8SJobAnnotationsRenderCorrectly
    k8s_job_template_test.go:92:
        Error Trace:    k8s_job_template_test.go:92
        Error:          Not equal:
                        expected: ""
                        actual  : "yqDBbJ"
        Diff:
        --- Expected
        +++ Actual
        @@ -1 +1 @@
        -
        +yqDBbJ
        Test:           TestK8SJobAnnotationsRenderCorrectly
--- FAIL: TestK8SJobAnnotationsRenderCorrectly (0.16s)
    k8s_job_template_test.go:180:
        Error Trace:    k8s_job_template_test.go:180
        Error:          Not equal:
                        expected: ""
                        actual  : "main"
        Diff:
        --- Expected
        +++ Actual
        @@ -1 +1 @@
        -
        +main
        Test:           TestK8SJobMainJobContainersLabeledCorrectly
--- FAIL: TestK8SJobMainJobContainersLabeledCorrectly (0.16s)
```
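The `undefined variable "$hasInjectionTypes"` parse error can be chased without the test suite by rendering the chart locally; a rough sketch (the release name and chart path are assumptions based on this repo's layout):

```shell
# Render the chart without a cluster; --debug also prints invalid YAML
helm template my-release ./charts/k8s-job --debug

# Narrow the output to a single rendered template file
# (the partial _job_spec.tpl itself is not rendered directly)
helm template my-release ./charts/k8s-job \
  --show-only templates/job.yaml --debug
```

This is usually faster for iterating on template syntax errors than re-running the Go tests.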
> back to [root README](/README.adoc#day-to-day-operations)
>
> ## How do I set and share configurations with the application?
I think we can actually link to the `k8s-service` docs for this section, since the content is largely applicable across the two. Something like:
This chart exposes the same parameters as `k8s-service` to set and
share configuration variables with your container Pod. Refer to the
[corresponding section](/charts/k8s-service/README.md#how-do-i-set-and-share-configurations-with-the-application)
in the k8s-service documentation for more details.
Seconding Yori's comment! Where possible, we like to link to the "single source of truth".
```
labels:
  app.kubernetes.io/name: {{ include "k8s-job.name" . }}
  app.kubernetes.io/instance: {{ .Release.Name }}
  gruntwork.io/deployment-type: main
```
This label is not necessary in `k8s-job`, as it is primarily used to differentiate canaries, which don't apply to the job use case.
```
@@ -0,0 +1,164 @@
#----------------------------------------------------------------------------------------------------------------------
```
This is good to start! In the near future, right after this, we should also implement support for running multiple copies of the pod via the parallelism configuration.
I can give some better feedback based on having pulled the code. I think it's a great start.
If you were to move forward with this PR in the future I would say:
- Clean up the charts/k8s-job/README.md first. It would be nice to explain, no more complicated than need be, just what the chart aims to do.
- Write the README for examples/k8s-job-busybox, which you've stubbed out.
- Then work on getting tests to work. You'll want to wire up the tests using the right render-helper functions. I'd say start with the template tests, then the integration test, but it depends on your workflow. Maybe the integration test makes more sense. 🤷
- Part of the above, cull the tests down to only what's needed for k8s-job.
- Going beyond, if you wanted: organize the `test/` folder to be a little easier to read. Imagine if we started supporting another new chart. This could be a separate PR too.
```
uniqueID := random.UniqueId()
// ERROR: Need to find function that can inject annotations into a job
job := renderK8SServiceDeploymentWithSetValues(t, map[string]string{"jobAnnotations.unique-id": uniqueID})
```
I listened to your outro call, and I think you got a little tripped up about this function here. This helper function is actually defined right in this directory in a helper file. You could follow a similar pattern and also define your helper functions for k8s-job that way. Or if you feel another approach is easier to reason about, that could be great.
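For intuition: the render helpers ultimately wrap `helm template`, so a job-specific helper would boil down to something like the following (the `jobAnnotations` key and `templates/job.yaml` file name are assumptions for illustration):

```shell
# Roughly what a renderK8SJobWithSetValues-style helper would run:
# render the chart with an injected annotation and show only the job template
helm template my-release ./charts/k8s-job \
  --set jobAnnotations.unique-id=abc123 \
  --show-only templates/job.yaml
```

The Go helper would then unmarshal the rendered YAML into a `batchv1.Job` struct and assert on its metadata.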
I can totally understand; ramping up on this test as well as Terratest can be a lot in one go. (Pun intended.)
## Description
This PR attempts to address Issue #24 by providing a deployment bundle for running a single Kubernetes Job. Review requested from @rhoboat. WIP.
## Documentation
Inside `/helm/charts/k8s-job/README.md`
## TODOs
Requested by @rhoboat
## Related Issues
Addresses Issue #24