Adds Statefulset Support #137
base: main
Conversation
- Adds appropriate documentation to Chart README
- Introduces statefulset spec template
- Adds appropriate tests
- Adds appropriate test fixtures
- Adds appropriate templates
Bumping this up as it's been a couple of months with no response. @autero1 Is there anything missing from this PR that I should work on before it's ready for review?
It is a useful addition indeed 👍
@yorinasub17 this is a good feature.
Hiya! Responding because I was tagged - unfortunately, I am no longer with Gruntwork so I can't do anything to move this along 😞
Bumping this again. @rhoboat @zackproser @autero1
Hi @rvc-mf - none of the folks you've pinged here work at Gruntwork any longer.
AFAICT, this project is orphaned and should be archived to indicate the unmaintained state.
How is it an orphaned project if it's actively being used by the k8s-service terraform module? Let alone a module you released not two weeks ago.
@zackproser I pinged the people who were assigned to review the PR automatically. Where can I find a list of people who can look at these changes?
Hi @rvc-mf, I'll make sure this PR gets routed to the right SME for review.
I use the charts/k8s-service in my own IaC and I've been watching this project for quite a while, and based on at least two […], the gut feeling of my 20+ years of experience as an open source developer and contributor tells me that this project has been orphaned.
Hi folks, Gruntwork co-founder and Chief Product Officer here. I'm sorry for the delays and lack of response from our team. As a few folks have mentioned, all those Grunts subscribed to this thread happen to be no longer with the company, so this escaped our radar screen. But as of today, it escapes us no more. As @arsci said, we'll route this to the right SMEs and get an official response on priority here.
Hey all, as @josh-padnick mentioned, apologies for the lack of traction on this one. I just got this added to our project board to get prioritized for a review. Feel free to tag me for anything related to this repo as I am now one of the maintainers of this project. Review soon to come!
Awesome news, thank you!
- updates tests
- updates helper files/functions
@ryehowell, I believe I've accounted for all the changes you mentioned in your comment, but I'm running into issues with the testing suite. I can't seem to get it to work, even for tests that cover files I didn't change. See the error log below (this is for a test I have not changed):
$ go test -v -tags all -timeout 60m -run TestK8SServiceCanaryDeployment
# (...)
TestK8SServiceCanaryDeployment 2023-09-12T07:36:48-06:00 logger.go:66: Error: INSTALLATION FAILED: services "k8s-service-canary-ykmv5h-canary-test" already exists
=== NAME TestK8SServiceCanaryDeployment
install.go:16:
Error Trace: $HOME/projects/helm-kubernetes-services/test/install.go:16
$HOME/projects/helm-kubernetes-services/test/k8s_service_canary_deployment_test.go:53
Error: Received unexpected error:
error while running command: exit status 1; Error: INSTALLATION FAILED: services "k8s-service-canary-ykmv5h-canary-test" already exists
Test: TestK8SServiceCanaryDeployment
TestK8SServiceCanaryDeployment 2023-09-12T07:36:48-06:00 logger.go:66: Running command helm with args [delete --namespace k8s-service-canary-ykmv5h k8s-service-canary-ykmv5h]
TestK8SServiceCanaryDeployment 2023-09-12T07:36:49-06:00 logger.go:66: release "k8s-service-canary-ykmv5h" uninstalled
TestK8SServiceCanaryDeployment 2023-09-12T07:36:49-06:00 client.go:42: Configuring Kubernetes client using config file $HOME/.kube/config with context
--- FAIL: TestK8SServiceCanaryDeployment (0.47s)
FAIL
exit status 1
FAIL github.com/gruntwork-io/helm-kubernetes-services/test 0.488s
I tried debugging the error and it seems like the failure happens in the helm module ("github.com/gruntwork-io/terratest/modules/helm"), in one of the RunCommand steps. Here's an example of the command the test runs:
Running command helm with args [install --namespace k8s-service-ykmv5h -f /home/rogvc/projects/helm-kubernetes-services/charts/k8s-service/linter_values.yaml -f /home/rogvc/projects/helm-kubernetes-services/test/fixtures/main_statefulset_values.yaml k8s-service-ykmv5h /home/rogvc/projects/helm-kubernetes-services/charts/k8s-service]
Any tips on how to proceed?
Hey @rvc-mf, apologies for the delay here. We've been working on getting some new features out for our EKS module. I'll do another review of this and take a look at the failing test as well. @mateimicu, would you be available to help review this? I've got this in my queue, but wanted to see if you would like to take a look at this as well 😄 to cast a wider net on this.
@ryehowell sure :D, I will take a look today
FYI, although I'm in no position to provide a technical review of this PR, I would like to endorse it as a user of this project.
Hi @rvc-mf, let's first try to unblock you. I managed to run the command you provided locally; I think this fails because there is a name collision.
@mateimicu Thanks for the reply. I was able to find the issue and solve it in my latest changes. I'm now running into another issue that I think has more to do with my Docker + minikube environment than with the code changes. I checked out the […]. Has anyone on your team seen a similar error when running the testing suite?
Error we get when we add 'minikube' to /etc/hosts: […]
Error we get when we don't add it: […]
If y'all don't use Docker + Minikube to run tests, how do you set up a Kubernetes cluster to run the test suite?
Tried running the […]. Would appreciate some direction on how to set up a proper testing environment to get unblocked on this PR.
Update: got the test suite running by giving minikube a static IP on start (thanks @brandon-langley-mf). See this gist for the updated test results: https://gist.github.com/rvc-mf/c2c4c82713cd8da4a95a3bf079d43cb4 The PR is ready for code review once again 👍🏼
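For context, giving minikube a static IP at start time can be done with something like `minikube start --driver=docker --static-ip 192.168.49.2`, assuming minikube v1.30 or newer with the Docker driver, where the --static-ip flag is available; the exact address shown here is illustrative and not taken from this thread.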
Hi @rvc-mf,
This is what I use for macOS: start minikube like this: […]
How to debug this in general: to check how to do this in your setup, you can: […]
Awesome 🥳 Let me pull the latest and test it.
@mateimicu fyi, we found two tests that are failing that are also failing in the release tag. They look like examples that are unfortunately tagged in such a way that they get pulled into the 'all' and 'integration' tests. We did not adjust them because we weren't sure if this was intentional or not. Is this something we should open an issue for?
If you have some details, a new issue would be awesome to track it.
@mateimicu, @ryehowell, any updates on this one?
Sorry @brandon-langley-mf, this slipped on my side. Let me jump on it 👀
I took another look at the PR. I tried to set up ZooKeeper with this module, a fairly common use case for a StatefulSet.
Unfortunately, it is not straightforward, and the default values would need to be re-evaluated for the stateful case.
There is also an overload of features that are present only for some workload types. One option to avoid this complexity is to create another Helm chart under helm-kubernetes-services/charts/k8s-stateful-service with a separate feature set specifically tailored to stateful services.
@@ -367,6 +397,8 @@ minPodsAvailable: 0
# Kubernetes defaults to None.
# - sessionAffinityConfig (object) : Configuration for session affinity, as defined in Kubernetes
# (https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)
# - name (required for statefulset) : The name of the service associated with this deployment.
issue: required for statefulset makes configuring this more complicated, and in Helm we can't validate for proper configuration.
suggestion: If this is about the headless service, I would propose {{ include "k8s-service.fullname" . }}-headless as the default name, to avoid a conflict with the default configuration.
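To make the suggestion above concrete, here is a minimal sketch of a headless Service whose name falls back to the proposed "-headless" pattern. Only the k8s-service.fullname helper comes from the comment above; the service.name value and the selector label are illustrative assumptions, not the chart's actual API.

```yaml
apiVersion: v1
kind: Service
metadata:
  # Fall back to "<fullname>-headless" when no explicit service name is set.
  name: {{ .Values.service.name | default (printf "%s-headless" (include "k8s-service.fullname" .)) }}
spec:
  clusterIP: None  # headless, as StatefulSets require for stable network identities
  selector:
    app.kubernetes.io/instance: {{ .Release.Name }}
  ports:
    - name: http
      port: 80
```

Defaulting the name this way means the headless Service can never collide with the chart's regular Service, while still allowing an explicit override.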
- name: {{ $name }}
  emptyDir: {}
{{- end }}
{{- /* END VOLUME LOGIC */ -}}
issue (blocking): One key feature of StatefulSets is volumeClaimTemplates.
Is there a way to configure these? Is it expected to happen via another section?
@mateimicu Is this something that could be added in a future iteration of this feature? Maybe this PR represents the initial bare-bones support that @mloskot and @rvc-mf are already using in their deployments 🤔
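For readers following the thread, volumeClaimTemplates are declared directly on the StatefulSet spec and produce one PersistentVolumeClaim per replica. Below is a minimal sketch of the rendered Kubernetes object; these are standard Kubernetes API fields, and the names, image, and sizes are illustrative rather than anything defined by this chart.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example
spec:
  serviceName: example-headless      # the headless Service discussed elsewhere in this review
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: app
          image: nginx:1.25
          volumeMounts:
            - name: data
              mountPath: /var/lib/data
  volumeClaimTemplates:
    - metadata:
        name: data                   # yields PVCs data-example-0, data-example-1, ...
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```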
Co-authored-by: Matei <[email protected]>
Folks, please forgive the direct language, but can this be merged? Let's show a minimum of respect for @rvc-mf's work.
@mateimicu @ryehowell Could you bring this one to a resolution? cc @ZachGoldberg
I'm re-reviewing this now @josh-padnick @ZachGoldberg. |
Hi @rvc-mf, I'm re-reviewing this PR again. There are still some gaps in this PR that I think are critical for releasing StatefulSet functionality, namely dynamic volume provisioning.
Here are my review comments:
- The StatefulSet template should support Volume Claim Templates.
- Dynamic volume provisioning is one of the key features of StatefulSets and one of the primary reasons StatefulSets are used over the other workload object types. I think this is MVP functionality to roll out.
- The headless service spec will have a name collision with the standard service if using both.
- The headless service is a required object for StatefulSets. While it does look like the name can be overridden to prevent this, I think it would be a better user experience to have this work out of the box without needing to provide overrides. This could potentially be accepted as is for an MVP, but the limitation/gotcha will need to be clearly documented.
- There have been some updates to main since this PR was originally created. We will need to get this branch updated, as main includes some test-failure fixes.
- Looks like there are some copy/paste items from the canary deployment spec that I'm not sure are needed in the STS. While mostly benign, it'd be good to justify their need. The primary item is the annotation gruntwork.io/deployment-type: main, which is specific to the canary functionality of the original Deployment.
To be candid, I agree with @mateimicu's last comment about this functionality being better suited to a new chart, as the existing chart is pretty overloaded. I think having a chart for each use case makes a lot of sense and will make the implementation/maintenance much simpler.
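One purely hypothetical shape for exposing this in the chart (not part of this PR, and not an existing values.yaml key) would be a volumeClaimTemplates map in values.yaml that the StatefulSet template renders into its volumeClaimTemplates section, for example:

```yaml
workloadType: statefulset
# Hypothetical key, shown only to illustrate the review point above.
volumeClaimTemplates:
  data:
    accessModes: ["ReadWriteOnce"]
    storageClassName: standard   # illustrative; depends on the target cluster
    size: 10Gi
```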
First, thank you all for taking action to bring this PR to a conclusion, whether that is a merge or a rejection.
Could you expand on this idea a bit more?
New charts would go in this repo within the charts directory. Having a new chart, such as k8s-stateful-service, would allow a feature set tailored specifically to stateful services instead of further overloading the existing k8s-service chart.
This is the answer I was hoping to receive, excellent! Re this PR, let's see; hopefully @rvc-mf will chime in with updates. Otherwise, although I've only been a user of the Helm charts, I may try to address the identified shortcomings.
Description
Adds support for statefulset deployments.
This pull request introduces the following changes:
- `workloadType` added to `values.yaml`, where the default value is `deployment` but `statefulset` is also an option.
- `_statefulset_spec.tpl` template which gets used when `workloadType` is set to `statefulset`. In turn, `_deployment_spec.tpl` only gets used when `workloadType` is set to `deployment`.
- `updateStrategy` field added to `values.yaml`, which gets injected into `updateStrategy` on `statefulset` workloads (`workloadType: statefulset`). `deploymentStrategy` continues getting injected into `strategy` on `deployment` workloads.
- `name` field added under `service` due to StatefulSet workload requirements. `service` could still have `enabled: false` if the service resource is created outside of the chart.

None of these changes are breaking, as statefulset support is optional and off by default. Pre-existing charts should be able to use this version without encountering issues. Similarly, I don't expect any downtime from the changes.
Documentation
I have updated the README in /charts/k8s-service to indicate that statefulsets are also deployable by the chart. I have also added proper documentation for each field that was added/edited in the chart's values.yaml.
I have also added three tests that test the correctness of statefulset canary deployments, similar to how deployment workloads are tested. You can find the output here: https://gist.github.com/rvc-mf/c2c4c82713cd8da4a95a3bf079d43cb4
Furthermore, I have successfully deployed several statefulset applications using these changes (previously made in a "private" fork) in other projects.
TODOs
Please ensure all of these TODOs are completed before asking for a review.
- Ensure the branch is named correctly with the issue number, e.g. feature/new-vpc-endpoints-955 or bug/missing-count-param-434.

Related Issues
Addresses #136