Grouped Shares Issues #297
Thanks!
It's a bit of drift, shifting priorities, and becoming more aware of other constraints in the system. Additionally, when we wrote the design doc we didn't know how much of a priority different storage backends might be. Other storage backends in Samba (cephfs, glusterfs, etc.) do not fall under this restriction, so the restriction doesn't appear in the design doc. There's been little to no demand for alternatives to PVC storage so far, so I didn't want to put a lot of work into that. (And there are other issues with managing pods that might use these vfs modules.)

Unless Kubernetes gains the ability to add PVCs to an existing pod (and if I've overlooked this possibility, please let me know), the only way to support grouped shares with different PVCs would be to stop and then regenerate a previously running pod. We will need this ability when we start to support upgrades properly, so after we add support for upgrades it might be possible to come back to this issue and revisit this restriction.

Just to satisfy my curiosity, what is your use case for having multiple PVCs on a grouped SmbShare? Is it mainly about resource consumption, or something else?
Thanks for the reminder that we ought to do another release. We should at least try to do a release that corresponds roughly with the sambaXP conference, which is when we created our previous release.
Hi! :) Thank you very much for the reply.

I'm not entirely sure what you mean with the different storage backends. In my opinion, the only question would be whether all of the PVCs can be mounted by multiple pods (i.e. are ReadWriteMany). If that is supported, there should be no issue regardless of whether they are backed by Rook, Gluster, etc. I'm also not sure what the issue is with the pods; if I saw it correctly, the exporter pod is already managed by a

I'm currently quite busy, but if I get to it I will try to replicate this today manually, without the operator :) Maybe it works out of the box.
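The setup the commenter describes can be sketched as a single pod mounting several RWX PVCs side by side. This is only an illustrative sketch; the pod name, claim names, and container image below are hypothetical, not taken from the operator:

```yaml
# Hypothetical sketch: one Samba server pod mounting two existing
# ReadWriteMany PVCs, each under its own path for a separate share.
apiVersion: v1
kind: Pod
metadata:
  name: smb-grouped-example
spec:
  containers:
    - name: samba
      image: quay.io/samba.org/samba-server:latest  # image name assumed
      volumeMounts:
        - name: share-a
          mountPath: /mnt/share-a
        - name: share-b
          mountPath: /mnt/share-b
  volumes:
    - name: share-a
      persistentVolumeClaim:
        claimName: pvc-a  # existing RWX PVC (hypothetical name)
    - name: share-b
      persistentVolumeClaim:
        claimName: pvc-b  # existing RWX PVC (hypothetical name)
```

Because a pod's volume list is effectively immutable once the pod is running, adding a third PVC later would require recreating the pod, which is the restriction discussed above.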
Although resource consumption is one point, it's mainly about having just one IP for all shares. I currently have multiple PVCs in the cluster, and migrating to one big PVC would be cumbersome, so I would like to export all my PVCs under one IP with multiple shares outside of the cluster.
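On the Samba side, exporting several mount points behind a single address amounts to multiple share sections in one smb.conf. A minimal sketch, with hypothetical share names and paths matching one mount point per PVC:

```ini
; Hypothetical smb.conf fragment: two shares served by one smbd
; instance, so both are reachable via the same IP/Service.
[share-a]
    path = /mnt/share-a
    read only = no

[share-b]
    path = /mnt/share-b
    read only = no
```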
Wonderful! I'm looking forward to it! :) 👍
Hi! Just in case it helps with further questions: you should have all the files you need in the following Gist. Of course, what you need is a ReadWriteMany storage class and a cluster that can provision LoadBalancer IPs (I use MetalLB). Basically, after I applied these files I could successfully create one Service with multiple shares from different PVCs inside of it.

I'm quite busy right now, but if I have some time I will try to create a PR. Unfortunately I'm not really a Go expert, so what I can do might not be the best solution. Furthermore, I also haven't tested this with an HA setup, although I saw the operator is capable of that.

Best regards and happy Easter!
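The "one Service, one IP" part of the described setup can be sketched as a single LoadBalancer Service in front of the server pod, so every share resolves to the same external address. The names and label selector below are hypothetical, and a LoadBalancer provider such as MetalLB is assumed:

```yaml
# Hypothetical sketch: one LoadBalancer Service exposing SMB for all
# shares served by the pod; MetalLB (or similar) assigns the IP.
apiVersion: v1
kind: Service
metadata:
  name: smb-grouped
spec:
  type: LoadBalancer
  selector:
    app: smb-grouped-example  # hypothetical label on the server pod
  ports:
    - name: smb
      port: 445
      targetPort: 445
      protocol: TCP
```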
Thanks very much for sharing that, I will look over it and see what I can learn!
Hello!
First of all, I really like this project and I'm currently evaluating the usage of it.
I'm quite interested in the Grouped Shares feature as implemented in PR #240. Based on this, I see that it should be possible to create a grouped share.
Now I have two issues and I hope you can help me with them:
config/default directory.

Best regards and thank you very much for your help in advance!