
Mount buckets with FlexVolume rather than FUSE #111

Open
delgadom opened this issue Dec 13, 2019 · 2 comments

@delgadom
Member

Would have to make sure this works with netCDFs, but it seems like the way of the future: jupyterhub/zero-to-jupyterhub-k8s#300
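
For the netCDF question, a minimal sanity check like this would confirm that reads work through a FUSE/FlexVolume mount (the mount path and file name are hypothetical, just a sketch):

```python
# Sketch only: assumes a bucket is already mounted at the hypothetical
# path /mnt/bucket. Verifies that netCDF files open and read correctly
# through the mount before committing to a FlexVolume driver.
import xarray as xr

ds = xr.open_dataset("/mnt/bucket/example.nc")  # hypothetical file
print(ds)  # inspect dims/variables to confirm the read succeeded
ds.close()
```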

@bolliger32
Collaborator

Maybe I'm not understanding it, but it's not clear to me that this is relevant for GCS object storage?

@delgadom
Member Author

The Met Office Pangeo uses one to mount S3 buckets with FUSE. As I understand it, the benefit is that the FUSE-mount overhead operates at the node level rather than the pod level, so the data is served the same way but more stably and efficiently under the hood. That said, the only FUSE+FlexVolume implementation I've seen is for AWS (not GCS), so we'd have to wait or roll our own.

We'd also have to change the way we grant access to buckets; fine-grained, user-managed bucket permissions have their advantages. That doesn't seem super hard to get around, but it's not a near-term fix.

This was mostly just me casting about for solutions to our bucket-mounting issue, which turned out to be that I had accidentally removed privileged access for our pods, which they need to mount anything with FUSE. In general, I think our method of giving users all-powerful credentials to leave on their servers and then elevating their privileges so they can manipulate node hardware is a no-no, and I've never seen a z2jh implementation that does anything like what we do.
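
For reference, the spawner side of a FlexVolume mount might look roughly like this sketch in `jupyterhub_config.py`. The driver name and options are assumptions modeled on the Met Office's goofys-based S3 driver; a GCS equivalent would need to exist or be written:

```python
# Sketch only: driver name, bucket, and mount path are assumptions, not a
# tested configuration. The FlexVolume driver does the FUSE work on the
# node, so the user pod itself no longer needs privileged access.
c = get_config()  # standard jupyterhub_config.py idiom

c.KubeSpawner.volumes = [
    {
        "name": "data-bucket",
        "flexVolume": {
            # Assumed: the Met Office Informatics Lab's goofys-based S3 driver.
            "driver": "informaticslab/goofys-flex-volume",
            "options": {"bucket": "example-bucket"},  # hypothetical bucket
        },
    }
]
c.KubeSpawner.volume_mounts = [
    {"name": "data-bucket", "mountPath": "/mnt/bucket"}
]
```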
