This repository has been archived by the owner on Dec 11, 2020. It is now read-only.
The Met Office Pangeo uses one to mount S3 buckets using FUSE. As I understand it, the benefit is that the FUSE mount overhead operates at the node level rather than the pod level, so the data is served the same way but more stably and efficiently under the hood. That said, the only FUSE+FlexVolume implementation I've seen is for AWS (not GCS), so we'd have to wait or roll our own. And we'd have to change the way we grant access to buckets... doing fine-grained user-managed bucket permissions has its advantages. It doesn't seem super hard to get around, but it's not a near-term fix.

This was mostly just me casting about for solutions to our bucket-mounting issue, which turned out to be that I had accidentally removed privileged access for our pods, which they need in order to mount anything with FUSE. In general, I think our method of giving users all-powerful credentials to leave on their servers, and then elevating their privileges so they can manipulate node hardware, is a no-no, and I've never seen a z2jh implementation that does anything like what we do.
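For context, the "privileged access" in question is the pod-level securityContext that a FUSE mount requires. A minimal sketch of what the current per-pod approach needs (pod name and image are hypothetical placeholders, not our actual config):

```yaml
# Sketch: pod-level FUSE mounting needs access to /dev/fuse and
# CAP_SYS_ADMIN, which in practice means an elevated securityContext.
# This is exactly the privilege escalation the comment argues against;
# the node-level FlexVolume approach would avoid it.
apiVersion: v1
kind: Pod
metadata:
  name: user-notebook            # hypothetical
spec:
  containers:
    - name: notebook
      image: jupyter/base-notebook   # hypothetical image
      securityContext:
        privileged: true             # grants /dev/fuse access
        capabilities:
          add: ["SYS_ADMIN"]         # FUSE mounts require CAP_SYS_ADMIN
```

With node-level FlexVolume mounting, the kubelet performs the mount on the host and the pod just sees an ordinary volume, so none of the above is needed in the pod spec.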
We'd have to make sure this works with NetCDF files, but it seems like the way of the future: jupyterhub/zero-to-jupyterhub-k8s#300