storage options #44
AWS S3 has the best potential, but it might be tricky to set up permissions... never tried this myself. But as I was thinking about S3, I wondered whether, from a design perspective, it might not be better to have the 'file server' as an actual server - again through a separate Docker image? Examples online:
Plugging into existing solutions might have a number of advantages over building for the Heidelberg ecosystem - but I could be completely wrong here. The first objective would be a working solution: if Heidelberg is simpler then we go for that, and if there's dev time left at the end of the project we move to S3. Working solution = users = likely that there'll be follow-up funding, especially as HMHLSA has deep pockets and needs to show some early wins to justify their 30m funding.
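If S3 does become the target later, one possible pattern (a sketch only, not a project decision, and not necessarily one of the approaches discussed below) is for the backend to authenticate the user and then hand back a short-lived pre-signed download URL. The bucket name, key layout and expiry time here are hypothetical:

```python
# Hypothetical sketch: backend authenticates the user elsewhere, then returns a
# time-limited pre-signed S3 URL instead of streaming the file itself.
import boto3

s3 = boto3.client("s3")

def presigned_download_url(user_id: str, filename: str, expires_s: int = 3600) -> str:
    """Return a short-lived download link for one result file."""
    key = f"results/{user_id}/{filename}"  # assumed key layout
    return s3.generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": "nanopore-results", "Key": key},  # bucket name is made up
        ExpiresIn=expires_s,
    )
```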
So from the coding perspective there's nothing Heidelberg-specific about using a heicloud storage volume - it's just more disk space that you can mount into your Docker image, so there shouldn't be any changes required other than updating the path where the data is stored when this is hosted on another server. Based on this I don't really see an advantage in adding a separate internal file server layer between our backend (which authenticates the user and returns a file) and the file system. With S3 I see two straightforward ways to use it:
I think the simplest solution to get things working now would be to add a heicloud storage volume.
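To illustrate the "only the path changes" point from the previous comment, here is a minimal sketch in which the backend reads its storage root from an environment variable, so pointing it at a mounted heicloud volume (or any other disk) needs no code changes. FastAPI, the route, and the `DATA_DIR` variable name are assumptions for illustration, not the project's actual API:

```python
# Minimal sketch: serve result files from a configurable storage root.
# Switching from local disk to a mounted heicloud volume is just a matter of
# setting DATA_DIR to the mount point.
import os
from pathlib import Path

from fastapi import FastAPI, HTTPException
from fastapi.responses import FileResponse

DATA_DIR = Path(os.environ.get("DATA_DIR", "/data"))  # e.g. the heicloud mount point

app = FastAPI()

@app.get("/results/{filename}")
def get_result(filename: str) -> FileResponse:
    # User authentication would happen here; omitted in this sketch.
    path = (DATA_DIR / filename).resolve()
    # Reject paths that escape the storage root or don't exist.
    if DATA_DIR.resolve() not in path.parents or not path.is_file():
        raise HTTPException(status_code=404, detail="File not found")
    return FileResponse(path)
```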
Agree to the simplest solution, leaving space for a follow-up proposal. We received files from a commercial nanopore sequencing service today (expensive) and they use Google Drive(!)
Full results data will be up to ~96 x 300 MB ≈ 30 GB/week
So up to ~120 GB/month
Options for storing this: