[2i2c-uk:lis] Enable profileList #3308
Conversation
Merging this PR will trigger the following deployment actions:
- Support and staging deployments
- Production deployments
```yaml
- display_name: "Small: ~512 MB RAM / ~0.5 CPU"
  slug: mem_512m
  default: true
  kubespawner_override:
    # increase as requested via https://2i2c.freshdesk.com/a/tickets/1066
    mem_guarantee: 512M
    mem_limit: 1G
- display_name: "Large: ~1 GB RAM / ~0.5 CPU"
  slug: mem_1g
  kubespawner_override:
    # increase as requested via https://2i2c.freshdesk.com/a/tickets/1066
    mem_guarantee: 1G
    mem_limit: 2G
```
I'd like to avoid relative words like "small" and "large", because communities will adopt these words and we won't understand what they mean without first reading up on their configuration.
I think it would make sense for them to use the existing 4 CPU / 32 GB nodes with node resource allocation script generated requests/limits where the limit is 2x the request. Is the script able to do that atm? Hmmm...
I actually did a deploy with just the sizes as names and it felt very mathematical 😅
> I think it would make sense for them to use the existing 4 CPU / 32 GB nodes with node resource allocation script generated requests/limits where the limit is 2x the request. Is the script able to do that atm? Hmmm..
I am not familiar with what the script can do yet 😬
Neither am I, and I failed to quickly whip up a suggested alternative - so let's go with anything for now and iterate!
The script currently only generates requests and limits set to the same value, so nothing matches exactly what we want, with limits higher than requests.
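For reference, a profile where the limit is 2x the request would look something like this (a hypothetical sketch; the display name, slug, and values are illustrative, not script output):

```yaml
- display_name: "4 GB RAM / 0.5 CPU"
  slug: mem_4g
  kubespawner_override:
    # limit set to 2x the guarantee, as discussed above
    mem_guarantee: 4G
    mem_limit: 8G
    cpu_guarantee: 0.5
```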
This is approved as is, including the "Small" and "Large" mentions and the currently deployed change. Let's iterate on this over time instead!
@consideRatio, thank you! I was thinking just now, if we are moving to profileLists, why not add their initial memory guarantee as another, even smaller option (256MB)?
@GeorgianaElena I suggest we go for #3308 (comment) and ask them for feedback
Co-authored-by: Erik Sundell <[email protected]>
I will close this PR to de-clutter and will open it up again if/when there is agreement about this one.
Update: waiting for feedback in https://2i2c.freshdesk.com/a/tickets/1066 before merging.
Follow-up to #3302, also for https://2i2c.freshdesk.com/a/tickets/1066.
Matthew from LIS requested a guarantee of 1 GB, as they are still experiencing crashes even after the increase to a 512 MB guarantee from #3302.
This PR enables profileLists for the lis hub, adding the new guarantee and reducing the limit of the initial 512 MB option from 2 GB to 1 GB.
I think it would be useful for them to enable https://github.com/jupyter-server/jupyter-resource-usage so they can better understand their memory needs, especially since, from the grafana below, it looks like only a few users actually required around or more than 1 GB.
This is also why this PR adds a new server size option instead of increasing the guarantee again.
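If we do want jupyter-resource-usage later, one approach is installing the extension into the hub's user image (a hypothetical sketch; the actual image build setup for this hub may differ):

```dockerfile
# Hypothetical addition to the user image's Dockerfile:
# installs the jupyter-resource-usage server extension so users
# can see their memory usage in the JupyterLab status bar.
RUN pip install --no-cache-dir jupyter-resource-usage
```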