Nodevert heavy #128
Conversation
chaitanyaenr: This is awesome @rsevilla87. Just one suggestion: maybe add a note in the docs on the resource (CPU, memory) requirements, so that users can turn on the heavy workload when the worker nodes are big enough. Thoughts?
rsevilla87: I could add the above results to give the user an idea of the resource consumption. What do you think?
chaitanyaenr: @rsevilla87 sounds good.
rsevilla87: @chaitanyaenr I've added some info about memory consumption to the docs, and I've replaced DeploymentConfigs with Deployments as discussed in #130.
Review comment on docs/nodevertical.md (outdated):

Results obtained from running 2 pods (client + database) with the default `/ready` endpoint.

```
# oc adm top pods -n nodevertical
```
chaitanyaenr: Maybe add a note telling users to use this info to check whether they have enough resources on the worker nodes to host 250 similar pods before running the nodevert heavy workload? Thoughts?
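As a concrete illustration (not part of the original thread; the namespace and node name below are placeholders), a user could compare the per-pod usage reported above against the allocatable capacity of the worker nodes with standard oc commands:

```
# Per-pod CPU/memory usage of the heavy workload (client + database)
oc adm top pods -n nodevertical

# Current usage of each node
oc adm top nodes

# Allocatable capacity of a worker node; multiply the per-pod figures
# by ~250 and check that they fit within this budget
oc describe node <worker-node> | grep -A 6 Allocatable
```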
chaitanyaenr: LGTM
Include a "heavy" nodevertical workload. Further info about this workload is available at the included documentation.
250 pods tested successfully in m5.2xlarge nodes (8vCPU and 32GiB). Results below:
The interesting part is that, thanks to this workload, we will be able to stress other components such as the network, since the application's readiness probe performs a periodic database query.
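As a rough sketch of how such a probe can be wired up (illustrative only, not the PR's actual manifest: the deployment name, port, and timings are assumptions), an HTTP readiness probe against the `/ready` endpoint mentioned above can be attached with `oc set probe`:

```
# Attach an HTTP readiness probe hitting /ready; the kubelet calls it
# periodically, and each call triggers a database query in the app
oc set probe deployment/perfapp -n nodevertical \
  --readiness --get-url=http://:8080/ready \
  --period-seconds=10 --timeout-seconds=5
```

Because the kubelet runs this probe on every pod at every period, scaling to 250 pods turns the probe itself into a steady source of network and database load.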
Note that the application container image is currently hosted in my own Quay project (quay.io/rsevilla/perfapp:latest); we should move it to the openshift-scale org in the future.
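If/when that move happens, the image can be copied with standard tooling; a sketch, assuming push access to the target organization:

```
# Copy the image from the personal project to the org (hypothetical target)
podman pull quay.io/rsevilla/perfapp:latest
podman tag quay.io/rsevilla/perfapp:latest quay.io/openshift-scale/perfapp:latest
podman push quay.io/openshift-scale/perfapp:latest
```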