
Nodevert heavy #128

Merged (2 commits, Apr 30, 2020)
Conversation

@rsevilla87 (Contributor) commented:

Include a "heavy" nodevertical workload. Further info about this workload is available at the included documentation.

250 pods tested successfully on m5.2xlarge nodes (8 vCPUs and 32 GiB). Results below:

```
root@ip-172-31-76-65: ~ # oc describe nodes -l nodevertical=true | grep Non-terminated
Non-terminated Pods:                      (250 in total)
Non-terminated Pods:                      (250 in total)
Non-terminated Pods:                      (250 in total)
```

```
root@ip-172-31-76-65: ~ # oc adm top nodes -l nodevertical=true
NAME                                        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
ip-10-0-143-79.us-west-2.compute.internal   755m         10%    8495Mi          28%
ip-10-0-155-81.us-west-2.compute.internal   732m         9%     8810Mi          29%
ip-10-0-166-82.us-west-2.compute.internal   753m         10%    9116Mi          30%
```
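
Back-of-the-envelope, and ignoring each node's baseline usage from system pods, that works out to roughly the following per-pod footprint:

```
755m  CPU    / 250 pods ≈ 3m  CPU    per pod
8495Mi memory / 250 pods ≈ 34Mi memory per pod
```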

The interesting part is that, thanks to this workload, we will be able to stress other components such as the network, since the application readiness probe performs a periodic database query.
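
For reference, the probe wiring looks roughly like this (a minimal sketch: the `/ready` path matches the default endpoint mentioned later in the review, but the port and timing values are assumptions, not the workload's actual manifest):

```
# Hedged sketch: readiness is gated on an HTTP endpoint that runs a
# database query; port and timings below are illustrative values only.
readinessProbe:
  httpGet:
    path: /ready        # endpoint that performs the periodic DB query
    port: 8080          # assumed application port
  periodSeconds: 10     # how often the query is issued
  timeoutSeconds: 5
  failureThreshold: 3
```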

Note that the application container image is currently hosted in my own Quay project (quay.io/rsevilla/perfapp:latest); we should move it to the openshift-scale org in the future.

@chaitanyaenr (Member) left a comment:

This is awesome @rsevilla87. Just one suggestion: maybe add a note in the docs on the resource (CPU, memory) requirements, so that users can turn on the heavy workload only when the worker nodes are big enough? Thoughts?

@rsevilla87 (Contributor, Author) replied:

> This is awesome @rsevilla87. Just one suggestion: maybe add a note in the docs on the resource (CPU, memory) requirements, so that users can turn on the heavy workload only when the worker nodes are big enough? Thoughts?

I could add the above results to give the user an idea of the resource consumption. What do you think?
On the other hand, I'm also working on migrating these workloads to Deployments, as discussed in another thread.

@chaitanyaenr (Member):

@rsevilla87 sounds good.

@rsevilla87 force-pushed the nodevert-heavy branch 3 times, most recently from db1173a to 9c02ad0 (April 29, 2020 10:25)
@rsevilla87 (Contributor, Author) commented on Apr 29, 2020:

@chaitanyaenr I've added some info about memory consumption to the docs and replaced DeploymentConfigs with Deployments, as stated in #130.
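
For context, the shape this migration moves each workload to is roughly the following (a minimal sketch of an apps/v1 Deployment; replica count, labels, and names are placeholders, only the image comes from this PR):

```
# Hedged sketch of the apps/v1 Deployment form replacing the
# OpenShift-specific DeploymentConfig; values are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: perfapp
  namespace: nodevertical
spec:
  replicas: 1
  selector:
    matchLabels:          # explicit selector required by apps/v1
      app: perfapp
  template:
    metadata:
      labels:
        app: perfapp
    spec:
      containers:
      - name: perfapp
        image: quay.io/rsevilla/perfapp:latest
```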

@rsevilla87 requested a review from @chaitanyaenr (April 29, 2020 13:09)

> Results obtained from running 2 pods (client + database) with the default `/ready` endpoint.
>
> ```
> # oc adm top pods -n nodevertical
> ```
@chaitanyaenr (Member) commented on the section above:

Maybe add a note telling users to use this info to check whether they have enough resources on the worker nodes to host 250 similar pods before running the nodevert heavy workload? Thoughts?
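
As a hedged sketch of what such a note could suggest (reusing the `nodevertical=true` label from the results above), users could compare allocatable capacity and current usage on the labelled workers before enabling the heavy variant:

```
# Allocatable CPU/memory/pods on the nodevertical workers
oc describe nodes -l nodevertical=true | grep -A 6 Allocatable

# Current usage, to estimate the remaining headroom for ~250 heavy pods per node
oc adm top nodes -l nodevertical=true
```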

@chaitanyaenr (Member) left a comment:

LGTM
