forked from cloud-bulldozer/benchmark-operator
[pull] master from cloud-bulldozer:master #8
Open

pull wants to merge 252 commits into mohit-sheth:master from cloud-bulldozer:master
Conversation
Model-S supports "node_range: [n, m]" and "density_range: [x, y]" to specify enumeration along both dimensions.
…rk context. Fix last commit, which introduced node_range and density_range but inadvertently still used min_node and max_node for the condition check, i.e. "when: max_node is defined"
…a correlation. While at it, beef up a few places with "when: xxx is defined" for robustness.
Valid step_size values are: addN or log2. N can be any decimal number.
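As a rough sketch of how such an enumeration could walk a range with either step type (the function and variable names here are illustrative, not the operator's actual code, and the sketch uses integer steps even though N may be a decimal):

```shell
# Illustrative enumeration of a node_range with step_size=addN or log2.
enumerate_nodes() {
  step_size=$1; node=$2; max=$3
  while [ "$node" -le "$max" ]; do
    echo "$node"
    case "$step_size" in
      add*) node=$((node + ${step_size#add})) ;;  # strip "add" prefix, add N
      log2) node=$((node * 2)) ;;                 # double on each step
    esac
  done
}
enumerate_nodes add2 1 6   # prints 1 3 5
enumerate_nodes log2 1 8   # prints 1 2 4 8
```

The same walk runs once per dimension, so node_range and density_range together produce the full grid of (node count, density) combinations.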
enter the main loop (due to start=True) and then acquire pod_idx and node_idx. They miss the restart state and run prematurely
to reduce redis load.
This fits in with the idea that we are viewing worker nodes as just a pool of resources and not by individual hostnames. This gives the flexibility for users to isolate the tests to only one hardware model of workers as well (since each model can be labelled with its model name). Also there was a bug in previous code where only the first node in the list was technically being excluded in `workload_args.excluded_node[0]`. Signed-off-by: Sai Sindhur Malleni <smalleni@redhat.com>
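The exclusion bug can be seen in miniature below: the fix must filter the eligible pool against the whole excluded list, not just its first element (names here are hypothetical, not the operator's variables):

```shell
# Hypothetical sketch: drop every node named in the excluded list.
filter_nodes() {
  all=$1; excluded=$2
  for n in $all; do
    case " $excluded " in
      *" $n "*) ;;        # node is in the excluded list, skip it
      *) echo "$n" ;;     # node stays eligible
    esac
  done
}
filter_nodes "worker-0 worker-1 worker-2" "worker-1 worker-2"
# prints only worker-0; with the old excluded_node[0]-style logic,
# worker-2 would wrongly remain eligible
```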
Increase redis-server to 2 CPUs. Add sleeps to workload/uperf clients. Point operator resource spec to a compatible image.
Signed-off-by: Sai Sindhur Malleni <smalleni@redhat.com>
***Important*** pin=true/false: "pin=true" will run Pin mode; "pin=false" will run Scale mode. In Pin mode, "pair=n" is now obsolete; use density_range[] instead. Scale mode default values are: node_range=[1,1], density_range=[1,1], step_size=add1 and colocate=false
Integrated "serviceip" into the scale framework. It was not working until this commit. Cosmetic cleanup, i.e. debug TASK tags and double "when" guards.
a. Keep 'pair' for pin mode. b. Remove custom image URLs. 2. Rebase to upstream
Signed-off-by: Raul Sevilla <rsevilla@redhat.com>
* initial draft of standard uperf with new features
* bug fix for client creation in nodeport service
* bug fix for client creation conflict due to same name in metadata
* added index for metallb service labels
* bug fix for ports mismatch between client and server pairs in without service and with clusterip service case
* bug fix for clusterip service ports mismatch
* reverting quay repo org to cloudbulldozer
* removed test configuration
* adding exit status to capture uperf failures
* fix for client start issue caused by redis variables for vm use case
* renamed client_vm to client_vms for vm kind
* fixing redis variables for pod use cases as well
* Merge nodeport and hostnetwork tests

Signed-off-by: Raul Sevilla <rsevilla@redhat.com>

* added a task to mark benchmark status as failure for pod errors
* updating the image
* fixed typo in failure message
* removing an old comment

Co-authored-by: root <root@f30-h09-000-r640.rdu2.scalelab.redhat.com>
Co-authored-by: root <root@e16-h12-b02-fc640.rdu2.scalelab.redhat.com>
Co-authored-by: Raul Sevilla <rsevilla@redhat.com>
When a new commit lands on the master branch, the master branch is not named correctly, i.e.:

```
rsevilla@wonderland /tmp/foo (master) $ git describe --tags 2>/dev/null || git rev-parse --abbrev-ref HEAD | sed 's/master/latest/g'
v1.0.0-1-g487fada
rsevilla@wonderland /tmp/foo (master) $ git log --oneline | head
487fada Foo
12701a6 Added exit status for uperf failures (#782)
94005cd missing env var num_pairs (#784)
```

Signed-off-by: Raul Sevilla <rsevilla@redhat.com>
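The misnaming comes from shell operator precedence: in `a || b | c`, the pipe binds only to `b`, so when `git describe` succeeds its output never passes through `sed`. A minimal demonstration with plain `echo` (no git repo needed):

```shell
# `a || b | c` pipes only b into c, so a's output bypasses the filter:
echo master || echo fallback | sed 's/master/latest/g'      # prints "master"

# grouping both commands sends either result through sed:
( echo master || echo fallback ) | sed 's/master/latest/g'  # prints "latest"
```

Grouping the two git commands in parentheses (or braces) before the pipe is the corresponding fix.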
* initial draft of standard uperf with new features
* bug fix for client creation in nodeport service
* bug fix for client creation conflict due to same name in metadata
* initial draft for uperf-scale mode
* removed uperf to avoid confusion
* added index for metallb service labels
* fix for client port mismatch with server and services
* fixed es index issues
* fixed services issue
* bug fix for clusterip service ports mismatch
* reverting quay repo org to cloudbulldozer
* fix for service ip usecases except for nodeport
* fixed ports issues for nodeport service use case
* added test for uperf_scale
* restoring image repo to original
* removing bash eternal history
* removed test configuration
* fix for client start issue caused by redis variables for vm use case
* renamed client_vm to client_vms for vm kind
* fixing redis variables for pod use cases as well
* fixed typo
* resolving PR comments
* removed an old comment

Co-authored-by: root <root@f30-h09-000-r640.rdu2.scalelab.redhat.com>
Co-authored-by: root <root@e16-h12-b02-fc640.rdu2.scalelab.redhat.com>
Co-authored-by: Murali Krishnasamy <70236227+mukrishn@users.noreply.github.com>
Co-authored-by: root <root@e16-h12-b02-fc640.rdu2.scalelab.redhat.com>
Co-authored-by: root <root@e16-h12-b02-fc640.rdu2.scalelab.redhat.com>
With FIO we need to wait for server pods to be annotated with IP addresses. This fix adds that wait. closes #789 Signed-off-by: Joe Talerico <jtaleric@redhat.com>
* Affinity rules: use podAntiAffinity rules to place uperf client pods on worker nodes different from the ones the servers are running on. Use podAffinity rules to place all client and server pods on the same node, useful when running more than one pair. All client pods will be scheduled on the same node and servers on another.
* Update docs

Signed-off-by: Raul Sevilla <rsevilla@redhat.com>
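A minimal sketch of the kind of podAntiAffinity rule described, keeping client pods off the nodes where server pods run (the label key/value are illustrative, not the operator's exact labels):

```yaml
# sketch: schedule this client pod away from any node hosting a server pod
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: uperf-bench-server   # illustrative label
      topologyKey: kubernetes.io/hostname
```

Swapping `podAntiAffinity` for `podAffinity` with the same selector gives the co-location behavior instead.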
* updated README.md: spelling typos!
* Update README.md: another typo
* Update README.md

* SCTP support for services
* Added a test now that e2e cluster supports SCTP
* Added a pod2pod test
Set benchmark-operator's affinity to preferably be scheduled on workload nodes Signed-off-by: Raul Sevilla <rsevilla@redhat.com>
* Adding option to run tests with pvcvolumemode: Block
* Running FIO tests with pvcvolumemode set as Block
* Update kustomization.yaml

Signed-off-by: Huamin Chen <hchen@redhat.com>
Have not tested using `kind: vm`
…d, previous behavior is default (#799) Added support for prepare, run, and cleanup phase options in sysbench fileio
* code snippets for creating nginx server side resources
* corrected typo in the workload name
* initial draft for nighthawk workload
* updated review comments (#1)
* added default values for all workload_args
* restored files to avoid resolve conflicts
* adding readme and keeping default service as ClusterIP
* made changes to keep clusterIP default

Co-authored-by: root <root@vkommadi.rdu2.scalelab.redhat.com>
Co-authored-by: Murali Krishnasamy <70236227+mukrishn@users.noreply.github.com>
Co-authored-by: Vishnu Challa <vchalla@vchalla.remote.csb>
Add es_index parameter to log-generator workload template as it can be set on e2e-benchmarking as ES_BACKEND_INDEX
Co-authored-by: Elena German <elgerman@elgerman-laptop.ca.redhat.com>
1. Add storageClassName in the "spec" section for the PVC.
2. Remove volume.beta.kubernetes.io/storage-class from the annotations section.
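The two steps amount to moving from the deprecated annotation to the spec field, roughly (the class name "fast" is illustrative):

```yaml
# before (deprecated annotation form):
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: fast

# after (spec field form):
spec:
  storageClassName: fast
```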
* zone affinity for clients
* updated affinity
* corrected the mandatory fields
CNV w/ OpenShift 4.15 was broken. This should fix it. Signed-off-by: Joe Talerico <rook@redhat.com>
Co-authored-by: Chris Blum <cblum@ibm.com>
Using the "--no-cache-dir" flag with pip install ensures that packages downloaded by pip are not cached on the system. This is a best practice that guarantees packages are fetched from a repository instead of a local cache. Further, in the case of Docker containers, disabling the cache reduces image size. The savings depend on the number of Python packages multiplied by their respective sizes; for heavy packages with many dependencies, not caching pip packages saves a lot. More detailed information can be found at https://medium.com/sciforce/strategies-of-docker-images-optimization-2ca9cc5719b6 Signed-off-by: Pratik Raj <rajpratik71@gmail.com>
Signed-off-by: Vishnu Challa <vchalla@vchalla-thinkpadp1gen2.rmtusnc.csb> Co-authored-by: Vishnu Challa <vchalla@vchalla-thinkpadp1gen2.rmtusnc.csb>
* Update kubevirt API version
* Wait for servers to be running before registering interfaces
* Centos 8 appstream is EOL: use vault repository

Signed-off-by: Raul Sevilla <rsevilla@redhat.com>
```
looking for "jobfile.j2" at "/opt/ansible/roles/stressng/templates/jobfile.j2"
File lookup using /opt/ansible/roles/stressng/templates/jobfile.j2 as file
fatal: [localhost]: FAILED! => {
    "msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'cpu_method'. 'dict object' has no attribute 'cpu_method'. 'dict object' has no attribute 'cpu_method'. 'dict object' has no attribute 'cpu_method'

The error appears to be in '/opt/ansible/roles/stressng/tasks/main.yaml': line 4, column 5, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

 - name: template stressng config file
   ^ here"
}
```
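A conventional guard for this class of failure is a `when` clause on the task, or a `default` filter in the template itself; as a sketch (not necessarily the exact fix applied here):

```yaml
# sketch: only template the job file when cpu_method is actually set
- name: template stressng config file
  template:
    src: jobfile.j2
    dest: /tmp/jobfile
  when: workload_args.cpu_method is defined

# alternatively, inside jobfile.j2, fall back to a default value:
#   cpu-method {{ workload_args.cpu_method | default('all') }}
```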
See Commits and Changes for more details.
Created by pull[bot]