added container-based VM tests #541

Open

wants to merge 2 commits into master
14 changes: 14 additions & 0 deletions workloads/kube-burner/README.md
@@ -85,6 +85,20 @@ Each iteration creates the following objects:
- 10 secrets containing a 2048-character random string
- 10 configMaps containing a 2048-character random string

### node-vm-density variables

The `node-vm-density` workload supports the following environment variables (an example invocation follows below):

- **NODE_COUNT**: Number of worker nodes to deploy the VMs on. During the workload these nodes are labeled with `node-density=enabled`. Defaults to the number of worker nodes across the cluster (the nodes returned by `oc get node -o name --no-headers -l node-role.kubernetes.io/workload!="",node-role.kubernetes.io/infra!="",${WORKER_NODE_LABEL}`).
- **PODS_PER_NODE**: Maximum number of VMs to deploy on each labeled node. Defaults to 245.
- **NODE_VM_DENSITY_IMAGE**: Container disk image used for the node-vm-density VMs. Defaults to `quay.io/kubevirt/fedora-container-disk-images:35`.

This workload creates the following objects:

- **node-vm-density**: Creates a single namespace with a number of `VirtualMachine` objects proportional to the calculated number of pods.
  Each iteration of this workload creates the following object:
  - 1 KubeVirt `VirtualMachine` backed by a container disk
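
For example, a minimal invocation could look like the sketch below; the variable names come from this README and `run.sh`, while the node count and per-node VM count are placeholders:

```sh
# Illustrative only: run the node-vm-density workload from workloads/kube-burner
# with a reduced footprint. Adjust the values to your cluster.
WORKLOAD=node-vm-density \
NODE_COUNT=3 \
PODS_PER_NODE=50 \
NODE_VM_DENSITY_IMAGE=quay.io/kubevirt/fedora-container-disk-images:35 \
./run.sh
```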

### Node-density and Node-density-heavy variables

The `node-density` and `node-density-heavy` workloads support the following environment variables:
2 changes: 2 additions & 0 deletions workloads/kube-burner/env.sh
@@ -17,6 +17,7 @@ export BURST=${BURST:-20}
export MAX_WAIT_TIMEOUT=${MAX_WAIT_TIMEOUT:-1h}
export CLEANUP=${CLEANUP:-true}
export POD_NODE_SELECTOR=${POD_NODE_SELECTOR:-'{node-role.kubernetes.io/worker: }'}
export VM_NODE_SELECTOR=${VM_NODE_SELECTOR:-'{kubernetes.io/hostname: }'}
export WORKER_NODE_LABEL=${WORKER_NODE_LABEL:-"node-role.kubernetes.io/worker"}
export WAIT_WHEN_FINISHED=true
export POD_WAIT=${POD_WAIT:-false}
@@ -35,6 +36,7 @@ export JOB_PAUSE=${JOB_PAUSE:-1m}

# kube-burner workload defaults
export NODE_POD_DENSITY_IMAGE=${NODE_POD_DENSITY_IMAGE:-gcr.io/google_containers/pause:3.1}
export NODE_VM_DENSITY_IMAGE=${NODE_VM_DENSITY_IMAGE:-quay.io/kubevirt/fedora-container-disk-images:35}

# kube-burner churn enablement
export CHURN=${CHURN:-false}
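The new `VM_NODE_SELECTOR` variable takes a node selector in the same single-entry map form as `POD_NODE_SELECTOR`. A sketch of overriding it to pin the VMs to one worker before running the workload; the hostname is a placeholder, not part of this PR:

```sh
# Hypothetical override: schedule the node-vm-density VMs on a single node.
# "worker-0" is an example hostname; substitute a node from your cluster.
export VM_NODE_SELECTOR='{kubernetes.io/hostname: worker-0}'
WORKLOAD=node-vm-density ./run.sh
```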
9 changes: 9 additions & 0 deletions workloads/kube-burner/run.sh
@@ -20,6 +20,15 @@ case ${WORKLOAD} in
label_node_with_label $label
find_running_pods_num regular
;;
node-vm-density)
WORKLOAD_TEMPLATE=workloads/node-vm-density/node-vm-density.yml
METRICS_PROFILE=${METRICS_PROFILE:-metrics-profiles/metrics.yaml}
NODE_COUNT=${NODE_COUNT:-$(kubectl get node -l ${WORKER_NODE_LABEL},node-role.kubernetes.io/infra!=,node-role.kubernetes.io/workload!= -o name | wc -l)}
PODS_PER_NODE=${PODS_PER_NODE:-245}
label="node-density=enabled"
label_node_with_label $label
find_running_pods_num regular
;;
node-density-heavy)
WORKLOAD_TEMPLATE=workloads/node-density-heavy/node-density-heavy.yml
METRICS_PROFILE=${METRICS_PROFILE:-metrics-profiles/metrics.yaml}
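After the `node-vm-density` branch above labels the selected workers with `node-density=enabled`, the labeled set can be checked with a standard `kubectl` query; this is an illustrative verification step, not part of `run.sh`:

```sh
# List the worker nodes that run.sh labeled for the node-vm-density workload.
kubectl get nodes -l node-density=enabled -o name
```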
87 changes: 87 additions & 0 deletions workloads/kube-burner/workloads/node-vm-density/node-vm-density.yml
@@ -0,0 +1,87 @@
---
global:
  writeToFile: false
  indexerConfig:
    enabled: {{.INDEXING}}
    esServers: ["{{.ES_SERVER}}"]
    insecureSkipVerify: true
    defaultIndex: {{.ES_INDEX}}
    type: elastic
  measurements:
    - name: podLatency
      esIndex: {{.ES_INDEX}}
      thresholds:
        - conditionType: Ready
          metric: P99
          threshold: {{.POD_READY_THRESHOLD}}
    - name: vmiLatency
{{ if eq .PPROF_COLLECTION "True" }}
    - name: pprof
      pprofInterval: {{ .PPROF_COLLECTION_INTERVAL }}
      pprofDirectory: /tmp/pprof-data
      pprofTargets:
        - name: kube-apiserver-cpu
          namespace: "openshift-kube-apiserver"
          labelSelector: {app: openshift-kube-apiserver}
          bearerToken: {{ .BEARER_TOKEN }}
          url: https://localhost:6443/debug/pprof/profile?timeout=30
        - name: kube-apiserver-heap
          namespace: "openshift-kube-apiserver"
          labelSelector: {app: openshift-kube-apiserver}
          bearerToken: {{ .BEARER_TOKEN }}
          url: https://localhost:6443/debug/pprof/heap
        - name: kube-controller-manager-heap
          namespace: "openshift-kube-controller-manager"
          labelSelector: {app: kube-controller-manager}
          bearerToken: {{ .BEARER_TOKEN }}
          url: https://localhost:10257/debug/pprof/heap

        - name: kube-controller-manager-cpu
          namespace: "openshift-kube-controller-manager"
          labelSelector: {app: kube-controller-manager}
          bearerToken: {{ .BEARER_TOKEN }}
          url: https://localhost:10257/debug/pprof/profile?timeout=30

        - name: etcd-heap
          namespace: "openshift-etcd"
          labelSelector: {app: etcd}
          cert: {{ .CERTIFICATE }}
          key: {{ .PRIVATE_KEY }}
          url: https://localhost:2379/debug/pprof/heap

        - name: etcd-cpu
          namespace: "openshift-etcd"
          labelSelector: {app: etcd}
          cert: {{ .CERTIFICATE }}
          key: {{ .PRIVATE_KEY }}
          url: https://localhost:2379/debug/pprof/profile?timeout=30
{{ end }}
jobs:
  - name: node-vm-density
    jobIterations: {{.TEST_JOB_ITERATIONS}}
    qps: {{.QPS}}
    burst: {{.BURST}}
    namespacedIterations: false
    namespace: {{.UUID}}
    podWait: {{.POD_WAIT}}
    cleanup: {{.CLEANUP}}
    waitFor: {{.WAIT_FOR}}
    waitWhenFinished: {{.WAIT_WHEN_FINISHED}}
    verifyObjects: {{.VERIFY_OBJECTS}}
    errorOnVerify: {{.ERROR_ON_VERIFY}}
    maxWaitTimeout: {{.MAX_WAIT_TIMEOUT}}
    preLoadImages: {{.PRELOAD_IMAGES}}
    preLoadPeriod: {{.PRELOAD_PERIOD}}
    namespaceLabels:
      security.openshift.io/scc.podSecurityLabelSync: false
      pod-security.kubernetes.io/enforce: privileged
      pod-security.kubernetes.io/audit: privileged
      pod-security.kubernetes.io/warn: privileged
    objects:

      - objectTemplate: vm.yml
        replicas: 1
        inputVars:
          containerImage: {{.NODE_VM_DENSITY_IMAGE}}
          nodeSelector: "{{.VM_NODE_SELECTOR}}"
          #nodeSelector: "{{.POD_NODE_SELECTOR}}"
50 changes: 50 additions & 0 deletions workloads/kube-burner/workloads/node-vm-density/vm.yml
@@ -0,0 +1,50 @@
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    name: {{.JobName}}
  name: {{.JobName}}-{{.Iteration}}
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt-vm: {{.JobName}}-{{.Iteration}}
    spec:
      domain:
        cpu:
          cores: 1
          sockets: 1
          threads: 1
        devices:
          disks:
            - disk:
                bus: virtio
              name: node-vm-density
@jeniferh jeniferh Mar 3, 2023

+            - disk:
+                bus: virtio
+              name: cloudinitdisk

Author

Hmm, why? Someone might want to add more disks, and I like the naming convention; it fits the role.

You have the cloudInitNoCloud section to set the password / configure ssh, so this template would fail without defining the corresponding disk (error was: spec.template.spec.domain.volumes[1].name 'cloudinitdisk' not found).
        machine:
          type: pc-q35-rhel8.4.0
        resources:
          requests:
            memory: 256Mi
            cpu: 100m
          limits:
            cpu: 100m
        #memory:
          #hugepages:
            #pageSize: 1Gi
      nodeSelector: {{.nodeSelector}}
      terminationGracePeriodSeconds: 0
      volumes:
        - containerDisk:
            image: {{.containerImage}}
          name: node-vm-density
        - cloudInitNoCloud:
            userData: |-
              #cloud-config
              password: password
              chpasswd: { expire: False }
              runcmd:
                - sed -i -e "s/PasswordAuthentication.*/PasswordAuthentication yes/" /etc/ssh/sshd_config
                - systemctl restart sshd
          name: cloudinitdisk
status: {}
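
Because the cloud-init data above sets a known password and re-enables SSH password authentication, one quick sanity check is a console login to a created VM. A sketch, assuming `virtctl` is available and using placeholder names; the default login user for this Fedora container disk is typically `fedora`:

```sh
# Illustrative check: attach to the console of one workload VM and log in with
# the password set by the cloud-init userData ("password").
# Replace <namespace> and <iteration> with values from your run.
virtctl console --namespace <namespace> node-vm-density-<iteration>
```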