Add job to run kuttl without install_yamls #2571

Open
wants to merge 1 commit into main from kuttl_without_install_yamls
Conversation

cescgina
Contributor

@cescgina cescgina commented Nov 26, 2024

This change attempts to make the approach that nova-operator uses to run
kuttl more generic, so it can be used in other operators that follow
a similar pattern in their tests, like watcher-operator.

Related: OSPRH-11424

Contributor

openshift-ci bot commented Nov 26, 2024

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

Contributor

openshift-ci bot commented Nov 26, 2024

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from cescgina. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@cescgina cescgina force-pushed the kuttl_without_install_yamls branch 6 times, most recently from 9758ec8 to df95669 Compare November 27, 2024 18:21

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/8cb680713d79422991d1aa87aaac0caa

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 42m 00s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 17m 32s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 28m 50s
✔️ noop SUCCESS in 0s
✔️ cifmw-pod-ansible-test SUCCESS in 7m 50s
✔️ cifmw-pod-pre-commit SUCCESS in 7m 52s
✔️ cifmw-pod-zuul-files SUCCESS in 4m 00s
❌ cifmw-multinode-kuttl TIMED_OUT in 2h 39m 08s
✔️ build-push-container-cifmw-client SUCCESS in 21m 51s

@cescgina cescgina force-pushed the kuttl_without_install_yamls branch 3 times, most recently from 403e323 to 61a85b4 Compare November 28, 2024 09:01
cescgina added a commit to cescgina/nova-operator that referenced this pull request Nov 28, 2024
In the depends-on PR, a new job definition is introduced that runs kuttl
tests following the pattern used in nova-operator. This is done mainly
to reuse the job definition for watcher-operator, which has just
introduced kuttl testing following the same pattern.

This change modifies the nova job to use the same parent.

Depends-On: openstack-k8s-operators/ci-framework#2571
@cescgina cescgina marked this pull request as ready for review November 28, 2024 12:54
@cescgina
Contributor Author

Tested the job definition by changing the nova-operator kuttl job to use it in openstack-k8s-operators/nova-operator#902; the job is passing: openstack-k8s-operators/nova-operator#902 (comment)

@cescgina
Contributor Author

cescgina added a commit to cescgina/watcher-operator that referenced this pull request Nov 28, 2024
This job uses a parent job defined in [1] based on the existing
nova-operator.

[1] openstack-k8s-operators/ci-framework#2571
Comment on lines 104 to 105
KUBECONFIG: "{{ cifmw_openshift_kubeconfig }}"
PATH: "{{ cifmw_path | default(ansible_env.PATH) }}"
Collaborator


Why not pass these environment variables as part of the block, to avoid repeating the same thing 4 times?
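
For illustration, a minimal sketch of the block-level variant (the task names and make targets are placeholders, not taken from this PR):

    # the environment defined once on the block applies to every task inside it
    - name: Run the kuttl test steps
      environment:
        KUBECONFIG: "{{ cifmw_openshift_kubeconfig }}"
        PATH: "{{ cifmw_path | default(ansible_env.PATH) }}"
      block:
        - name: Prepare kuttl tests
          ansible.builtin.command: make kuttl-test-prep
        - name: Run kuttl tests
          ansible.builtin.command: make kuttl-test-run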

Contributor Author


good catch, done

- name: Check we have some compute in inventory
  ansible.builtin.set_fact:
    has_compute: >-
      {% set ns = namespace(found=false) -%}
Collaborator


No need to use a namespace, a regular var is enough. Moreover, this can be done without jinja2 scripting, with plain ansible-jinja2 filters:

hostvars.keys() | select('search', '^compute.*') | length > 0
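
i.e., roughly (a sketch reusing the task name from the hunk above):

    - name: Check we have some compute in inventory
      ansible.builtin.set_fact:
        # true when at least one inventory hostname starts with "compute"
        has_compute: "{{ hostvars.keys() | select('search', '^compute.*') | length > 0 }}"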

Contributor Author


Actually we don't set a compute in the nodeset for this job. I carried this over from the nova kuttl job, but it must be there for a different reason, since the kuttl nodeset does not have compute nodes. I'm removing this task.

Comment on lines 21 to 24
- name: Assert that operator_name is set
  ansible.builtin.assert:
    that:
      - operator_name is defined
Collaborator


If we move this to line 8 we can save some time/resources/bandwidth if the assertion is not met.
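
e.g., keeping it as one of the very first tasks of the playbook (a sketch; the fail_msg wording is mine):

    - name: Assert that operator_name is set
      ansible.builtin.assert:
        that:
          - operator_name is defined
        fail_msg: "operator_name must be set for this job"
    # heavier tasks then only run after this cheap check passes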

Contributor Author


done, thanks

ansible.builtin.import_playbook: "../../../playbooks/01-bootstrap.yml"

- hosts: "{{ cifmw_target_host | default('localhost') }}"
name: install dev tools
Collaborator


An un-capitalized playbook name looks strange.

Contributor Author


fixed

- has_compute | bool
ansible.builtin.assert:
that:
- "'compute' in crc_ci_bootstrap_networks_out"
Collaborator


Why enforce the instance name? I mean, you already used different criteria to check whether a compute is available, based on a regex. I'd vote to either stick to a fixed name or pick the first one that matches ^compute.*.
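
e.g., something along these lines to pick the first matching host (a sketch):

    # picks the first inventory host whose name matches ^compute.*
    first_compute: "{{ hostvars.keys() | select('search', '^compute.*') | first }}"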

Contributor Author


Same as above, removing this task as we won't have computes for kuttl.

This change attempts to make the approach that nova-operator uses to run
kuttl more generic, so it can be used in other operators that follow
a similar pattern in their tests, like watcher-operator.
Contributor

@SeanMooney SeanMooney left a comment


I have one comment inline but I believe this is more or less good as is.

We can hook this via a zuul pre-run playbook in a child job, but I wanted to see if people thought having an actual ci-framework hook was warranted or not.

oc wait InstallPlan {{ cifmw_edpm_prepare_wait_installplan_out.stdout }}
--namespace=openstack-operators
--for=jsonpath='{.status.phase}'=Complete --timeout=20m

Contributor


We may want to have a hook here to install operators that are not part of the meta operator, or other customisation.

i.e. we need to be able to install watcher operator here after the OpenStack operator is installed.

The "make kuttl-test-prep" target should not actually depend on the operator it is testing being installed, so this will still work since we can add a pre-run playbook to the job that inherits from this; it's not a blocker.

We have a hook point as a result of that, but I just wanted to highlight it.
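
For instance, a consuming operator could wire that in from its own zuul config along these lines (a rough sketch; the job name, parent name, playbook path and var value are placeholders, not taken from this PR):

    - job:
        name: watcher-operator-kuttl
        # placeholder parent; whatever name the generic kuttl job in this PR ends up with
        parent: cifmw-multinode-kuttl-operator-target
        # a child job's pre-run playbooks run after the parent's pre-run playbooks,
        # so the extra operator can be installed before the kuttl run phase starts
        pre-run: ci/playbooks/deploy-watcher-operator.yaml
        vars:
          operator_name: watcher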
