Ironic overcloud hypervisor deployment #1089
base: stackhpc/2023.1
Conversation
@assumptionsandg remember to mark conversations as resolved when you've completed them! Makes it much easier for reviewers to know when to re-review. Is this still a draft? Who do you need a review from to get it merged?

It's still WIP on the client side of things, hence leaving it as a draft here. We could merge what's here now and iterate/fix things as the client work progresses, if what's here will be useful to people.

I'd be happy with that as long as you add a big warning at the top saying the docs are still WIP. Something is better than nothing.
@assumptionsandg you need to tag people for re-review when you make changes. This has been sitting around for two months and I don't think anyone has looked at it.
These are good docs. I've got a lot of comments but they're mostly just small formatting and grammar fixes.
Generally I think the page should be a bit more declarative; at the moment it's written in the future tense. For example, I think that:

Overcloud Ironic will be deployed with a listening TFTP server on the control plane

would flow better as:

Overcloud Ironic is deployed with a listening TFTP server on the control plane

I don't think it would be a lot of effort to update (basically a find & replace: will be -> is), but it's a bit of a nitpick, so if you want to leave it as is then that's fine.
It is also required to load the conntrack kernel module ``nf_nat_tftp``,
``nf_conntrack`` and ``nf_conntrack_tftp`` on network nodes. You can load these
modules using modprobe or define these in /etc/module-load.
I think the grammar can be improved here a bit. I think the line lengths are a bit off in my suggestion because I'm doing it in the GitHub comment editor.
Suggested change:

The conntrack kernel modules ``nf_nat_tftp``, ``nf_conntrack``,
and ``nf_conntrack_tftp`` are also required on network nodes. You
can load these modules using modprobe or define these in /etc/module-load.
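For illustration only (not part of the patch), a minimal Ansible sketch of what that doc text describes, assuming the community.general collection is available; the task names and the modules-load file name are made up:

# Load the conntrack modules now, and persist them for future boots.
- name: Load conntrack kernel modules needed for TFTP
  community.general.modprobe:
    name: "{{ item }}"
    state: present
  loop:
    - nf_conntrack
    - nf_conntrack_tftp
    - nf_nat_tftp
  become: true

- name: Persist the conntrack modules across reboots
  ansible.builtin.copy:
    # Hypothetical file name under modules-load.d.
    dest: /etc/modules-load.d/tftp-conntrack.conf
    content: |
      nf_conntrack
      nf_conntrack_tftp
      nf_nat_tftp
    mode: "0644"
  become: true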
Conntrack_helper will be required when UEFI booting on a cloud with ML2/OVS
and using the iptables firewall_driver, otherwise TFTP traffic is dropped due
to it being UDP. You will need to define some extension drivers in ``neutron.yml``
to ensure conntrack is enabled in neutron server.
You've done well to capitalise Nova and Ironic but missed a few instances of OpenStack, Neutron, and Kayobe. This is just one instance, but rather than pointing every one of them out, I'll let you search for them.

Suggested change:

to ensure conntrack is enabled in Neutron server.
The Ironic neutron router will also need to be configured to use
conntrack_helper.
Not completely sure on this one, I'll leave it for you to decide.

Suggested change:

``conntrack_helper``.
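As a side note, a hedged sketch of how that router configuration could be done with the OpenStack CLI wrapped in an Ansible task; the router name ironic-router is a placeholder and the exact CLI invocation may vary between releases:

# Add a TFTP conntrack helper for UDP port 69 on the (hypothetical) Ironic router.
- name: Add a TFTP conntrack helper to the Ironic router
  ansible.builtin.command: >
    openstack network l3 conntrack helper create
    --helper tftp --protocol udp --port 69
    ironic-router
  changed_when: true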
Now the node has no instances allocated to it you can delete the instance using
the OpenStack CLI and the node will be moved back to ``available`` state.
Maybe just clarify here that it's the baremetal instance that you're deleting, not the VMs that you've migrated away
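For illustration only (not part of the patch), a sketch of the step being described, assuming a baremetal instance and Ironic node both named bm-hypervisor-0 (the name is hypothetical):

# Delete the baremetal instance, then confirm the node has gone back to "available".
- name: Delete the baremetal instance once workloads have been migrated away
  ansible.builtin.command: openstack server delete bm-hypervisor-0 --wait

- name: Check the Ironic node provision state
  ansible.builtin.command: >
    openstack baremetal node show bm-hypervisor-0
    -f value -c provision_state
  register: node_state
  changed_when: false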
- name: Set baremetal node JSON variable
  ansible.builtin.set_fact:
    node_show_json: "{{ node_show.stdout | to_json | from_json }}"
| to_json | from_json is a bit weird, and the name of the variable doesn't really fit. You're taking a string, which is JSON, then converting it to a nice Ansible dict var, so "node_show_json" isn't actually JSON. Also, is it actually used? Can it just be deleted?
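If the variable is kept, a minimal sketch of the simpler form being suggested (the new variable name is just an assumption):

# Parse the CLI JSON output straight into an Ansible dict.
- name: Parse baremetal node details
  ansible.builtin.set_fact:
    node_show_facts: "{{ node_show.stdout | from_json }}"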
- name: Set baremetal node JSON variable
  ansible.builtin.set_fact:
    node_show_json: "{{ node_show.stdout | to_json | from_json }}"
see previous comments on to & from json
- name: Slurp network allocations
  ansible.builtin.slurp:
    path: "{{ network_allocation_path }}"
  register: net_alc

- name: Read network allocations
  ansible.builtin.set_fact:
    net_alc_yaml: "{{ net_alc['content'] | b64decode | from_yaml }}"

- name: Write node IP address to allocations
  ansible.builtin.set_fact:
    new_net_alc: "{{ net_alc_yaml | combine(new_ips, recursive=True) }}"
  vars:
    new_ips: "{ '{{ admin_oc_net_name }}_ips': { '{{ inventory_hostname }}': '{{ ansible_host }}' } }"
The net IPs are dictionary vars you have access to, right? I don't think you need to slurp them in from a file.
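For context, these tasks presumably feed a write-back step that persists the merged allocations; a hedged sketch of what that could look like, assuming the allocations file lives on the Ansible control host:

# Write the combined allocations back to the same file the tasks slurped from.
- name: Persist updated network allocations
  ansible.builtin.copy:
    content: "{{ new_net_alc | to_nice_yaml }}"
    dest: "{{ network_allocation_path }}"
  delegate_to: localhost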
Missing newline at end of file.
Co-authored-by: Alex-Welsh <[email protected]>
Deployment playbook for baremetal instances to be used as hypervisors.
TODO: Documentation refactoring (separate everything prior to hypervisors?)