Tooling to install clusters for testing via an on-prem Assisted Installer in the Red Hat Scale/Alias Lab and bare metal servers in IBMcloud.
Three separate layouts of clusters can be deployed:
- BM - Bare Metal - 3 control-plane nodes, X number of worker nodes
- RWN - Remote Worker Node - 3 control-plane/worker nodes, X number of remote worker nodes
- SNO - Single Node OpenShift - 1 OpenShift Master/Worker Node "cluster" per available hardware resource
Each cluster layout requires a bastion machine, which is the first machine out of your lab "cloud" allocation. The bastion machine will host the assisted-installer and serve as a router for clusters with a private machine network. BM and RWN layouts produce a single cluster consisting of 3 control-plane nodes and X number of worker or remote worker nodes. The SNO layout creates an SNO cluster per available machine after fulfilling the bastion machine requirement. Lastly, BM/RWN cluster types will allocate any unused machines under the `hv` ansible group, which stands for hypervisor nodes. The `hv` nodes can host VMs for additional clusters that can be deployed from the hub cluster (for ACM/MCE testing).
Table of Contents
- Tested Labs/Hardware
- Prerequisites
- Cluster Deployment Usage
- Quickstart guides
- Tips and Troubleshooting
- Disconnected API/Console Access
- Jetlag Hypervisors
The hardware listed below has been used successfully for cluster deployments. Other hardware may have been tested but is not documented here.
Alias Lab
Hardware | BM | RWN | SNO |
---|---|---|---|
740xd | No | No | Yes |
Dell r750 | Yes | No | Yes |
Scale Lab
Hardware | BM | RWN | SNO |
---|---|---|---|
Dell r650 | Yes | No | Yes |
Dell r640 | Yes | Yes | Yes |
Dell fc640 | Yes | No | Yes |
Supermicro 1029p | Yes | Yes | No |
Supermicro 1029U | No | No | Yes |
Supermicro 5039ms | Yes | No | Yes |
IBMcloud
Hardware | BM | SNO |
---|---|---|
Supermicro E5-2620 | Yes | Yes |
Lenovo ThinkSystem SR630 | Yes | Yes |
Versions:
- Ansible 4.10+ (core >= 2.11.12) (on machine running jetlag playbooks)
- ibmcloud cli >= 2.0.1 (IBMcloud environments)
- RHEL 8.6 / Rocky 8.6 (Bastion)
- podman 3 / 4 (Bastion)
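To sanity-check a bastion against these requirements before starting, something like the following can be run (a minimal sketch; these are standard RHEL/Podman commands, not part of jetlag itself):

```bash
# Confirm the bastion OS and podman versions meet the prerequisites above
cat /etc/redhat-release   # expect RHEL 8.6 or Rocky 8.6
podman --version          # expect podman 3 or 4
```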
Update to RHEL 8.6
[root@f31-h05-000-r640]# ./update-latest-rhel-release.sh 8.6
[root@f31-h05-000-r640]# dnf update -y
[root@f31-h05-000-r640]# reboot
Installing Ansible via bootstrap (requires python3-pip)
[root@f31-h05-000-r640 jetlag]# source bootstrap.sh
...
(.ansible) [root@f31-h05-000-r640 jetlag]#
Installing Ansible via python3
$ python3 -m venv .ansible # Create a virtualenv if one does not already exist
$ source .ansible/bin/activate # Activate the virtual environment
$ pip3 install --upgrade pip # Ensure pip is updated
$ pip3 install ansible netaddr # Install the latest version of ansible and the netaddr library
$ ansible-galaxy collection install ansible.utils
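Before running any playbooks, it can help to confirm the environment is usable (assuming the `.ansible` virtualenv created above is still active):

```bash
ansible --version                                          # expect ansible-core >= 2.11.12
python3 -c "import netaddr; print(netaddr.__version__)"    # confirm the netaddr library imports
ansible-galaxy collection list ansible.utils               # confirm the ansible.utils collection is installed
```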
For guidance on how to order hardware on IBMcloud, see order-hardware-ibmcloud.md in the docs directory.
Pre-reqs for Supermicro hardware:
- SMCIPMITool downloaded to the jetlag repo, renamed to `smcipmitool.tar.gz`, and placed under `ansible/`
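For example, a hedged sketch of staging the archive (the downloaded filename varies by SMCIPMITool release and is shown only as a pattern):

```bash
# From the root of the jetlag repo checkout:
mv SMCIPMITool_*.tar.gz ansible/smcipmitool.tar.gz   # rename and place the archive under ansible/
```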
There are three main files to configure. One of them is generated, but it might have to be edited for the specific desired scenario/hardware usage:
- `ansible/vars/all.yml` - an Ansible vars file (sample provided at `ansible/vars/all.sample.yml`)
- `pull_secret.txt` - your OCP pull secret, downloaded from console.redhat.com
- `ansible/inventory/$CLOUDNAME.local` - the generated inventory file (samples provided in `ansible/inventory`)
Start by editing the vars
cp ansible/vars/all.sample.yml ansible/vars/all.yml
vi ansible/vars/all.yml
Make sure to set/review the following vars:
- `lab` - either `alias` or `scalelab`
- `lab_cloud` - the cloud within the lab environment (Example: `cloud42`)
- `cluster_type` - either `bm`, `rwn`, or `sno` for the respective cluster layout
- `worker_node_count` - applies to bm and rwn cluster types for the desired worker count; ideal for leaving leftover inventory hosts for other purposes
- `public_vlan` - applies to the sno cluster type; set to `true` only for a public routable VLAN deployment
- `controlplane_lab_interface` - applies to bm and rwn cluster types and should map to the node interface on which the lab provides DHCP; also required for a public routable VLAN based sno deployment (to disable this interface)
- `controlplane_pub_network_cidr` and `controlplane_pub_network_gateway` - only required for a public routable VLAN based sno deployment, to provide the lab's public routable VLAN network and gateway
- `rwn_lab_interface` - applies only to the rwn cluster type and should map to the node interface on which the lab provides DHCP
- More customization, such as cluster_network, service_network, rwn_vlan, and rwn_networks, can be supplied as extra vars; check the default files for the variable names.
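As an illustration only, a bare metal deployment in a Scale Lab cloud might set the vars roughly as below; the cloud name, worker count, and interface name are placeholders, and `ansible/vars/all.sample.yml` plus the default files remain the authoritative reference:

```yaml
# ansible/vars/all.yml (illustrative placeholder values)
lab: scalelab
lab_cloud: cloud42
cluster_type: bm
worker_node_count: 2
controlplane_lab_interface: eno1np0   # interface the lab provides DHCP on; varies by hardware
```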
Set your pull secret in `pull_secret.txt` in the repo base directory. Example:
[user@fedora jetlag]$ cat pull_secret.txt
{
"auths": {
...
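A malformed pull secret is a common cause of install failures, so it can be worth confirming the file is valid JSON before continuing (python3 is already required for the Ansible setup above):

```bash
python3 -m json.tool pull_secret.txt > /dev/null && echo "pull_secret.txt is valid JSON"
```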
Run create-inventory playbook
ansible-playbook ansible/create-inventory.yml
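The playbook writes the inventory to `ansible/inventory/$CLOUDNAME.local`; it is worth reviewing (and editing if needed) before continuing. For example, with the cloud42 example used below:

```bash
less ansible/inventory/cloud42.local   # review the generated host groups before deploying
```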
Run setup-bastion playbook
ansible-playbook -i ansible/inventory/cloud42.local ansible/setup-bastion.yml
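Once setup-bastion completes, the bastion should be running the assisted-installer services in containers; a quick way to confirm is to list the containers on the bastion (the hostname below is just the lab example used earlier in this README):

```bash
ssh root@f31-h05-000-r640 podman ps   # expect the assisted-installer related containers to be running
```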
Run the bm, rwn, or sno deploy playbook with the inventory created by the create-inventory playbook.
Bare Metal Cluster:
ansible-playbook -i ansible/inventory/cloud42.local ansible/bm-deploy.yml
See troubleshooting.md in the docs directory for BM install related issues.
Remote Worker Node Cluster:
ansible-playbook -i ansible/inventory/cloud42.local ansible/rwn-deploy.yml
Single Node OpenShift:
ansible-playbook -i ansible/inventory/cloud42.local ansible/sno-deploy.yml
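After a deploy playbook finishes, basic cluster health can be checked with standard `oc` commands, assuming the cluster kubeconfig has been retrieved from the bastion (the exact path depends on the cluster type and is covered in the quickstart guides below):

```bash
export KUBECONFIG=/path/to/retrieved/kubeconfig   # placeholder path; location depends on your deployment
oc get nodes
oc get clusterversion
oc get clusteroperators
```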
- Deploy a Bare Metal cluster via jetlag quickstart guide
- Deploy a Bare Metal cluster on IBMcloud via jetlag quickstart
- Deploy Single Node OpenShift (SNO) clusters via jetlag quickstart guide
- Deploy Single Node OpenShift (SNO) clusters on IBMcloud via jetlag quickstart
- Tips and vars: see tips-and-vars.md in the docs directory.
- Troubleshooting: see troubleshooting.md in the docs directory.
- Disconnected API/Console access: see disconnected-ipv6-cluster-access.md in the docs directory.
- Jetlag hypervisors: see hypervisors.md in the docs directory.