
Commit 4a8baad

[WIP] Add documentation on deploying a hosted cluster
This is a work in progress! This document describes how to deploy a bare metal hosted cluster using Red Hat's Hosted Control Plane service, with bare metal nodes and networking provided by the ESI environment at the MOC.
1 parent 3e5f191 commit 4a8baad

1 file changed: +204 lines

Diff for: deploying-a-hosted-cluster.md

# Deploying a hosted cluster on ESI-provisioned nodes

## Prerequisites

- You are comfortable working with both OpenShift and OpenStack.
- You are comfortable with shell scripts.
- You have cluster admin privileges on the management (hypershift) cluster.
- You are able to create floating IPs both for the hypershift project and for the project that owns the nodes on which you'll deploy your target cluster.
- You are able to create DNS records on demand for the domain that you are using as your base domain.
- You have installed the latest version of `python-esiclient`.

## Assumptions

You have an OpenStack [`clouds.yaml`][clouds.yaml] file in the proper location, and it defines the following two clouds:

- `hypershift` -- this is the project that owns the nodes and networks allocated to the hypershift management cluster.
- `mycluster` -- this is the project that owns the nodes and networks on which you will be deploying a new cluster.

[clouds.yaml]: https://docs.openstack.org/python-openstackclient/pike/configuration/index.html#clouds-yaml
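
For reference, a minimal `clouds.yaml` along these lines would satisfy that assumption; the auth URL, credential IDs, and region shown here are placeholders rather than values from this environment:

```
clouds:
  hypershift:
    auth_type: v3applicationcredential
    auth:
      auth_url: https://keystone.example.org:5000/v3
      application_credential_id: "<hypershift-credential-id>"
      application_credential_secret: "<hypershift-credential-secret>"
    region_name: RegionOne
  mycluster:
    auth_type: v3applicationcredential
    auth:
      auth_url: https://keystone.example.org:5000/v3
      application_credential_id: "<mycluster-credential-id>"
      application_credential_secret: "<mycluster-credential-secret>"
    region_name: RegionOne
```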

## Allocate DNS and floating IPs

You must have DNS records in place before deploying the cluster (the install process will block until the records exist).

- Allocate two floating IP addresses from ESI (see the example below):

  - One will be for the API and must be allocated from the hypershift project (because it will map to worker nodes on the management cluster).
  - One will be for the Ingress service and must be allocated from the network on which you are deploying your target cluster worker nodes.

- Create DNS entries that map to those addresses:

  - `api.<clustername>.<basedomain>` should map to the API VIP.
  - `api-int.<clustername>.<basedomain>` should map to the API VIP.
  - `*.apps.<clustername>.<basedomain>` should map to the ingress VIP.

Note that at this point these addresses are not associated with any internal IP address; that association can't be made until after the cluster has been deployed.
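
For example, the two addresses can be allocated with the same `openstack` invocations used later in this document; this assumes the external network in each project is named `external`:

```
# API VIP: allocated from the hypershift project
api_vip=$(openstack --os-cloud hypershift floating ip create external -f value -c floating_ip_address)

# Ingress VIP: allocated from the project that owns the target cluster nodes
ingress_vip=$(openstack --os-cloud mycluster floating ip create external -f value -c floating_ip_address)

echo "API VIP: $api_vip"
echo "Ingress VIP: $ingress_vip"
```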

## Gather required configuration

- You will need a pull secret, which you can obtain from <https://console.redhat.com/openshift/downloads>: scroll to the "Tokens" section and download the pull secret.

- You will probably want to provide an SSH public key. This will be provisioned for the `core` user on your nodes, allowing you to log in for troubleshooting purposes (a key can be generated as shown below).
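
If you don't have a key you want to reuse, a dedicated keypair can be generated up front; the filename here is arbitrary, and it is the public half that gets passed to the `--ssh-key` option later:

```
ssh-keygen -t ed25519 -f mycluster-ssh -N ''
# pass mycluster-ssh.pub to the --ssh-key option below
```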

## Deploy the cluster

First, create the namespace for your cluster:

```
oc create ns clusters-mycluster
```

Now you can use the `hcp` CLI to create appropriate cluster manifests:

```
hcp create cluster agent \
    --name mycluster \
    --pull-secret pull-secret.txt \
    --agent-namespace hardware-inventory \
    --base-domain int.massopen.cloud \
    --api-server-address api.mycluster.int.massopen.cloud \
    --etcd-storage-class lvms-vg1 \
    --ssh-key larsks.pub \
    --namespace clusters \
    --control-plane-availability-policy HighlyAvailable \
    --release-image quay.io/openshift-release-dev/ocp-release:4.17.9-multi \
    --node-pool-replicas 3
```

This will create several resources in the `clusters` namespace:

- A HostedCluster resource
- A NodePool resource
- Several Secrets:
  - A pull secret (`<clustername>-pull-secret`)
  - Your public SSH key (`<clustername>-ssh-key`)
  - An etcd encryption key (`<clustername>-etcd-encryption-key`)

This will trigger the process of deploying control plane services for your cluster into the `clusters-<clustername>` namespace.

If you would like to see the manifests generated by the `hcp` command, add the options `--render --render-sensitive`; this will write the manifests to *stdout* instead of deploying them to the cluster.
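
For example, to capture the generated manifests in a file for review instead of applying them (same options as above; the output filename is arbitrary):

```
hcp create cluster agent \
    --name mycluster \
    --pull-secret pull-secret.txt \
    --agent-namespace hardware-inventory \
    --base-domain int.massopen.cloud \
    --api-server-address api.mycluster.int.massopen.cloud \
    --etcd-storage-class lvms-vg1 \
    --ssh-key larsks.pub \
    --namespace clusters \
    --control-plane-availability-policy HighlyAvailable \
    --release-image quay.io/openshift-release-dev/ocp-release:4.17.9-multi \
    --node-pool-replicas 3 \
    --render --render-sensitive > mycluster-manifests.yaml
```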

After creating the HostedCluster resource, the hosted control plane will immediately start to deploy. You will find the associated services in the `clusters-<clustername>` namespace. You can track the progress of the deployment by watching the `status` field of the `HostedCluster` resource:

```
oc -n clusters get hostedcluster mycluster -o json | jq .status
```
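
If you only want a quick check of the overall state rather than the full status block, the `Available` condition can be read directly; a minimal sketch using `jsonpath`:

```
oc -n clusters get hostedcluster mycluster \
    -o jsonpath='{.status.conditions[?(@.type=="Available")].status}{"\n"}'
```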

You will also see that an appropriate number of agents have been allocated from the agent pool:

```
$ oc -n hardware-inventory get agents
NAME                                   CLUSTER     APPROVED   ROLE          STAGE
07e21dd7-5b00-2565-ffae-485f1bf3aabc   mycluster   true       worker
2f25a998-0f1d-c202-4fdd-a2c300c9b7da   mycluster   true       worker
36c4906e-b96e-2de5-e4ec-534b45d61fa7               true       auto-assign
384b3b4f-e111-6881-019e-3668abb7cb0f               true       auto-assign
5180125a-614c-ac90-7adf-9222dc228704               true       auto-assign
5aed1b72-90c6-da99-0bee-e668ca41b2ff               true       auto-assign
8542e6ac-41b4-eca3-fedd-6af8edd4a41e   mycluster   true       worker
b698178a-7b31-15d2-5e20-b2381972cbdf               true       auto-assign
c6a86022-c6b9-c89d-b6b9-3dd5c4c1063e               true       auto-assign
d2c0f44b-993c-3e32-4a22-39af4be355b8               true       auto-assign
```
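
The NodePool created by the `hcp` command tracks the same progress from the node side; listing the NodePools in the `clusters` namespace typically shows the desired and current replica counts:

```
oc -n clusters get nodepools
```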

## Interacting with the control plane

The hosted control plane will be available within a matter of minutes, but in order to interact with it you'll need to complete a few additional steps.

### Set up port forwarding for control plane services

The API service for the new cluster is deployed as a [NodePort] service on the management cluster, as are several other services that need to be exposed in order for the cluster deployment to complete.

1. Acquire a floating IP address from the hypershift project if you don't already have a free one:

   ```
   api_vip=$(openstack --os-cloud hypershift floating ip create external -f value -c floating_ip_address)
   ```

1. Pick the address of one of the cluster nodes as a target for the port forwarding:

   ```
   internal_ip=$(oc get nodes -l node-role.kubernetes.io/worker= -o name |
       shuf |
       head -1 |
       xargs -INODE oc get NODE -o jsonpath='{.status.addresses[?(@.type == "InternalIP")].address}'
   )
   ```

1. Set up appropriate port forwarding:

   ```
   openstack --os-cloud hypershift esi port forwarding create "$internal_ip" "$api_vip" $(
       oc -n clusters-mycluster get service -o json |
       jq '.items[]|select(.spec.type == "NodePort")|.spec.ports[].nodePort' |
       sed 's/^/-p /'
   )
   ```

The output of the above command will look something like this:

```
+--------------------------------------+---------------+---------------+----------+--------------+---------------+
| ID                                   | Internal Port | External Port | Protocol | Internal IP  | External IP   |
+--------------------------------------+---------------+---------------+----------+--------------+---------------+
| 2bc05619-d744-4e8a-b658-714da9cf1e89 | 31782         | 31782         | tcp      | 10.233.2.107 | 128.31.20.161 |
| f386638e-eca2-465f-a05c-2076d6c1df5a | 30296         | 30296         | tcp      | 10.233.2.107 | 128.31.20.161 |
| c06adaff-e1be-49f8-ab89-311b550182cc | 30894         | 30894         | tcp      | 10.233.2.107 | 128.31.20.161 |
| b45f08fa-bbf3-4c1d-b6ec-73b586b4b0a3 | 32148         | 32148         | tcp      | 10.233.2.107 | 128.31.20.161 |
+--------------------------------------+---------------+---------------+----------+--------------+---------------+
```

[nodeport]: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport

### Update DNS

Ensure that the DNS entry for your API address is correct. The names `api.<clustername>.<basedomain>` and `api-int.<clustername>.<basedomain>` must both point to the `$api_vip` address configured in the previous section.
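
You can confirm the records resolve as expected before continuing; for example, with the names used earlier in this document:

```
# both should print the $api_vip address
dig +short api.mycluster.int.massopen.cloud
dig +short api-int.mycluster.int.massopen.cloud
```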

### Obtain the admin kubeconfig file

The admin `kubeconfig` file is available as a Secret in the `clusters-<clustername>` namespace:

```
oc -n clusters-mycluster extract secret/admin-kubeconfig --keys kubeconfig
```

This will extract the file `kubeconfig` into your current directory. You can use that to interact with the hosted control plane:

```
oc --kubeconfig kubeconfig get namespace
```
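
If you'd rather not pass `--kubeconfig` on every command, exporting it works just as well:

```
export KUBECONFIG=$PWD/kubeconfig
oc get clusterversion
```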

## Set up port forwarding for the ingress service

1. Acquire a floating IP address from the ESI project that owns the bare metal nodes if you don't already have a free one:

   ```
   ingress_vip=$(openstack --os-cloud mycluster floating ip create external -f value -c floating_ip_address)
   ```

1. Pick the address of one of the cluster nodes as a target for the port forwarding. Note that here we're using the `kubeconfig` file we downloaded in a previous step:

   ```
   internal_ip=$(oc --kubeconfig kubeconfig get nodes -l node-role.kubernetes.io/worker= -o name |
       shuf |
       head -1 |
       xargs -INODE oc --kubeconfig kubeconfig get NODE -o jsonpath='{.status.addresses[?(@.type == "InternalIP")].address}'
   )
   ```

1. Set up appropriate port forwarding (in the bare metal node ESI project):

   ```
   openstack --os-cloud mycluster esi port forwarding create "$internal_ip" "$ingress_vip" -p 80 -p 443
   ```
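
Once the wildcard `*.apps` record points at `$ingress_vip` and the deployment has progressed far enough to start the ingress router, you can spot-check the path from outside the cluster; the hostname here uses the usual OpenShift console route name, so adjust it if yours differs:

```
curl -k -o /dev/null -w '%{http_code}\n' \
    https://console-openshift-console.apps.mycluster.int.massopen.cloud
```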

## Wait for the cluster deployment to complete

When the target cluster is fully deployed, the output for the HostedCluster resource will look like this:

```
$ oc -n clusters get hostedcluster mycluster
NAME        VERSION   KUBECONFIG                   PROGRESS    AVAILABLE   PROGRESSING   MESSAGE
mycluster   4.17.9    mycluster-admin-kubeconfig   Completed   True        False         The hosted control plane is available
```
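
Rather than polling by hand, you can block until the `Available` condition is reported; the timeout value here is arbitrary:

```
oc -n clusters wait hostedcluster/mycluster --for=condition=Available --timeout=45m
```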
