docs(openstack): add instructions for OpenStack provider on HMC
    - clarify provider setup, environment variables, and recommended image/flavor requirements

Signed-off-by: Satyam Bhardwaj <[email protected]>
ramessesii2 committed Dec 27, 2024
1 parent ba962b7 commit c941091
docs/dev.md
To properly deploy a dev cluster you need to have the following variable set:

### Adopted Cluster Setup

To "adopt" an existing cluster, first obtain the kubeconfig file for the cluster.
Then set the `DEV_PROVIDER` to "adopted". Export the kubeconfig file as a variable by running the following:

`export KUBECONFIG_DATA=$(cat kubeconfig | base64 -w 0)`
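As an optional sanity check, you can confirm the encoded value round-trips back to the original file. This sketch uses a stand-in `kubeconfig` written with `printf` purely for illustration; substitute your real kubeconfig path. Note that `-w 0` (disable line wrapping) is a GNU `base64` flag; on macOS the BSD `base64` produces unwrapped output by default and has no `-w` option.

```bash
# Stand-in kubeconfig for illustration only -- use your real file instead.
printf 'apiVersion: v1\nkind: Config\n' > kubeconfig

# Encode without line wrapping (GNU base64).
export KUBECONFIG_DATA=$(cat kubeconfig | base64 -w 0)

# Decode and compare against the original; prints nothing from diff on a match.
echo "$KUBECONFIG_DATA" | base64 -d | diff - kubeconfig && echo "round-trip OK"
```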

### OpenStack Provider Setup

To deploy a development cluster on OpenStack, first set:

- `DEV_PROVIDER` - should be "openstack"

Configure your OpenStack credentials using the following environment variables:

- `OS_AUTH_URL`
- `OS_PROJECT_ID`
- `OS_USER_DOMAIN_NAME`
- `OS_PROJECT_NAME`
- `OS_USERNAME`
- `OS_PASSWORD`
- `OS_REGION_NAME`
- `OS_INTERFACE`
- `OS_IDENTITY_API_VERSION`

Alternatively, if you have a credentials file (e.g., OpenStackRC.sh), source it to automatically set these values.
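For reference, a manually exported credential set might look like the following. Every value here is a placeholder, not a working configuration; substitute the endpoints and credentials for your own OpenStack cloud (sourcing an RC file is usually safer than hardcoding a password).

```bash
# Placeholder values -- replace with your cloud's actual endpoints and credentials.
export OS_AUTH_URL="https://keystone.example.com:5000/v3"
export OS_PROJECT_ID="0123456789abcdef0123456789abcdef"
export OS_USER_DOMAIN_NAME="Default"
export OS_PROJECT_NAME="dev-project"
export OS_USERNAME="developer"
export OS_PASSWORD="s3cret"          # prefer sourcing an RC file over hardcoding this
export OS_REGION_NAME="RegionOne"
export OS_INTERFACE="public"
export OS_IDENTITY_API_VERSION=3
```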

You will also need to specify additional parameters related to machine sizes and images:

- `OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR`
- `OPENSTACK_NODE_MACHINE_FLAVOR`
- `CONTROL_PLANE_MACHINE_COUNT`
- `WORKER_MACHINE_COUNT`
- `OPENSTACK_IMAGE_NAME`
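As an illustration, a small development cluster might use values like these. The flavor and image names are hypothetical; check what your cloud actually offers before setting them.

```bash
# Hypothetical flavor/image names -- verify they exist in your cloud first.
export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR="m1.medium"   # >= 2 vCPUs recommended
export OPENSTACK_NODE_MACHINE_FLAVOR="m1.small"             # >= 1 vCPU recommended
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=2
export OPENSTACK_IMAGE_NAME="ubuntu-22.04-x86_64"
```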

#### Note

The recommended minimum vCPU count is 2 for the control plane flavor and 1 for the worker node flavor. For more guidance, refer to the [machine-flavor CAPI docs](https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/main/docs/book/src/clusteropenstack/configuration.md#machine-flavor).
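If you have the `openstack` CLI installed and your credentials sourced, you can inspect the available flavors (including vCPU counts) and images before choosing values:

```bash
# List available flavors with their vCPU/RAM sizes.
openstack flavor list

# Inspect a specific flavor ("m1.medium" is only an example name).
openstack flavor show m1.medium

# List images you can boot from.
openstack image list
```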

The rest of the deployment procedure is the same as for other providers.

## Deploy HMC
Expand All @@ -97,10 +129,9 @@ running make (e.g. `export DEV_PROVIDER=azure`).

1. Configure your cluster parameters in provider specific file
(for example `config/dev/aws-clusterdeployment.yaml` in case of AWS):

- Configure the `name` of the ClusterDeployment
- Change the instance type or size for control plane and worker machines
- Specify the number of control plane and worker machines, etc.

2. Run `make dev-apply` to deploy and configure management cluster.

6. Wait for infrastructure to be provisioned and the cluster to be deployed. You
may watch the process with the `./bin/clusterctl describe` command. Example:

```bash
export KUBECONFIG=~/.kube/config

./bin/clusterctl describe cluster <clusterdeployment-name> -n hmc-system --show-conditions all
```

> [!NOTE]
> If you encounter any errors in the output of `clusterctl describe cluster`, inspect the logs of the
> `capa-controller-manager` with:
>
> ```bash
> kubectl logs -n hmc-system deploy/capa-controller-manager
> ```
>
> This may help identify any potential issues with the deployment of the AWS infrastructure.
7. Retrieve the `kubeconfig` of your managed cluster:

```bash
kubectl --kubeconfig ~/.kube/config get secret -n hmc-system <clusterdeployment-name>-kubeconfig -o=jsonpath={.data.value} | base64 -d > kubeconfig
```

## Running E2E tests locally

E2E tests can be run locally via the `make test-e2e` target. For CI to deploy
properly, a non-local registry must be used, and the Helm charts and
hmc-controller image must already exist in that registry, for example, on
GHCR:

```bash
IMG="ghcr.io/mirantis/hmc/controller-ci:v0.0.1-179-ga5bdf29" \
REGISTRY_REPO="oci://ghcr.io/mirantis/hmc/charts-ci" \
make test-e2e
pass `CLUSTER_DEPLOYMENT_NAME=` from the get-go to customize the name used by the
test.

### Filtering test runs

Provider tests are broken into two types, `onprem` and `cloud`. For CI,
`provider:onprem` tests run on self-hosted runners provided by Mirantis.
`provider:cloud` tests run on GitHub actions runners and interact with cloud
ginkgo labels ./test/e2e
```

### Nuke created resources

In CI we run `make dev-aws-nuke` to clean up test resources; you can do so
manually with:

CSI expects a single Secret with configuration in `ini` format.
Options are similar to CCM, and the same defaults/considerations apply.

## Generating the airgap bundle

Use the `make airgap-package` target to manually generate the airgap bundle. To
ensure the correctly tagged HMC controller image is present in the bundle,
prefix the command with the desired `IMG` env var, for example:
