
Commit

Merge pull request kubernetes-sigs#1353 from salasberryfin/improve-book-structure

Improve book structure and add clusterclass examples
k8s-ci-robot authored Nov 5, 2024
2 parents 902e324 + 899e336 commit 5093714
Showing 18 changed files with 372 additions and 64 deletions.
24 changes: 15 additions & 9 deletions docs/book/src/SUMMARY.md
@@ -1,18 +1,24 @@
# Summary

[Introduction](./introduction.md)
- [Getting Started](./getting-started.md)
- [Topics](./topics/index.md)
- [Prerequisites](./topics/prerequisites.md)
- [Quick Start](./quick-start.md)
- [Prerequisites](./prerequisites.md)
- [Self-managed clusters](./self-managed/index.md)
- [Provisioning a Cluster](./self-managed/provision.md)
- [CNI](./self-managed/cni.md)
- [Managed clusters - GKE](./managed/index.md)
- [Provisioning a Cluster](./managed/provision.md)
- [Cluster Upgrades](./managed/upgrades.md)
- [Enabling](./managed/enabling.md)
- [Disabling](./managed/disabling.md)
- [ClusterClass](./clusterclass/index.md)
- [Provisioning a Cluster](./clusterclass/provision.md)
- [Enabling](./clusterclass/enabling.md)
- [Disabling](./clusterclass/disabling.md)
- [General Topics](./topics/index.md)
- [Conformance](./topics/conformance.md)
- [GKE Support](./topics/gke/index.md)
- [Creating a Cluster](./topics/gke/creating-a-cluster.md)
- [Cluster Upgrades](./topics/gke/cluster-upgrades.md)
- [Enabling](./topics/gke/enabling.md)
- [Disabling](./topics/gke/disabling.md)
- [Machine Locations](./topics/machine-locations.md)
- [Preemptible VMs](./topics/preemptible-vms.md)
- [Flannel](./topics/flannel.md)
- [Developer Guide](./developers/index.md)
- [Development](./developers/development.md)
- [Try unreleased changes with Nightly Builds](./developers/nightlies.md)
3 changes: 3 additions & 0 deletions docs/book/src/clusterclass/disabling.md
@@ -0,0 +1,3 @@
# Disabling ClusterClass Support

Support for ClusterClass is disabled by default when you use the GCP infrastructure provider.
8 changes: 8 additions & 0 deletions docs/book/src/clusterclass/enabling.md
@@ -0,0 +1,8 @@
# Enabling ClusterClass Support

ClusterClass support is enabled via the **ClusterTopology** feature flag by setting it to `true`. This can be done before running `clusterctl init` by setting the **CLUSTER_TOPOLOGY** environment variable:

```shell
export CLUSTER_TOPOLOGY=true
clusterctl init --infrastructure gcp
```
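
If you want to confirm the feature gate is active after initialization, one way (a quick check, assuming the default `capi-system` namespace and deployment name used by `clusterctl`) is to inspect the core controller's arguments:

```bash
# The args should include a --feature-gates entry with ClusterTopology=true
kubectl -n capi-system get deployment capi-controller-manager \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
```
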
16 changes: 16 additions & 0 deletions docs/book/src/clusterclass/index.md
@@ -0,0 +1,16 @@
# ClusterClass

- **Feature status:** Experimental
- **Feature gate:** `ClusterTopology=true`

[ClusterClass](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-class/index.html) is a collection of templates that define a topology (control plane and machine deployments) to be used to continuously reconcile one or more Clusters. It is built on top of the existing Cluster API resources and provides a set of tools and operations to streamline cluster lifecycle management while maintaining the same underlying API.

<aside class="note warning">

<h1>Warning</h1>

ClusterClass is an experimental core CAPI feature and, as such, it may behave unreliably until it is graduated. Graduation is expected with the release of [CAPI v1.9](https://github.com/kubernetes-sigs/cluster-api/milestone/38).

</aside>

CAPG supports the creation of clusters via Cluster Topology for **self-managed clusters only**.
87 changes: 87 additions & 0 deletions docs/book/src/clusterclass/provision.md
@@ -0,0 +1,87 @@
# Provisioning a Cluster via ClusterClass

<aside class="note warning">

<h1>Warning</h1>

We recommend you take a look at the [Prerequisites](./../prerequisites.md) section before provisioning a workload cluster.

**You should be familiar with the ClusterClass feature and its core concepts: [CAPI book on ClusterClass and Managed Topologies](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-class/write-clusterclass).**

</aside>


**This guide uses an example from the `./templates` folder of the CAPG repository. You can inspect the yaml file for the `ClusterClass` [here](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-gcp/refs/heads/main/templates/cluster-template-clusterclass.yaml) and the cluster definition [here](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-gcp/refs/heads/main/templates/cluster-template-topology.yaml).**


## Templates and clusters

ClusterClass makes cluster templates more flexible and versatile as it allows users to create cluster flavors that can be reused for cluster provisioning.

In this case, while inspecting the sample files, you probably noticed that they reference two different YAML files:
- `./templates/cluster-template-clusterclass.yaml` is the class definition. It represents the template that defines a topology (control plane and machine deployments) but it won't provision the cluster.
- `./templates/cluster-template-topology.yaml` is the cluster definition that references the class. This workload cluster definition is considerably simpler than a regular CAPI cluster template that does not use ClusterClass, as most of the complexity of defining the control plane and machine deployments has been removed by the class.

## Configure ClusterClass

While inspecting the templates, you probably noticed that they contain a number of parameterized values that must be substituted with the specifics of your use case. This can be done via environment variables and `clusterctl`, which effectively makes the templates flexible enough to adapt to different provisioning scenarios. These are the environment variables you'll be required to set before deploying a class and a workload cluster from it:

```sh
export CLUSTER_CLASS_NAME=sample-cc
export GCP_PROJECT=cluster-api-gcp-project
export GCP_REGION=us-east4
export GCP_NETWORK_NAME=default
export IMAGE_ID=projects/cluster-api-gcp-project/global/images/your-image
```

## Generate ClusterClass definition

The sample ClusterClass template is already prepared so that you can use it with `clusterctl` to create a CAPI ClusterClass with CAPG.

```bash
clusterctl generate cluster capi-gcp-quickstart-clusterclass --flavor clusterclass -i gcp > capi-gcp-quickstart-clusterclass.yaml
```

In this example, `capi-gcp-quickstart-clusterclass` will be used as the class name.

## Create ClusterClass

The resulting file contains the class definition; you simply need to apply it to your management cluster to make it available in the API:

```bash
kubectl apply -f capi-gcp-quickstart-clusterclass.yaml
```
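
To confirm that the class is now available in the management cluster, you can list the ClusterClass resources (a quick check, assuming the default namespace):

```bash
# List ClusterClass objects known to the management cluster
kubectl get clusterclass
```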

## Create a cluster from a class

ClusterClass is a powerful CAPI feature because you can now create one or more clusters based on the same class available in the CAPI management cluster. This base template can be parameterized, so clusters created from it can deviate slightly from the original configuration and adapt to the specifics of the use case, e.g. provisioning clusters for development, staging and production environments.

Now that the class is available to be referenced by cluster objects, let's configure the workload cluster and provision it.

```sh
export CLUSTER_NAME=sample-cluster
export CLUSTER_CLASS_NAME=sample-cc
export KUBERNETES_VERSION=1.29.3
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=1
export GCP_REGION=us-east4
export GCP_CONTROL_PLANE_MACHINE_TYPE=n1-standard-2
export GCP_NODE_MACHINE_TYPE=n1-standard-2
export CNI_RESOURCES=./cni-resource
```

You can take a look at CAPG's CNI requirements [here](./../self-managed/cni.md).

You can use `clusterctl` to create a cluster definition.

```bash
clusterctl generate cluster capi-gcp-quickstart-topology --flavor topology -i gcp > capi-gcp-quickstart-topology.yaml
```

By simply applying the resulting template, the cluster will be provisioned based on the existing ClusterClass.

```bash
kubectl apply -f capi-gcp-quickstart-topology.yaml
```
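
Provisioning can take a few minutes. One way to follow progress is with `clusterctl` (the cluster name matches the one generated above):

```bash
# Show the cluster and its topology-managed resources
clusterctl describe cluster capi-gcp-quickstart-topology
kubectl get cluster capi-gcp-quickstart-topology
```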

You can now experiment with creating more clusters based on this class while applying different configurations to each workload cluster.
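
For example, a second cluster reusing the same class but with a different worker count could be generated like this (a sketch; the cluster name and count are illustrative):

```bash
# Override only what should differ from the first cluster
export WORKER_MACHINE_COUNT=3
clusterctl generate cluster sample-cluster-staging --flavor topology -i gcp > sample-cluster-staging.yaml
kubectl apply -f sample-cluster-staging.yaml
```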
1 change: 0 additions & 1 deletion docs/book/src/getting-started.md

This file was deleted.

File renamed without changes.
File renamed without changes.
@@ -3,6 +3,14 @@
- **Feature status:** Experimental
- **Feature gate (required):** `GKE=true`

<aside class="note warning">

<h1>Warning</h1>

Provisioning managed clusters (GKE) is an experimental feature and, as such, it may behave unreliably until it is graduated.

</aside>

## Overview

The GCP provider supports creating GKE-based clusters. Currently, the following features are supported:
@@ -24,4 +32,4 @@ And a new template is available in the templates folder for creating a managed workload cluster.
* [Enabling GKE Support](enabling.md)
* [Disabling GKE Support](disabling.md)
* [Creating a cluster](creating-a-cluster.md)
* [Cluster Upgrades](cluster-upgrades.md)
* [Cluster Upgrades](cluster-upgrades.md)
64 changes: 64 additions & 0 deletions docs/book/src/managed/provision.md
@@ -0,0 +1,64 @@
# Provisioning a GKE cluster

<aside class="note warning">

<h1>Warning</h1>

We recommend you take a look at the [Prerequisites](./../prerequisites.md) section before provisioning a workload cluster.

</aside>

**This guide uses an example from the `./templates` folder of the CAPG repository. You can inspect the yaml file [here](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-gcp/refs/heads/main/templates/cluster-template-gke.yaml).**

## Configure cluster parameters

While inspecting the cluster definition in `./templates/cluster-template-gke.yaml`, you probably noticed that it contains a number of parameterized values that must be substituted with the specifics of your use case. This can be done via environment variables and `clusterctl`, which effectively makes the template flexible enough to adapt to different provisioning scenarios. These are the environment variables you'll be required to set before deploying a workload cluster:

```sh
export GCP_PROJECT=cluster-api-gcp-project
export GCP_REGION=us-east4
export GCP_NETWORK_NAME=default
export WORKER_MACHINE_COUNT=1
```

## Generate cluster definition

The sample cluster templates are already prepared so that you can use them with `clusterctl` to create a GKE cluster with CAPG.

To create a GKE cluster with a managed node group (a.k.a. a managed machine pool):

```bash
clusterctl generate cluster capi-gke-quickstart --flavor gke -i gcp > capi-gke-quickstart.yaml
```

In this example, `capi-gke-quickstart` will be used as the cluster name.

## Create cluster

The resulting file represents the workload cluster definition; you simply need to apply it to your management cluster to trigger cluster creation:

```bash
kubectl apply -f capi-gke-quickstart.yaml
```
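
GKE cluster creation can take a while. You can follow progress from the management cluster (a quick check; the cluster name matches the example above):

```bash
# Inspect the cluster and the resources created for it
clusterctl describe cluster capi-gke-quickstart
kubectl get cluster capi-gke-quickstart
```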

## Kubeconfig

When creating a GKE cluster, two kubeconfigs are generated and stored as secrets in the management cluster.

### User kubeconfig

This should be used by users who want to connect to the newly created GKE cluster. The name of the secret that contains the kubeconfig will be `[cluster-name]-user-kubeconfig`, where you need to replace **[cluster-name]** with the name of your cluster. The **-user-kubeconfig** suffix indicates that the kubeconfig is intended for user use.

To get the user kubeconfig for a cluster named `managed-test` you can run a command similar to:

```bash
kubectl --namespace=default get secret managed-test-user-kubeconfig \
-o jsonpath={.data.value} | base64 --decode \
> managed-test.kubeconfig
```
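
Once extracted, the kubeconfig can be used like any other, for example (assuming the file generated above):

```bash
# Connect to the GKE cluster as a user
kubectl --kubeconfig managed-test.kubeconfig get nodes
```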

### Cluster API (CAPI) kubeconfig

This kubeconfig is used internally by CAPI and shouldn't be used outside of the management server. It is used by CAPI to perform operations, such as draining a node. The name of the secret that contains the kubeconfig will be `[cluster-name]-kubeconfig` where you need to replace **[cluster-name]** with the name of your cluster. Note that there is NO `-user` in the name.

The kubeconfig is regenerated every `sync-period` as the token that is embedded in the kubeconfig is only valid for a short period of time.
File renamed without changes.
@@ -1,14 +1,8 @@
# Prerequisites

## Requirements
Before provisioning clusters via CAPG, there are a few extra tasks you need to take care of, including **configuring the GCP network** and **building images for GCP virtual machines**.

- Linux or MacOS (Windows isn't supported at the moment).
- A [Google Cloud](https://console.cloud.google.com) account.
- [Packer](https://www.packer.io/intro/getting-started/install.html) and [Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) to build images
- Make to use `Makefile` targets
- Install `coreutils` (for timeout) on *OSX*

### Setup environment variables
### Set environment variables

```bash
export GCP_REGION="<GCP_REGION>"
@@ -21,7 +15,7 @@ export GCP_NETWORK_NAME=<GCP_NETWORK_NAME or default>
export CLUSTER_NAME="<CLUSTER_NAME>"
```

### Setup a Network and Cloud NAT
### Configure Network and Cloud NAT

Google Cloud accounts come with a `default` network which can be found under
[VPC Networks](https://console.cloud.google.com/networking/networks).
@@ -55,16 +49,6 @@ gcloud compute routers nats create "${CLUSTER_NAME}-mynat" --project="${GCP_PROJECT}" \
--nat-all-subnet-ip-ranges --auto-allocate-nat-external-ips
```

### Create a Service Account

To create and manage clusters, this infrastructure provider uses a service account to authenticate with GCP's APIs.

From your cloud console, follow [these instructions](https://cloud.google.com/iam/docs/creating-managing-service-accounts#creating) to create a new service account with `Editor` permissions.

If you plan to use GKE the service account will also need the `iam.serviceAccountTokenCreator` role.

Afterwards, generate a JSON Key and store it somewhere safe.

### Building images

> NB: The following commands should not be run as `root` user.
91 changes: 91 additions & 0 deletions docs/book/src/quick-start.md
@@ -0,0 +1,91 @@
# Getting started with CAPG

In this section we'll cover the basics of how to prepare your environment to use Cluster API Provider for GCP.

<aside class="note">

<h1>Tip</h1>

This section covers the specifics of CAPG but won't go into the details of core CAPI basics. For more information on how CAPI works and how to interact with different providers, you can refer to [CAPI Quick Start](https://cluster-api.sigs.k8s.io/user/quick-start).

</aside>

Before installing CAPG, your Kubernetes cluster has to be transformed into a CAPI management cluster. If you have already done this, you can jump directly to the next section: [Installing CAPG](#installing-capg). If, on the other hand, you have an existing Kubernetes cluster that is not yet configured as a CAPI management cluster, you can follow the guide from the [CAPI book](https://cluster-api.sigs.k8s.io/user/quick-start#initialize-the-management-cluster).
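
If you don't have a cluster available yet, a common approach for local experimentation is to create one with [kind](https://kind.sigs.k8s.io/) and then initialize it as described in the CAPI book (a minimal sketch; the cluster name is illustrative):

```bash
# Create a local Kubernetes cluster to act as the CAPI management cluster
kind create cluster --name capi-mgmt
kubectl cluster-info --context kind-capi-mgmt
```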

## Requirements

- Linux or macOS (Windows isn't supported at the moment).
- A [Google Cloud](https://console.cloud.google.com) account.
- [Packer](https://www.packer.io/intro/getting-started/install.html) and [Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) to build images
- `make` to use `Makefile` targets
- `coreutils` (for the `timeout` command) on macOS

### Create a Service Account

To create and manage clusters, this infrastructure provider uses a service account to authenticate with GCP's APIs.

From your cloud console, follow [these instructions](https://cloud.google.com/iam/docs/creating-managing-service-accounts#creating) to create a new service account with `Editor` permissions.

If you plan to use GKE, the service account will also need the `iam.serviceAccountTokenCreator` role.

Afterwards, generate a JSON Key and store it somewhere safe.
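
If you prefer the CLI over the console, an equivalent `gcloud` flow might look like the following (a sketch, assuming `GCP_PROJECT` is set; the service account name `capg-sa` and key path are illustrative):

```bash
# Create a dedicated service account (the name is illustrative)
gcloud iam service-accounts create capg-sa --project "${GCP_PROJECT}"

# Grant it the Editor role on the project
gcloud projects add-iam-policy-binding "${GCP_PROJECT}" \
  --member "serviceAccount:capg-sa@${GCP_PROJECT}.iam.gserviceaccount.com" \
  --role roles/editor

# Only needed if you plan to use GKE
gcloud projects add-iam-policy-binding "${GCP_PROJECT}" \
  --member "serviceAccount:capg-sa@${GCP_PROJECT}.iam.gserviceaccount.com" \
  --role roles/iam.serviceAccountTokenCreator

# Generate a JSON key and store it somewhere safe
gcloud iam service-accounts keys create /path/to/gcp-credentials.json \
  --iam-account "capg-sa@${GCP_PROJECT}.iam.gserviceaccount.com"
```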


### Installing CAPG

There are two major provider installation paths: using `clusterctl` or the `Cluster API Operator`.

`clusterctl` is a command line tool that provides a simple way of interacting with CAPI and is usually the preferred alternative for those who are getting started. It automates fetching the YAML files defining provider components and installing them.

The `Cluster API Operator` is a Kubernetes Operator built on top of `clusterctl` and designed to empower cluster administrators to handle the lifecycle of Cluster API providers within a management cluster using a declarative approach. It aims to improve user experience in deploying and managing Cluster API, making it easier to handle day-to-day tasks and automate workflows with GitOps. Visit the CAPI Operator quickstart if you want to experiment with this tool.

You can opt for the tool that works best for you or explore both and decide which is best suited for your use case.

#### clusterctl

The Service Account you created will be used to interact with GCP. Its JSON key must be base64-encoded and stored in an environment variable before installing the provider via `clusterctl`.

```bash
export GCP_B64ENCODED_CREDENTIALS=$( cat /path/to/gcp-credentials.json | base64 | tr -d '\n' )
```
Finally, let's initialize the provider.
```bash
clusterctl init --infrastructure gcp
```
This process may take some time and, once the provider is running, you'll be able to see the `capg-controller-manager` pod in your CAPI management cluster.
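
To check that the provider came up (assuming the default `capg-system` namespace used by `clusterctl`):

```bash
# The capg-controller-manager pod should be Running
kubectl get pods -n capg-system
```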

#### Cluster API Operator

You can refer to the Cluster API Operator book [here](https://cluster-api-operator.sigs.k8s.io/01_user/02_quick-start) to learn about the basics of the project and how to install the operator.

When using Cluster API Operator, cloud provider credentials are stored in secrets rather than environment variables. This means you'll have to create a new secret containing the base64-encoded version of your GCP credentials, which will then be referenced in the YAML file used to initialize the provider. As you can see, Cluster API Operator lets us manage provider installation declaratively.

Create the GCP credentials secret.
```bash
export CREDENTIALS_SECRET_NAME="gcp-credentials"
export CREDENTIALS_SECRET_NAMESPACE="default"
export GCP_B64ENCODED_CREDENTIALS=$( cat /path/to/gcp-credentials.json | base64 | tr -d '\n' )
kubectl create secret generic "${CREDENTIALS_SECRET_NAME}" --from-literal=GCP_B64ENCODED_CREDENTIALS="${GCP_B64ENCODED_CREDENTIALS}" --namespace "${CREDENTIALS_SECRET_NAMESPACE}"
```
Define the CAPG provider declaratively in a file named `capg.yaml`.
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: capg-system
---
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: InfrastructureProvider
metadata:
  name: gcp
  namespace: capg-system
spec:
  version: v1.8.0
  configSecret:
    name: gcp-credentials
```
After applying this file, Cluster API Operator will take care of installing CAPG using the set of credentials stored in the specified secret.
```bash
kubectl apply -f capg.yaml
```
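
Once applied, you can verify that the operator reconciled the provider (resource names follow the manifest above):

```bash
# The InfrastructureProvider should report as installed and the controller pod should be Running
kubectl get infrastructureprovider -n capg-system
kubectl get pods -n capg-system
```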
@@ -1,6 +1,8 @@
# Flannel
# CNI

This document describes how to use [Flannel](https://github.com/flannel-io/flannel) as your CNI solution. By default, the CNI plugin is not installed for self-managed clusters, so you have to [install your own](https://cluster-api.sigs.k8s.io/user/quick-start.html#deploy-a-cni-solution) (e.g. Calico with VXLAN)
By default, no CNI plugin is installed when a self-managed cluster is provisioned. As a user, you need to [install your own](https://cluster-api.sigs.k8s.io/user/quick-start.html#deploy-a-cni-solution) CNI (e.g. Calico with VXLAN) for the control plane of the cluster to become ready.

This document describes how to use [Flannel](https://github.com/flannel-io/flannel) as your CNI solution.

## Modify the Cluster resources

3 changes: 3 additions & 0 deletions docs/book/src/self-managed/index.md
@@ -0,0 +1,3 @@
# Self-managed clusters

This section contains information about how you can provision self-managed Kubernetes clusters hosted in GCP's Compute Engine.
