chore: update clickhouse examples #1312

Merged · 1 commit · Dec 13, 2024
6 changes: 3 additions & 3 deletions addons/clickhouse/configs/00_default_overrides.xml.tpl
@@ -68,9 +68,9 @@
</prometheus>
<!-- tls configuration -->
{{- if eq (index $ "TLS_ENABLED") "true" -}}
-{{- $CA_FILE := /etc/pki/tls/ca.pem -}}
-{{- $CERT_FILE := /etc/pki/tls/cert.pem -}}
-{{- $KEY_FILE := /etc/pki/tls/key.pem }}
+{{- $CA_FILE := "/etc/pki/tls/ca.pem" -}}
+{{- $CERT_FILE := "/etc/pki/tls/cert.pem" -}}
+{{- $KEY_FILE := "/etc/pki/tls/key.pem" }}
<protocols>
<prometheus_protocol>
<type>prometheus</type>
@@ -56,9 +56,9 @@
</prometheus>
<!-- tls configuration -->
{{- if eq (index $ "TLS_ENABLED") "true" -}}
-{{- $CA_FILE := /etc/pki/tls/ca.pem -}}
-{{- $CERT_FILE := /etc/pki/tls/cert.pem -}}
-{{- $KEY_FILE := /etc/pki/tls/key.pem -}}
+{{- $CA_FILE := "/etc/pki/tls/ca.pem" -}}
+{{- $CERT_FILE := "/etc/pki/tls/cert.pem" -}}
+{{- $KEY_FILE := "/etc/pki/tls/key.pem" -}}
<protocols>
<prometheus_protocol>
<type>prometheus</type>
6 changes: 3 additions & 3 deletions addons/clickhouse/configs/client.xml.tpl
@@ -2,9 +2,9 @@
<user>admin</user>
<password from_env="CLICKHOUSE_ADMIN_PASSWORD"/>
{{- if eq (index $ "TLS_ENABLED") "true" -}}
-{{- $CA_FILE := /etc/pki/tls/ca.pem -}}
-{{- $CERT_FILE := /etc/pki/tls/cert.pem -}}
-{{- $KEY_FILE := /etc/pki/tls/key.pem }}
+{{- $CA_FILE := "/etc/pki/tls/ca.pem" -}}
+{{- $CERT_FILE := "/etc/pki/tls/cert.pem" -}}
+{{- $KEY_FILE := "/etc/pki/tls/key.pem" }}
<secure>true</secure>
<openSSL>
<client>
2 changes: 1 addition & 1 deletion addons/clickhouse/templates/cmpd-ch-keeper.yaml
@@ -160,7 +160,7 @@ spec:
volumes:
- name: data
tls:
-  volumeName: tls
+  volumeName: tls
mountPath: /etc/pki/tls
caFile: ca.pem
certFile: cert.pem
2 changes: 1 addition & 1 deletion addons/clickhouse/templates/cmpd-clickhouse.yaml
@@ -173,7 +173,7 @@ spec:
targetPort: tcp-secure
port: 9440
tls:
-  volumeName: tls
+  volumeName: tls
mountPath: /etc/pki/tls
caFile: ca.pem
certFile: cert.pem
244 changes: 218 additions & 26 deletions examples/clickhouse/README.md
@@ -2,87 +2,279 @@

ClickHouse is an open-source column-oriented OLAP database management system. Use it to boost your database performance while providing linear scalability and hardware efficiency.

There are two key components in the ClickHouse cluster:

- ClickHouse Server: The ClickHouse server is responsible for processing queries and managing data storage.
- ClickHouse Keeper: The ClickHouse Keeper is responsible for monitoring the health of the ClickHouse servers and performing failover operations when necessary, serving as an alternative to ZooKeeper.


## Features In KubeBlocks

### Lifecycle Management

| Topology | Horizontal<br/>scaling | Vertical <br/>scaling | Expand<br/>volume | Restart | Stop/Start | Configure | Expose | Switchover |
|------------------|------------------------|-----------------------|-------------------|-----------|------------|-----------|--------|------------|
| standalone/cluster | Yes | Yes | Yes | Yes | Yes | Yes | No | N/A |

### Backup and Restore

| Feature | Method | Description |
|-------------|--------|------------|

### Versions

| Major Versions | Description |
|---------------|-------------|
| 24 | 24.8.3 |

## Prerequisites

- Kubernetes cluster >= v1.21
- `kubectl` installed, refer to [K8s Install Tools](https://kubernetes.io/docs/tasks/tools/)
- Helm, refer to [Installing Helm](https://helm.sh/docs/intro/install/)
- KubeBlocks installed and running, refer to [Install KubeBlocks](../docs/prerequisites.md)
- ClickHouse Addon Enabled, refer to [Install Addons](../docs/install-addon.md)

## Examples

### Create

#### Standalone Mode

Create a ClickHouse cluster with only ClickHouse server:

```bash
kubectl apply -f examples/clickhouse/cluster-standalone.yaml
```

It will create only one ClickHouse server pod with the default configuration.
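
To verify, list the pods (a quick check; `clickhouse-cluster` in namespace `default` is the cluster name used throughout these examples, and `app.kubernetes.io/instance` is the label KubeBlocks conventionally sets):

```bash
# expect a single ClickHouse server pod in Running state
kubectl get pods -n default -l app.kubernetes.io/instance=clickhouse-cluster
```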

To connect to the ClickHouse server, you can use the following command:

```bash
clickhouse-client --host <clickhouse-endpoint> --port 9000 --user admin --password
```

> [!NOTE]
> You may find the password in the secret `<clusterName>-clickhouse-account-admin`.

For example, you can retrieve the password with the following command:

```bash
kubectl get secrets clickhouse-cluster-clickhouse-account-admin -n default -oyaml | yq .data.password -r | base64 -d
```

where `clickhouse-cluster-clickhouse-account-admin` is the secret name, following the pattern `<clusterName>-<componentName>-account-<accountName>`, and `password` is the key that stores the password.
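
With the password in hand, you can wire it straight into `clickhouse-client` (a sketch; `<clickhouse-endpoint>` is a placeholder as above, and the secret name is an assumption based on the naming pattern):

```bash
# fetch the admin password and pass it to the client in one go
PASSWORD=$(kubectl get secrets clickhouse-cluster-clickhouse-account-admin -n default -o jsonpath='{.data.password}' | base64 -d)
clickhouse-client --host <clickhouse-endpoint> --port 9000 --user admin --password "$PASSWORD"
```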

#### Cluster Mode

Create a ClickHouse cluster with ClickHouse servers and ch-keeper:

```bash
kubectl apply -f examples/clickhouse/cluster.yaml
```

This example also shows how to override the default accounts' passwords.

Option 1. Override the `passwordConfig` rule used to generate the password:

```yaml
- name: ch-keeper
  replicas: 1
  # Overrides system accounts defined in referenced ComponentDefinition.
  systemAccounts:
    - name: admin # name of the system account
      passwordConfig: # config rule to generate password
        length: 10
        numDigits: 5
        numSymbols: 0
        letterCase: MixedCases
        seed: clickhouse-cluster
```
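
To spot-check that the generated password honors this rule (a sketch; the secret name follows the `<clusterName>-<componentName>-account-<accountName>` pattern described above, assuming the cluster is named `clickhouse-cluster`):

```bash
# should print 10, the configured password length
kubectl get secrets clickhouse-cluster-ch-keeper-account-admin -n default -o jsonpath='{.data.password}' | base64 -d | wc -c
```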

Option 2. Specify a secret that holds the password for the account:

```yaml
- name: clickhouse
  replicas: 2
  # Overrides system accounts defined in referenced ComponentDefinition.
  systemAccounts:
    - name: admin # name of the system account
      secretRef:
        name: udf-account-info
        namespace: default
```

Make sure the secret `udf-account-info` exists in the same namespace as the cluster, and has the following data:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: udf-account-info
data:
  password: <SOME_PASSWORD> # required key; the value must be base64-encoded
type: Opaque
```
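
If you need to create such a secret from scratch, `kubectl create secret` handles the base64 encoding for you (a sketch; substitute your own password):

```bash
kubectl create secret generic udf-account-info -n default --from-literal=password='<SOME_PASSWORD>'
```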

#### Cluster Mode with TLS Enabled

Create a ClickHouse cluster with the default configuration and TLS enabled:

```bash
kubectl apply -f examples/clickhouse/cluster-tls.yaml
```

Compared to the default configuration, the only differences are the `tls` and `issuer` fields in the `cluster-tls.yaml` file:

```yaml
tls: true # enable tls
issuer: # set issuer information
  name: KubeBlocks
```

To connect to the ClickHouse server, you can use the following command:

```bash
clickhouse-client --host <clickhouse-endpoint> --port 9440 --secure --user admin --password
```
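
To inspect the certificate the server presents, you can combine a port-forward with `openssl` (a sketch; `svc/clickhouse-cluster-clickhouse` is an assumed service name and may differ in your setup):

```bash
# forward the secure port locally, then print the certificate subject and issuer
kubectl -n default port-forward svc/clickhouse-cluster-clickhouse 9440:9440 &
sleep 2 # give the port-forward a moment to establish
openssl s_client -connect 127.0.0.1:9440 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
```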

#### Cluster with Multiple Shards

> [!WARNING]
> The sharding mode is an experimental feature at the moment.

Create a ClickHouse cluster with ch-keeper and clickhouse servers with multiple shards:

```bash
kubectl apply -f examples/clickhouse/cluster-sharding.yaml
```

This example creates a ClickHouse cluster with three shards, each of which has two replicas.
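
Once the pods are up, you can confirm the topology from inside ClickHouse via the `system.clusters` table (a sketch; the logical cluster name it reports depends on the addon's configuration):

```bash
clickhouse-client --host <clickhouse-endpoint> --port 9000 --user admin --password \
  --query "SELECT cluster, shard_num, replica_num, host_name FROM system.clusters"
```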

### Horizontal scaling

#### [Scale-out](scale-out.yaml)

Scale out the ClickHouse cluster by adding ONE more replica:

```bash
kubectl apply -f examples/clickhouse/scale-out.yaml
```

#### [Scale-in](scale-in.yaml)

Scale in the ClickHouse cluster by removing ONE replica:

```bash
kubectl apply -f examples/clickhouse/scale-in.yaml
```

#### Scale-in/out using Cluster API

Alternatively, you can set `replicas` in the `spec.componentSpecs` section to the desired non-zero number:

```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: clickhouse-cluster
  namespace: default
spec:
  componentSpecs:
    - name: clickhouse
      replicas: 2 # Update `replicas` to 1 for scaling in, and to 3 for scaling out
```
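
The same change can be applied in place with `kubectl patch` (a sketch; the JSON path assumes `clickhouse` is the first entry in `componentSpecs`):

```bash
kubectl patch cluster clickhouse-cluster -n default --type='json' \
  -p='[{"op":"replace","path":"/spec/componentSpecs/0/replicas","value":3}]'
```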

### [Vertical scaling](verticalscale.yaml)

Scale up or down the CPU and memory requests and limits of the specified components in the cluster:

```bash
kubectl apply -f examples/clickhouse/verticalscale.yaml
```
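
You can follow the progress of the operation through the OpsRequest status (`ops` is the short name KubeBlocks registers for `opsrequests`; adjust if your installation differs):

```bash
# watch until the OpsRequest reports the Succeed phase
kubectl get ops -n default -w
```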

### [Expand volume](volumeexpand.yaml)

> [!NOTE]
> Make sure the storage class you use supports volume expansion.

Check the storage class with following command:

```bash
kubectl get storageclass
```

If the `ALLOWVOLUMEEXPANSION` column is `true`, the storage class supports volume expansion.

Increase the size of the volumes used by the specified components in the cluster:

```bash
kubectl apply -f examples/clickhouse/volumeexpand.yaml
```
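
After the operation completes, the PVCs should report the new capacity (assuming the standard KubeBlocks instance label):

```bash
kubectl get pvc -n default -l app.kubernetes.io/instance=clickhouse-cluster
```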

### [Restart](restart.yaml)

Restart the specified components in the cluster:

```bash
kubectl apply -f examples/clickhouse/restart.yaml
```

### [Stop](stop.yaml)

Stop the cluster. This releases all pods of the cluster, but the storage is retained:

```bash
kubectl apply -f examples/clickhouse/stop.yaml
```

### [Start](start.yaml)

Start the stopped cluster:

```bash
kubectl apply -f examples/clickhouse/start.yaml
```
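
Stop and start are reflected in the cluster phase, which you can watch during either operation (cluster name assumed to be `clickhouse-cluster`):

```bash
# the phase should settle at Stopped after stop, and back at Running after start
kubectl get cluster clickhouse-cluster -n default -w
```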

### Observability

There are various ways to monitor the cluster. Here we use Prometheus and Grafana to demonstrate how to monitor the cluster.

#### Installing the Prometheus Operator

You may skip this step if you have already installed the Prometheus Operator. Otherwise, follow the steps in [How to install the Prometheus Operator](../docs/install-prometheus.md).

#### Create PodMonitor

Apply the `PodMonitor` file to monitor the cluster:

```bash
kubectl apply -f examples/clickhouse/pod-monitor.yaml
```

It sets endpoints as follows:

```yaml
podMetricsEndpoints:
  - path: /metrics
    port: http-metrics
    scheme: http
```

> [!NOTE]
> Make sure the labels are set correctly in the `PodMonitor` file to match the dashboard.
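
To check that the metrics endpoint is actually serving data, you can hit it directly (a sketch; 9363 is ClickHouse's conventional Prometheus port, and the pod name is an assumption):

```bash
kubectl -n default port-forward pod/clickhouse-cluster-clickhouse-0 9363:9363 &
sleep 2 # give the port-forward a moment to establish
curl -s http://127.0.0.1:9363/metrics | head
```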

### Delete

To delete the cluster and all its resources, modify the termination policy and then delete the cluster:

```bash
kubectl patch cluster clickhouse-cluster -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"

kubectl delete cluster clickhouse-cluster

# delete the secret udf-account-info if it exists
# kubectl delete secret udf-account-info
```
