chore: update version
daviderli614 committed Dec 29, 2023
1 parent aae6ff1 commit 955ecdc
Showing 16 changed files with 145 additions and 137 deletions.
29 changes: 15 additions & 14 deletions README.md
@@ -34,7 +34,7 @@ If you want to deploy the GreptimeDB cluster, you can use the following command(
We recommend using the Bitnami etcd [chart](https://github.com/bitnami/charts/blob/main/bitnami/etcd/README.md) to deploy the etcd cluster:

```console
-helm install etcd oci://registry-1.docker.io/bitnamicharts/etcd \
+helm upgrade --install etcd oci://registry-1.docker.io/bitnamicharts/etcd \
--set replicaCount=3 \
--set auth.rbac.create=false \
--set auth.rbac.token.enabled=false \
@@ -44,7 +44,7 @@ If you want to deploy the GreptimeDB cluster, you can use the following command(
2. **Deploy GreptimeDB operator**

```console
-helm install greptimedb-operator greptime/greptimedb-operator -n default
+helm upgrade --install greptimedb-operator greptime/greptimedb-operator -n default
```

3. **Deploy GreptimeDB cluster**
@@ -54,31 +54,32 @@ If you want to deploy the GreptimeDB cluster, you can use the following command(
The default installation uses local storage:

```console
-helm install mycluster greptime/greptimedb-cluster -n default
+helm upgrade --install mycluster greptime/greptimedb-cluster -n default
```

- **Use AWS S3 as backend storage**

Before installation, you must create an AWS S3 bucket; the cluster will use it as backend storage:
```console
-helm install mycluster greptime/greptimedb-cluster -n default \
-  --set storage.s3.bucket="your-bucket" \
-  --set storage.s3.region="region-of-bucket" \
-  --set storage.s3.root="root-directory-of-data" \
-  --set storage.credentials.secretName="s3-credentials" \
-  --set storage.credentials.accessKeyId="your-access-key-id" \
-  --set storage.credentials.secretAccessKey="your-secret-access-key"
+helm upgrade --install mycluster greptime/greptimedb-cluster -n default \
+  --set objectStorage.s3.bucket="your-bucket" \
+  --set objectStorage.s3.region="region-of-bucket" \
+  --set objectStorage.s3.root="root-directory-of-data" \
+  --set objectStorage.credentials.secretName="s3-credentials" \
+  --set objectStorage.credentials.accessKeyId="your-access-key-id" \
+  --set objectStorage.credentials.secretAccessKey="your-secret-access-key" \
+  -n default
```
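The same settings can be kept in a values file instead of a long chain of `--set` flags — a sketch using the renamed `objectStorage` key layout this commit introduces (the file name, bucket, region, and credential values are placeholders):

```yaml
# values-s3.yaml -- hypothetical file name; all values below are placeholders
objectStorage:
  s3:
    bucket: "your-bucket"
    region: "region-of-bucket"
    root: "root-directory-of-data"
  credentials:
    secretName: "s3-credentials"
    accessKeyId: "your-access-key-id"
    secretAccessKey: "your-secret-access-key"
```

Applied with `helm upgrade --install mycluster greptime/greptimedb-cluster -f values-s3.yaml -n default`.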

4. **Use `kubectl port-forward` to access the GreptimeDB cluster**

```console
# You can use the MySQL client to connect the cluster, for example: 'mysql -h 127.0.0.1 -P 4002'.
-kubectl port-forward svc/mycluster-frontend 4002:4002 > connections.out &
+kubectl port-forward -n default svc/mycluster-frontend 4002:4002 > connections.out &
# You can use the PostgreSQL client to connect the cluster, for example: 'psql -h 127.0.0.1 -p 4003 -d public'.
-kubectl port-forward svc/mycluster-frontend 4003:4003 > connections.out &
+kubectl port-forward -n default svc/mycluster-frontend 4003:4003 > connections.out &
```

You can also read and write data by following the [Cluster](https://docs.greptime.com/user-guide/cluster) guide.
@@ -88,13 +89,13 @@ If you want to deploy the GreptimeDB cluster, you can use the following command(
If you want to re-deploy the service because the configurations changed, you can:

```console
-helm upgrade <your-release> <chart> --values <your-values-file> -n <namespace>
+helm upgrade --install <your-release> <chart> --values <your-values-file> -n <namespace>
```

For example:

```console
-helm upgrade mycluster greptime/greptimedb --values ./values.yaml
+helm upgrade --install mycluster greptime/greptimedb --values ./values.yaml
```

### Uninstallation
4 changes: 2 additions & 2 deletions charts/greptimedb-cluster/Chart.yaml
@@ -2,5 +2,5 @@ apiVersion: v2
name: greptimedb-cluster
description: A Helm chart for deploying GreptimeDB cluster in Kubernetes
type: application
-version: 0.1.8
-appVersion: 0.4.4
+version: 0.1.9
+appVersion: 0.5.0
38 changes: 21 additions & 17 deletions charts/greptimedb-cluster/README.md
@@ -2,7 +2,7 @@

A Helm chart for deploying GreptimeDB cluster in Kubernetes

-![Version: 0.1.8](https://img.shields.io/badge/Version-0.1.8-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 0.4.4](https://img.shields.io/badge/AppVersion-0.4.4-informational?style=flat-square)
+![Version: 0.1.9](https://img.shields.io/badge/Version-0.1.9-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 0.5.0](https://img.shields.io/badge/AppVersion-0.5.0-informational?style=flat-square)

## Source Code

@@ -17,7 +17,7 @@ A Helm chart for deploying GreptimeDB cluster in Kubernetes
2. Install the etcd cluster:

```console
-helm install etcd oci://registry-1.docker.io/bitnamicharts/etcd \
+helm upgrade --install etcd oci://registry-1.docker.io/bitnamicharts/etcd \
--set replicaCount=3 \
--set auth.rbac.create=false \
--set auth.rbac.token.enabled=false \
@@ -29,21 +29,22 @@ A Helm chart for deploying GreptimeDB cluster in Kubernetes
The default installation uses local storage:

```console
-helm install mycluster greptime/greptimedb-cluster -n default
+helm upgrade --install mycluster greptime/greptimedb-cluster -n default
```

### Use AWS S3 as backend storage

Before installation, you must create an AWS S3 bucket; the cluster will use it as backend storage:

```console
-helm install mycluster greptime/greptimedb-cluster \
-  --set storage.s3.bucket="your-bucket" \
-  --set storage.s3.region="region-of-bucket" \
-  --set storage.s3.root="root-directory-of-data" \
-  --set storage.credentials.secretName="s3-credentials" \
-  --set storage.credentials.accessKeyId="your-access-key-id" \
-  --set storage.credentials.secretAccessKey="your-secret-access-key"
+helm upgrade --install mycluster greptime/greptimedb-cluster \
+  --set objectStorage.s3.bucket="your-bucket" \
+  --set objectStorage.s3.region="region-of-bucket" \
+  --set objectStorage.s3.root="root-directory-of-data" \
+  --set objectStorage.credentials.secretName="s3-credentials" \
+  --set objectStorage.credentials.accessKeyId="your-access-key-id" \
+  --set objectStorage.credentials.secretAccessKey="your-secret-access-key" \
+  -n default
```

If you set `objectStorage.s3.root` to `mycluster`, then the data layout will be:
@@ -65,7 +66,7 @@ helm uninstall mycluster -n default
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| datanode.config | string | `""` | Extra datanode config in toml format. |
-| datanode.podTemplate | object | `{"affinity":{},"annotations":{},"labels":{},"main":{"args":[],"command":[],"env":[],"image":"","resources":{"limits":{},"requests":{}}},"nodeSelector":{},"serviceaccount":{"annotations":{},"create":false,"name":"datanode-sa"},"tolerations":[]}` | The pod template for datanode |
+| datanode.podTemplate | object | `{"affinity":{},"annotations":{},"labels":{},"main":{"args":[],"command":[],"env":[],"image":"","resources":{"limits":{},"requests":{}}},"nodeSelector":{},"serviceaccount":{"annotations":{},"create":false},"tolerations":[]}` | The pod template for datanode |
| datanode.podTemplate.affinity | object | `{}` | The pod affinity |
| datanode.podTemplate.annotations | object | `{}` | The annotations to be created to the pod. |
| datanode.podTemplate.labels | object | `{}` | The labels to be created to the pod. |
@@ -79,13 +80,13 @@ helm uninstall mycluster -n default
| datanode.podTemplate.nodeSelector | object | `{}` | The pod node selector |
| datanode.podTemplate.serviceaccount.annotations | object | `{}` | The annotations for datanode serviceaccount |
| datanode.podTemplate.serviceaccount.create | bool | `false` | Create a service account |
-| datanode.podTemplate.serviceaccount.name | string | `"datanode-sa"` | The serviceaccount name |
| datanode.podTemplate.tolerations | list | `[]` | The pod tolerations |
| datanode.replicas | int | `3` | Datanode replicas |
+| datanode.storage.dataHome | string | `"/data/greptimedb"` | The dataHome directory, default is "/data/greptimedb/" |
| datanode.storage.storageClassName | string | `nil` | Storage class for datanode persistent volume |
| datanode.storage.storageRetainPolicy | string | `"Retain"` | Storage retain policy for datanode persistent volume |
| datanode.storage.storageSize | string | `"10Gi"` | Storage size for datanode persistent volume |
-| datanode.storage.walDir | string | `"/tmp/greptimedb/wal"` | The wal directory of the storage, default is "/tmp/greptimedb/wal" |
+| datanode.storage.walDir | string | `"/data/greptimedb/wal"` | The wal directory of the storage, default is "/data/greptimedb/wal" |
| frontend.config | string | `""` | Extra frontend config in toml format. |
| frontend.podTemplate | object | `{"affinity":{},"annotations":{},"labels":{},"main":{"args":[],"command":[],"env":[],"image":"","resources":{"limits":{},"requests":{}}},"nodeSelector":{},"serviceAccountName":"","tolerations":[]}` | The pod template for frontend |
| frontend.podTemplate.affinity | object | `{}` | The pod affinity |
@@ -109,10 +110,10 @@ helm uninstall mycluster -n default
| image.pullSecrets | list | `[]` | The image pull secrets |
| image.registry | string | `"docker.io"` | The image registry |
| image.repository | string | `"greptime/greptimedb"` | The image repository |
-| image.tag | string | `"v0.4.4"` | The image tag |
+| image.tag | string | `"v0.5.0"` | The image tag |
| initializer.registry | string | `"docker.io"` | Initializer image registry |
| initializer.repository | string | `"greptime/greptimedb-initializer"` | Initializer image repository |
-| initializer.tag | string | `"0.1.0-alpha.17"` | Initializer image tag |
+| initializer.tag | string | `"0.1.0-alpha.19"` | Initializer image tag |
| meta.config | string | `""` | Extra Meta config in toml format. |
| meta.etcdEndpoints | string | `"etcd.default.svc.cluster.local:2379"` | Meta etcd endpoints |
| meta.podTemplate | object | `{"affinity":{},"annotations":{},"labels":{},"main":{"args":[],"command":[],"env":[],"image":"","resources":{"limits":{},"requests":{}}},"nodeSelector":{},"serviceAccountName":"","tolerations":[]}` | The pod template for meta |
@@ -131,9 +132,12 @@ helm uninstall mycluster -n default
| meta.podTemplate.tolerations | list | `[]` | The pod tolerations |
| meta.replicas | int | `1` | Meta replicas |
| mysqlServicePort | int | `4002` | GreptimeDB mysql service port |
+| objectStorage | object | `{"oss":{},"s3":{}}` | Configure to object storage |
| openTSDBServicePort | int | `4242` | GreptimeDB opentsdb service port |
| postgresServicePort | int | `4003` | GreptimeDB postgres service port |
-| prometheusMonitor | object | `{}` | Configure to prometheus podmonitor |
+| prometheusMonitor | object | `{"enabled":false,"interval":"30s","labels":{"release":"prometheus"}}` | Configure to prometheus PodMonitor |
+| prometheusMonitor.enabled | bool | `false` | Create PodMonitor resource for scraping metrics using PrometheusOperator |
+| prometheusMonitor.interval | string | `"30s"` | Interval at which metrics should be scraped |
+| prometheusMonitor.labels | object | `{"release":"prometheus"}` | Add labels to the PodMonitor |
| resources.limits | object | `{"cpu":"500m","memory":"512Mi"}` | The resources limits for the container |
| resources.requests | object | `{"cpu":"500m","memory":"512Mi"}` | The requested resources for the container |
-| storage | object | `{"local":{},"oss":{},"s3":{}}` | Configure to storage |
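The new `prometheusMonitor` defaults documented in this table can be enabled with a small values override — a sketch assuming the Prometheus Operator is installed in the cluster and selects resources labeled `release: prometheus`:

```yaml
prometheusMonitor:
  # Create a PodMonitor so the Prometheus Operator scrapes the cluster pods
  enabled: true
  interval: "30s"
  labels:
    release: prometheus
```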
19 changes: 10 additions & 9 deletions charts/greptimedb-cluster/README.md.gotmpl
@@ -16,7 +16,7 @@
2. Install the etcd cluster:

```console
-helm install etcd oci://registry-1.docker.io/bitnamicharts/etcd \
+helm upgrade --install etcd oci://registry-1.docker.io/bitnamicharts/etcd \
--set replicaCount=3 \
--set auth.rbac.create=false \
--set auth.rbac.token.enabled=false \
@@ -29,21 +29,22 @@
The default installation uses local storage:

```console
-helm install mycluster greptime/greptimedb-cluster -n default
+helm upgrade --install mycluster greptime/greptimedb-cluster -n default
```

### Use AWS S3 as backend storage

Before installation, you must create an AWS S3 bucket; the cluster will use it as backend storage:

```console
-helm install mycluster greptime/greptimedb-cluster \
-  --set storage.s3.bucket="your-bucket" \
-  --set storage.s3.region="region-of-bucket" \
-  --set storage.s3.root="root-directory-of-data" \
-  --set storage.credentials.secretName="s3-credentials" \
-  --set storage.credentials.accessKeyId="your-access-key-id" \
-  --set storage.credentials.secretAccessKey="your-secret-access-key"
+helm upgrade --install mycluster greptime/greptimedb-cluster \
+  --set objectStorage.s3.bucket="your-bucket" \
+  --set objectStorage.s3.region="region-of-bucket" \
+  --set objectStorage.s3.root="root-directory-of-data" \
+  --set objectStorage.credentials.secretName="s3-credentials" \
+  --set objectStorage.credentials.accessKeyId="your-access-key-id" \
+  --set objectStorage.credentials.secretAccessKey="your-secret-access-key" \
+  -n default
```

If you set `objectStorage.s3.root` to `mycluster`, then the data layout will be:
30 changes: 13 additions & 17 deletions charts/greptimedb-cluster/templates/cluster.yaml
@@ -154,9 +154,8 @@ spec:
storageClassName: {{ .Values.datanode.storage.storageClassName }}
storageSize: {{ .Values.datanode.storage.storageSize }}
storageRetainPolicy: {{ .Values.datanode.storage.storageRetainPolicy }}
-    {{- if .Values.datanode.storage.walDir }}
+    dataHome: {{ .Values.datanode.storage.dataHome }}
     walDir: {{ .Values.datanode.storage.walDir }}
-    {{- end }}
{{- if (and .Values.prometheusMonitor (eq .Values.prometheusMonitor.enabled true ))}}
prometheusMonitor: {{- toYaml .Values.prometheusMonitor | nindent 4 }}
{{- end }}
@@ -168,23 +167,20 @@ spec:
initializer:
image: '{{ .Values.initializer.registry }}/{{ .Values.initializer.repository }}:{{ .Values.initializer.tag }}'
storage:
-  {{- if .Values.storage.s3 }}
+  {{- if .Values.objectStorage.s3 }}
   s3:
-    bucket: {{ .Values.storage.s3.bucket }}
-    region: {{ .Values.storage.s3.region }}
-    root: {{ .Values.storage.s3.root }}
-    secretName: {{ .Values.storage.credentials.secretName }}
-    endpoint: {{ .Values.storage.s3.endpoint }}
-  {{- else if .Values.storage.local }}
-  local:
-    directory: {{ .Values.storage.local.directory }}
-  {{- else if .Values.storage.oss }}
+    bucket: {{ .Values.objectStorage.s3.bucket }}
+    region: {{ .Values.objectStorage.s3.region }}
+    root: {{ .Values.objectStorage.s3.root }}
+    secretName: {{ .Values.objectStorage.credentials.secretName }}
+    endpoint: {{ .Values.objectStorage.s3.endpoint }}
+  {{- else if .Values.objectStorage.oss }}
   oss:
-    bucket: {{ .Values.storage.oss.bucket }}
-    region: {{ .Values.storage.oss.region }}
-    root: {{ .Values.storage.oss.root }}
-    secretName: {{ .Values.storage.credentials.secretName }}
-    endpoint: {{ .Values.storage.oss.endpoint }}
+    bucket: {{ .Values.objectStorage.oss.bucket }}
+    region: {{ .Values.objectStorage.oss.region }}
+    root: {{ .Values.objectStorage.oss.root }}
+    secretName: {{ .Values.objectStorage.credentials.secretName }}
+    endpoint: {{ .Values.objectStorage.oss.endpoint }}
   {{- else }}
   {}
   {{- end }}
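For reference, with the renamed `objectStorage.s3` values set, the template above would render a `storage` section roughly like this (a sketch; the bucket, region, and endpoint values are hypothetical placeholders):

```yaml
# Rendered `storage` section (sketch) when objectStorage.s3 is set
storage:
  s3:
    bucket: your-bucket
    region: region-of-bucket
    root: root-directory-of-data
    secretName: s3-credentials
    endpoint: s3.region-of-bucket.amazonaws.com  # hypothetical endpoint
```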
54 changes: 25 additions & 29 deletions charts/greptimedb-cluster/values.yaml
@@ -4,10 +4,18 @@ image:
# -- The image repository
repository: greptime/greptimedb
# -- The image tag
-  tag: "v0.4.4"
+  tag: "v0.5.0"
# -- The image pull secrets
pullSecrets: []

+initializer:
+  # -- Initializer image registry
+  registry: docker.io
+  # -- Initializer image repository
+  repository: greptime/greptimedb-initializer
+  # -- Initializer image tag
+  tag: 0.1.0-alpha.19

resources:
# -- The requested resources for the container
requests:
@@ -169,8 +177,6 @@ datanode:
serviceaccount:
# -- Create a service account
create: false
-      # -- The serviceaccount name
-      name: datanode-sa
# -- The annotations for datanode serviceaccount
annotations: {}

@@ -181,16 +187,10 @@ datanode:
storageSize: 10Gi
# -- Storage retain policy for datanode persistent volume
storageRetainPolicy: Retain
-    # -- The wal directory of the storage, default is "/tmp/greptimedb/wal"
-    walDir: "/tmp/greptimedb/wal"
-
-initializer:
-  # -- Initializer image registry
-  registry: docker.io
-  # -- Initializer image repository
-  repository: greptime/greptimedb-initializer
-  # -- Initializer image tag
-  tag: 0.1.0-alpha.17
+    # -- The dataHome directory, default is "/data/greptimedb/"
+    dataHome: "/data/greptimedb"
+    # -- The wal directory of the storage, default is "/data/greptimedb/wal"
+    walDir: "/data/greptimedb/wal"

# -- GreptimeDB http service port
httpServicePort: 4000
@@ -207,27 +207,23 @@ postgresServicePort: 4003
# -- GreptimeDB opentsdb service port
openTSDBServicePort: 4242

-# -- Configure to prometheus podmonitor
-prometheusMonitor: {}
-# enabled: false
-# path: "/metrics"
-# port: "http"
-# interval: "30s"
-# honorLabels: true
-# labelsSelector:
-#   release: prometheus

-# -- Configure to storage
-storage:
+# -- Configure to prometheus PodMonitor
+prometheusMonitor:
+  # -- Create PodMonitor resource for scraping metrics using PrometheusOperator
+  enabled: false
+  # -- Interval at which metrics should be scraped
+  interval: "30s"
+  # -- Add labels to the PodMonitor
+  labels:
+    release: prometheus

+# -- Configure to object storage
+objectStorage:
  # credentials:
  #   secretName: "credentials"
  #   accessKeyId: "you-should-set-the-access-key-id-here"
  #   secretAccessKey: "you-should-set-the-secret-access-key-here"

-# configure to use local storage.
-local: {}
-#   directory: /tmp/greptimedb

# configure to use s3 storage.
s3: {}
# bucket: "bucket-name"
4 changes: 2 additions & 2 deletions charts/greptimedb-operator/Chart.yaml
@@ -2,8 +2,8 @@ apiVersion: v2
kubeVersion: ">=1.18.0-0"
description: The greptimedb-operator Helm chart for Kubernetes
name: greptimedb-operator
-appVersion: 0.1.0-alpha.17
-version: 0.1.4
+appVersion: 0.1.0-alpha.19
+version: 0.1.5
type: application
home: https://github.com/GreptimeTeam/greptimedb-operator
sources: