From 2bf74b0adc45a81e60813b82667d8c7d36e8ba95 Mon Sep 17 00:00:00 2001 From: yuanyuan zhang Date: Mon, 30 Dec 2024 16:08:43 +0800 Subject: [PATCH] docs: update kbcli cluster create, yaml, and monitoring docs in release-1.0 --- ...e-and-connect-an-apecloud-mysql-cluster.md | 92 ++-- .../delete-mysql-cluster.md | 7 +- .../kubeblocks-for-apecloud-mysql.md | 2 +- .../manage-elasticsearch.md | 94 ++-- .../create-a-kafka-cluster.md | 316 ++++++------- .../delete-kafka-cluster.md | 7 +- .../kubeblocks-for-milvus/manage-milvus.md | 422 ++++++++++-------- ...create-and-connect-to-a-mongodb-cluster.md | 75 ++-- .../delete-mongodb-cluster.md | 7 +- .../kubeblocks-for-mongodb.md | 2 +- .../create-and-connect-a-mysql-cluster.md | 89 ++-- .../delete-mysql-cluster.md | 7 +- .../kubeblocks-for-mysql-community-edition.md | 2 +- ...create-and-connect-a-postgresql-cluster.md | 90 ++-- .../delete-a-postgresql-cluster.md | 7 +- .../kubeblocks-for-postgresql.md | 2 +- .../create-pulsar-cluster-on-kubeblocks.md | 214 +++++---- .../delete-a-pulsar-cluster.md | 7 +- .../kubeblocks-for-pulsar.md | 2 +- .../kubeblocks-for-qdrant/manage-qdrant.md | 92 ++-- .../manage-rabbitmq.md | 95 ++-- .../create-and-connect-a-redis-cluster.md | 106 +++-- .../delete-a-redis-cluster.md | 7 +- .../kubeblocks-for-redis.md | 2 +- .../manage-starrocks.md | 66 ++- .../observability/monitor-database.md | 135 +++++- ...e-and-connect-an-apecloud-mysql-cluster.md | 92 ++-- .../delete-mysql-cluster.md | 3 +- .../kubeblocks-for-apecloud-mysql.md | 2 +- .../manage-elasticsearch.md | 84 ++-- .../create-a-kafka-cluster.md | 272 +++++------ .../delete-kafka-cluster.md | 3 +- .../kubeblocks-for-kafka.md | 2 +- .../kubeblocks-for-milvus/manage-milvus.md | 416 +++++++++-------- ...create-and-connect-to-a-mongodb-cluster.md | 73 +-- .../delete-a-mongodb-cluster.md | 3 +- .../kubeblocks-for-mongodb.md | 2 +- .../create-and-connect-a-mysql-cluster.md | 93 ++-- .../delete-mysql-cluster.md | 3 +- .../kubeblocks-for-mysql-community-edition.md | 2 +- ...create-and-connect-a-postgresql-cluster.md | 86 ++-- .../delete-a-postgresql-cluster.md | 3 +- .../kubeblocks-for-postgresql.md | 2 +- .../create-pulsar-cluster-on-kb.md | 210 +++++---- .../delete-pulsar-cluster.md | 3 +- .../kubeblocks-for-pulsar.md | 2 +- .../kubeblocks-for-qdrant/manage-qdrant.md | 90 ++-- .../manage-rabbitmq.md | 79 +++- .../create-and-connect-to-a-redis-cluster.md | 100 +++-- .../delete-a-redis-cluster.md | 3 +- .../kubeblocks-for-redis.md | 2 +- .../manage-starrocks.md | 59 +-- .../observability/monitor-database.md | 133 +++++- 53 files changed, 1988 insertions(+), 1781 deletions(-) diff --git a/docs/user_docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-an-apecloud-mysql-cluster.md b/docs/user_docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-an-apecloud-mysql-cluster.md index fa35d59de86..8768cd5f446 100644 --- a/docs/user_docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-an-apecloud-mysql-cluster.md +++ b/docs/user_docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-an-apecloud-mysql-cluster.md @@ -109,67 +109,55 @@ KubeBlocks supports creating two types of ApeCloud MySQL clusters: Standalone an ```yaml cat < -KubeBlocks implements a `Cluster` CRD to define a cluster. Here is an example of creating an Elasticsearch cluster. +KubeBlocks implements a `Cluster` CRD to define a cluster. Here is an example of creating an Elasticsearch cluster with single node. 
For more examples, refer to [the GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/elasticsearch). ```yaml cat < -1. Create a Kafka cluster. If you only have one node for deploying a cluster with multiple replicas, set `spec.affinity.topologyKeys` as `null`. But for a production environment, it is not recommended to deploy all replicas on one node, which may decrease the cluster availability. +1. Create a Kafka cluster. If you only have one node for deploying a cluster with multiple replicas, set `spec.componentSpecs.affinity.topologyKeys` as `null`. But for a production environment, it is not recommended to deploy all replicas on one node, which may decrease the cluster availability. + + For more cluster examples, refer to [the GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/kafka). * Create a Kafka cluster in combined mode. - ```yaml - # create kafka in combined mode - kubectl apply -f - <-postgresql`. For example, if your cluster name is `mycluster`, the value would be `mycluster-postgresql`. Replace `mycluster` with your actual cluster name as needed. | + | `spec.componentSpecs.replicas` | It specifies the number of replicas of the component. | + | `spec.componentSpecs.resources` | It specifies the resources required by the Component. | + | `spec.componentSpecs.volumeClaimTemplates` | It specifies a list of PersistentVolumeClaim templates that define the storage requirements for the Component. | + | `spec.componentSpecs.volumeClaimTemplates.name` | It refers to the name of a volumeMount defined in `componentDefinition.spec.runtime.containers[*].volumeMounts`. | + | `spec.componentSpecs.volumeClaimTemplates.spec.storageClassName` | It is the name of the StorageClass required by the claim. If not specified, the StorageClass annotated with `storageclass.kubernetes.io/is-default-class=true` will be used by default. | + | `spec.componentSpecs.volumeClaimTemplates.spec.resources.storage` | You can set the storage size as needed. | + + For more API fields and descriptions, refer to the [API Reference](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster). KubeBlocks operator watches for the `Cluster` CRD and creates the cluster and all dependent resources. You can get all the resources created by the cluster with `kubectl get all,secret,rolebinding,serviceaccount -l app.kubernetes.io/instance=mycluster -n demo`. @@ -218,7 +214,7 @@ KubeBlocks supports creating two types of PostgreSQL clusters: Standalone and Re If you only have one node for deploying a Replication Cluster, set the `--topology-keys` as `null` when creating a Replication Cluster. But you should note that for a production environment, it is not recommended to deploy all replicas on one node, which may decrease the cluster availability. ```bash - kbcli cluster create postgresql mycluster --replicas=2 --availability-policy='none' -n demo + kbcli cluster create postgresql mycluster --replicas=2 --topology-keys=null -n demo ``` 2. Verify whether this cluster is created successfully. 
diff --git a/docs/user_docs/kubeblocks-for-postgresql/cluster-management/delete-a-postgresql-cluster.md b/docs/user_docs/kubeblocks-for-postgresql/cluster-management/delete-a-postgresql-cluster.md index fc852418627..cbf896fb860 100644 --- a/docs/user_docs/kubeblocks-for-postgresql/cluster-management/delete-a-postgresql-cluster.md +++ b/docs/user_docs/kubeblocks-for-postgresql/cluster-management/delete-a-postgresql-cluster.md @@ -21,10 +21,9 @@ The termination policy determines how a cluster is deleted. | **terminationPolicy** | **Deleting Operation** | |:----------------------|:-------------------------------------------------| -| `DoNotTerminate` | `DoNotTerminate` blocks delete operation. | -| `Halt` | `Halt` deletes Cluster resources like Pods and Services but retains Persistent Volume Claims (PVCs), allowing for data preservation while stopping other operations. Halt policy is deprecated in v0.9.1 and will have same meaning as DoNotTerminate. | -| `Delete` | `Delete` extends the Halt policy by also removing PVCs, leading to a thorough cleanup while removing all persistent data. | -| `WipeOut` | `WipeOut` deletes all Cluster resources, including volume snapshots and backups in external storage. This results in complete data removal and should be used cautiously, especially in non-production environments, to avoid irreversible data loss. | +| `DoNotTerminate` | `DoNotTerminate` prevents deletion of the Cluster. This policy ensures that all resources remain intact. | +| `Delete` | `Delete` deletes Cluster resources like Pods, Services, and Persistent Volume Claims (PVCs), leading to a thorough cleanup while removing all persistent data. | +| `WipeOut` | `WipeOut` is an aggressive policy that deletes all Cluster resources, including volume snapshots and backups in external storage. This results in complete data removal and should be used cautiously, primarily in non-production environments to avoid irreversible data loss. | To check the termination policy, execute the following command. diff --git a/docs/user_docs/kubeblocks-for-postgresql/kubeblocks-for-postgresql.md b/docs/user_docs/kubeblocks-for-postgresql/kubeblocks-for-postgresql.md index e96e3a02960..4030a15dce3 100644 --- a/docs/user_docs/kubeblocks-for-postgresql/kubeblocks-for-postgresql.md +++ b/docs/user_docs/kubeblocks-for-postgresql/kubeblocks-for-postgresql.md @@ -7,7 +7,7 @@ sidebar_position: 1 # KubeBlocks for PostgreSQL -This tutorial illustrates how to create and manage a PostgreSQL cluster by `kbcli`, `kubectl` or a YAML file. You can find the YAML examples in [the GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/release-0.9/examples/postgresql). +This tutorial illustrates how to create and manage a PostgreSQL cluster by `kbcli`, `kubectl` or a YAML file. You can find the YAML examples in [the GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/postgresql). 
* [Introduction](./introduction/introduction.md) * [Cluster Management](./cluster-management/create-and-connect-a-postgresql-cluster.md) diff --git a/docs/user_docs/kubeblocks-for-pulsar/cluster-management/create-pulsar-cluster-on-kubeblocks.md b/docs/user_docs/kubeblocks-for-pulsar/cluster-management/create-pulsar-cluster-on-kubeblocks.md index d69106df377..f7f4b0a076b 100644 --- a/docs/user_docs/kubeblocks-for-pulsar/cluster-management/create-pulsar-cluster-on-kubeblocks.md +++ b/docs/user_docs/kubeblocks-for-pulsar/cluster-management/create-pulsar-cluster-on-kubeblocks.md @@ -78,107 +78,123 @@ Refer to the [Pulsar official document](https://pulsar.apache.org/docs/3.1.x/) f ## Create Pulsar cluster -1. Create the Pulsar cluster template file `values-production.yaml` for `helm` locally. - - Copy the following information to the local file `values-production.yaml`. - - ```bash - ## Bookies configuration - bookies: - resources: - limits: - memory: 8Gi - requests: - cpu: 2 - memory: 8Gi - - persistence: - data: - storageClassName: kb-default-sc - size: 128Gi - log: - storageClassName: kb-default-sc - size: 64Gi - - ## Zookeeper configuration - zookeeper: - resources: - limits: - memory: 2Gi - requests: - cpu: 1 - memory: 2Gi - - persistence: - data: - storageClassName: kb-default-sc - size: 20Gi - log: - storageClassName: kb-default-sc - size: 20Gi - - broker: - replicaCount: 3 - resources: - limits: - memory: 8Gi - requests: - cpu: 2 - memory: 8Gi +1. Create a Pulsar cluster in basic mode. For other cluster modes, check out the examples provided in [the GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/pulsar). + + ```yaml + cat <,bookies.persistence.log.storageClassName=,zookeeper.persistence.data.storageClassName=,zookeeper.persistence.log.storageClassName= --namespace=demo - ``` - - You can specify the storage name ``. - -3. Verify the cluster created. + | Field | Definition | + |---------------------------------------|--------------------------------------| + | `spec.terminationPolicy` | It is the policy of cluster termination. Valid values are `DoNotTerminate`, `Delete`, `WipeOut`. For the detailed definition, you can refer to [Termination Policy](./delete-a-pulsar-cluster.md#termination-policy). | + | `spec.clusterDef` | It specifies the name of the ClusterDefinition to use when creating a Cluster. **Note: DO NOT UPDATE THIS FIELD**. The value must be `pulsar` to create a Pulsar Cluster. | + | `spec.topology` | It specifies the name of the ClusterTopology to be used when creating the Cluster. | + | `spec.services` | It defines a list of additional Services that are exposed by a Cluster. | + | `spec.componentSpecs` | It is the list of ClusterComponentSpec objects that define the individual Components that make up a Cluster. This field allows customized configuration of each component within a cluster. | + | `spec.componentSpecs.serviceVersion` | It specifies the version of the Service expected to be provisioned by this Component. Valid options are [2.11.2,3.0.2]. | + | `spec.componentSpecs.disableExporter` | It determines whether metrics exporter information is annotated on the Component's headless Service. Valid options are [true, false]. | + | `spec.componentSpecs.replicas` | It specifies the amount of replicas of the component. | + | `spec.componentSpecs.resources` | It specifies the resources required by the Component. 
| + | `spec.componentSpecs.volumeClaimTemplates` | It specifies a list of PersistentVolumeClaim templates that define the storage requirements for the Component. | + | `spec.componentSpecs.volumeClaimTemplates.name` | It refers to the name of a volumeMount defined in `componentDefinition.spec.runtime.containers[*].volumeMounts`. | + | `spec.componentSpecs.volumeClaimTemplates.spec.storageClassName` | It is the name of the StorageClass required by the claim. If not specified, the StorageClass annotated with `storageclass.kubernetes.io/is-default-class=true` will be used by default. | + | `spec.componentSpecs.volumeClaimTemplates.spec.resources.storage` | You can set the storage size as needed. | + + For more API fields and descriptions, refer to the [API Reference](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster). + +2. Verify the cluster created. ```bash kubectl get cluster mycluster -n demo diff --git a/docs/user_docs/kubeblocks-for-pulsar/cluster-management/delete-a-pulsar-cluster.md b/docs/user_docs/kubeblocks-for-pulsar/cluster-management/delete-a-pulsar-cluster.md index da885db967d..51e4df68fdf 100644 --- a/docs/user_docs/kubeblocks-for-pulsar/cluster-management/delete-a-pulsar-cluster.md +++ b/docs/user_docs/kubeblocks-for-pulsar/cluster-management/delete-a-pulsar-cluster.md @@ -21,10 +21,9 @@ The termination policy determines how a cluster is deleted. | **terminationPolicy** | **Deleting Operation** | |:----------------------|:-------------------------------------------------| -| `DoNotTerminate` | `DoNotTerminate` blocks delete operation. | -| `Halt` | `Halt` deletes Cluster resources like Pods and Services but retains Persistent Volume Claims (PVCs), allowing for data preservation while stopping other operations. Halt policy is deprecated in v0.9.1 and will have same meaning as DoNotTerminate. | -| `Delete` | `Delete` extends the Halt policy by also removing PVCs, leading to a thorough cleanup while removing all persistent data. | -| `WipeOut` | `WipeOut` deletes all Cluster resources, including volume snapshots and backups in external storage. This results in complete data removal and should be used cautiously, especially in non-production environments, to avoid irreversible data loss. | +| `DoNotTerminate` | `DoNotTerminate` prevents deletion of the Cluster. This policy ensures that all resources remain intact. | +| `Delete` | `Delete` deletes Cluster resources like Pods, Services, and Persistent Volume Claims (PVCs), leading to a thorough cleanup while removing all persistent data. | +| `WipeOut` | `WipeOut` is an aggressive policy that deletes all Cluster resources, including volume snapshots and backups in external storage. This results in complete data removal and should be used cautiously, primarily in non-production environments to avoid irreversible data loss. | To check the termination policy, execute the following command. diff --git a/docs/user_docs/kubeblocks-for-pulsar/kubeblocks-for-pulsar.md b/docs/user_docs/kubeblocks-for-pulsar/kubeblocks-for-pulsar.md index cb19cd2ec36..8822b4abac8 100644 --- a/docs/user_docs/kubeblocks-for-pulsar/kubeblocks-for-pulsar.md +++ b/docs/user_docs/kubeblocks-for-pulsar/kubeblocks-for-pulsar.md @@ -7,7 +7,7 @@ sidebar_position: 1 # KubeBlocks for Pulsar -This tutorial illustrates how to create and manage a Pulsar cluster by `kbcli`, `kubectl` or a YAML file. You can find the YAML examples in [the GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/release-0.9/examples/pulsar). 
+This tutorial illustrates how to create and manage a Pulsar cluster by `kbcli`, `kubectl` or a YAML file. You can find the YAML examples in [the GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/pulsar). * [Cluster Management](./cluster-management/create-pulsar-cluster-on-kubeblocks.md) * [Configuration](./configuration/configuration.md) diff --git a/docs/user_docs/kubeblocks-for-qdrant/manage-qdrant.md b/docs/user_docs/kubeblocks-for-qdrant/manage-qdrant.md index 11b76d2ce83..ca6221238aa 100644 --- a/docs/user_docs/kubeblocks-for-qdrant/manage-qdrant.md +++ b/docs/user_docs/kubeblocks-for-qdrant/manage-qdrant.md @@ -13,7 +13,7 @@ import TabItem from '@theme/TabItem'; The popularity of generative AI (Generative AI) has aroused widespread attention and completely ignited the vector database (Vector Database) market. Qdrant (read: quadrant) is a vector similarity search engine and vector database. It provides a production-ready service with a convenient API to store, search, and manage points—vectors with an additional payload Qdrant is tailored to extended filtering support. It makes it useful for all sorts of neural-network or semantic-based matching, faceted search, and other applications. -KubeBlocks supports the management of Qdrant. This tutorial illustrates how to create and manage a Qdrant cluster by `kbcli`, `kubectl` or a YAML file. You can find the YAML examples in [the GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/release-0.9/examples/qdrant). +KubeBlocks supports the management of Qdrant. This tutorial illustrates how to create and manage a Qdrant cluster by `kbcli`, `kubectl` or a YAML file. You can find the YAML examples in [the GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/qdrant). ## Before you start @@ -38,63 +38,56 @@ KubeBlocks implements a `Cluster` CRD to define a cluster. Here is an example of ```yaml cat < -KubeBlocks implements a `Cluster` CRD to define a cluster. Here is an example of creating a Standalone. +KubeBlocks implements a `Cluster` CRD to define a cluster. Here is an example of creating a Replcation. KubeBlocks also supports creating a Redis cluster in other modes. You can refer to the examples provided in the [GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/redis). ```yaml cat < + + + + You can also find the latest example YAML file in the [KubeBlocks Addons repo](https://github.com/apecloud/kubeblocks-addons/blob/main/examples/apecloud-mysql/pod-monitor.yaml). + + ```yaml + kubectl apply -f - < + + + + You can also find the latest example YAML file in the [KubeBlocks Addons repo](https://github.com/apecloud/kubeblocks-addons/blob/main/examples/mysql/pod-monitor.yaml). + + ```yaml + kubectl apply -f - < + + + + You can also find the latest example YAML file in the [KubeBlocks Addons repo](https://github.com/apecloud/kubeblocks-addons/blob/main/examples/postgresql/pod-monitor.yaml). + + ```yaml + kubectl apply -f - < + + + + You can also find the latest example YAML file in the [KubeBlocks Addons repo](https://github.com/apecloud/kubeblocks-addons/blob/main/examples/redis/pod-monitor.yaml). ```yaml kubectl apply -f - < + + + 3. Access the Grafana dashboard. Log in to the Grafana dashboard and import the dashboard. 
diff --git a/i18n/zh-cn/user-docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-an-apecloud-mysql-cluster.md b/i18n/zh-cn/user-docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-an-apecloud-mysql-cluster.md index 6878db39579..345294de0af 100644 --- a/i18n/zh-cn/user-docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-an-apecloud-mysql-cluster.md +++ b/i18n/zh-cn/user-docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-an-apecloud-mysql-cluster.md @@ -109,67 +109,55 @@ KubeBlocks 支持创建两种类型的 ApeCloud MySQL 集群:单机版(Stand ```yaml cat < - `DoNotTerminate` 会阻止删除操作。

- `Halt` 会删除工作负载资源,如 statefulset 和 deployment 等,但是保留了 PVC。 <br /> - `Delete` 在 `Halt` 的基础上进一步删除了 PVC。 <br /> - `WipeOut` 在 `Delete` 的基础上从备份存储的位置完全删除所有卷快照和快照数据。

| - | `spec.affinity` | 为集群的 Pods 定义了一组节点亲和性调度规则。该字段可控制 Pods 在集群中节点上的分布。 | - | `spec.affinity.podAntiAffinity` | 定义了不在同一 component 中的 Pods 的反亲和性水平。该字段决定了 Pods 以何种方式跨节点分布,以提升可用性和性能。 | - | `spec.affinity.topologyKeys` | 用于定义 Pod 反亲和性和 Pod 分布约束的拓扑域的节点标签值。 | - | `spec.tolerations` | 该字段为数组,用于定义集群中 Pods 的容忍,确保 Pod 可被调度到具有匹配污点的节点上。 | - | `spec.componentSpecs` | 集群 components 列表,定义了集群 components。该字段允许对集群中的每个 component 进行自定义配置。 | - | `spec.componentSpecs.componentDefRef` | 表示 cluster definition 中定义的 component definition 的名称,可通过执行 `kubectl get clusterdefinition postgresql -o json \| jq '.spec.componentDefs[].name'` 命令获取 component definition 名称。 | - | `spec.componentSpecs.name` | 定义了 component 的名称。 | - | `spec.componentSpecs.disableExporter` | 定义了是否开启监控功能。 | + | `spec.terminationPolicy` | 集群终止策略,有效值为 `DoNotTerminate`、`Delete` 和 `WipeOut`。具体定义可参考 [终止策略](./delete-a-postgresql-cluster.md#终止策略)。 | + | `spec.clusterDef` | 指定了创建集群时要使用的 ClusterDefinition 的名称。**注意**:**请勿更新此字段**。创建 PostgreSQL 集群时,该值必须为 `postgresql`。 | + | `spec.topology` | 指定了在创建集群时要使用的 ClusterTopology 的名称。建议值为 [replication]。 | + | `spec.componentSpecs` | 集群 component 列表,定义了集群 components。该字段支持自定义配置集群中每个 component。 | + | `spec.componentSpecs.serviceVersion` | 定义了 component 部署的服务版本。有效值为 [12.14.0,12.14.1,12.15.0,14.7.2,14.8.0,15.7.0,16.4.0] | + | `spec.componentSpecs.disableExporter` | 定义了是否在 component 无头服务(headless service)上标注指标 exporter 信息,是否开启监控 exporter。有效值为 [true, false]。 | + | `spec.componentSpecs.labels` | 指定了要覆盖或添加的标签,这些标签将应用于 component 所拥有的底层 Pod、PVC、账号和 TLS 密钥以及服务。 | + | `spec.componentSpecs.labels.apps.kubeblocks.postgres.patroni/scope` | PostgreSQL 的 `ComponentDefinition` 指定了环境变量 `KUBERNETES_SCOPE_LABEL=apps.kubeblocks.postgres.patroni/scope`。该变量定义了 Patroni 用于标记 Kubernetes 资源的标签键,帮助 Patroni 识别哪些资源属于指定的范围(或集群)。**注意**:**不要删除此标签**。该值必须遵循 `-postgresql` 格式。例如,如果你的集群名称是 `mycluster`,则该值应为 `mycluster-postgresql`。可按需将 `mycluster` 替换为你的实际集群名称。 | | `spec.componentSpecs.replicas` | 定义了 component 中 replicas 的数量。 | | `spec.componentSpecs.resources` | 定义了 component 的资源要求。 | + | `spec.componentSpecs.volumeClaimTemplates` | PersistentVolumeClaim 模板列表,定义 component 的存储需求。 | + | `spec.componentSpecs.volumeClaimTemplates.name` | 引用了在 `componentDefinition.spec.runtime.containers[*].volumeMounts` 中定义的 volumeMount 名称。 | + | `spec.componentSpecs.volumeClaimTemplates.spec.storageClassName` | 定义了 StorageClass 的名称。如果未指定,系统将默认使用带有 `storageclass.kubernetes.io/is-default-class=true` 注释的 StorageClass。 | + | `spec.componentSpecs.volumeClaimTemplates.spec.resources.storage` | 可按需配置存储容量。 | + + 您可参考 [API 文档](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster),查看更多 API 字段及说明。 KubeBlocks operator 监控 `Cluster` CRD 并创建集群和全部依赖资源。您可执行以下命令获取集群创建的所有资源信息。 @@ -221,7 +217,7 @@ KubeBlocks 支持创建两种 PostgreSQL 集群:单机版(Standalone)和 如果您只有一个节点用于部署三节点集群,可在创建集群时将 `topology-keys` 设为 `null`。但需要注意的是,生产环境中,不建议将所有副本部署在同一个节点上,因为这可能会降低集群的可用性。 ```bash - kbcli cluster create postgresql mycluster --replicas=2 --availability-policy='none' -n demo + kbcli cluster create postgresql mycluster --replicas=2 --topology-keys=null -n demo ``` 2. 
验证集群是否创建成功。 diff --git a/i18n/zh-cn/user-docs/kubeblocks-for-postgresql/cluster-management/delete-a-postgresql-cluster.md b/i18n/zh-cn/user-docs/kubeblocks-for-postgresql/cluster-management/delete-a-postgresql-cluster.md index 0df42bf62d6..b5fc09cc540 100644 --- a/i18n/zh-cn/user-docs/kubeblocks-for-postgresql/cluster-management/delete-a-postgresql-cluster.md +++ b/i18n/zh-cn/user-docs/kubeblocks-for-postgresql/cluster-management/delete-a-postgresql-cluster.md @@ -22,8 +22,7 @@ import TabItem from '@theme/TabItem'; | **终止策略** | **删除操作** | |:----------------------|:-------------------------------------------------------------------------------------------| | `DoNotTerminate` | `DoNotTerminate` 禁止删除操作。 | -| `Halt` | `Halt` 删除集群资源(如 Pods、Services 等),但保留 PVC。停止其他运维操作的同时,保留了数据。但 `Halt` 策略在 v0.9.1 中已启用,设置为 `Halt` 的效果与 `DoNotTerminate` 相同。 | -| `Delete` | `Delete` 在 `Halt` 的基础上,删除 PVC 及所有持久数据。 | +| `Delete` | `Delete` 删除 Pod、服务、PVC 等集群资源,删除所有持久数据。 | | `WipeOut` | `WipeOut` 删除所有集群资源,包括外部存储中的卷快照和备份。使用该策略将会删除全部数据,特别是在非生产环境,该策略将会带来不可逆的数据丢失。请谨慎使用。 | 执行以下命令查看终止策略。 diff --git a/i18n/zh-cn/user-docs/kubeblocks-for-postgresql/kubeblocks-for-postgresql.md b/i18n/zh-cn/user-docs/kubeblocks-for-postgresql/kubeblocks-for-postgresql.md index 4044866e0ab..8051a640ad6 100644 --- a/i18n/zh-cn/user-docs/kubeblocks-for-postgresql/kubeblocks-for-postgresql.md +++ b/i18n/zh-cn/user-docs/kubeblocks-for-postgresql/kubeblocks-for-postgresql.md @@ -7,7 +7,7 @@ sidebar_position: 1 # 用 KubeBlocks 管理 PostgreSQL -本文档展示了如何通过 kbcli、kubectl 或 YAML 文件等当时创建和管理 PostgreSQL 集群。您可以在 [GitHub 仓库](https://github.com/apecloud/kubeblocks-addons/tree/release-0.9/examples/postgresql)查看 YAML 示例。 +本文档展示了如何通过 kbcli、kubectl 或 YAML 文件等当时创建和管理 PostgreSQL 集群。您可以在 [GitHub 仓库](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/postgresql)查看 YAML 示例。 * [简介](./introduction/introduction.md) * [集群管理](./cluster-management/create-and-connect-a-postgresql-cluster.md) diff --git a/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/cluster-management/create-pulsar-cluster-on-kb.md b/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/cluster-management/create-pulsar-cluster-on-kb.md index 42869116574..00ce785f594 100644 --- a/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/cluster-management/create-pulsar-cluster-on-kb.md +++ b/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/cluster-management/create-pulsar-cluster-on-kb.md @@ -79,103 +79,123 @@ KubeBlocks 可以通过良好的抽象快速集成新引擎,并支持 Pulsar ## 创建 Pulsar 集群 -1. 在本地创建 `helm` 使用的 Pulsar 集群模板文件 `values-production.yaml`。 - - 将以下信息复制到本地文件 `values-production.yaml` 中。 - - ```bash - ## 配置 Bookies - bookies: - resources: - limits: - memory: 8Gi - requests: - cpu: 2 - memory: 8Gi - - persistence: - data: - storageClassName: kb-default-sc - size: 128Gi - log: - storageClassName: kb-default-sc - size: 64Gi - - ## 配置 Zookeeper - zookeeper: - resources: - limits: - memory: 2Gi - requests: - cpu: 1 - memory: 2Gi - - persistence: - data: - storageClassName: kb-default-sc - size: 20Gi - log: - storageClassName: kb-default-sc - size: 20Gi - - broker: - replicaCount: 3 - resources: - limits: - memory: 8Gi - requests: - cpu: 2 - memory: 8Gi +1. 创建基础模式的 Pulsar 集群。如需创建其他集群模式,您可查看 [GitHub 仓库中的示例](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/pulsar)。 + + ```yaml + cat <,bookies.persistence.log.storageClassName=,zookeeper.persistence.data.storageClassName=,zookeeper.persistence.log.storageClassName= --namespace demo - ``` - - 您可以指定存储名称 ``。 - -3. 
验证已创建的集群。 + | 字段 | 定义 | + |---------------------------------------|--------------------------------------| + | `spec.terminationPolicy` | 集群终止策略,有效值为 `DoNotTerminate`、`Delete` 和 `WipeOut`。具体定义可参考 [终止策略](./delete-pulsar-cluster.md#终止策略)。 | + | `spec.clusterDef` | 指定了创建集群时要使用的 ClusterDefinition 的名称。**注意**:**请勿更新此字段**。创建 Pulsar 集群时,该值必须为 `pulsar`。 | + | `spec.topology` | 指定了在创建集群时要使用的 ClusterTopology 的名称。 | + | `spec.services` | 定义了集群暴露的额外服务列表。 | + | `spec.componentSpecs` | 集群 component 列表,定义了集群 components。该字段支持自定义配置集群中每个 component。 | + | `spec.componentSpecs.serviceVersion` | 定义了 component 部署的服务版本。有效值为 [2.11.2,3.0.2]。 | + | `spec.componentSpecs.disableExporter` | 定义了是否在 component 无头服务(headless service)上标注指标 exporter 信息,是否开启监控 exporter。有效值为 [true, false]。 | + | `spec.componentSpecs.replicas` | 定义了 component 中 replicas 的数量。 | + | `spec.componentSpecs.resources` | 定义了 component 的资源要求。 | + | `spec.componentSpecs.volumeClaimTemplates` | PersistentVolumeClaim 模板列表,定义 component 的存储需求。 | + | `spec.componentSpecs.volumeClaimTemplates.name` | 引用了在 `componentDefinition.spec.runtime.containers[*].volumeMounts` 中定义的 volumeMount 名称。 | + | `spec.componentSpecs.volumeClaimTemplates.spec.storageClassName` | 定义了 StorageClass 的名称。如果未指定,系统将默认使用带有 `storageclass.kubernetes.io/is-default-class=true` 注释的 StorageClass。 | + | `spec.componentSpecs.volumeClaimTemplates.spec.resources.storage` | 可按需配置存储容量。 | + + 您可参考 [API 文档](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster),查看更多 API 字段及说明。 + +2. 验证已创建的集群。 ```bash kubectl get cluster mycluster -n demo diff --git a/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/cluster-management/delete-pulsar-cluster.md b/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/cluster-management/delete-pulsar-cluster.md index ff2fc29cc34..9c7369aac3a 100644 --- a/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/cluster-management/delete-pulsar-cluster.md +++ b/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/cluster-management/delete-pulsar-cluster.md @@ -22,8 +22,7 @@ import TabItem from '@theme/TabItem'; | **终止策略** | **删除操作** | |:----------------------|:-------------------------------------------------------------------------------------------| | `DoNotTerminate` | `DoNotTerminate` 禁止删除操作。 | -| `Halt` | `Halt` 删除集群资源(如 Pods、Services 等),但保留 PVC。停止其他运维操作的同时,保留了数据。但 `Halt` 策略在 v0.9.1 中已删除,设置为 `Halt` 的效果与 `DoNotTerminate` 相同。 | -| `Delete` | `Delete` 在 `Halt` 的基础上,删除 PVC 及所有持久数据。 | +| `Delete` | `Delete` 删除 Pod、服务、PVC 等集群资源,删除所有持久数据。 | | `WipeOut` | `WipeOut` 删除所有集群资源,包括外部存储中的卷快照和备份。使用该策略将会删除全部数据,特别是在非生产环境,该策略将会带来不可逆的数据丢失。请谨慎使用。 | 执行以下命令查看终止策略。 diff --git a/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/kubeblocks-for-pulsar.md b/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/kubeblocks-for-pulsar.md index d0c9862f207..d30f0f523ae 100644 --- a/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/kubeblocks-for-pulsar.md +++ b/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/kubeblocks-for-pulsar.md @@ -7,7 +7,7 @@ sidebar_position: 1 # 用 KubeBlocks 管理 Pulsar -本文档展示了如何通过 kbcli、kubectl 或 YAML 文件等当时创建和管理 Pulsar 集群。您可以在 [GitHub 仓库](https://github.com/apecloud/kubeblocks-addons/tree/release-0.9/examples/pulsar)查看 YAML 示例。 +本文档展示了如何通过 kbcli、kubectl 或 YAML 文件等当时创建和管理 Pulsar 集群。您可以在 [GitHub 仓库](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/pulsar)查看 YAML 示例。 * [集群管理](./cluster-management/create-pulsar-cluster-on-kb.md) * [配置](./configuration/configuration.md) diff --git a/i18n/zh-cn/user-docs/kubeblocks-for-qdrant/manage-qdrant.md b/i18n/zh-cn/user-docs/kubeblocks-for-qdrant/manage-qdrant.md index 
bcfce4fec33..2ed40bad67c 100644 --- a/i18n/zh-cn/user-docs/kubeblocks-for-qdrant/manage-qdrant.md +++ b/i18n/zh-cn/user-docs/kubeblocks-for-qdrant/manage-qdrant.md @@ -15,7 +15,7 @@ import TabItem from '@theme/TabItem'; Qdrant(读作:quadrant)是向量相似性搜索引擎和向量数据库。它提供了生产可用的服务和便捷的 API,用于存储、搜索和管理点(即带有额外负载的向量)。Qdrant 专门针对扩展过滤功能进行了优化,使其在各种神经网络或基于语义的匹配、分面搜索以及其他应用中充分发挥作用。 -目前,KubeBlocks 支持 Qdrant 的管理和运维。本文档展示了如何通过 kbcli、kubectl 或 YAML 文件等当时创建和管理 Qdrant 集群。您可以在 [GitHub 仓库](https://github.com/apecloud/kubeblocks-addons/tree/release-0.9/examples/qdrant)查看 YAML 示例。 +目前,KubeBlocks 支持 Qdrant 的管理和运维。本文档展示了如何通过 kbcli、kubectl 或 YAML 文件等当时创建和管理 Qdrant 集群。您可以在 [GitHub 仓库](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/qdrant)查看 YAML 示例。 ## 开始之前 @@ -48,63 +48,56 @@ KubeBlocks 通过 `Cluster` 定义集群。以下是创建 Qdrant 集群的示 ```yaml cat < + + + + 您也可以在 [KubeBlocks Addons 仓库](https://github.com/apecloud/kubeblocks-addons/blob/main/examples/apecloud-mysql/pod-monitor.yaml)中查看最新版本示例 YAML 文件。 + + ```yaml + kubectl apply -f - < + + + + 您也可以在 [KubeBlocks Addons 仓库](https://github.com/apecloud/kubeblocks-addons/blob/main/examples/mysql/pod-monitor.yaml)中查看最新版本示例 YAML 文件。 + + ```yaml + kubectl apply -f - < + + + + 您也可以在 [KubeBlocks Addons 仓库](https://github.com/apecloud/kubeblocks-addons/blob/main/examples/postgresql/pod-monitor.yml)中查看最新版本示例 YAML 文件。 ```yaml kubectl apply -f - < + + + + 您也可以在 [KubeBlocks Addons 仓库](https://github.com/apecloud/kubeblocks-addons/blob/main/examples/redis/pod-monitor.yaml)中查看最新版本示例 YAML 文件。 + + ```yaml + kubectl apply -f - < + + + 3. 连接 Grafana 大盘. 登录 Grafana 大盘,并导入大盘。