docs: add enable AI feature docs of Karpor #587

Merged
merged 1 commit on Jan 15, 2025
41 changes: 38 additions & 3 deletions docs/karpor/1-getting-started/2-installation.md
@@ -89,6 +89,33 @@ helm install karpor-release kusionstack/karpor --set registryProxy=docker.m.daoc

**NOTE**: The above is just an example, you can replace the value of `registryProxy` as needed.
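
As a sketch of that substitution, the command below points `registryProxy` at a placeholder mirror; `registry.example.com` is illustrative only and should be replaced with a registry proxy you actually use:

```shell
# Hypothetical mirror address -- replace with your own registry proxy.
helm install karpor-release kusionstack/karpor --set registryProxy=registry.example.com
```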

### Enable AI features

If you want to install Karpor with AI features, including natural language search and AI analysis, `ai-auth-token` and `ai-base-url` should be configured, e.g.:

```shell
# At a minimum, server.ai.authToken and server.ai.baseUrl must be configured.
helm install karpor-release kusionstack/karpor \
--set server.ai.authToken=YOUR_AI_TOKEN \
--set server.ai.baseUrl=https://api.openai.com/v1
# server.ai.backend defaults to `openai` and can be overridden when necessary. If the backend you are using is OpenAI-compatible, no change is needed here.
helm install karpor-release kusionstack/karpor \
--set server.ai.authToken=YOUR_AI_TOKEN \
--set server.ai.baseUrl=https://api.openai.com/v1 \
--set server.ai.backend=huggingface
# server.ai.model defaults to `gpt-3.5-turbo` and can be overridden when necessary.
helm install karpor-release kusionstack/karpor \
--set server.ai.authToken=YOUR_AI_TOKEN \
--set server.ai.baseUrl=https://api.openai.com/v1 \
--set server.ai.model=gpt-4o
# server.ai.topP and server.ai.temperature can also be manually modified.
helm install karpor-release kusionstack/karpor \
--set server.ai.authToken=YOUR_AI_TOKEN \
--set server.ai.baseUrl=https://api.openai.com/v1 \
--set server.ai.topP=0.5 \
--set server.ai.temperature=0.2
```
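
If you would rather not repeat `--set` flags, the same settings can be collected into a values file and passed with `-f`. The snippet below is a minimal sketch; the file name `ai-values.yaml` and the token placeholder are illustrative:

```shell
# Write the AI settings to a values file (sketch; adjust values as needed).
cat > ai-values.yaml <<'EOF'
server:
  ai:
    authToken: YOUR_AI_TOKEN            # required
    baseUrl: https://api.openai.com/v1  # required
    model: gpt-4o                       # optional, defaults to gpt-3.5-turbo
EOF
# Install using the values file instead of repeated --set flags.
helm install karpor-release kusionstack/karpor -f ai-values.yaml
```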

### Chart Parameters

The following table lists the configurable parameters of the chart and their default values.
@@ -113,13 +140,21 @@ The Karpor Server Component is the main backend server. It is itself an `apiserver`,

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| server.ai | object | `{"authToken":"","backend":"openai","baseUrl":"","model":"gpt-3.5-turbo","temperature":1,"topP":1}` | AI configuration section. The AI analysis feature requires that [authToken, baseUrl] be assigned values. |
| server.ai.authToken | string | `""` | Authentication token for accessing the AI service. |
| server.ai.backend | string | `"openai"` | Backend service or platform that the AI model is hosted on. e.g., "openai". If the backend you are using is compatible with OpenAI, then there is no need to make any changes here. |
| server.ai.baseUrl | string | `""` | Base URL of the AI service. e.g., "https://api.openai.com/v1". |
| server.ai.model | string | `"gpt-3.5-turbo"` | Name or identifier of the AI model to be used. e.g., "gpt-3.5-turbo". |
| server.ai.temperature | float | `1` | Temperature parameter for the AI model. This controls the randomness of the output, where a higher value (e.g., 1.0) makes the output more random, and a lower value (e.g., 0.0) makes it more deterministic. |
| server.ai.topP | float | `1` | Top-p (nucleus sampling) parameter for the AI model. This controls the probability mass to consider for sampling, where a higher value leads to greater diversity in the generated content (typically ranging from 0 to 1). |
| server.enableRbac | bool | `false` | Enable RBAC authorization if set to true. |
| server.image.repo | string | `"kusionstack/karpor"` | Repository for Karpor server image. |
| server.image.tag | string | `""` | Tag for Karpor server image. Defaults to the chart's appVersion if not specified. |
| server.name | string | `"karpor-server"` | Component name for karpor server. |
| server.port | int | `7443` | Port for karpor server. |
| server.replicas | int | `1` | The number of karpor server pods to run. |
| server.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | Resource limits and requests for the karpor server pods. |
| server.serviceType | string | `"ClusterIP"` | Service type for the karpor server. Available values: `"ClusterIP"`, `"NodePort"`, `"LoadBalancer"`. |
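
To check which of these values a release actually ended up with, the standard Helm inspection commands can be used; `karpor-release` is the release name from the installation examples above:

```shell
# Values you supplied explicitly at install/upgrade time.
helm get values karpor-release
# Full set of computed values, including chart defaults such as server.ai.model.
helm get values karpor-release --all
```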

#### Karpor Syncer

@@ -156,8 +191,8 @@ The ETCD Component is the storage backend for the Karpor Server `apiserver`.
| etcd.image.repo | string | `"quay.io/coreos/etcd"` | Repository for ETCD image. |
| etcd.image.tag | string | `"v3.5.11"` | Specific tag for ETCD image. |
| etcd.name | string | `"etcd"` | Component name for ETCD. |
| etcd.persistence.accessModes[0] | string | `"ReadWriteOnce"` | |
| etcd.persistence.size | string | `"10Gi"` | |
| etcd.persistence.accessModes | list | `["ReadWriteOnce"]` | Volume access mode; `ReadWriteOnce` means single-node read-write access. |
| etcd.persistence.size | string | `"10Gi"` | Size of the etcd persistent volume. |
| etcd.port | int | `2379` | Port for ETCD. |
| etcd.replicas | int | `1` | The number of etcd pods to run. |
| etcd.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | Resource limits and requests for the karpor etcd pods. |
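
As an example of overriding the storage defaults listed for ETCD above, the persistence size can be set at install time; the `20Gi` value below is purely illustrative:

```shell
# Increase the etcd persistent volume size (illustrative value).
helm install karpor-release kusionstack/karpor --set etcd.persistence.size=20Gi
```
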
5 changes: 3 additions & 2 deletions docs/karpor/1-getting-started/3-quick-start.md
@@ -119,5 +119,6 @@ If you have any questions or concerns, check out the official documentation of K

## Next Step

- Learn Karpor's [Architecture](../concepts/architecture) and [Glossary](../concepts/glossary).
- View [User Guide](../user-guide/multi-cluster-management) to look on more of what you can achieve with Karpor.
* Learn Karpor's [Architecture](../concepts/architecture) and [Glossary](../concepts/glossary).
* View the [User Guide](../user-guide/multi-cluster-management) to see more of what you can achieve with Karpor.
* [Enable AI features](installation#enable-ai-features), including natural language search and AI analysis.
@@ -89,6 +89,33 @@ helm install karpor-release kusionstack/karpor --set registryProxy=docker.m.daoc

**NOTE**: The above is just an example; you can replace the value of `registryProxy` as needed.

### Enable AI features

If you want to install Karpor with AI features, including natural language search and AI analysis, `ai-auth-token` and `ai-base-url` should be configured, e.g.:

```shell
# At a minimum, server.ai.authToken and server.ai.baseUrl must be configured.
helm install karpor-release kusionstack/karpor \
--set server.ai.authToken=YOUR_AI_TOKEN \
--set server.ai.baseUrl=https://api.openai.com/v1
# server.ai.backend defaults to `openai` and can be overridden when necessary. If the backend you are using is OpenAI-compatible, no change is needed here.
helm install karpor-release kusionstack/karpor \
--set server.ai.authToken=YOUR_AI_TOKEN \
--set server.ai.baseUrl=https://api.openai.com/v1 \
--set server.ai.backend=huggingface
# server.ai.model defaults to `gpt-3.5-turbo` and can be overridden when necessary.
helm install karpor-release kusionstack/karpor \
--set server.ai.authToken=YOUR_AI_TOKEN \
--set server.ai.baseUrl=https://api.openai.com/v1 \
--set server.ai.model=gpt-4o
# server.ai.topP and server.ai.temperature can also be modified manually.
helm install karpor-release kusionstack/karpor \
--set server.ai.authToken=YOUR_AI_TOKEN \
--set server.ai.baseUrl=https://api.openai.com/v1 \
--set server.ai.topP=0.5 \
--set server.ai.temperature=0.2
```
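
If Karpor is already installed, the same AI settings can be applied to the existing release with `helm upgrade`; this is a minimal sketch reusing the release name from the examples above:

```shell
# Enable AI features on an existing release instead of reinstalling.
helm upgrade karpor-release kusionstack/karpor \
  --set server.ai.authToken=YOUR_AI_TOKEN \
  --set server.ai.baseUrl=https://api.openai.com/v1
```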

### Chart Parameters

The following table lists the configurable parameters of the chart and their default values.
@@ -117,5 +117,6 @@ Karpor provides a powerful search feature that allows you to quickly find resources across clusters

## Next Step

- Learn Karpor's [Architecture](../concepts/architecture) and [Glossary](../concepts/glossary).
- View the [User Guide](../user-guide/multi-cluster-management) to see more of what you can achieve with Karpor.
* Learn Karpor's [Architecture](../concepts/architecture) and [Glossary](../concepts/glossary).
* View the [User Guide](../user-guide/multi-cluster-management) to see more of what you can achieve with Karpor.
* [Enable AI features](installation#启用-ai-功能), including natural language search and AI analysis.