docs: the outline of usage and billing (#831)
nicecui authored Mar 15, 2024
1 parent 9bf8fab commit db468f2
Showing 20 changed files with 406 additions and 207 deletions.
4 changes: 2 additions & 2 deletions docs/v0.7/en/greptimecloud/overview.md
@@ -2,9 +2,9 @@

GreptimeCloud is a cloud service powered by fully managed, serverless GreptimeDB, providing a scalable and efficient solution as a time-series data platform and Prometheus backend.

## Learn about Usage
## Learn about usage and billing

To learn how GreptimeCloud measures its workload in a serverless architecture and how to optimize your usage, see [Learn about Usage](usage.md).
To learn how GreptimeCloud measures its workload in a serverless architecture and how to optimize your usage and billing, see [Usage & Billing](./usage-&-billing/overview.md).

## Integrations

7 changes: 7 additions & 0 deletions docs/v0.7/en/greptimecloud/usage-&-billing/byoc.md
@@ -0,0 +1,7 @@
# BYOC

BYOC (Bring Your Own Cloud) is a service enabling you to utilize your own cloud resources to host GreptimeDB.
This service offers extensive customization and flexibility for your business needs,
along with complete management of your cloud resources and robust security features to safeguard your infrastructure.

[Contact us](https://m0k0y6ku50y.typeform.com/to/jwxzJCH4) if you are interested in customizing your BYOC plan.
35 changes: 35 additions & 0 deletions docs/v0.7/en/greptimecloud/usage-&-billing/dedicated.md
@@ -0,0 +1,35 @@
# Dedicated Plan

The dedicated plan allows you to purchase dedicated CPUs and storage to host GreptimeDB.
It provides unlimited data storage and retention,
complete isolation of resources and network,
and includes support from Greptime's SRE team.

If you require absolute isolation from other users or
need to exceed the maximum usage limits of the Serverless Plan,
the Dedicated Plan is the right choice.

## Costs

Please see [Pricing](https://greptime.com/pricing) for the latest pricing information.

### Computing nodes

When setting up a service under the Dedicated Plan, you'll need to configure the service mode,
which determines the size of the computing nodes.
Greptime calculates costs hourly based on the computing nodes specified in your chosen plan and bills monthly.

Cost Calculation Formula:

- Hourly Costs: (Chosen Plan's Node Size * Number of Nodes * Node Hour Price)
- Daily Costs: Sum of Hourly Costs
- Monthly Costs: Sum of Daily Costs
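
For illustration, here is a minimal Python sketch of the formula above; the node size, node count, and hourly price are hypothetical placeholders rather than actual GreptimeCloud prices (see the Pricing page for real numbers):

```python
# Hourly cost = node size * number of nodes * node hour price.
def hourly_cost(node_size: int, node_count: int, node_hour_price: float) -> float:
    return node_size * node_count * node_hour_price

# Assume a 30-day month with the service running around the clock;
# the node size, node count, and price below are placeholders.
hours_in_month = 24 * 30
monthly_cost = sum(hourly_cost(node_size=4, node_count=3, node_hour_price=0.5)
                   for _ in range(hours_in_month))
print(f"Estimated monthly cost: ${monthly_cost:.2f}")
```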

<!--@include: shared-storage-capacity.md-->

### Network traffic

The cost of network traffic will be included in your monthly bill.
Pricing is determined by the cloud server provider (such as AWS).
Greptime does not charge any additional fees for traffic costs.

25 changes: 25 additions & 0 deletions docs/v0.7/en/greptimecloud/usage-&-billing/hobby.md
@@ -0,0 +1,25 @@
# Hobby Plan

## Introduction

GreptimeCloud offers a free-tier Hobby Plan for users to try the service.
Each team can create up to three services under the Hobby Plan.

The Hobby Plan has the following limitations:

- RCU (Read Capacity Units): 40 RCU/s per service.
- WCU (Write Capacity Units): 20 WCU/s per service.
- Storage capacity: 5GB per service.
- Data retention policy: 3 months.

:::tip NOTE
The plan may change in the future. If you have any questions about it, please contact [[email protected]](mailto:[email protected]).
:::

<!-- ## Upgrade to Serverless Plan or Dedicated Plan
When the usage of a service exceeds the Hobby Plan limits, you can upgrade to the [Serverless Plan](./serverless.md) or [Dedicated Plan](./dedicated.md) to get more resources.
In the [GreptimeCloud Console](https://console.greptime.cloud/), click `Upgrade` on the service details page and choose the suitable plan. -->

<!-- TODO image -->
9 changes: 9 additions & 0 deletions docs/v0.7/en/greptimecloud/usage-&-billing/overview.md
@@ -0,0 +1,9 @@
# Overview

These documents will help you understand the usage and billing of GreptimeCloud.

- [Request Capacity Unit](request-capacity-unit.md)
- [Hobby Plan](hobby.md)
- [Serverless Plan](serverless.md)
- [Dedicated Plan](dedicated.md)
- [BYOC](byoc.md)
80 changes: 80 additions & 0 deletions docs/v0.7/en/greptimecloud/usage-&-billing/request-capacity-unit.md
@@ -0,0 +1,80 @@
# Request Capacity Unit

This document introduces the calculation algorithms for request capacity units. To monitor your service usage, you can visit the [GreptimeCloud Console](https://console.greptime.cloud/).

All requests to GreptimeCloud are measured in capacity units, which reflect the size and complexity of the request. Write capacity units and read capacity units are measured differently; see the following sections for details.

## Write capacity unit (WCU)

Each API call to write data to your table is a write request.
The WCU is calculated based on the total size of the rows inserted in one request.
A standard write capacity unit can write up to 1KB of row data.
For rows larger than 1KB, additional write capacity units are required.

:::tip NOTE
The capacity unit may be subject to change in the future.
:::

The following steps are used to determine the size of each request:

1. Get the size of the data type of each column in the table schema. You can find details about the size of each data type in the [Data Types](/reference/sql/data-types.md) documentation.
2. Sum up the sizes of all columns in the request. If a column is not present in the request, its size depends on the column's default value. If the default value is null, the size is 0; otherwise, it is the size of the data type.
3. Multiply the sum by the number of rows to be written.

Here's an example of how to calculate the WCU for a table with the following schema:

```shell
+-------------+----------------------+------+------+---------------------+---------------+
| Column | Type | Key | Null | Default | Semantic Type |
+-------------+----------------------+------+------+---------------------+---------------+
| host | String | PRI | YES | | TAG |
| idc | String | PRI | YES | | TAG |
| cpu_util | Float64 | | YES | | FIELD |
| memory_util | Float64 | | YES | | FIELD |
| disk_util | Float64 | | YES | | FIELD |
| ts | TimestampMillisecond | PRI | NO | current_timestamp() | TIMESTAMP |
+-------------+----------------------+------+------+---------------------+---------------+
```

Suppose you have the following write request:

```shell
INSERT INTO system_metrics VALUES ("host1", "a", 11.8, 10.3, 10.3, 1667446797450);
```

Based on the size of the data types in your table schema, the size of each row is 38 bytes (5+1+8+8+8+8), and the WCU of this request is 1 according to the calculation algorithm.

To reduce the WCU usage, use batched `INSERT` statements to insert multiple rows in a single statement, rather than sending a separate statement per row. For example:

```shell
INSERT INTO system_metrics
VALUES
("host1", "idc_a", 11.8, 10.3, 10.3, 1667446797450),
("host1", "idc_a", 80.1, 70.3, 90.0, 1667446797550),
# ...... 22 rows
("host1", "idc_b", 90.0, 39.9, 60.6, 1667446798250);
```

The size of the request is 950 bytes (38 x 25). The WCU of this request is 1. If you insert 40 rows in a single statement, the size is 1520 bytes (38 x 40), and the WCU of this request is 2.
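
As a rough illustration of the calculation described above, here is a minimal Python sketch; the helper function is hypothetical and simply applies the 1KB-per-WCU rule stated in this document to the example numbers:

```python
import math

# Per-column sizes from the example schema: a 5-byte string host, a 1-byte
# string idc, three Float64 fields (8 bytes each), and an 8-byte timestamp.
row_size = sum([5, 1, 8, 8, 8, 8])  # 38 bytes per row, as in the text above

# One WCU covers up to 1KB of inserted row data per request, rounded up
# (assuming 1KB = 1024 bytes here).
def wcu_for_request(row_size_bytes: int, row_count: int) -> int:
    return max(1, math.ceil(row_size_bytes * row_count / 1024))

print(wcu_for_request(row_size, 1))   # single-row insert: 1 WCU
print(wcu_for_request(row_size, 25))  # 950 bytes: 1 WCU
print(wcu_for_request(row_size, 40))  # 1520 bytes: 2 WCU
```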

## Read capacity unit (RCU)

Each API call to read data from your table is a read request.
The RCU is calculated based on the size of the data scanned and loaded into the server's memory in one request.
A standard read capacity unit can scan up to 1MB of data. For scanned data larger than 1MB, additional read capacity units are required.

:::tip NOTE
The capacity unit may be subject to change in the future.
:::

Suppose there is a read request scanning 2.5MB of data. The RCU of this request is 3 according to the calculation algorithm.
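
A minimal sketch of that arithmetic, assuming one RCU per 1MB of scanned data, rounded up (the helper function is illustrative, not part of any GreptimeCloud API):

```python
import math

# One RCU covers up to 1MB of scanned data per request, rounded up.
def rcu_for_request(scanned_mb: float) -> int:
    return max(1, math.ceil(scanned_mb))

print(rcu_for_request(2.5))  # 3 RCUs for a request scanning 2.5MB
```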

To lower the RCU, you can design the table schema and queries carefully. Here are some tips:

- Use indexes to support the efficient execution of queries in GreptimeDB. Without indexes, GreptimeDB must scan the entire table to process the query. If an index matches the query, GreptimeDB can use the index to limit the data scanned. Consider using a column with high cardinality as the primary key and use it in the `WHERE` clause.
- Use queries that match a smaller percentage of data for better selectivity. For instance, an equality match on the time index field and a high cardinality tag field can efficiently limit the data scanned. Note that the inequality operator `!=` is not efficient because it always scans all data.

## Usage metrics

You can view your usage in the [GreptimeCloud Console](https://console.greptime.cloud/).
The maximum WCU and RCU utilized are aggregated by time range and presented in the usage charts.
42 changes: 42 additions & 0 deletions docs/v0.7/en/greptimecloud/usage-&-billing/serverless.md
@@ -0,0 +1,42 @@
# Serverless Plan

The Serverless Plan allows you to purchase request capacity according to your needs
and provides support from the SRE team.
This plan offers unlimited data storage and configurable retention,
making it suitable for production environments and scalable with your business growth.

When setting up a service under the Serverless Plan or upgrading from the Hobby Plan,
you'll need to configure the capacity units for the service plan, which include:

- The maximum WCU (Write Capacity Units) per second, with an upper limit of 5000
- The maximum RCU (Read Capacity Units) per second, with an upper limit of 5000

:::tip NOTE
For the concepts of WCU and RCU, see [Request Capacity Unit](request-capacity-unit.md).
:::

## Costs

Please see [Pricing](https://greptime.com/pricing) for the latest pricing information.

### WCU and RCU

Greptime calculates costs based on the capacity units specified in your chosen plan on a minute-by-minute basis
and bills you monthly for the services used.

Cost Calculation Formula:

- Minute Costs: (Chosen Plan's WCU * WCU Minute Price) + (Chosen Plan's RCU * RCU Minute Price)
- Hourly Costs: Sum of Minute Costs
- Daily Costs: Sum of Hourly Costs
- Monthly Costs: Sum of Daily Costs
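
For illustration, a minimal Python sketch of this formula with hypothetical placeholder prices and capacity units (see the Pricing page for real numbers):

```python
# Minute cost = plan WCU * WCU minute price + plan RCU * RCU minute price.
def minute_cost(plan_wcu: int, plan_rcu: int,
                wcu_minute_price: float, rcu_minute_price: float) -> float:
    return plan_wcu * wcu_minute_price + plan_rcu * rcu_minute_price

# Assume a 30-day month; the capacity units and prices are placeholders.
minutes_in_month = 60 * 24 * 30
monthly_cost = minutes_in_month * minute_cost(plan_wcu=1000, plan_rcu=1000,
                                              wcu_minute_price=1e-5,
                                              rcu_minute_price=1e-5)
print(f"Estimated monthly cost: ${monthly_cost:.2f}")
```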

<!--@include: ./shared-storage-capacity.md-->

<!-- ### Cost Optimization
Here are some tips to optimize your costs:
- Select appropriate capacity units for your service plan to avoid overpaying for unused capacity.
- Set a data retention policy to drop unnecessary data and reduce storage costs. -->

3 changes: 3 additions & 0 deletions docs/v0.7/en/greptimecloud/usage-&-billing/shared-storage-capacity.md
@@ -0,0 +1,3 @@
### Storage Capacity

GreptimeCloud stores your data in object storage, such as S3, and calculates storage costs based on your total data size in the database. You'll be billed monthly for the services used.
106 changes: 2 additions & 104 deletions docs/v0.7/en/greptimecloud/usage.md
@@ -1,105 +1,3 @@
# Learn about Usage
# THIS PAGE HAS BEEN DEPRECATED

Welcome to GreptimeCloud. This document will introduce the usage calculation algorithms of GreptimeCloud. To monitor service usage, you can go to the [GreptimeCloud Console](https://console.greptime.cloud/).

## Capacity Unit

All requests to GreptimeCloud are measured in capacity units, which reflect the size and complexity of the request. The measurement methods of write capacity unit and read capacity unit are different, see following for details.

### WCU (Write Capacity Unit)

Each API call to write data to your table is a write request.
WCU is calculated based on the total size of the insert rows in one request.
A standard write capacity unit can write rows up to 1KB.
For rows larger than 1KB, additional write capacity units are required.

:::tip NOTE
The capacity unit may be subject to change in the future.
:::

The following steps are used to determine the size of each request:

1. Get the size of the data type of each column in the table schema. You can find details about the size of each data type in the [Data Types](/reference/sql/data-types.md) documentation.
2. Sum up the sizes of all columns in the request. If a column is not present in the request, its size depends on the column's default value. If the default value is null, the size is 0; otherwise, it is the size of the data type.
3. Multiply the sum by the number of rows to be written.

Here's an example of how to calculate the WCU for a table with the following schema:

```shell
+-------------+----------------------+------+------+---------------------+---------------+
| Column | Type | Key | Null | Default | Semantic Type |
+-------------+----------------------+------+------+---------------------+---------------+
| host | String | PRI | YES | | TAG |
| idc | String | PRI | YES | | TAG |
| cpu_util | Float64 | | YES | | FIELD |
| memory_util | Float64 | | YES | | FIELD |
| disk_util | Float64 | | YES | | FIELD |
| ts | TimestampMillisecond | PRI | NO | current_timestamp() | TIMESTAMP |
+-------------+----------------------+------+------+---------------------+---------------+
```

You have a write request as following:

```shell
INSERT INTO system_metrics VALUES ("host1", "a", 11.8, 10.3, 10.3, 1667446797450);
```

Based on the size of the data types in your table schema, the size of each row is 38 bytes (5+1+8+8+8+8), and the WCU of this request is 1 according to the calculation algorithm.

To reduce the WCU usage, use batched `INSERT` statements to insert multiple rows in a single statement, rather than sending a separate statement per row. For example:

```shell
INSERT INTO system_metrics
VALUES
("host1", "idc_a", 11.8, 10.3, 10.3, 1667446797450),
("host1", "idc_a", 80.1, 70.3, 90.0, 1667446797550),
# ...... 22 rows
("host1", "idc_b", 90.0, 39.9, 60.6, 1667446798250);
```

The size of the request is 950 bytes (38 x 25). The WCU of this request is 1. If you insert 40 rows in a single statement, the size is 1520 bytes (38 x 40), and the WCU of this request is 2.

### RCU (Read Capacity Unit)

Each API call to read data from your table is a read request. RCU is the server resource consumed in one request. It depends on the following items:

- CPU time consumed by the request
- Scanned data size by the request

A standard read capacity unit can consume up to 1ms of CPU time or scan up to 1KB of data. For CPU time or scanned data beyond these limits, additional read capacity units are required.

:::tip NOTE
The capacity unit may be subject to change in the future.
:::

For example, suppose there is a read request consuming 2.5ms CPU time and scanning 2KB data. All of these costs add up to 5 RCUs:

- 3 RCU from 2.5ms CPU time
- 2 RCU from 2KB scanned data

To lower the RCU, you can design the table schema and queries carefully. Here are some tips:

- Use indexes to support the efficient execution of queries in GreptimeDB. Without indexes, GreptimeDB must scan the entire table to process the query. If an index matches the query, GreptimeDB can use the index to limit the data scanned. Consider using a column with high cardinality as the primary key and use it in the `WHERE` clause.
- Use queries that match a smaller percentage of data for better selectivity. For instance, an equality match on the time index field and a high cardinality tag field can efficiently limit the data scanned. Note that the inequality operator `!=` is not efficient because it always scans all data.

## Storage Capacity

GreptimeCloud stores data in object storage such as S3, and measures the size of your total data saved in database.

## Data retention

Depending on the pricing plan, GreptimeCloud may apply a default retention policy
to your data. Data will be deleted after its retention period expires.

## Tech Preview Plan

Tech preview plan provides the following free tier for users to try GreptimeCloud:

- Write capacity unit (WCU): 800 WCU/s per service.
- Storage capacity: 10GB per service.
- Account limits: 3 services per team.
- Data retention policy: By default, data written in the last three months is retained.

:::tip NOTE
The plan may change in the future. If you have any questions about it, please contact [[email protected]](mailto:[email protected]).
:::
Please refer to the [Usage & Billing](./usage-&-billing/overview.md) page for up-to-date information.
8 changes: 7 additions & 1 deletion docs/v0.7/en/summary.yml
@@ -106,7 +106,13 @@
- SDK-Libraries:
- go
- java
- usage
- Usage-&-Billing:
- overview
- request-capacity-unit
- hobby
- serverless
- dedicated
- byoc
- Tutorials:
- Monitor-Host-Metrics:
- prometheus
4 changes: 2 additions & 2 deletions docs/v0.7/zh/greptimecloud/overview.md
@@ -2,9 +2,9 @@

GreptimeCloud is a fully managed, serverless cloud service for GreptimeDB, providing a scalable and efficient solution as a time-series data platform and Prometheus backend.

## Learn about usage
## Learn about usage and billing

To learn how GreptimeCloud measures the workload of its serverless architecture and how to optimize your service usage, see [Usage](usage.md).
To learn how GreptimeCloud measures the workload of its serverless architecture and how to optimize your service usage and billing, see [Usage & Billing](./usage-&-billing/overview.md).

## Integrations

6 changes: 6 additions & 0 deletions docs/v0.7/zh/greptimecloud/usage-&-billing/byoc.md
@@ -0,0 +1,6 @@
# BYOC

BYOC (Bring Your Own Cloud) allows you to use your own cloud resources to host GreptimeDB.
With this service, you retain full management of your cloud resources; it offers extensive customization and flexibility for your business needs, along with robust security features to protect your infrastructure.

If you are interested in a BYOC plan, please [contact us](https://m0k0y6ku50y.typeform.com/to/jwxzJCH4).
30 changes: 30 additions & 0 deletions docs/v0.7/zh/greptimecloud/usage-&-billing/dedicated.md
@@ -0,0 +1,30 @@
# Dedicated Plan

The Dedicated Plan allows you to purchase dedicated CPUs and storage to host GreptimeDB.
It provides unlimited data storage and retention,
complete isolation of resources and network,
and includes support from Greptime's SRE team.

If you require absolute isolation from other users or need to exceed the maximum usage limits of the Serverless Plan, the Dedicated Plan is the right choice.

## Costs

Please see the [Pricing](https://greptime.com/pricing) page for the latest pricing information.

### Computing nodes

When setting up a service under the Dedicated Plan, you need to configure the service mode, which determines the size of the computing nodes.
Greptime calculates costs hourly based on the computing nodes specified in your chosen plan and bills monthly.

Cost Calculation Formula:

- Hourly Costs: (Chosen Plan's Node Size * Number of Nodes * Node Hour Price)
- Daily Costs: Sum of Hourly Costs
- Monthly Costs: Sum of Daily Costs

<!--@include: shared-storage-capacity.md-->

### Network traffic

The cost of network traffic is included in your monthly bill.
Traffic pricing is determined by the cloud provider (such as AWS); Greptime does not charge any additional fees for traffic.